Abstract
Widespread concern has been raised that biasing factors may influence the measurement of organizational variables and distort the inferences and conclusions reached about them. Recent research calls for a measure-centric approach in which every measure is evaluated independently to assess what factor(s) may uniquely bias it. This paper examines three popular stressor measures from this perspective. Across three studies, we examine factors that may bias three popular measures of job stressors: the Interpersonal Conflict at Work Scale (ICAWS), the Organizational Constraints Scale (OCS), and the Quantitative Workload Inventory (QWI). The first study used a two-wave design to survey 276 MTurk workers, assessing the three stressor scales, four strains, and five measures of potential bias sources: hostile attribution bias, negative affectivity, mood, neutral objects satisfaction, and social desirability. The second study used an experimental design with 439 MTurk workers who were randomly assigned to a positive, negative, or no mood induction condition to assess effects on the means of the three stressor measures and on their correlations with strains. The third study surveyed 161 employee–supervisor dyads to explore the convergence of results involving the three stressor measures across sources. Based on several forms of evidence, we conclude that potential biasing factors affect the three stressor measures differently, supporting the merits of a measure-centric approach even among measures in the same domain.
Keywords: Interpersonal conflict, Organizational constraints, Workload, Method variance, Construct validity, Stress
Because the study of many organizational topics, including stress, has relied largely on self-reports, it is important to understand the reasons that such measures relate to one another. Much has been written about extraneous biasing factors in the measurement of stressors and other variables, often under the rubric of method variance, which suggests that factors inherent in methods affect results. Recently, Spector et al. (2019) presented what they termed the measure-centric approach to method variance, which posits that each measured variable is affected by its own set of extraneous factors beyond the intended trait and measurement error. They recommended conducting research on individual measures to identify extraneous factors that might distort relationships of target measures with other variables, but they did not provide evidence from specific measures to support their claims. In the current research, we adopted that perspective and present results of three studies that explored possible biasing effects of five diverse factors (i.e., hostile attribution bias (HAB), negative affectivity (NA), mood, neutral objects satisfaction, and social desirability) on three commonly used stressor measures (i.e., the Interpersonal Conflict at Work Scale (ICAWS), the Organizational Constraints Scale (OCS), and the Quantitative Workload Inventory (QWI)). The goal was to determine whether each measure might be affected by its own unique set of factors, which would simultaneously provide support for Spector et al.'s (2019) measure-centric theory and provide insight into idiosyncratic factors that might or might not bias these popular measures of organizational constructs. In doing so, we provide an empirical test of the theoretical ideas presented by Spector et al. (2019).
Potential Biasing Factors
In the context of assessment, biasing factors are extraneous, systematic factors that affect a measure in addition to the intended construct and error variance. A biasing factor affects measurement and not the underlying theoretical construct the measure is intended to reflect. For example, suppose the personality trait of conscientiousness relates to reports of workload. If conscientiousness relates to workload reports but not to objective workload (e.g., assessed via objective measures of daily work performed), that trait would be an example of a biasing factor because it relates to workload reports independent of workload itself. In other words, conscientiousness affects the self-report assessment of workload and not the objective workload that was the target of measurement. This is illustrated in Fig. 1 using a Venn diagram. The upper circle represents variance in the intended construct (the volume of work required of an employee) and the lower circle represents variance in the measure used to assess it (a self-report workload scale). The overlap represents the extent to which the measure reflects the intended construct and would represent construct validity. The portion of the lower circle that does not overlap the upper circle represents extraneous factors that affect the self-report measure, including random error variance and systematic biasing factors.
Fig. 1.
Illustration of biasing factors that affect the measurement of a construct and not the construct itself. The upper circle represents the variance of the intended construct, and the lower circle represents variance in the measure used to assess it. Overlap indicates the extent to which the measure reflects the intended construct
As we are using it here, the term biasing factor relates to the concept of method variance, sometimes referred to as common method bias (Podsakoff et al., 2012). The original idea of method variance is that extraneous factors reside in the method itself, so that all measures assessed with that method would be affected (Campbell & Fiske, 1959). Thus, within a method (e.g., self-report agreement rating scales), measures of all constructs might be affected by the same biasing factor. More recent treatments have considered the underlying factors that produce method variance rather than assuming it is contained in the method (Podsakoff et al., 2012). The measure-centric idea that drives this research goes further in suggesting that each measure can be affected by one or more biasing factors that can be shared (common method variance) or not shared (uncommon method variance) with another measure. When correlating one measure with another, some biasing factors may overlap and some may be unique. Overlapping factors inflate correlations because of shared variance unrelated to the underlying constructs of interest. Unique factors act like error variance and attenuate correlations because they introduce extraneous variance into measurement. As such, disentangling the reasons that two measures correlate, and determining whether observed relationships reflect connections among the constructs of interest, begins with an understanding of the biasing factors that affect each measure, so we know which are shared and which are not.
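This inflation and attenuation logic can be sketched formally. As an illustrative assumption (not a model the authors specify), suppose each observed score is the sum of its intended construct, a biasing factor shared with the other measure, a unique biasing factor, and random error, with all components mutually uncorrelated:

$$x = t_x + s + u_x + e_x, \qquad y = t_y + s + u_y + e_y$$

$$r_{xy} = \frac{\operatorname{cov}(t_x, t_y) + \sigma_s^2}{\sqrt{\sigma_{t_x}^2 + \sigma_s^2 + \sigma_{u_x}^2 + \sigma_{e_x}^2}\,\sqrt{\sigma_{t_y}^2 + \sigma_s^2 + \sigma_{u_y}^2 + \sigma_{e_y}^2}}$$

The shared factor's variance $\sigma_s^2$ adds to the numerator, inflating the observed correlation beyond the construct-level relationship, whereas the unique factors' variances $\sigma_{u_x}^2$ and $\sigma_{u_y}^2$ appear only in the denominator and thus attenuate it.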
Separating Biasing from Substantive Effects
Finding that a focal variable relates to measures of one or more extraneous variables is not, in and of itself, sufficient to draw conclusions about biasing effects. There are many substantive reasons that an extraneous variable might relate to a focal variable (e.g., Spector et al., 2019, list several substantive mechanisms for negative affectivity). With our earlier personality–workload example, conscientious employees might seek out additional tasks that add to their load. Distinguishing when an extraneous factor plays a biasing versus a substantive role can be difficult and requires going beyond cross-sectional self-reports, which can only show whether the factor is related to the variable of interest. Thus, we present results from three studies, each using a different research design: longitudinal, experimental, and multi-source.
Specific Biasing Factors
Throughout the literature on assessment and method variance, there are discussions of specific factors that might bias measures. Some of these are characteristics of the assessment itself, whereas others are characteristics of respondents. We chose five potential respondent biasing factors (i.e., HAB, NA, mood, neutral objects satisfaction, and social desirability) that represent a broad cross-section of possibilities, tapping affect, attitudes, cognition, and personality. Each has the potential to serve as a biasing factor in that it might affect the assessment of intended constructs independent of the constructs themselves. Further, all five seemed particularly relevant to stressor measures, and readily available measures exist for each. We did not investigate characteristics of the assessments, such as response formats, which were discussed by Campbell and Fiske (1959) and Podsakoff et al. (2003), among others. Response format has been hypothesized as a potential source of bias because people completing items with the same response format (e.g., agreement on a 6-point scale) may exhibit consistent rating styles, such as acquiescence (i.e., the tendency to agree with items regardless of content). Thus, we would expect two scales using the same item response format to correlate more strongly than if they used different formats. However, empirical research does not support response format as a source of bias in stressor measures, including the three studied here (Spector & Nixon, 2019).
Of the five potential respondent biasing factors that we consider, NA and social desirability have received the most research attention. Mood has been noted as a potential bias factor for the measurement of many variables. Neutral objects satisfaction reflects feelings but is more akin to attitudes than emotions. Nevertheless, it has been considered a measure of affective disposition (Eschleman & Bowling, 2011). HAB refers to a particular type of cognitive bias that can affect how people view the world.
Hostile Attribution Bias
HAB is the tendency for an individual to assume that when something unpleasant happens to them, it was because someone was purposely trying to harm them (Epps & Kendall, 1995). Dodge and Coie (1987) note how this individual difference variable reflects distorted cognition where people misinterpret events in the environment, assuming they are a threat when they are benign. Although it has received limited attention in organizational research (rare examples include Wu et al., 2014; Zhou et al., 2015), this tendency to make hostile attributions would likely color an individual’s perceptions of workplace events, particularly when they involve actions by others. An individual high on this tendency might tend to perceive higher levels of stressors than individuals who are low on this tendency, especially social stressors that involve interpersonal interactions, such as conflict.
Negative Affectivity
NA (or neuroticism) is a personality variable reflecting an individual’s tendency to experience negative emotion across situations and time (Watson & Clark, 1984). Watson et al. (1986) suggested that measures of stressors and strains are biased by NA, which can produce observed correlations among them. In their review of the literature, Podsakoff et al. (2003) suggested that NA can be a method variance source for organizational measures, and Podsakoff et al. (2012) listed it as a source of method variance most in need of procedural control. However, a series of studies testing the idea of NA creating spurious correlations between stressors and strains has led to conflicting conclusions (Brief et al., 1988; Burke et al., 1993; Chen & Spector, 1991; Rau et al., 2010; Spector et al., 1995). Spector et al. (2000) summarize evidence that the connection of NA to stressors is complex and involves several pathways. For example, NA has been found to relate to objective features of jobs (Spector et al., 1995). That NA might relate to the objective environment, however, does not rule out that NA might also influence the assessment of job stressors.
Mood
Podsakoff et al. (2003) argue that transient mood state might color how a person responds to survey questions. If true, one of the possible pathways by which NA might affect individuals is through their affective states and transitory mood. In most studies, mood is considered a strain variable that is the consequence of exposure to stressors. However, it is certainly possible that the mood state of the individual at the time a survey is completed would influence their response to all stressor measures, and not just those that reflect the workplace conditions that led to the emotional state. For example, if an employee has a conflict with a coworker shortly before completing a survey, it is possible that not only would the report of conflict be affected but reports of other stressors would be affected, as well.
Neutral Objects Satisfaction
The neutral objects satisfaction idea is that individuals differ in their tendency to be satisfied or dissatisfied with things in life, even innocuous, neutral things like food prices or radio programs. Weitz (1952) developed a scale to assess this tendency, arguing that it serves as a background frame of reference for the individual's job attitudes. Judge and Bretz (1993) considered their neutral objects satisfaction measure to reflect negative affectivity, although the items ask about satisfaction and not about emotions. Eschleman and Bowling (2011) pointed out that this scale reflects affective disposition, but in a way that minimizes the social desirability of items because the items concern neutral objects rather than personal feelings. As a measure of at least the affective component of attitudes, this scale might reflect a personality characteristic that biases measurement of the work environment. People who are generally satisfied in life would perceive the world in a favorable light and would be likely to underestimate job stressors.
Social Desirability
Social desirability is perhaps the most studied potential biasing factor, at least in the study of personality. Seminal work by Edwards (1957) and Crowne and Marlowe (1960) provided measures of social desirability, reflecting the tendency for individuals to present themselves in a favorable light. Measures of this personality construct contain items that reflect socially desirable and undesirable (reverse scored) behaviors. Individuals high on social desirability will tend to report performing desirable and not performing undesirable behaviors. Measures of social desirability are often used to identify individuals who likely distort their responses on questionnaires. This can be particularly important with measures that ask about personally sensitive information. However, it should be kept in mind that social desirability is a personality characteristic, conceptualized as need for approval by Crowne and Marlowe (1964).
Social desirability is often listed as an important source of method variance (Podsakoff et al., 2003, 2012) because it has the potential to bias responses to items in a favorable direction. It should be noted, however, that social desirability relationships with many organizational variables tend to be quite small (see meta-analysis by Moorman & Podsakoff, 1992).
Overlap Among the Bias Measures
Although the five potential bias measures reflect constructs that are conceptually distinct, they are not entirely independent of one another. Mood and NA concern the experience of negative emotions, with mood reflecting current emotional state and NA reflecting a relatively stable personality characteristic. The two are related, as individuals high in NA are more likely than those low in NA to experience negative mood. Furthermore, reports of negative emotional states and traits can be subject to social desirability. Neutral objects satisfaction has been suggested to reflect affective disposition, so it too would be expected to relate to NA. This study will indicate the extent to which these five measures are distinct.
Stressor Scales
We explored how five potential sources of bias might affect three well-established stressor scales described in Spector and Jex (1998): the ICAWS, OCS, and QWI. We chose these scales for three reasons. First, this suite of scales covers workplace stressors that qualitative studies have identified as frequently experienced (Narayanan et al., 1999a, b). Second, these scales have been widely used by organizational scholars. For example, in their meta-analysis of workplace harassment, Bowling and Beehr (2006) noted that the ICAWS was the most often used scale in this domain (28 studies); the next most often used scale appeared in 8 studies. Likewise, in their meta-analysis of organizational constraints studies, Pindek and Spector (2016) noted that 70% of their 118 samples used the OCS. Third, these measures are frequently used together in the same study, and they tap the same domain yet assess clearly different constructs: the ICAWS assesses an interpersonal stressor, the QWI assesses a task-based stressor, and the OCS includes elements of both (constraints due to the environment and due to other people). Studying the three together will provide insight into the extent to which different scales share the same biasing factors or have unique sets. Further, it will indicate whether there is a common set of potential biasing factors within a particular domain (i.e., job stress) and will allow us to determine whether findings from one scale generalize to another.
Psychometric Properties
Because all three measures have been used frequently, a substantial literature speaks to their psychometric properties, notably reliability and evidence for construct validity.
Internal Consistency
Spector and Jex (1998) provided internal consistency estimates (coefficient alpha) for all three scales across multiple studies. Internal consistency means were 0.74 for the ICAWS (13 samples), 0.85 for the OCS (8 samples), and 0.82 for the QWI (15 samples).
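For reference, the internal consistency index reported throughout this paper, coefficient alpha, is computed for a k-item scale from the item variances $\sigma_i^2$ and the variance of the total score $\sigma_X^2$:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)$$

Higher values indicate that the items covary more strongly relative to their individual variances, which is why alpha rises with both inter-item correlation and scale length.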
Factor Structure
Several published papers reported confirmatory factor analyses (CFAs) that included one or more of these measures in combination with other measures. These papers reported the fit of measurement models in which items loaded only on their own measure, and some papers compared such models to baseline models in which all items were fit to a single underlying factor. Limiting to studies conducted in English-speaking samples, there was support for the factorial validity of the ICAWS (An et al., 2016; Cohen et al., 2013; Edwards et al., 2007; Kim & Beehr, 2018), OCS (Castille et al., 2017; Kuyumcu & Dahling, 2014), and QWI (Che et al., 2017; Chen et al., 2017; DeArmond et al., 2014; Edwards et al., 2007).
Convergence of Sources
Convergence of sources provides evidence that there is a core portion of a measure that reflects something about which people would show agreement. This is particularly important for measures of environmental features, such as job stressors, so we can show that the self-report reflects something objective about the job as indicated by consensus. It should be kept in mind, however, that other reports are not necessarily accurate (Frese & Zapf, 1988), because a coworker, supervisor, or other individual might not be fully aware of the experiences that an individual might have (Gabriel et al., 2019). For example, supervisors might be aware of conflicts employees have with them but might be unaware of conflicts with others.
For the ICAWS we located five papers (Bruk-Lee & Spector, 2006; Fox et al., 2007; Penney & Spector, 2005; Skyrme, 1992; Spector et al., 1988) that reported eight correlations linking self-reports to one or more alternate sources such as coworkers or supervisors. The mean convergence was 0.27. We located four papers that reported seven correlations for the OCS (Fox et al., 2007; Liu et al., 2010; Skyrme, 1992; Spector et al., 1988). The mean convergence was 0.29. The QWI was reported in three papers that contained five correlations (Fox et al., 2007; Skyrme, 1992; Spector et al., 1988). The mean correlation was 0.38. These findings provide evidence that there is some degree of convergence among sources, with the degree of convergence being strongest for the QWI. These results provide evidence that these scales reflect constructs that have some degree of objectivity as reflected in consensus.
Criterion-Related Validity
All three of these stressor scales have been linked to a variety of variables including attitudes, personality, and strains. Perhaps most relevant to the construct validity of stressor measures would be their relationships with strains. Spector and Jex (1998) provided a small meta-analysis showing that all three stressors related to anxiety, frustration, job satisfaction, physical health symptoms and turnover intentions. Bowling and Beehr (2006) in their meta-analysis linked the ICAWS to these variables as well as burnout, depression, life satisfaction, and organizational commitment.
Strain Outcomes
In addition to assessing the three stressors to determine potential biasing effects, we included four well-studied strain variables that represented behavioral (counterproductive work behavior), physical (physical health symptoms), and psychological (job satisfaction and turnover intentions) strains. We chose a specific set of strains because they have been frequently included in occupational stress studies and have been shown to relate to stressors. Their inclusion enabled us to see if controlling for potential bias factors through our design choices affected established stressor–strain relationships. Although our focus is on the three stressor scales and not the strain outcomes, it is possible that the strain measures are also subject to bias that can inflate observed stressor–strain correlations if shared (common method variance) and deflate stressor–strain correlations if unshared (uncommon method variance). Our results will also address this possibility, although our focus is primarily on the three stressor measures.
Current Studies
Our strategy was to conduct three studies that not only assessed potential biasing factors along with the stressor scales, but also included design features that enabled us to go beyond merely investigating static self-report relationships. In Study 1, we administered surveys twice, 2 to 3 weeks apart, using the same measures and same instructions for each survey. These surveys included the three stressor scales, four strain measures, and measures of the five potential biasing variables noted earlier: HAB, NA, negative mood, neutral objects satisfaction, and social desirability. The time lag was included to control for transient mood effects, to see whether time separation would affect stressor–strain correlations. We were not attempting to compare results at time 1 with experiences that occurred between the two time periods, so we did not instruct participants at time 2 to focus on the period since the time 1 survey. Study 2 was a randomized experiment that used a mood induction to determine the effect of transient mood on the stressor measures and their relationships with other variables. Study 3 was a multi-source field study that enabled us to compare self-reports with supervisor reports of stressors and their relationships to potential biasing factors and employee-reported strains.
Study 1 Method
Participants and Procedure
Participants were recruited from Amazon Mechanical Turk (MTurk; Buhrmester et al., 2011). The survey was available only to individuals in the USA who worked at least 20 h per week and had at least a 97% approval rating on MTurk. Initially, 403 eligible employees completed an online survey containing measures of job stressors, strains, and potential biasing factors that might affect the stressor measures. Of those, 318 participants completed the same survey approximately 2 weeks later. Participants were paid $2.00 each time they completed the survey.
We addressed quality control by following recommendations by Wood et al. (2017) to include measures of response consistency and response speed. For response consistency, we examined response patterns on the job satisfaction measure, which had one reverse-scored item. We reasoned that individuals who responded similarly to items written in opposite directions were likely not paying attention to the content of the items. We removed 42 participants who responded with consistent agreement/disagreement (agree/disagree moderately or very much) across all 3 items before reverse scoring. For response speed, we applied the Wood et al. criterion of 1 s per item (see also Aguinis et al., 2021). Only one case failed the speed criterion, and that case also failed the consistency criterion. Responses from 277 employees who had complete data from both time points and passed the consistency and speed checks were analyzed.
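The two screening rules described above can be sketched in code. This is an illustrative implementation of the logic, not the study's actual script; the function names, response codes (1–6 agreement scale), and thresholds are our own rendering of the rules reported in the text.

```python
# Sketch of the two quality-control checks: (a) uniform moderate/strong
# agreement or disagreement across ALL items of a mixed-worded scale before
# reverse scoring, and (b) average response time under 1 s per item
# (Wood et al., 2017). Names and codes are illustrative assumptions.

def fails_consistency(raw_items, agree_codes=(5, 6), disagree_codes=(1, 2)):
    """Flag a respondent who gave uniform agreement (agree moderately/very
    much) or uniform disagreement on every item of a scale containing a
    reverse-scored item -- a sign of content-blind responding."""
    return (all(r in agree_codes for r in raw_items)
            or all(r in disagree_codes for r in raw_items))

def fails_speed(total_seconds, n_items, min_sec_per_item=1.0):
    """Flag a respondent who averaged less than 1 s per survey item."""
    return total_seconds / n_items < min_sec_per_item

def keep_respondent(raw_items, total_seconds, n_survey_items):
    """Retain only respondents who pass both checks."""
    return (not fails_consistency(raw_items)
            and not fails_speed(total_seconds, n_survey_items))
```

For example, a respondent answering (6, 6, 6) on the 3-item satisfaction scale would be flagged by the consistency check, because on a mixed-worded scale genuine attitudes should produce divergent raw responses before reverse scoring.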
The 277 participants (53% male) ranged in age from 21 to 69 (M = 37.2, SD = 10.1). The sample was 9.8% Asian, 5.8% Black, 6.9% Hispanic, 74% White, and 3.6% other. The majority of the sample (56.7%) worked 40 to 49 h/week, with a quarter working 30–39 h/week. Participants held a wide variety of jobs across diverse industries, such as accountant, clerk, customer service, engineer, nurse, sales representative, and teacher. Job tenure was assessed categorically: 6.2% reported 1 year or less, 20.7% 1–3 years, 22.5% 3–5 years, 29.4% 5–10 years, 12.7% 10–15 years, and 5.1% 15 years or more.
Measures
Measures were the same for time 1 and 2. Reliability estimates noted here are from this study.
Interpersonal Conflict
Interpersonal conflict was measured with the 4-item Interpersonal Conflict at Work Scale (ICAWS; Spector & Jex, 1998). An example item is “How often are people rude to you at work?” Participants responded on a 5-point frequency scale (1 = never; 5 = very often). The measure had sufficient internal consistency reliability (time 1 and time 2 α = 0.90), and test–retest reliability (r = 0.82).
Organizational Constraints
Participants responded to the 11-item Organizational Constraints Scale (OCS; Spector & Jex, 1998). They reported the frequency with which it is difficult or impossible to perform their jobs due to constraints such as poor equipment, incorrect instructions, and inadequate training on a 5-point scale (1 = less than once per month or never; 5 = several times per day). The measure demonstrated adequate internal consistency reliability (time 1 α = 0.93; time 2 α = 0.94), and test–retest reliability (r = 0.85).
Workload
Workload was measured with the 5-item Quantitative Workload Inventory (QWI; Spector & Jex, 1998). A sample item is “How often does your job require you to work very fast?” Participants responded on a 5-point scale (1 = less than once per month or never; 5 = several times per day). The scale had acceptable internal consistency reliability (time 1 α = 0.87; time 2 α = 0.88), and test–retest reliability (r = 0.81).
Counterproductive Work Behavior
Counterproductive work behavior was measured with a 10-item counterproductive work behavior checklist (CWB-C; Spector et al., 2010). Participants rated how often they perform counterproductive work behaviors, such as coming to work late without permission, on a 5-point scale (1 = never; 5 = once or twice per month). The scale demonstrated adequate internal consistency reliability (time 1 α = 0.92; time 2 α = 0.91), and test–retest reliability (r = 0.88).
Job Satisfaction
Job satisfaction was measured with a 3-item job satisfaction subscale of the Michigan Organizational Assessment Questionnaire (MOAQ, Cammann et al., 1979). Participants responded on a 6-point scale (1 = disagree very much; 6 = agree very much). The scale demonstrated adequate internal consistency reliability (time 1 and time 2 α = 0.96), and test–retest reliability (r = 0.91).
Physical Symptoms
Participants responded to an 18-item physical symptoms inventory (PSI; Spector & Jex, 1998). They rated how often they had experienced symptoms such as headaches, fatigue, and nausea over the past 30 days on a 5-point scale (1 = not at all; 5 = every day). The scale demonstrated adequate internal consistency reliability (time 1 and time 2 α = 0.93), and test–retest reliability (r = 0.82).
Turnover Intentions
Turnover intentions were measured with an established 1-item measure: “How often have you seriously considered quitting your present job?” (Spector et al., 1988). Participants reported the frequency with which they have turnover intentions on a 6-point scale (1 = never; 6 = extremely often). The test–retest reliability was 0.81.
Hostile Attribution Bias
We administered a 7-item workplace HAB measure (Bal & O'Brien, 2010). An example item is “If people are laughing at work, I think they are laughing at me.” Participants responded on a 6-point scale (1 = strongly disagree; 6 = strongly agree). The measure demonstrated sufficient internal consistency reliability (time 1 α = 0.88; time 2 α = 0.89), and test–retest reliability (r = 0.76).
Negative Affectivity
NA was measured with a 10-item emotional stability scale (reverse-coded; Big Five Factor Markers) from the International Personality Item Pool (Goldberg et al., 2006). Participants indicated the extent to which they “worry about things,” “get easily upset,” etc. on a 5-point scale (1 = very inaccurate; 5 = very accurate). The scale had sufficient internal consistency reliability (time 1 and time 2 α = 0.94), and test–retest reliability (r = 0.92).
Negative Mood
Negative mood was measured with the five negative mood items from Mackinnon et al. (1999). Participants indicated the extent to which they currently felt emotions (e.g., distressed, upset, and afraid) on a 5-point scale (1 = very slightly or not at all; 5 = extremely). The scale had sufficient internal consistency reliability (time 1 α = 0.93; time 2 α = 0.95), and test–retest reliability (r = 0.79).
Neutral Object Satisfaction
Participants responded to 11 items from the neutral objects satisfaction questionnaire (Weitz, 1952) shortened by Judge and Bretz (1993). The scale was shortened by removing items with standardized factor loadings lower than 0.40. Participants rated their satisfaction with objects like “today’s cars” and “local newspapers” on a 3-point scale (dissatisfied, neutral, and satisfied). The measure demonstrated sufficient internal consistency reliability (time 1 and time 2 α = 0.71), and test–retest reliability (r = 0.78).
Social Desirability
Social desirability was measured with the 10-item shortened version of the Crowne and Marlowe (1960) scale (Strahan & Gerbasi, 1972). The items used a true/false format, including items such as “I like to gossip at times.” The measure demonstrated sufficient internal consistency reliability at time 1 (α = 0.78) and time 2 (α = 0.79), and test–retest reliability (r = 0.85).
Demographics
Participants were asked to specify their age in years, gender, ethnicity, weekly work hours, industry, job title, and job tenure.
Study 1 Results
Descriptive statistics (mean, standard deviation, observed and possible range) and coefficient alpha for all variables at times 1 and 2 are shown in Table 1. As can be seen, the descriptive statistics were quite close for all variables between the two time points. Also, the range of most variables spanned the entire or nearly the entire possible range. Finally, coefficient alphas were similar across time points, and all were above the generally accepted minimum of 0.70 (Nunnally & Bernstein, 1994), with all but two exceeding 0.80. Table 2 contains correlations among all variables in the study with time 1 above and time 2 below the main diagonal. Of note is that all correlations of stressors with strains are statistically significant, with correlations for conflict and constraints being considerably larger than correlations for workload.
Table 1.
Descriptive statistics for study 1
| Variable | Time | Mean | Standard deviation | Observed range | Possible range | Coefficient alpha |
|---|---|---|---|---|---|---|
| Interpersonal conflict | 1 | 6.3 | 3.0 | 4–18 | 4–20 | .90 |
| | 2 | 6.5 | 3.1 | 4–18 | 4–20 | .90 |
| Organizational constraints | 1 | 20.4 | 8.6 | 11–51 | 11–55 | .93 |
| | 2 | 20.6 | 8.9 | 11–52 | 11–55 | .94 |
| Workload | 1 | 16.0 | 4.9 | 5–25 | 5–25 | .87 |
| | 2 | 16.0 | 5.0 | 5–25 | 5–25 | .88 |
| CWB | 1 | 16.0 | 6.6 | 10–44 | 10–50 | .92 |
| | 2 | 15.8 | 6.3 | 10–41 | 10–50 | .91 |
| Job satisfaction | 1 | 13.5 | 4.1 | 3–18 | 3–18 | .96 |
| | 2 | 13.5 | 4.1 | 3–18 | 3–18 | .96 |
| Physical symptoms | 1 | 29.7 | 10.9 | 18–81 | 18–90 | .93 |
| | 2 | 29.2 | 10.5 | 18–76 | 18–90 | .93 |
| Turnover intentions | 1 | 2.6 | 1.4 | 1–6 | 1–6 | na |
| | 2 | 2.7 | 1.4 | 1–6 | 1–6 | na |
| Hostile attribution bias | 1 | 18.4 | 8.7 | 7–45 | 7–49 | .88 |
| | 2 | 18.2 | 9.1 | 7–43 | 7–49 | .89 |
| Negative affectivity | 1 | 24.0 | 10.2 | 10–50 | 10–50 | .94 |
| | 2 | 23.9 | 9.9 | 10–50 | 10–50 | .94 |
| Negative mood | 1 | 7.1 | 4.0 | 5–25 | 5–25 | .93 |
| | 2 | 6.8 | 3.8 | 5–25 | 5–25 | .95 |
| Neutral object satisfaction | 1 | 25.4 | 4.1 | 15–33 | 11–33 | .71 |
| | 2 | 25.3 | 4.1 | 13–33 | 11–33 | .71 |
| Social desirability | 1 | 15.4 | 2.8 | 10–20 | 10–20 | .78 |
| | 2 | 15.3 | 2.8 | 10–20 | 10–20 | .79 |
n = 277
Table 2.
Correlations among study 1 variables (n = 277, time 1 above, time 2 below main diagonal)
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 Conflict | .82 | .71 | .25 | .75 | − .37 | .66 | .46 | .62 | .39 | .51 | − .09 | − .20 |
| 2 Constraint | .65 | .85 | .36 | .73 | − .49 | .68 | .58 | .62 | .47 | .43 | − .26 | − .18 |
| 3 Workload | .20 | .36 | .81 | .15 | − .19 | .24 | .18 | .23 | .14 | .09 | − .10 | − .04 |
| 4 CWB | .71 | .70 | .14 | .88 | − .37 | .65 | .43 | .62 | .44 | .56 | − .14 | − .28 |
| 5 Job sat | − .31 | − .52 | − .21 | − .40 | .91 | − .31 | − .77 | − .30 | − .47 | − .13 | .43 | .25 |
| 6 Symptoms | .61 | .58 | .26 | .55 | − .25 | .82 | .45 | .53 | .53 | .62 | − .20 | − .14 |
| 7 Intent | .37 | .59 | .21 | .48 | − .76 | .32 | .81 | .34 | .39 | .23 | − .25 | − .24 |
| 8 HAB | .59 | .57 | .13 | .55 | − .31 | .51 | .35 | .75 | .48 | .49 | − .14 | − .23 |
| 9 NA | .33 | .41 | .16 | .44 | − .44 | .50 | .34 | .42 | .92 | .44 | − .34 | − .30 |
| 10 Mood | .43 | .39 | .07 | .48 | − .13 | .60 | .18 | .40 | .36 | .79 | − .15 | − .03 |
| 11 Neutral | − .05 | − .23 | − .12 | − .12 | .36 | − .18 | − .19 | − .11 | − .28 | − .09 | .78 | .13 |
| 12 Soc des | − .13 | − .16 | − .09 | − .26 | .30 | − .13 | − .26 | − .16 | − .33 | .03 | .17 | .85 |
p < .05 at r > .11; main diagonal contains test–retest reliabilities (time 1–time 2 correlations). n = 277. Job sat = job satisfaction, HAB = hostile attribution bias, Soc des = social desirability
A central concern for this study is the relationship of the three stressor scales with the five measures of potential biasing factors. The conflict scale correlated most strongly with HAB (r = 0.59, p < 0.05), with somewhat lower correlations with NA (r = 0.33, p < 0.05) and negative mood (r = 0.43, p < 0.05). The ICAWS correlated only − 0.13 (p < 0.05) with social desirability, and it did not correlate significantly with neutral objects satisfaction (r = − 0.05, p > 0.05). The OCS had a similar pattern of correlations (i.e., HAB: r = 0.56, p < 0.05; NA: r = 0.41, p < 0.05; negative mood: r = 0.39, p < 0.05; social desirability: r = − 0.16, p < 0.05), except that it related significantly with neutral objects satisfaction (r = − 0.23, p < 0.05). The pattern for workload was somewhat different. The QWI correlated significantly with only HAB (r = 0.13, p < 0.05), NA (r = 0.16, p < 0.05), and neutral objects satisfaction (r = − 0.12, p < 0.05), and these correlations were quite small. The correlations with social desirability (r = − 0.09) and negative mood (r = 0.07) were not statistically significant.
The table also shows correlations of the five potential biasing factors with the four strain variables. As with the stressor measures, patterns differed for the strain measures. At both time points all correlations were statistically significant, although some were quite small and close to the 0.11 threshold for statistical significance. CWB and physical symptoms showed patterns of correlation with bias factors quite similar to the ICAWS and OCS. There were quite robust correlations for HAB, NA and mood, and modest correlations for neutral objects satisfaction and social desirability. Job satisfaction and turnover intentions had smaller correlations with HAB and mood. Job satisfaction had a larger correlation (r = 0.43) with neutral objects satisfaction than did any stressor or other strain—not surprising since both reflect satisfaction.
Also of interest are correlations among the biasing factor measures themselves. Is it possible, for example, that measures of NA or mood are themselves influenced by social desirability? Many of the correlations among biasing factor measures are statistically significant, with the largest being among HAB, NA, and mood. Eschleman and Bowling (2011) noted that affective disposition measures asking about negative feelings should be more subject to social desirability bias than the neutral objects satisfaction scale, which asks about neutral objects, and that is exactly what we found. The correlations of social desirability with NA and neutral objects satisfaction were − 0.33 vs. 0.17 at time 1 and − 0.30 vs. 0.13 at time 2, respectively. In absolute value, both pairs differed significantly according to the Hotelling t-test for dependent correlations with the Williams (1959) correction (t(274) = 2.33, p < 0.05 at time 1; t(274) = 2.57, p < 0.05 at time 2).
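The dependent-correlation comparison above uses Williams' (1959) t statistic for two correlations that share a common variable (here, the correlations of social desirability with NA and with neutral objects satisfaction). A minimal sketch of that computation in Python (the function name `williams_t` and the example inputs are ours, for illustration only):

```python
import math

def williams_t(r12, r13, r23, n):
    """Williams (1959) t for H0: rho12 = rho13, where variables 2 and 3
    each correlate with a common variable 1; degrees of freedom = n - 3."""
    # Determinant of the 3x3 correlation matrix
    det_r = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    r_bar = (r12 + r13) / 2
    numerator = (r12 - r13) * math.sqrt((n - 1) * (1 + r23))
    denominator = math.sqrt(
        2 * det_r * (n - 1) / (n - 3) + r_bar**2 * (1 - r23) ** 3
    )
    return numerator / denominator, n - 3

# Illustrative call: with n = 277 the degrees of freedom are 274,
# matching the t(274) values reported in the text.
t, df = williams_t(0.50, 0.50, 0.30, 277)  # equal correlations give t = 0
```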
We next compared cross-sectional with lagged correlations to see whether introducing time into the design, thereby controlling for transitory mood and other transient factors, would influence stressor–strain correlations (see Table 3). We averaged the two corresponding cross-sectional and the two corresponding lagged correlations before comparison. In all but two cases the mean cross-sectional correlations were larger than the corresponding mean lagged correlations, but the differences were quite small: 0.02 for conflict, 0.03 for constraints, and 0.01 for workload.
Table 3.
Comparison of cross-sectional and lagged correlations in study 1
| Strain | Time | Interpersonal conflict (cross-sectional) | Interpersonal conflict (lagged) | Organizational constraints (cross-sectional) | Organizational constraints (lagged) | Workload (cross-sectional) | Workload (lagged) |
|---|---|---|---|---|---|---|---|
| CWB | Time 1 | .75 | .69 | .73 | .67 | .15 | .13 |
| CWB | Time 2 | .71 | .70 | .70 | .67 | .14 | .15 |
| CWB | Mean | .73 | .695 | .715 | .67 | .145 | .14 |
| Job satisfaction | Time 1 | −.37 | −.33 | −.49 | −.49 | −.19 | −.18 |
| Job satisfaction | Time 2 | −.31 | −.32 | −.52 | −.51 | −.21 | −.22 |
| Job satisfaction | Mean | −.34 | −.325 | −.505 | −.50 | −.20 | −.20 |
| Symptoms | Time 1 | .66 | .54 | .68 | .56 | .24 | .23 |
| Symptoms | Time 2 | .61 | .66 | .58 | .60 | .26 | .24 |
| Symptoms | Mean | .635 | .60 | .63 | .58 | .25 | .235 |
| Turnover intention | Time 1 | .46 | .40 | .58 | .54 | .18 | .19 |
| Turnover intention | Time 2 | .37 | .41 | .59 | .59 | .21 | .21 |
| Turnover intention | Mean | .415 | .405 | .585 | .565 | .195 | .20 |
All correlations significant at p < .05. Mean differences between cross-sectional and lagged correlations are as follows: interpersonal conflict = .024; organizational constraints = .03; workload = .006; n = 277
Study 1 Discussion
The main purpose of study 1 was to explore relationships between each stressor scale and five potential biasing factors to determine whether the same pattern existed across the stressor measures. Failing to find significant correlations between potential biasing factors and these measures is evidence to rule them out as extraneous influences on assessment, assuming each biasing factor measure reflects what is intended. Finding a significant correlation leaves open the possibility of bias, but it must be acknowledged that mere correlation does not establish that a variable is a biasing factor. It is possible that the assumed biasing factor has a substantive effect. For example, exposure to stressors might adversely affect mood, or those who are high on NA might be more subject to objective stressors (Spector et al., 2000). Overall, results were similar for the ICAWS and OCS, with fairly strong correlations for HAB, NA, and negative mood, leaving them as potential biasing factors. Relationships with neutral objects satisfaction and social desirability were quite small, in most cases below 0.20, suggesting little or no room for bias. Results with the QWI were quite different. Correlations with the five potential biasing factors were quite small (all but one less than 0.20), and half were not statistically significant, suggesting that the QWI is not likely subject to these biasing factors. As noted earlier, prior multi-source studies have shown that convergence between self- and other-reports of workload assessed with the QWI is larger than for the other two stressor measures, suggesting the QWI reflects more of the objective environment than do the other two scales. This might leave less room for biasing factors.
Examining the strain outcomes revealed the potential for biasing factors to affect all four scales, although more so for CWB and physical symptoms than for job satisfaction or turnover intentions. What is not clear from mere correlations is whether these relationships reflect biasing or substantive effects. Are the observed stressor–strain correlations inflated by common method variance due to these shared biasing factors, or do they reflect that the underlying constructs reflected in the biasing factor measures are part of the stressor–strain process? We need to go beyond static correlations in order to provide more conclusive evidence.
To do just that, we utilized a two-wave design to investigate transient mood and other occasion factors. A comparison of corresponding contemporaneous and lagged correlations found only slight differences. Differences of this magnitude likely reflect real week-to-week variation in work experiences that might be reflected in how people responded. For example, if the frequency of conflicts at work increased appreciably between the two time points, or if a coworker who was a source of interpersonal friction left the organization, ratings of conflict would likely change. These results cast doubt on momentary mood being an important biasing factor for these stressor measures, although they do not rule out the possibility that a more enduring mood persisted over the 2-week period (note that the test–retest correlation for our negative mood measure was 0.79). These results also call into question the efficacy of using a short time lag between the assessment of predictor and criterion variables, as has become a fairly common practice, to control for biasing factors. Such a lag might account for short-term transitory states, but based on the results here, those are not likely to be major biasing factors.
Study 2
A limitation of study 1 was that its design did not permit us to answer questions about the reasons for the observed relationship patterns. We showed that the stressor measures related to some of the potential biasing factors, but we could not separate biasing from substantive effects. For example, do individuals who are in a bad mood over-report interpersonal conflict (biasing explanation), or are individuals in bad moods more likely to provoke interpersonal conflicts (substantive explanation)? To address this limitation, we conducted study 2, which utilized a randomized experimental design to investigate the potential effect of mood on the ICAWS, OCS, and QWI, as well as on their relationships with strains. We randomly assigned employed participants to one of three conditions: negative mood induction, positive mood induction, or a no-induction control. The control condition had no manipulation and served as a baseline in which participants completed the survey under typical conditions, one scale after the other with no distractor task. Because our focus was on the stressor measures, we assessed stressors after the mood induction to see whether manipulating mood would affect their level, that is, whether stressor reports would be higher when mood was more negative and lower when mood was more positive. Because we wanted participants to be in different mood states when they completed the stressor and strain measures, we assessed strains before the mood induction.
If mood affects reports of stressors and strains and serves as a biasing factor, it should inflate correlations when both are assessed contemporaneously, while mood is the same for both sets of measures. Separating the stressor and strain assessments with the mood induction would mean that mood at the time of stressor assessment differed from mood at the time of strain assessment; the induction would be expected to change participants' mood between completing the strain measures and completing the stressor measures. This should result in reduced correlations in the induction conditions compared to the control condition, where people presumably remained in the same mood while completing all measures.
Study 2 Method
Participants
Participants were recruited from Amazon Mechanical Turk (MTurk; Buhrmester et al., 2011). Eligibility criteria included working at least 10 h per week in the USA and having a 100% response approval rating on MTurk. Participants were paid $2.00 for completing the survey.
Initially, 471 eligible employees completed an online survey containing an open-ended response item. Of the 471, 15 employees were removed because they provided low-quality written responses to the mood induction (i.e., gibberish or off-topic). An additional 17 employees were deleted for failing the same response consistency check as in study 1 (responding similarly to oppositely worded job satisfaction items). None of the participants who provided high-quality written responses failed the speed check. In total, responses from 439 were analyzed for the experimental study, with 145 in the negative mood condition, 143 in the positive mood condition, and 151 in the control condition. The 439 participants were 45% male and ranged in age from 18 to 68 (M = 32, SD = 9.7). The sample was 79.3% White, 9.1% Black, 9.1% Asian, and 7.7% Hispanic (percentages sum to more than 100% as 58 employees indicated two categories). Most of the employees (58%) worked 30 to 40 h per week, with 31% working 20 to 29 h/week. Participants’ job tenure averaged 4.9 years (SD = 4.4 years) and ranged from 1.1 to 29.5 years. As with study 1, the sample held a wide variety of jobs across industries.
Measures
Stressors and Strains
As in study 1, we included the ICAWS, OCS, and QWI as the stressor measures. We also included the same measures of CWB, job satisfaction, physical symptoms, and turnover intentions.
Mood
Twenty-two items were used to measure participants’ mood. Twenty items were taken from the Positive and Negative Affect Schedule (PANAS; Watson et al., 1988), and two items were added to capture happiness and sadness. Participants indicated the extent to which they currently felt emotions such as excited, distressed, and proud on a 5-point scale (1 = very slightly or not at all; 5 = extremely). The positive and negative mood subscales each demonstrated sufficient internal consistency reliability (see Table 5).
Table 5.
Correlations of stressors with strains by condition for study 2
| Variable | Conditiona | Interpersonal conflict | Organizational constraints | Workloadb | Observed range | Possible range | Coefficient alpha |
|---|---|---|---|---|---|---|---|
| CWB | N | .57* | .46* | .07 | 10–41 | 10–50 | .86 |
| CWB | P | .62* | .49* | .12 | 10–42 | 10–50 | .86 |
| CWB | C | .52* | .45* | .29* | 10–31 | 10–50 | .76 |
| Job satisfaction | N | −.41* | −.48* | −.30*AB | 3–18 | 3–18 | .94 |
| Job satisfaction | P | −.42* | −.38* | −.13A | 3–18 | 3–18 | .95 |
| Job satisfaction | C | −.32* | −.46* | −.48*B | 3–18 | 3–18 | .94 |
| Physical symptoms | N | .36* | .37* | .23* | 18–68 | 18–90 | .84 |
| Physical symptoms | P | .47* | .34* | .15 | 18–69 | 18–90 | .88 |
| Physical symptoms | C | .30* | .46* | .30* | 18–61 | 18–90 | .83 |
| Turnover intentions | N | .43* | .43* | .23*AB | 1–6 | 1–6 | na |
| Turnover intentions | P | .42* | .33* | .16A | 1–6 | 1–6 | na |
| Turnover intentions | C | .34* | .43* | .40*B | 1–6 | 1–6 | na |
aN = negative mood induction (n = 145), P = positive mood induction (n = 143), C = control group, no mood induction (n = 151)
bCorrelations with the same superscript are not significantly different according to z-tests for comparison of independent correlations
*p < .05
Procedure
Participants were randomly assigned to one of three conditions: positive mood, negative mood, and no mood induction as a control. Participants in all conditions took an online survey in which they responded to the measures specified above. First, all participants responded to the strain measures (e.g., turnover intentions, physical symptoms). Then the participants in the positive and negative mood conditions completed a mood manipulation task adapted from Gunn and Finn (2015). Participants in the positive mood condition were asked to “Please take the next 5 min to recall an especially positive recent event in your life and reflect on it. Take some time to write down as many details as possible, and really try to place yourself in the context of the event.” Participants in the negative mood condition were given the same instructions, except they were asked to recall a negative event rather than a positive event. This method has been shown to effectively induce moods in individual studies (Bless et al., 1996; Krauth-Gruber & Ric, 2000) and a meta-analysis (Westermann et al., 1996). Participants in the control condition were not given a mood induction task but completed all measures in the same order as the other two conditions. We chose not to give them a control task because we wanted to compare the induction conditions to the standard method of conducting a survey that did not include a distractor task in the middle. Next, all participants responded to the three stressor measures as we were interested in the effects of the induction manipulation on these measures. Finally, all participants reported their current mood as a manipulation check.
Study 2 Results
We first compared means to determine whether the mood induction manipulation affected responses to the surveys. Table 4 shows means, standard deviations, observed and possible ranges, and coefficient alphas by treatment condition. It also shows results of one-way between-subjects analyses of variance (ANOVA). In the fourth column we show the effect sizes (R²), with an asterisk indicating that the overall effect was statistically significant. The only significant difference among conditions was for positive mood. As shown at the bottom of the table, the means for positive mood came out in the expected order. The superscripts indicate the results of Duncan post hoc tests comparing individual means, with the same letters indicating means that were not significantly different. The positive mood induction condition had the highest positive mood score, followed by the control group, although these two means were not significantly different (both have superscript A). These were followed by the negative mood condition, which was significantly lower than the other two conditions (superscript B). Thus, we can conclude that the negative mood induction resulted in a significant reduction in positive mood compared to the control and positive mood induction conditions. Although the effects of the mood induction conditions on negative mood were nonsignificant, the means lined up as expected: negative mood was highest in the negative induction condition and lowest in the positive induction condition.
Table 4.
Descriptive statistics and mean comparisons for study 2
| Variable | Conditiona | Meanb | R² | Standard deviation | Observed range | Possible range | Coefficient alpha |
|---|---|---|---|---|---|---|---|
| Interpersonal conflict | N | 7.0 | .00 | 3.1 | 4–16 | 4–20 | .83 |
| Interpersonal conflict | P | 6.8 | | 3.2 | 4–20 | 4–20 | .86 |
| Interpersonal conflict | C | 6.6 | | 2.7 | 4–14 | 4–20 | .81 |
| Organizational constraints | N | 24.6 | .00 | 10.0 | 11–50 | 11–55 | .91 |
| Organizational constraints | P | 24.7 | | 9.7 | 11–53 | 11–55 | .92 |
| Organizational constraints | C | 23.8 | | 10.1 | 11–55 | 11–55 | .91 |
| Workload | N | 17.6 | .01 | 5.3 | 6–25 | 5–25 | .87 |
| Workload | P | 17.6 | | 4.8 | 5–25 | 5–25 | .84 |
| Workload | C | 16.6 | | 4.9 | 6–25 | 5–25 | .84 |
| CWB | N | 15.2 | .00 | 5.7 | 10–41 | 10–50 | .86 |
| CWB | P | 15.1 | | 5.5 | 10–42 | 10–50 | .86 |
| CWB | C | 15.0 | | 4.3 | 10–31 | 10–50 | .76 |
| Job satisfaction | N | 12.7 | .00 | 3.8 | 3–18 | 3–18 | .94 |
| Job satisfaction | P | 12.8 | | 3.8 | 3–18 | 3–18 | .95 |
| Job satisfaction | C | 12.6 | | 3.8 | 3–18 | 3–18 | .94 |
| Physical symptoms | N | 30.9 | .00 | 8.1 | 18–68 | 18–90 | .84 |
| Physical symptoms | P | 30.8 | | 9.1 | 18–69 | 18–90 | .88 |
| Physical symptoms | C | 31.3 | | 7.8 | 18–61 | 18–90 | .83 |
| Turnover intentions | N | 3.0 | .01 | 1.4 | 1–6 | 1–6 | na |
| Turnover intentions | P | 2.9 | | 1.2 | 1–6 | 1–6 | na |
| Turnover intentions | C | 3.1 | | 1.4 | 1–6 | 1–6 | na |
| Negative mood | N | 17.8 | .00 | 8.8 | 11–48 | 11–55 | .94 |
| Negative mood | P | 16.8 | | 7.4 | 11–39 | 11–55 | .91 |
| Negative mood | C | 17.4 | | 6.9 | 11–44 | 11–55 | .91 |
| Positive mood | N | 27.6B | .03* | 9.8 | 11–55 | 11–55 | .93 |
| Positive mood | P | 31.7A | | 11.2 | 12–55 | 11–55 | .94 |
| Positive mood | C | 29.1A | | 10.2 | 11–55 | 11–55 | .93 |
na = not applicable due to single-item measure
*p < .05
aN = negative mood induction (n = 145), P = positive mood induction (n = 143), C = control group, no mood induction (n = 151)
bMeans with the same superscript are not significantly different according to Duncan’s post hoc test for mean comparisons
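The R² effect sizes in Table 4 are eta squared from the one-way ANOVA: the between-condition sum of squares divided by the total sum of squares. A minimal sketch of that computation (the function name and the data are illustrative, not from the study):

```python
def eta_squared(groups):
    """One-way ANOVA effect size (R^2 / eta squared): between-group
    sum of squares divided by total sum of squares."""
    scores = [x for group in groups for x in group]
    grand_mean = sum(scores) / len(scores)
    ss_total = sum((x - grand_mean) ** 2 for x in scores)
    ss_between = sum(
        len(group) * (sum(group) / len(group) - grand_mean) ** 2
        for group in groups
    )
    return ss_between / ss_total

# Illustrative: three conditions with identical means give R^2 = 0.
print(eta_squared([[4, 6], [5, 5], [3, 7]]))  # prints 0.0
```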
The next step was to compare stressor–strain correlations among the three treatment conditions (see Table 5). We included the two mood measures along with the strains. To compare correlation pairs, we used z-tests for independent correlations. The ICAWS and OCS showed very similar patterns of correlations with strains: every correlation was statistically significant, and there were no significant differences among the corresponding correlations across treatment conditions. For the QWI, however, we found significant differences for job satisfaction and turnover intentions, the two attitudinal variables. Although not all differences were statistically significant, the patterns were consistent in that the control group correlation was the largest among the three conditions, a pattern consistent with a bias interpretation.
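The z-test for independent correlations used in these comparisons applies the Fisher r-to-z transformation to each correlation and divides the difference by its standard error. A sketch (the function name is ours), shown with the QWI–job satisfaction correlations for the positive induction (r = −.13, n = 143) and control (r = −.48, n = 151) conditions from Table 5:

```python
import math

def independent_r_ztest(r1, n1, r2, n2):
    """z statistic for H0: rho1 = rho2 across two independent samples,
    using the Fisher r-to-z transformation (atanh)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# QWI-job satisfaction correlations, positive induction vs. control:
z = independent_r_ztest(-0.13, 143, -0.48, 151)
# |z| exceeds 1.96, consistent with the significant difference in Table 5.
```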
Study 2 Discussion
The goal of study 2 was to test whether manipulating moods would affect responses to the three stressor measures and whether that would affect correlations with other measures. The mood induction had a significant impact on positive mood, with the means lining up as expected: positive mood was highest in the positive induction condition and lowest in the negative induction condition, although only the negative induction condition differed significantly from the other two. The manipulation failed to show a statistically significant effect on negative mood scores, although the trend was as expected, with the negative induction condition having a higher mean than the positive induction condition and the control condition in between. Thus, the mood induction seemed to have a larger impact on positive than on negative mood.
Clearly the mood induction failed to demonstrate an effect on means for the three stressor measures. If mood was affecting judgments and perceptions of stressors, we would expect to find that when a negative mood was induced, employees would report higher stressors and strains, and when a positive mood was induced, they would report lower stressors and strains. For none of the stressors or strains was there a significant effect of the mood induction manipulation, and the means failed to show the expected pattern. For the ICAWS, the negative mood induction condition was higher than the positive mood induction condition, but for the OCS, it was lower, and for the QWI both means were the same within rounding error.
For the ICAWS and OCS, the stressor–strain correlations were all consistent across conditions, as there were no statistically significant differences found. This was not the case for the QWI where statistically significant differences were found for two of four comparisons. Further, the pattern across all four strains was consistent with a biasing factor prediction in that manipulating mood led to a reduction in the correlation between the QWI and strains.
Study 3
To this point, we have used a two-wave study and a mood-induction experiment to investigate five potential biasing factors for three stressor scales. We were able to rule out neutral objects satisfaction and social desirability as potential influences on the three stressors, as they had little relationship with them. Their relationships with the strain variables, however, were less consistent, with correlations with job satisfaction as high as 0.43 for neutral objects satisfaction and 0.30 for social desirability. This leaves open the possibility that job satisfaction is biased by a general tendency to report being satisfied and by social desirability. Of course, it is also possible that the two biasing factors represent personality differences that lead to job satisfaction; there is a large literature suggesting a dispositional influence on job satisfaction (Staw & Cohen-Charash, 2005). Study 3 was designed to further investigate the two potential biasing factors with the most potential to influence the three stressor measures: HAB and NA. We conducted a multi-source study with a sample of public sector employee-supervisor dyads. Employees completed an overlapping set of the measures used in the first two studies, and their supervisors completed the measures of the three stressors and CWB. For study 3, we chose a new population in which it would be more feasible to recruit dyads than with an online panel.
Study 3 Method
Participants and Procedure
Participants were 161 support staff and their immediate supervisors from two public universities in the southeastern USA. All worked full-time, and most were working remotely due to the COVID-19 pandemic, which was ongoing at the time of the study. The employee sample was mostly female (88%), with a mean age of 43.4 years (SD = 13.6) and a median job tenure in the 1–5 year category. The supervisor sample was 70% female, with a mean age of 49.5 years (SD = 12.4) and a median tenure in the 6–10 year category.
Participants were recruited via their university email addresses, using lists acquired through public records requests to each university. Invitations were sent to employees, who were asked to complete a survey and to send a matching invitation to their supervisors. Each dyad was asked to provide a self-generated identification code so that their responses could be matched anonymously. Based on these codes, we determined that approximately six supervisors provided data on two employees, so there were 161 unique employees and approximately 155 unique supervisors in the sample.
Six cases were dropped for excessive missing data (more than two items), mostly from supervisors. An additional three cases had one missing item, and one case had two missing items. One missing QWI item was imputed with the scale midpoint of three, and three missing CWB-C items were replaced with ones.
All scales were the same as in study 1, with the following exceptions. The 13-item version of the PSI was substituted for the 18-item version to reduce the length of the survey, and the measures of mood, neutral objects satisfaction, and social desirability were excluded. Employees completed all measures, whereas their immediate supervisors completed only the stressor and CWB scales, reporting on the employee’s job and behavior rather than their own. The data reported here come from a larger study.
Study 3 Results and Discussion
Descriptive statistics (means, standard deviations, observed and possible ranges) and coefficient alphas are shown in Table 6. Table 7 contains the correlations among all variables in study 3. The first concern is the convergent validity of the three stressor measures. As can be seen (bolded values), all three stressor measures, as well as the CWB measure, showed significant correlations between employee and supervisor reports. This convergence is evidence that, for each of the three stressor scales and CWB, there is a common source of variance between employees and supervisors, meaning that some of the same factors influence both. Although we would like to conclude that the common element is an objective feature of the job for the stressors, and the employee’s actual behavior for CWB, that is just one possibility. The possibility that common biasing factors affect the judgments and reports of both employees and their immediate supervisors cannot be dismissed out of hand.
Table 6.
Descriptive statistics for study 3
| Variable | Mean | Standard deviation | Observed range | Possible range | Coefficient alpha |
|---|---|---|---|---|---|
| Employee | |||||
| Conflict | 5.5 | 1.7 | 4–15 | 4–20 | .72 |
| Constraints | 15.8 | 5.6 | 11–37 | 11–55 | .93 |
| Workload | 13.4 | 4.9 | 5–25 | 5–25 | .88 |
| CWB | 11.9 | 2.5 | 10–22 | 10–50 | .74 |
| Job satisfaction | 15.7 | 3.1 | 4–18 | 3–18 | .88 |
| Physical symptoms | 22.9 | 7.9 | 13–59 | 18–90 | .88 |
| Turnover intentions | 2.1 | 1.3 | 1–6 | 1–6 | na |
| Hostile attribution bias | 13.2 | 6.0 | 7–39 | 7–49 | .84 |
| Negative affectivity | 21.5 | 7.6 | 10–48 | 10–50 | .89 |
| Supervisor | |||||
| Conflict | 5.6 | 1.6 | 4–11 | 4–20 | .67 |
| Constraints | 18.1 | 5.6 | 11–39 | 11–55 | .86 |
| Workload | 13.8 | 4.4 | 5–25 | 5–25 | .82 |
| CWB | 11.2 | 1.9 | 10–19 | 10–50 | .71 |
n = 161
Table 7.
Correlations among study 3 variables
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Employee | ||||||||||||
| 1 Conflict | ||||||||||||
| 2 Constraints | .28* | |||||||||||
| 3 Workload | .15 | .52* | ||||||||||
| 4 CWB | .25* | .50* | .06 | |||||||||
| 5 Job sat | − .34* | − .61* | − .26* | − .42* | ||||||||
| 6 Symptoms | .12 | .30* | .05 | .39* | − .25* | |||||||
| 7 Intent | .33* | .63* | .30* | .45* | − .80* | .23* | ||||||
| 8 HAB | .34* | .27* | .14 | .31* | − .31* | .26* | .35* | |||||
| 9 NA | .12 | .37* | .03 | .44* | − .33* | .48* | .25* | .40* | ||||
| Supervisor | ||||||||||||
| 10 Conflict | .39* | .00 | .02 | .02 | − .01 | .03 | − .01 | .11 | − .04 | |||
| 11 Constraints | .17* | .26* | .33* | .16* | − .22* | .01 | .29* | .25* | .03 | .25* | ||
| 12 Workload | .09 | .09 | .36* | .02 | − .12 | − .02 | .10 | .09 | − .05 | .22* | .48* | |
| 13 CWB | .26* | .20* | − .02 | .24* | − .19* | .13 | .38* | .26* | .01 | .23* | .16* | − .16* |
*p < .05; n = 161. Convergent validities are bolded
Job sat = job satisfaction, HAB = hostile attribution bias, NA = negative affectivity (trait)
We next examined relationships between the two potential biasing factors (HAB and NA) and both employee- and supervisor-reported stressors and CWB. For workload, neither biasing factor related significantly for either source, so bias is ruled out by these findings. For the ICAWS, there was a significant relationship with HAB but not NA for self-reports, whereas neither potential biasing factor was significant for supervisor reports. The HAB correlation for employee reports (r = 0.34) was significantly higher than the corresponding nonsignificant correlation for supervisor reports (r = 0.11) according to a Hotelling t-test with the Williams (1959) correction for dependent correlations (t(158) = 2.77, p < 0.05). Thus, for conflict, we can rule out NA but not HAB as a potential biasing factor. For organizational constraints, the HAB correlation was almost identical regardless of source (r = 0.27 vs. 0.25 for employee and supervisor reports, respectively), and these values are not significantly different. These results suggest that HAB relates to an objective feature of the job; it might well be that individuals higher on HAB are in jobs with greater constraints. The situation for NA is different: there was a significant relationship of NA with constraints reported by employees but not by supervisors, and the two correlations (0.37 and 0.03, respectively) are significantly different (t(158) = 3.77, p < 0.05). This leaves open the possibility that employee reports of organizational constraints are biased by NA. Of course, it is also possible that supervisors have only a rough idea of their employees’ constraints, attenuating the relationship between supervisor-reported constraints and employee NA.
CWB showed a pattern similar to organizational constraints. The correlations between HAB and the two sources of CWB reports (r = 0.31 vs. 0.26) did not differ significantly. However, NA correlated significantly with employee-reported CWB (r = 0.44) but not supervisor-reported CWB (r = 0.01). Thus, these results rule out HAB but not NA as a potential biasing factor for employee reports of CWB. Further, because CWB and the OCS share this potential source of bias, it is possible that NA acts as common method variance, inflating the OCS–CWB correlation.
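The dependent-correlation comparisons above can be sketched in code. The following is a minimal implementation of Hotelling’s test with the Williams (1959) modification for two correlations that share a variable; the example values in the usage note are illustrative only, since reproducing the reported t(158) = 2.77 would also require the unreported correlation between employee and supervisor reports of the stressor.

```python
import math

def williams_t(r13, r23, r12, n):
    """Williams (1959) t-test for two dependent correlations that share
    variable 1 (e.g., HAB with employee- vs. supervisor-reported conflict).
    r13, r23: the two correlations being compared; r12: the correlation
    between the two non-shared variables; n: sample size.
    Returns t with n - 3 degrees of freedom."""
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23  # |R|
    rbar = (r13 + r23) / 2
    num = (r13 - r23) * math.sqrt((n - 1) * (1 + r12))
    den = math.sqrt(2 * ((n - 1) / (n - 3)) * det
                    + rbar**2 * (1 - r12)**3)
    return num / den
```

For instance, with hypothetical inputs `williams_t(0.5, 0.3, 0.2, 100)` yields t ≈ 1.80 on 97 degrees of freedom; when the two correlations are equal, t is exactly 0.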
General Discussion
According to Spector et al.’s (2019) measure-centric perspective, biasing factors are unique to measures, and there is no overriding method variance affecting all measures that use the same method, such as self-report. This counters the conventional wisdom that method variance is endemic to the method, such that all measures using the same method are affected by the same biasing factors. Rather, biasing factors may be idiosyncratic to individual measures. We applied this idea to three popular job stressor measures to see if we could find evidence that they do not all share the same potential biasing factors. Results of three studies using different methods support the measure-centric perspective, indicating that these three measures, all reflecting stressful job conditions, do not have the same pattern of relationships with the five potential biasing factors investigated.
Study 1 served as an initial test of whether the potential biasing factors related to each stressor scale; a failure to relate suggests no bias. The repeated measurement allowed us to check for transient mood effects. Study 2 explored mood effects further using an experimental approach. Study 3 used a multisource strategy to separate objective from subjective components of self-reports and to test whether the biasing factors (in this case, only HAB and NA) relate to objective stressors.
Pulling together results across the three studies, we can see different patterns across the three stressors. Table 8 summarizes these findings and their implications for each of the five potential biasing factors with each of the three stressor scales. Based on the lack of consistent relationships, we were able to rule out three potential biasing factors (mood, neutral objects satisfaction, and social desirability) as unlikely to bias these measures. That left HAB and NA as potential biasing factors for the ICAWS and OCS. But merely being related is not sufficient evidence of a biasing effect. We therefore relied on the multisource Study 3, which used supervisor reports, to see whether the two potential biasing factors that related to self-reports of the two stressors would also relate to the corresponding supervisor reports. In two cases, we found significant differences that support a biasing effect. That is, we have evidence that HAB may be a biasing factor for the ICAWS and NA may be a biasing factor for the OCS.
Table 8.
Summary of conclusions and implications from three studies
| Bias | Stressor | Study 1 | Study 2 | Study 3 | Implications |
|---|---|---|---|---|---|
| HAB | ICAWS | Yes | – | Yes | ICAWS potentially biased by HAB |
| | OCS | Yes | – | No | Relationship of OCS with HAB likely due to objective stressors |
| | QWI | No | – | No | Workload not biased by HAB |
| NA | ICAWS | Yes | – | No | Conflicting evidence that ICAWS is related to NA. Inconclusive.* |
| | OCS | Yes | – | Yes | OCS potentially biased by NA |
| | QWI | No | – | No | Workload not biased by NA |
| Mood | ICAWS | No | No | – | ICAWS not biased by mood |
| | OCS | No | No | – | OCS not biased by mood |
| | QWI | No | Yes | – | Conflicting evidence that QWI biased by mood. Inconclusive |
| NOS | ICAWS | No | – | – | No impact of NOS on ICAWS |
| | OCS | No | – | – | No impact of NOS on OCS |
| | QWI | No | – | – | No impact of NOS on QWI |
| SD | ICAWS | No | – | – | No impact of SD on ICAWS |
| | OCS | No | – | – | No impact of SD on OCS |
| | QWI | No | – | – | No impact of SD on QWI |
NOS neutral objects satisfaction, SD social desirability
*NA significantly related to self-reported ICAWS in study 1 but not study 3
There has been a lively debate about the extent to which NA has a biasing effect (as a source of method variance) versus a substantive effect. Spector et al. (2000) discussed several plausible substantive mechanisms and supporting evidence for each. Of course, the existence of substantive effects does not rule out the possibility that at least some of the variance in the stressor measures is due to NA as a bias. This leaves open the possibility that NA can have both effects.
HAB has received limited attention in the organizational realm and has not been prominent in discussions of biasing effects or method variance. Our results showed that it related to self-reports of the ICAWS and OCS, but not the QWI, and to all self-reported strains in studies 1 and 3; it also related to supervisor reports of organizational constraints and CWB. A growing number of organizational studies investigate this personality variable, and our results support its continued investigation, not only as a potential biasing factor in organizational assessment, but also as a potential substantive factor in people’s responses to the work environment, within the stress realm and beyond.
Future Directions
We found evidence for the measure-centric idea that each measured variable may be affected by its own set of biasing factors. Because we investigated only three measures, it is difficult to know how measures we did not investigate would fare with these five potential biasing factors, not to mention potential biasing factors we did not include. Moving forward, it would be helpful to explore other stressor measures, as well as measures in other domains. Can we identify characteristics of measures that make them vulnerable to specific biasing factors? For example, would all measures focused on task characteristics, like the QWI, be immune to the biasing factors we included here? Is it the interpersonal nature of the ICAWS that makes it vulnerable to HAB? If so, would all measures of mistreatment show a similar effect? Study of additional measures might illuminate patterns and contribute to theory about what drives measurement.
Determining if a given factor plays a biasing or substantive role requires a systematic approach. Spector (2021) discussed an iterative strategy for determining why observed variables are related. It begins by establishing that target variables are related, and then proceeds to rule in or out potential explanatory variables. In this case, it would mean testing if the measure in question is significantly correlated with measures of hypothetical biasing factors. Barring methodological limitations in measurement of the biasing factors and research procedures (e.g., inadequate sample size or inappropriate sample), failure to find a relationship is strong evidence against a potential biasing role. If, on the other hand, a hypothetical biasing factor is significantly related to the target measure, additional evidence is needed to distinguish biasing from a substantive effect. The iterative strategy would involve conducting additional studies that go beyond cross-sectional self-reports to test for biasing versus substantive effects. We suggest four types of studies that can provide evidence to address the distinction.
Alternative measures of target variables. This involves comparing self-reports with other measures (e.g., coworker or supervisor reports, observer ratings, objective measures from records, or unobtrusive measures) of the target constructs. Finding that the alternative measures and the target measure relate similarly to the potential biasing factors is evidence for a substantive effect and against a biasing effect. Finding that the biasing factor relates to the self-report but not to the other measures is evidence for a biasing role. This is the strategy we used in study 3 to test whether self-reports and supervisor reports of the same target stressor relate similarly to a potential biasing factor. We were able to rule out a biasing effect in some cases but found support for it in others.
Manipulating the target variables. Experiments and quasi-experiments can be used to see if changes in target variables are reflected in their self-reports. Finding that self-reports change in the expected direction is evidence that they reflect the intended construct, but alone it cannot rule out an additional biasing effect. It is possible, for example, that NA serves as a biasing factor for a given stressor measure even as the level of the measure changes from pre- to post-intervention. Thus, individuals high on NA might report higher stressor levels than their low-NA counterparts both before and after the intervention, even though both groups change in the same direction. Where this becomes important is in determining why the stressor might relate to a target strain. A pre-post assessment of the strain could indicate whether the intervention affected both stressor and strain, and whether individuals who show the greatest intervention impact on the stressor also show the greatest impact on the strain. This sort of evidence supports a substantive effect but cannot by itself rule out an additional biasing effect.
Manipulating the biasing factor. Some biasing factors, such as mood, can be experimentally manipulated to see if changing the factor influences the target measure. Others might be subject to naturally occurring change that could be incorporated into a study. Certain life experiences, both positive and negative (e.g., intensive psychotherapy or trauma), might influence some potential biasing factors and could be investigated to see if they affect a target measure. It should be kept in mind that not all potential biasing factors are inherent in people, as many can be part of the person’s environment and experiences. An example of this approach is a study of newly graduated nurses’ experiences with patient assaults: Spector et al. (2015) tested whether first-time assault experiences biased reports of violence prevention climate and found that being assaulted did not bias subsequent reports (scores did not decline more for those assaulted).
Longitudinal strategies. Collecting data over time can track effects on target variables if time is incorporated appropriately, meaning the target variable is assessed before and after something has caused it to change. The logic is similar to an intervention study: it shows whether self-reports increase as expected when the intended construct changes (e.g., workloads get heavier), and whether expected outcomes (e.g., strains) change as well.
Limitations
There are three limitations to consider in evaluating our findings. First, we were able to provide evidence ruling out some potential biasing effects on the stressor measures; where we could not, we provide only preliminary evidence for a biasing effect that needs replication with additional methods. Given that four of the five potential biasing factor measures reflect stable properties of people rather than situations, it would be difficult to manipulate them experimentally as we did for mood.
Second, although the mood induction showed the expected significant effect on positive mood, it did not for negative mood. We used a technique that is well established in the literature (e.g., Bless et al., 1996; Krauth-Gruber & Ric, 2000) and were surprised that only one mood measure showed a statistically significant effect (although all means lined up as expected). We originally piloted the experiment with a video mood induction, having respondents watch either a happy or a sad scene from a movie. We were unable to detect any effects on a mood measure, so we switched to having participants write about a personal event. One factor that may have reduced the apparent impact was the timing of the mood measure, which followed the stressor measures; mood manipulation checks can show stronger effects when they immediately follow the induction (Krauth-Gruber & Ric, 2000). It is also possible that a stronger induction procedure would have yielded different results, but ethical issues restrict the extent to which mood can reasonably be induced. Despite the relatively weak induction effect, we found significant differences in correlations for the QWI.
Finally, we computed quite a few correlations, especially in study 1, where five potential biasing factors and three target stressor measures yielded 15 significance tests per time point. This leaves open the possibility that some significant findings were Type 1 errors. Although possible, it seems unlikely that Type 1 errors explain our results. In study 1, we collected data on the sample twice, yet the pattern of significance was similar for corresponding correlations. Only one differed in significance between time 1 and time 2 (neutral objects satisfaction and the QWI), and only because the time 1 correlation (− 0.12) just exceeded the critical value of − 0.11 whereas the time 2 correlation (− 0.10) fell just below it. One pattern worth noting is that the correlations of the three stressor measures with HAB and NA were larger (in some cases much larger) in almost all cases for the online panel than for the university support staff. Perhaps this reflects the greater variability in working environments for the online panel, where each participant presumably works for a different employer, whereas the support staff all work for the same state university system. Of course, other explanations are possible, including that online panelists tend to complete surveys quickly and might base responses more on a general impression than on careful consideration of each item. Supporting this suggestion, the median time spent per item by support staff in study 3 was almost double that of the online panelists in study 1 (8 vs. 4.2 s).
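For context on the critical values discussed above, the two-tailed cutoff for a significant Pearson correlation follows from r = t/√(t² + N − 2). The sketch below is ours, not the authors’, and substitutes the normal quantile for the t quantile, an approximation that is adequate at these sample sizes; it gives roughly .118 for study 1’s N = 276, consistent with correlations of −.12 and −.10 falling on opposite sides of the reported ±.11 threshold.

```python
from math import sqrt
from statistics import NormalDist

def critical_r(n, alpha=0.05):
    """Approximate two-tailed critical value for a Pearson correlation:
    the smallest |r| significant at `alpha` with n - 2 degrees of freedom.
    Uses the normal quantile in place of the t quantile (large-sample
    approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = .05
    return z / sqrt(z * z + n - 2)
```

As expected, the cutoff shrinks as the sample grows; with a more exact t quantile the values are marginally larger.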
Conclusions
We set out to expand understanding of whether potential biasing factors differ among measures, focusing on three popular stressor measures. Our results support the possibility that the three measures are not affected by the same biasing factors: the ICAWS appears potentially biased only by HAB, the OCS only by NA, and the QWI by neither. This possibility suggests the need for additional research to better disentangle biasing from substantive effects. There is evidence that at least some of the relationship of NA with stressors is due to substantive effects (Spector et al., 2000), but this does not rule out the possibility of biasing effects as well, at least with the OCS. Results of our three studies support the core thesis of Spector et al.’s (2019) measure-centric perspective, providing evidence that biasing factors should be investigated for individual measures, even within the same domain.
Data Availability
Data are available from the first author upon request.
Footnotes
Details about the ICAWS can be found at https://paulspector.com/assessments/pauls-no-cost-assessments/interpersonal-conflict-at-work-scale-icaws/
Details about the OCS can be found at https://paulspector.com/assessments/pauls-no-cost-assessments/organizational-constraints-scale-ocs/
Details about the QWI can be found at https://paulspector.com/assessments/pauls-no-cost-assessments/quantitative-workload-inventory-qwi/
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Aguinis H, Villamor I, Ramani RS. MTurk research: Review and recommendations. Journal of Management. 2021;47(4):823–837. doi: 10.1177/0149206320969787.
- An M, Boyajian ME, O’Brien KE. Perceived victimization as the mechanism underlying the relationship between work stressors and counterproductive work behaviors. Human Performance. 2016;29(5):347–361. doi: 10.1080/08959285.2016.1172585.
- Bal A, O'Brien KE. Field validation of the Hostile Attributional Style Survey. Paper presented at the Society for Industrial and Organizational Psychology conference, Atlanta, GA; 2010, April 8–10.
- Bless H, Clore GL, Schwarz N, Golisano V, Rabe C, Wölk M. Mood and the use of scripts: Does a happy mood really lead to mindlessness? Journal of Personality and Social Psychology. 1996;71(4):665–679. doi: 10.1037/0022-3514.71.4.665.
- Bowling NA, Beehr TA. Workplace harassment from the victim's perspective: A theoretical model and meta-analysis. Journal of Applied Psychology. 2006;91(5):998–1012. doi: 10.1037/0021-9010.91.5.998.
- Brief AP, Burke MJ, George JM, Robinson BS, Webster J. Should negative affectivity remain an unmeasured variable in the study of job stress? Journal of Applied Psychology. 1988;73(2):193–198. doi: 10.1037/0021-9010.73.2.193.
- Bruk-Lee V, Spector PE. The social stressors-counterproductive work behaviors link: Are conflicts with supervisors and coworkers the same? Journal of Occupational Health Psychology. 2006;11(2):145–156. doi: 10.1037/1076-8998.11.2.145.
- Buhrmester M, Kwang T, Gosling SD. Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science. 2011;6(1):3–5. https://www.jstor.org/stable/41613414
- Burke MJ, Brief AP, George JM. The role of negative affectivity in understanding relations between self-reports of stressors and strains: A comment on the applied psychology literature. Journal of Applied Psychology. 1993;78(3):402–412. doi: 10.1037/0021-9010.78.3.402.
- Cammann C, Fichman M, Jenkins D, Klesh J. The Michigan Organizational Assessment Questionnaire. Unpublished manuscript, University of Michigan, Ann Arbor; 1979.
- Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin. 1959;56(2):81–105. doi: 10.1037/h0046016.
- Castille CM, Kuyumcu D, Bennett RJ. Prevailing to the peers' detriment: Organizational constraints motivate Machiavellians to undermine their peers. Personality and Individual Differences. 2017;104:29–36. doi: 10.1016/j.paid.2016.07.026.
- Che XX, Zhou ZE, Kessler SR, Spector PE. Stressors beget stressors: The effect of passive leadership on employee health through workload and work–family conflict. Work & Stress. 2017;31(4):338–354. doi: 10.1080/02678373.2017.1317881.
- Chen PY, Spector PE. Negative affectivity as the underlying cause of correlations between stressors and strains. Journal of Applied Psychology. 1991;76(3):398–407. doi: 10.1037/0021-9010.76.3.398.
- Chen C, Wen P, Hu C. Role of formal mentoring in protégés' work-to-family conflict: A double-edged sword. Journal of Vocational Behavior. 2017;100:101–110. doi: 10.1016/j.jvb.2017.03.004.
- Cohen TR, Panter AT, Turan N. Predicting counterproductive work behavior from guilt proneness. Journal of Business Ethics. 2013;114(1):45–53. doi: 10.1007/s10551-012-1326-2.
- Crowne DP, Marlowe D. A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology. 1960;24:349–354. doi: 10.1037/h0047358.
- Crowne DP, Marlowe D. The approval motive. John Wiley; 1964.
- DeArmond S, Matthews RA, Bunk J. Workload and procrastination: The roles of psychological detachment and fatigue. International Journal of Stress Management. 2014;21(2):137–161. doi: 10.1037/a0034893.
- Dodge KA, Coie JD. Social-information-processing factors in reactive and proactive aggression in children's peer groups. Journal of Personality and Social Psychology. 1987;53(6):1146–1158. doi: 10.1037/0022-3514.53.6.1146.
- Edwards JA, Guppy A, Cockerton T. A longitudinal study exploring the relationships between occupational stressors, non-work stressors, and work performance. Work & Stress. 2007;21(2):99–116. doi: 10.1080/02678370701466900.
- Edwards AL. The social desirability variable in personality assessment and research. Dryden Press; 1957.
- Epps J, Kendall PC. Hostile attributional bias in adults. Cognitive Therapy and Research. 1995;19(2):159–178. doi: 10.1007/BF02229692.
- Eschleman KJ, Bowling NA. A construct validation of the Neutral Objects Satisfaction Questionnaire (NOSQ). Journal of Business and Psychology. 2011;26(4):501–515. doi: 10.1007/s10869-010-9206-1.
- Fox S, Spector PE, Goh A, Bruursema K. Does your coworker know what you're doing? Convergence of self- and peer-reports of counterproductive work behavior. International Journal of Stress Management. 2007;14(1):41–60. doi: 10.1037/1072-5245.14.1.41.
- Frese M, Zapf D. Methodological issues in the study of work stress: Objective vs subjective measurement of work stress and the question of longitudinal studies. In: Cooper CL, Payne R, editors. Causes, coping and consequences of stress at work. John Wiley & Sons; 1988. pp. 375–411.
- Gabriel AS, Podsakoff NP, Beal DJ, Scott BA, Sonnentag S, Trougakos JP, Butts MM. Experience sampling methods: A discussion of critical trends and considerations for scholarly advancement. Organizational Research Methods. 2019;22(4):969–1006. doi: 10.1177/1094428118802626.
- Goldberg LR, Johnson JA, Eber HW, Hogan R, Ashton MC, Cloninger C, Gough HG. The international personality item pool and the future of public-domain personality measures. Journal of Research in Personality. 2006;40(1):84–96. doi: 10.1016/j.jrp.2005.08.007.
- Gunn RL, Finn PR. Applying a dual process model of self-regulation: The association between executive working memory capacity, negative urgency, and negative mood induction on pre-potent response inhibition. Personality and Individual Differences. 2015;75:210–215. doi: 10.1016/j.paid.2014.11.033.
- Judge TA, Bretz RD. Report on an alternative measure of affective disposition. Educational and Psychological Measurement. 1993;53(4):1095–1104. doi: 10.1177/0013164493053004022.
- Kim M, Beehr TA. Challenge and hindrance demands lead to employees' health and behaviours through intrinsic motivation. Stress and Health. 2018;34(3):367–378. doi: 10.1002/smi.2796.
- Krauth-Gruber S, Ric F. Affect and stereotypic thinking: A test of the mood-and-general-knowledge-model. Personality and Social Psychology Bulletin. 2000;26(12):1587–1597. doi: 10.1177/01461672002612012.
- Kuyumcu D, Dahling JJ. Constraints for some, opportunities for others? Interactive and indirect effects of Machiavellianism and organizational constraints on task performance ratings. Journal of Business and Psychology. 2014;29(2):301–310. doi: 10.1007/s10869-013-9314-9.
- Liu C, Nauta MM, Li C, Fan J. Comparisons of organizational constraints and their relations to strains in China and the United States. Journal of Occupational Health Psychology. 2010;15(4):452–467. doi: 10.1037/a0020721.
- Mackinnon A, Jorm AF, Christensen H, Korten AE, Jacomb PA, Rodgers B. A short form of the Positive and Negative Affect Schedule: Evaluation of factorial validity and invariance across demographic variables in a community sample. Personality and Individual Differences. 1999;27(3):405–416. doi: 10.1016/S0191-8869(98)00251-7.
- Moorman RH, Podsakoff PM. A meta-analytic review and empirical test of the potential confounding effects of social desirability response sets in organizational behaviour research. Journal of Occupational and Organizational Psychology. 1992;65(2):131–149. doi: 10.1111/j.2044-8325.1992.tb00490.x.
- Narayanan L, Menon S, Spector P. A cross-cultural comparison of job stressors and reactions among employees holding comparable jobs in two countries. International Journal of Stress Management. 1999;6(3):197–212. doi: 10.1023/A:1021986709317.
- Narayanan L, Menon S, Spector PE. Stress in the workplace: A comparison of gender and occupations. Journal of Organizational Behavior. 1999;20(1):63–73. doi: 10.1002/(SICI)1099-1379(199901)20:1<63::AID-JOB873>3.0.CO;2-J.
- Nunnally JC, Bernstein IH. Psychometric theory. McGraw Hill; 1994.
- Penney LM, Spector PE. Job stress, incivility, and counterproductive work behavior (CWB): The moderating role of negative affectivity. Journal of Organizational Behavior. 2005;26(7):777–796. doi: 10.1002/job.336.
- Pindek S, Spector PE. Organizational constraints: A meta-analysis of a major stressor. Work & Stress. 2016;30(1):7–25. doi: 10.1080/02678373.2015.1137376.
- Podsakoff PM, MacKenzie SB, Lee J-Y, Podsakoff NP. Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology. 2003;88(5):879–903. doi: 10.1037/0021-9010.88.5.879.
- Podsakoff PM, MacKenzie SB, Podsakoff NP. Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology. 2012;63:539–569. doi: 10.1146/annurev-psych-120710-100452.
- Rau R, Morling K, Rösler U. Is there a relationship between major depression and both objectively assessed and perceived demands and control? Work & Stress. 2010;24(1):88–106. doi: 10.1080/02678371003661164.
- Skyrme PYT. The relationship of job stressors to work performance, intent to quit, and absenteeism of first line supervisors [Thesis]. University of South Florida, Tampa, FL; 1992.
- Spector PE. Mastering the use of control variables: The Hierarchical Iterative Control (HIC) approach. Journal of Business and Psychology. 2021;36(5):737–750. doi: 10.1007/s10869-020-09709-0.
- Spector PE, Jex SM. Development of four self-report measures of job stressors and strain: Interpersonal Conflict at Work Scale, Organizational Constraints Scale, Quantitative Workload Inventory, and Physical Symptoms Inventory. Journal of Occupational Health Psychology. 1998;3(4):356–367. doi: 10.1037/1076-8998.3.4.356.
- Spector PE, Nixon AE. How often do I agree: An experimental test of item format method variance in stress measures. Occupational Health Science. 2019;3:125–143. doi: 10.1007/s41542-019-00039-z.
- Spector PE, Dwyer DJ, Jex SM. Relation of job stressors to affective, health, and performance outcomes: A comparison of multiple data sources. Journal of Applied Psychology. 1988;73(1):11–19. doi: 10.1037/0021-9010.73.1.11.
- Spector PE, Jex SM, Chen PY. Relations of incumbent affect-related personality traits with incumbent and objective measures of characteristics of jobs. Journal of Organizational Behavior. 1995;16(1):59–65. doi: 10.1002/job.4030160108.
- Spector PE, Zapf D, Chen PY, Frese M. Why negative affectivity should not be controlled in job stress research: Don't throw out the baby with the bath water. Journal of Organizational Behavior. 2000;21(1):79–95. doi: 10.1002/(SICI)1099-1379(200002)21:1<79::AID-JOB964>3.0.CO;2-G.
- Spector PE, Bauer JA, Fox S. Measurement artifacts in the assessment of counterproductive work behavior and organizational citizenship behavior: Do we know what we think we know? Journal of Applied Psychology. 2010;95(4):781–790. doi: 10.1037/a0019477.
- Spector PE, Yang L-Q, Zhou ZE. A longitudinal investigation of the role of violence prevention climate in exposure to workplace physical violence and verbal abuse. Work & Stress. 2015;29(4):325–340. doi: 10.1080/02678373.2015.1076537.
- Spector PE, Rosen CC, Richardson HA, Williams LJ, Johnson RE. A new perspective on method variance: A measure-centric approach. Journal of Management. 2019;45(3):855–880. doi: 10.1177/0149206316687295.
- Staw BM, Cohen-Charash Y. The dispositional approach to job satisfaction: More than a mirage, but not yet an oasis: Comment. Journal of Organizational Behavior. 2005;26(1):59–78. doi: 10.1002/job.299.
- Strahan R, Gerbasi KC. Short, homogeneous versions of the Marlowe-Crowne Social Desirability Scale. Journal of Clinical Psychology. 1972;28(2):191–193.
- Watson D, Clark LA. Negative affectivity: The disposition to experience aversive emotional states. Psychological Bulletin. 1984;96(3):465–490. doi: 10.1037/0033-2909.96.3.465.
- Watson D, Pennebaker JW, Folger R. Beyond negative affectivity: Measuring stress and satisfaction in the workplace. Journal of Organizational Behavior Management. 1986;8(2):141–157. doi: 10.1300/J075v08n02_09.
- Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology. 1988;54(6):1063–1070. doi: 10.1037/0022-3514.54.6.1063.
- Weitz J. A neglected concept in the study of job satisfaction. Personnel Psychology. 1952;5:201–205. doi: 10.1111/j.1744-6570.1952.tb01012.x.
- Westermann R, Spies K, Stahl G, Hesse FW. Relative effectiveness and validity of mood induction procedures: A meta-analysis. European Journal of Social Psychology. 1996;26(4):557–580. doi: 10.1002/(SICI)1099-0992(199607)26:4<557::AID-EJSP769>3.0.CO;2-4.
- Williams EJ. The comparison of regression variables. Journal of the Royal Statistical Society: Series B. 1959;21:396–399.
- Wood D, Harms PD, Lowman GH, DeSimone JA. Response speed and response consistency as mutually validating indicators of data quality in online samples. Social Psychological and Personality Science. 2017;8(4):454–464. doi: 10.1177/1948550617703168.
- Wu L-Z, Zhang H, Chiu RK, Kwan HK, He X. Hostile attribution bias and negative reciprocity beliefs exacerbate incivility’s effects on interpersonal deviance. Journal of Business Ethics. 2014;120(2):189–199. doi: 10.1007/s10551-013-1658-6.
- Zhou ZE, Yan Y, Che XX, Meier LL. Effect of workplace incivility on end-of-work negative affect: Examining individual and organizational moderators in a daily diary study. Journal of Occupational Health Psychology. 2015;20(1):117–130. doi: 10.1037/a0038167.