Table 1. Definitions of, evidence on, and key conclusions about common method variance (CMV) in selected publications, in chronological order.
Author(s) | Journal | Definition | Evidence | Key conclusion |
---|---|---|---|---|
Podsakoff and Organ (1986) | Journal of Management | “Because both measures come from the same source, any defect in that source contaminates both measures, presumably in the same fashion and in the same direction.” | The authors’ own experience and previously published articles. | “[W]e strongly recommend the use of procedural or design remedies for dealing with the common method variance problem as opposed to the use of statistical remedies or post-hoc patching up.” |
Spector (1987) | Journal of Applied Psychology | “[A]n artifact of measurement that biases results when relations are explored among constructs measured in the same way.” | Articles using self-report measures of perceptions of jobs/work environments and of affective reactions to jobs, together with data from the Job Satisfaction Survey. | “The data and research results summarized here suggest that the problem [of CMV] may in fact be mythical.” |
Williams, Buckley, and Cote (1989) | Journal of Applied Psychology | Not explicitly given, but some explanation is provided, for example: “Values in the monomethod triangles are always inflated by shared method variance, because the correlation between the methods is 1.0 (the same method is used).” | Same articles as used by Spector (1987). | “In summary, this research indicates that the conclusions reached by Spector (1987) were an artifact of his method and that method variance is real and present in organizational behavior measures of affect and perceptions at work.” |
Bagozzi and Yi (1990) | Journal of Applied Psychology | “As an artifact of measurement, method variance can bias results when researchers investigate relations among constructs measured with the common method.” | Same articles as used by Spector (1987) and Williams et al. (1989). | “Our reanalyses of the data analyzed by Spector (1987) suggest that the conclusions stated by Williams et al. (1989) could have been an artifact of their analytic procedure [ . . . ]. In the 10 studies [ . . . ], we found method variance to be sometimes significant, but not as prevalent as Williams et al. concluded.” |
Avolio, Yammarino, and Bass (1991) | Journal of Management | “[CMV] is defined as the overlap in variance between two variables attributable to the type of measurement instrument used rather than due to a relationship between the underlying constructs.” | Survey data consisting of measures on leadership and outcomes (i.e., effectiveness of unit and satisfaction) gathered from immediate followers of a leader. | “The label ‘single-source effects’ appears to be applied to the effects of an entire class of data collection that is rather wide-ranging. Stated more explicitly, single-source effects are not necessarily an either-or issue [ . . . ].” |
Doty and Glick (1998) | Organizational Research Methods | “[CMV] occurs when the measurement technique introduces systematic variance into the measure, [which] can cause observed relationships to differ from the true relationships among constructs.” | Quantitative review of studies reporting multitrait-multimethod correlation matrices in six social science journals between 1980 and 1992. | “[CMV] is an important concern for organizational scientists. A significant amount of methods variance was found in all of the published data sets.” |
Keeping and Levy (2000) | Journal of Applied Psychology | “It is often argued that observed correlations are produced by the fact that the data originated from the same source rather than from relations among substantive constructs.” | Survey data consisting of measures on appraisal and positive/negative affect gathered from employees. | “Although common method variance is a very prevalent critique of attitude and reactions research, additional support from other research areas seems very consistent with our findings and argues against a substantial role for common method variance [ . . . ].” |
Lindell and Whitney (2001) | Journal of Applied Psychology | “[When] individuals’ reports of their internal states are collected at the same time as their reports of their past behavior related to those internal states [ . . . ] the possibility arises that method variance (MV) has inflated the observed correlations.” | Hypothetical correlations between leader characteristics, role characteristics, team characteristics, job characteristics, marital satisfaction, and self-reported member participation. | “MV-marker-variable analysis should be conducted whenever researchers assess correlations that have been identified as being most vulnerable to CMV [ . . . ]. For correlations with low vulnerability [ . . . ], conventional correlation and regression analyses may provide satisfactory results.” (The marker-variable adjustment is illustrated in the sketch following this table.) |
Podsakoff, MacKenzie, Lee, and Podsakoff (2003) | Journal of Applied Psychology | “Most researchers agree that common method variance (i.e., variance that is attributable to the measurement method rather than to the constructs the measures represent) is a potential problem in behavioral research.” | Literature review of previously published articles across behavioral research (e.g., management, marketing, psychology . . .). | “Although the strength of method biases may vary across research context, [ . . . ] CMV is often a problem and researchers need to do whatever they can to control for it. [ . . . ] this requires [ . . . ] implementing both procedural and statistical methods of control.” |
Spector (2006) | Organizational Research Methods | “It has become widely accepted that correlations between variables measured with the same method, usually self-report surveys, are inflated due to the action of method variance.” | Articles on turnover processes, social desirability, negative affectivity, and acquiescence, plus comparisons between multimethod and monomethod correlations. | “The time has come to retire the term common method variance and its derivatives and replace it with a consideration of specific biases and plausible alternative explanations for observed phenomena, regardless of whether they are from self-reports or other methods.” |
Pace (2010) | Organizational Research Methods | “86.5% [ . . . ] agreed or strongly agreed that CMV means that correlations among all variables assessed with the same method will likely be inflated to some extent due to the method itself.” | Opinions of 225 editorial board members of Journal of Applied Psychology, Journal of Organizational Behavior, and Journal of Management. | “According to survey results, CMV is recognized as a frequent and potentially serious issue but one that requires much more research and understanding.” |
Siemsen, Roth, and Oliveira (2010) | Organizational Research Methods | “CMV refers to the shared variance among measured variables that arises when they are assessed using a common method.” | Algebraic analysis, including extensive Monte Carlo runs to test robustness. | “CMV can either inflate or deflate bivariate linear relationships [ . . . ]. With respect to multivariate linear relationships, [ . . . ] common method bias generally decreases when additional independent variables suffering from CMV are included [ . . . ]. [Q]uadratic and interaction effects cannot be artifacts of CMV.” |
Brannick, Chan, Conway, Lance, and Spector (2010) | Organizational Research Methods | “[M]ethod variance is an umbrella or generic term for invalidity of measurement. Systematic sources of variance that are not those of interest to the researcher are good candidates for the label ‘method variance’.” | Expert opinions from four scholars who have written about CMV: David Chan, James M. Conway, Charles E. Lance, and Paul E. Spector. | “Rather than considering method variance to be a plague, which, [ . . . ] leads inevitably to death (read: rejection of publication), method variance should be regarded in a more refined way. [ . . . ] [R]ather than considering method variance to be a general problem that afflicts a study, authors and reviewers should consider specific problems in measurement that affect the focal substantive inference.” |
Note. CMV = common method variance.
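
To make the marker-variable recommendation from the Lindell and Whitney (2001) row concrete, the minimal sketch below simulates a simple additive method factor and applies their partial-correlation adjustment, r_adj = (r_obs - r_M) / (1 - r_M), where r_M is the correlation between a substantive variable and a theoretically unrelated marker variable. All loadings, the sample size, and the variable names are illustrative assumptions rather than values from any study in the table; the inflation of the observed correlation also illustrates the bivariate case analyzed by Siemsen, Roth, and Oliveira (2010).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000                    # hypothetical number of respondents
rho = 0.30                    # assumed true correlation between the two latent traits
lam, m, se = 0.8, 0.4, 0.4    # illustrative trait, method, and error loadings

# Three latent traits: the first two are correlated; the third drives only the marker.
traits = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[1.0, rho, 0.0], [rho, 1.0, 0.0], [0.0, 0.0, 1.0]],
    size=n,
).T
method = rng.standard_normal(n)   # common method factor (e.g., a same-source survey)

# Each observed score mixes a trait, the shared method factor, and random error.
x = lam * traits[0] + m * method + se * rng.standard_normal(n)
y = lam * traits[1] + m * method + se * rng.standard_normal(n)
marker = lam * traits[2] + m * method + se * rng.standard_normal(n)  # theoretically unrelated item

r_xy = np.corrcoef(x, y)[0, 1]       # observed correlation, inflated by the shared method factor
r_m = np.corrcoef(x, marker)[0, 1]   # marker correlation, used as a proxy for CMV


def cmv_adjusted_r(r_obs: float, r_marker: float) -> float:
    """Lindell and Whitney (2001) partial-correlation adjustment for CMV."""
    return (r_obs - r_marker) / (1.0 - r_marker)


print(f"observed r(x, y):       {r_xy:.3f}")                       # ~0.37
print(f"marker-based CMV proxy: {r_m:.3f}")                        # ~0.17
print(f"CMV-adjusted estimate:  {cmv_adjusted_r(r_xy, r_m):.3f}")  # ~0.24
```

In this setting the adjusted value recovers the correlation that would have been observed without the shared method factor (about .24); it remains below the latent trait correlation of .30 because the adjustment removes method variance but not attenuation due to random measurement error.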