Author manuscript; available in PMC: 2017 Dec 1.
Published in final edited form as: Curr Opin Psychol. 2016 May 18;12:53–57. doi: 10.1016/j.copsyc.2016.04.015

On Priming Action: Conclusions from a Meta-Analysis of the Behavioral Effects of Incidentally-Presented Words

Evan Weingarten 1, Qijia Chen 2, Maxwell McAdams 3, Jessica Yi 4, Justin Hepler 5, Dolores Albarracin 6
PMCID: PMC5147746  NIHMSID: NIHMS795389  PMID: 27957520

Abstract

This paper presents a summary of the conclusions drawn from a meta-analysis of the behavioral impact of presenting words connected to an action or a goal representation (Weingarten et al., 2016). The average and distribution of 352 effect sizes from 133 studies (84 reports) revealed a small behavioral priming effect (dFE = 0.332, dRE = 0.352), which was robust across methodological procedures and only minimally biased by the publication of positive (vs. negative) results. More valued behavior or goal concepts (e.g., associated with important outcomes or values) were associated with stronger priming effects than were less valued behaviors. In addition, opportunities for goal satisfaction appeared to decrease priming effects.

Keywords: priming, automaticity, goal, motivation, meta-analysis


Now, goal-priming experiments are coming under scrutiny — and in the process, revealing a problem at the heart of psychological research itself. (Satel, 2013)

In 1996, Bargh, Chen, and Burrows asked a group of New York University undergraduates to complete a brief research task and then request a second task from another researcher in a nearby room. The first task consisted of scrambled sentences (e.g., they her respect see usually, they her bother see usually, and they her exercising see usually). After unscrambling the sentences, participants sought out the experimenter, who was chatting with a friend.

Naturally, the NYU students were able to correctly form 15 sentences from the scrambled ones (e.g., they usually respect her, they usually bother her, and they usually see her exercising). Surprisingly, the content of the unscrambled sentences (containing polite, rude, or neutral themes that were varied among participants) influenced the amount of time students took to interrupt the experimenter when requesting permission to proceed with the second study. Students who had unscrambled sentences about rudeness were more likely to interrupt the experimenter's conversation than those who had unscrambled sentences about politeness or unrelated topics.

In light of this evidence, Bargh and colleagues (1996) argued that the effects of priming were not limited to social perception (Bargh, 1994) but instead reached the more substantial domain of action. Since that time and for nearly two decades, social psychologists and scholars in many other fields have attempted to understand the perceptual and motivational principles responsible for the intriguing observations presented by Bargh et al. (1996) in their seminal study. For example, Bargh and his colleagues (2001) tasked students with solving a series of word-search puzzles that contained either synonyms of achievement (e.g., win, achieve) or control words (e.g., building, staple). No student was explicitly instructed to pursue achievement or to improve intellectual performance before the exercise. Nonetheless, those who first found achievement words located more words on subsequent word-search puzzles than did those who initially found neutral words.

Despite the excitement surrounding the effects of primes on performance, the Zeitgeist changed as a result of failures to replicate the phenomenon directly. The last five years have shown a dramatic shift towards the more somber intellectual climate reflected in the quotes below.

As all of you know, of course, questions have been raised about the robustness of priming results. The storm of doubts is fed by several sources, including the recent exposure of fraudulent researchers, general concerns with replicability that affect many disciplines, multiple reported failures to replicate salient results in the priming literature, and the growing belief in the existence of a pervasive file drawer problem that undermines two methodological pillars of your field: the preference for conceptual over literal replication and the use of meta-analysis. […] For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research. (Kahneman, 2012)

The worst is yet to come for priming: over the next two or three years you're going to see an avalanche of failed replications published. (Bartlett, 2013)

Could a well-executed meta-analysis of the behavioral effects of incidentally presented concepts transform this controversy and inform the many disciplines concerned with this phenomenon? Weingarten et al. (2016) thought so, especially with the use of sophisticated methods to detect the systematic elimination of null and negative findings (a form of publication bias often referred to as the file drawer problem) (see Cooper, 2010; Cooper & Hedges, 1994). With the objective of gathering the most comprehensive data available on this issue, Weingarten et al. (2016) obtained published and unpublished research on the performance effects of priming concepts compared with a control condition. They then calculated Cohen's d by subtracting the mean of the control group from the mean of the priming group and dividing the difference by the pooled standard deviation, or used analogous methods for categorical dependent measures. The remainder of this article summarizes the results of that project.
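
To make the effect-size computation concrete, the following Python sketch computes a standardized mean difference from group means, standard deviations, and sample sizes. This is a minimal illustration, not the authors' code; the function name and the example group statistics are assumptions made for this sketch.

```python
import math

def cohens_d(mean_prime, mean_control, sd_prime, sd_control, n_prime, n_control):
    """Standardized mean difference between a priming group and a control group."""
    pooled_sd = math.sqrt(
        ((n_prime - 1) * sd_prime**2 + (n_control - 1) * sd_control**2)
        / (n_prime + n_control - 2)
    )
    return (mean_prime - mean_control) / pooled_sd

# Hypothetical group statistics, for illustration only
print(cohens_d(mean_prime=7.2, mean_control=6.1,
               sd_prime=3.0, sd_control=3.2,
               n_prime=40, n_control=40))
```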

Weingarten et al. (2016) synthesized 352 published and unpublished effect sizes, obtained from research conducted in the US and internationally. Priming methods included various forms of supraliminal and subliminal word presentation clearly linked to a concept (e.g., win, affiliate). The most commonly primed concepts were presented supraliminally (e.g., via scrambled sentences and word puzzles) and pertained to achievement, although social behaviors such as helping were also prevalent. Performance measures included, among many others, test-performance scores (e.g., number of solved problems), time spent on a task, and various ratings of overt behavior. Non-performance measures, such as concept accessibility or measures of attitudes, beliefs, or knowledge, were deemed ineligible in an attempt to model effects on actual cognitive and motor performance.

Summary of Average Findings

On average, Weingarten et al. (2016) obtained a small but significant effect size comparable to many findings in psychological, sociological, and medical research (see, e.g., d = 0.21; Johnson, Scott-Sheldon, & Carey, 2008). Weighted mean effect sizes and associated heterogeneity statistics appear in Table 1 and indicate considerable non-random variability.

Table 1.

Average Effect (d) and Heterogeneity (k = 352)

Model                  Weighted mean effect    95% Confidence Interval    Heterogeneity Indexes
Fixed effects (FE)     d = 0.332               0.277–0.387                Q(351) = 934.77***
Random effects (RE)    d = 0.352               0.294–0.409                I² = 62.45%
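
The weighted means and heterogeneity indexes in Table 1 follow standard meta-analytic formulas. The sketch below is a minimal Python illustration, not the authors' code; the DerSimonian-Laird estimator and the hypothetical inputs are assumptions made for this example. It shows how fixed- and random-effects means, Cochran's Q, and I² can be computed from per-study effect sizes and sampling variances.

```python
import numpy as np

def pool_effects(d, var):
    """Fixed- and random-effects (DerSimonian-Laird) pooling of standardized effects."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                                    # fixed-effects weights
    d_fe = np.sum(w * d) / np.sum(w)                 # fixed-effects mean
    q = np.sum(w * (d - d_fe) ** 2)                  # Cochran's Q
    df = len(d) - 1
    i2 = max(0.0, (q - df) / q) * 100.0              # I^2: % of variability beyond chance
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = 1.0 / (var + tau2)                        # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)           # random-effects mean
    return d_fe, d_re, q, i2

# Hypothetical effect sizes and sampling variances, for illustration only
print(pool_effects([0.25, 0.40, 0.10, 0.55], [0.04, 0.05, 0.03, 0.06]))
```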

Detailed analyses of inclusion and publication bias were conducted, including funnel plot analyses (Begg & Mazumdar, 1994; Egger et al., 1997; Light & Pillemer, 1984; Peters et al., 2010), trim-and-fill methods (Duval & Tweedie, 1998), various failsafe N estimates (Orwin, 1983; Rosenberg, 2005; Rosenthal, 1979), and cumulative meta-analysis (Rothstein et al., 2005). A trim-and-fill analysis using the R0 estimator imputed nine studies and yielded new estimates of fixed-effects d = 0.295 (95% CI [0.264, 0.325]; z = 19.08, p < .001) and random-effects d = 0.312 (95% CI [0.257, 0.366]; z = 11.15, p < .001), suggesting a significant effect even after accounting for publication bias. Similarly, an Egger OLS regression modeled at the study level indicated a small-study effect (fixed effects: t(218) = 4.25, p < .001; random effects: t(218) = 5.19, p < .001). Weingarten et al. (2016) also used these analyses to identify outliers, and as a consequence removed nine effect sizes from eight studies from subsequent analyses (Jefferis & Fazio, 2008, Study 1; Keatley et al., 2014, Study 1; Legal et al., 2007, Study 1; Levesque, 1999, Study 2; Macrae & Johnston, 1998, Study 2; Oettingen et al., 2006, Study 4; Roehrich, 1992, Study 1; Sela & Shiv, 2009, Study 3). P-curves (Simonsohn et al., 2014) were then fit to estimate potential bias in the selection of the statistical findings reported in the synthesized studies (for a complete list of included studies, see Weingarten et al., 2016). Results suggested an absence of both p-hacking and selective reporting in the synthesized studies. Overall, these findings suggest a real performance effect that is not attributable to publication bias.
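
As one illustration of how such a bias diagnostic works, the sketch below implements a basic Egger-style regression test. It is a simplified version, not the study-level model Weingarten et al. (2016) used; the statsmodels dependency and the hypothetical inputs are assumptions made for this example. Standardized effects are regressed on precision, and an intercept far from zero signals funnel-plot asymmetry (small-study effects).

```python
import numpy as np
import statsmodels.api as sm

def egger_test(d, se):
    """Egger's regression test for funnel-plot asymmetry:
    regress standardized effects (d / se) on precision (1 / se);
    an intercept that differs from zero suggests small-study effects."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    z = d / se
    precision = 1.0 / se
    fit = sm.OLS(z, sm.add_constant(precision)).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its p-value

# Hypothetical effect sizes and standard errors, for illustration only
print(egger_test([0.20, 0.35, 0.50, 0.10, 0.60], [0.10, 0.15, 0.25, 0.08, 0.30]))
```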

After removing the effect sizes identified in the trim-and-fill analysis and modeling the studies at the study level, Weingarten et al. (2016) obtained an average effect size of d = 0.315 (95% CI [0.263, 0.368]; t(132) = 11.75, p < .001) from the fixed-effects model and d = 0.323 (95% CI [0.270, 0.376]; t(132) = 11.95, p < .001) from the random-effects model. Both analyses rejected the null hypothesis of homogeneity (Q(342) = 806.43, p < .001) and had a similar I² value of 57.59% (95% CI [54.08, 60.72]), indicating moderate to large heterogeneity. This new effect size had a Rosenthal (Rosenberg) failsafe number of 46,930 (31,623), which exceeds the 5k + 10 threshold and thus suggests that publication bias is unlikely to fully explain the meta-analytic findings (Rosenthal, 1991; Rosenthal & Rosnow, 2008).
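
For readers unfamiliar with the failsafe-number logic, the sketch below computes Rosenthal's fail-safe N and the conventional 5k + 10 tolerance threshold. It is a minimal illustration with hypothetical one-tailed p-values, not the exact procedure or data used in the meta-analysis.

```python
import numpy as np
from scipy import stats

def rosenthal_failsafe_n(p_values, alpha=0.05):
    """Rosenthal's fail-safe N: how many averaged-null studies would have to sit in
    file drawers before the combined one-tailed p-value rose above alpha."""
    z = stats.norm.isf(np.asarray(p_values, float))  # one-tailed p-values -> z-scores
    k = len(z)
    z_alpha = stats.norm.isf(alpha)                  # 1.645 for alpha = .05
    n_fs = (z.sum() ** 2) / (z_alpha ** 2) - k       # Rosenthal (1979)
    threshold = 5 * k + 10                           # conventional tolerance level
    return n_fs, threshold

# Hypothetical one-tailed p-values, for illustration only
print(rosenthal_failsafe_n([0.01, 0.03, 0.20, 0.004, 0.06]))
```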

Weingarten et al. (2016) likewise ruled out the possibility that the effect sizes emerged from two different distributions using a normal-quantile plot of the 343 individual effect sizes, which also checks for non-normality of the data (Wang & Bushman, 1998, 1999). The normal-quantile plot examines potential publication bias by showing whether the curve has any discontinuities around 0 (indicative of publication bias) or an S-shaped structure that might signal two underlying populations (Wang & Bushman, 1998). A Shapiro-Wilk test on the 343 data points yielded a marginally significant p-value (W = 0.992, p = .073), suggesting at most mild non-normality of the data, which could in principle reflect publication bias; the authors characterized this evidence as insufficient to explain their effect. The shape of the distribution did not suggest that the studies came from two populations (the plot was curved rather than S-shaped; Wang & Bushman, 1998).
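
This diagnostic can be reproduced in a few lines. The sketch below, which uses simulated effect sizes as a stand-in for the real data (the actual values are reported in Weingarten et al., 2016), draws a normal-quantile plot and runs a Shapiro-Wilk test.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Simulated stand-in for the 343 effect sizes, for illustration only
rng = np.random.default_rng(0)
effects = rng.normal(loc=0.3, scale=0.4, size=343)

w, p = stats.shapiro(effects)                        # Shapiro-Wilk normality test
print(f"W = {w:.3f}, p = {p:.3f}")

stats.probplot(effects, dist="norm", plot=plt)       # normal-quantile (Q-Q) plot
plt.title("Normal-quantile plot of effect sizes")
plt.show()
```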

Finally, p-curve analyses suggested that selective reporting could not explain the results of the set of studies from which Weingarten et al. (2016) drew their effect sizes (Simonsohn et al., 2014). They presented two sets of p-curve analyses (based on continuous tests): (a) a p-curve on all studies, using p-values for the researchers' focal hypotheses (Simonsohn et al., 2014), and (b) p-curves restricted to studies with the largest error degrees of freedom. The curves appear in Figure 1 and are based on the focal hypotheses of the original authors (often interaction effects rather than mere differences between prime and control conditions). When Weingarten et al. (2016) included all studies, published or unpublished, with clear hypotheses for behavioral measures (as outlined in the paper's p-curve disclosure table; see Panel A), they found no evidence of p-hacking (no left skew), but evidence of both a right skew and a curve flatter than expected under 33% power. The latter indicated that, on average, the studies in this meta-analysis lacked the statistical power to detect the effect of interest, although selective reporting alone cannot explain the entirety of the evidence. Weingarten et al. (2016) found the same pattern when they restricted the p-curve to studies in the top half of error degrees of freedom (see Panel B of Figure 1). However, when they restricted the p-curve to studies in the top third (see Panel C of Figure 1) or top quartile (see Panel D of Figure 1) of degrees of freedom, there was a clear right skew, further indicating that selective reporting alone cannot explain the study results.
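
To convey the intuition behind the p-curve test, the sketch below implements a simplified right-skew test in the spirit of Simonsohn et al. (2014) using Fisher's method. The published procedure is more elaborate; the function and inputs here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def pcurve_right_skew(p_values):
    """Simplified p-curve right-skew test: under a true null, significant p-values
    are uniform on (0, .05), so the pp-values p/.05 are uniform on (0, 1);
    Fisher's method then tests whether they pile up near zero (right skew)."""
    p = np.asarray(p_values, float)
    pp = p[p < 0.05] / 0.05                          # pp-values for significant results
    chi2 = -2.0 * np.log(pp).sum()                   # Fisher's combined statistic
    return stats.chi2.sf(chi2, df=2 * len(pp))       # small value => evidential value

# Hypothetical focal-test p-values, for illustration only
print(pcurve_right_skew([0.001, 0.012, 0.030, 0.049, 0.004, 0.200]))
```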

Figure 1. P-curve analyses: (A) all studies with clear behavioral hypotheses; (B) studies in the top half of error degrees of freedom; (C) top third; (D) top quartile.

The Role of Theoretically Meaningful Moderators

The replicability of priming effects, however, depends not only on statistical considerations but also on theoretically relevant features of priming studies. Is it possible that some failures to replicate are the result of researchers pursuing priming effects under moderating conditions that eliminate those effects, and thus should not be taken as evidence of the nonexistence of priming per se? This possibility is consistent with the observed heterogeneity of the pooled effect size, so Weingarten et al.'s (2016) meta-analysis also considered theoretical moderators of priming effects. As implied by the definition of goals as desirable end states (Lewin, 1943; see also Bargh et al., 2001; Kruglanski et al., 2002), conditions in which performance had high value (e.g., the goal was important to participants because of an accompanying reward) were associated with stronger priming effects than conditions of low value (for other effects of value, see Johnson & Eagly, 1989; see Barsalou, this issue). Second, as indicated by theories about goals (Zeigarnik, 1967; see also Marsh, Hicks, & Bink, 1998; Förster, Liberman, & Higgins, 2005), behavioral priming effects remained even in the absence of a satisfaction opportunity when performance was valuable. In contrast, behavioral priming effects decayed in the absence of a satisfaction opportunity when performance was limited in value. Contrary to much speculation, methodological factors such as subliminal vs. supraliminal presentation generally had no effect.
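
Moderator conclusions of this kind typically rest on subgroup or meta-regression tests. As a sketch of the general logic, and not of the specific models used by Weingarten et al. (2016), the Python function below computes a fixed-effects Q-between statistic comparing pooled effects across levels of a categorical moderator such as high vs. low value; the inputs are hypothetical.

```python
import numpy as np
from scipy import stats

def q_between(d, var, moderator):
    """Fixed-effects subgroup test: partition total heterogeneity into within- and
    between-group parts and test Q_between against a chi-square distribution."""
    d, var, moderator = np.asarray(d, float), np.asarray(var, float), np.asarray(moderator)
    w = 1.0 / var
    grand_mean = np.sum(w * d) / np.sum(w)
    q_total = np.sum(w * (d - grand_mean) ** 2)
    q_within = 0.0
    for level in np.unique(moderator):
        m = moderator == level
        group_mean = np.sum(w[m] * d[m]) / np.sum(w[m])
        q_within += np.sum(w[m] * (d[m] - group_mean) ** 2)
    q_b = q_total - q_within
    df = len(np.unique(moderator)) - 1
    return q_b, stats.chi2.sf(q_b, df)

# Hypothetical effects split by a high- vs. low-value moderator, for illustration only
print(q_between([0.50, 0.60, 0.10, 0.20], [0.04, 0.05, 0.04, 0.05],
                ["high", "high", "low", "low"]))
```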

As quoted above, priming has become the poster child for concerns about the replicability and veracity of behavioral science research: these concerns have attracted the attention of the popular press (e.g., Satel, 2013) and even of President Obama and his Council of Advisors on Science and Technology (Begley, 2014). Because many of the concerns raised cannot be addressed by single studies, but only through the analysis of publication bias in the context of a meta-analysis, Weingarten et al.'s (2016) meta-analysis represented the first empirical investigation of these concerns. Through this approach, the meta-analysis excluded publication bias and selective reporting of analyses as alternative explanations for the existence of priming effects. Furthermore, the p-curve techniques showed that the field is not plagued by widespread academic misconduct in the form of p-hacking, despite persistent conjectures to the contrary in academic and popular debates. In addition to clarifying these issues for the area of priming research, this approach may serve as a general model for addressing replicability concerns. Illuminating this pathway is an important, timely, and broad contribution: psychology is not alone when it comes to replicability failures, given that reproducibility challenges trouble virtually all scientific disciplines (Ioannidis, 2005), including medicine (Prinz et al., 2011), behavioral genetics (Sullivan, 2007), and neuroscience (Button et al., 2013), among others.

Highlights.

  • A meta-analysis revealed a small behavioral priming effect.

  • The effect was robust and had little publication bias.

  • More (vs. less) valued behavior or goal concepts showed stronger priming effects.

  • Opportunities for goal satisfaction appeared to decrease priming effects.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Contributor Information

Evan Weingarten, Marketing Department, University of Pennsylvania.

Qijia Chen, University of Pennsylvania.

Maxwell McAdams, University of Pennsylvania.

Jessica Yi, University of Pennsylvania.

Justin Hepler, Facebook.

Dolores Albarracin, Psychology Department, University of Illinois at Urbana-Champaign.

References

  1. Bargh JA. The four horsemen of automaticity: Awareness, efficiency, intention, and control in social cognition. In: Wyer RS Jr, Srull TK, editors. Handbook of social cognition. 2nd. Hillsdale, NJ: Erlbaum; 1994. pp. 1–40. [Google Scholar]
  2. Bargh JA, Chen M, Burrows L. Automaticity of social behavior: Direct effects of trait construct and stereotype priming on action. Journal of Personality and Social Psychology. 1996;71:230–244. doi: 10.1037//0022-3514.71.2.230. [DOI] [PubMed] [Google Scholar]
  3. Bargh JA, Gollwitzer PM, Lee-Chai AY, Barndollar K, Troetschel R. The automated will: Nonconscious activation and pursuit of behavioral goals. Journal of Personality and Social Psychology. 2001;81:1014–1027. [PMC free article] [PubMed] [Google Scholar]
  4. Bartlett T. Power of suggestion. Chronicle of Higher Education. 2013. http://chronicle.com/article/Power-of-Suggestion/136907/ (Retrieved 10/07/2014)
  5. Begg CB, Mazumdar M. Operating characteristics of a rank correlation test for publication bias. Biometrics. 1994;50:1088–1101. [PubMed] [Google Scholar]
  6. Begley S. U.S. science officials take aim at shoddy studies. Reuters. 2014 http://uk.reuters.com/article/2014/01/27/science-reproducibility-idUKL2N0KX18S20140127 (Retrieved 10/07/14)
  7. Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, Munafò MR. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience. 2013;14:365–376. doi: 10.1038/nrn3475. [DOI] [PubMed] [Google Scholar]
  8. Cooper H. Research synthesis and meta-analysis: A step by step approach (4th ed., Applied Social Research Methods Series, Vol. 2) Thousand Oaks, CA: Sage; 2010. [Google Scholar]
  9. Cooper HM, Hedges LV, editors. The handbook of research synthesis. New York: Russell Sage Foundation; 1994. [Google Scholar]
  10. Duval SJ, Tweedie RL. Practical estimates of the effect of publication bias in meta-analysis. Australasian Epidemiologist. 1998;5:14–17. [Google Scholar]
  11. Egger M, et al. Bias in meta-analysis detected by a simple, graphical test. British Medical Journal. 1997;315:629–634. doi: 10.1136/bmj.315.7109.629. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Förster J, Liberman N, Higgins ET. Accessibility from active and fulfilled goals. Journal of Experimental Social Psychology. 2005;41:220–239. [Google Scholar]
  13. Ioannidis JP. Why most published research findings are false. PLoS Medicine. 2005;2:e124. doi: 10.1371/journal.pmed.0020124. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Jefferis VE, Fazio RH. Accessibility as input: The use of construct accessibility as information to guide behavior. Journal of Experimental Social Psychology. 2008;44(4):1144–1150. http://dx.doi.org/10.1016/j.jesp.2008.02.002. [Google Scholar]
  15. Johnson BT, Eagly AH. The effects of involvement on persuasion: A meta-analysis. Psychological Bulletin. 1989;106:290–314. [Google Scholar]
  16. Johnson BT, Scott-Sheldon LAJ, Carey MP. Meta-synthesis of health behavior change meta-analyses. Am J Public Health. 2008;100:2193–2198. doi: 10.2105/AJPH.2008.155200. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Kahneman D. Personal communication. September 26, 2012. 2012 [Google Scholar]
  18. Keatley DA, Clarke DD, Ferguson E, Hagger MS. Effects of pretesting implicit self-determined motivation on behavioral engagement: evidence for the mere measurement effect at the implicit level. Frontiers in Psychology. 2014 doi: 10.3389/fpsyg.2014.00125. http://dx.doi.org/10.3389/fpsyg.2014.00125. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Legal J, Meyer T, Delouvee S. Effect of Compatibility Between Conscious Goal and Nonconscious Priming on Performance. Current Research in Social Psychology. 2007;12(6):80–90. [Google Scholar]
  20. Levesque CS. Dissertation. 1999. Automatic activation of intrinsic and extrinsic motivation. [Google Scholar]
  21. Lewin K. Defining the ‘Field at a Given Time’. Psychological Review. 1943;50:292–310. [Google Scholar]
  22. Kruglanski AW, Shah JY, Fishbach A, Friedman R, Chun WY, Sleeth-Keppler D. A theory of goal-systems. In: Zanna MP, editor. Advances in Experimental Social Psychology. Vol. 34. New York: Academic Press; 2002. pp. 331–378. [Google Scholar]
  23. Orwin RG. A fail-safe N for effect size in meta-analysis. Journal of Education Statistics. 1983;8:157–159. [Google Scholar]
  24. Light RJ, Pillemer DB. Summing up: The Science of Reviewing Research. Cambridge, Massachusetts: Harvard University Press; 1984. [Google Scholar]
  25. Macrae CN, Johnston L. Help, I need somebody: Automatic action and inaction. Social Cognition. 1998;16(4):400–417. http://dx.doi.org/10.1521/soco.1998.16.4.400. [Google Scholar]
  26. Marsh RL, Hicks JL, Bink ML. Activation of completed, uncompleted, and partially completed intentions. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1998;24:350–361. [Google Scholar]
  27. Oettingen G, Grant H, Smith PK, Skinner M, Gollwitzer PM. Nonconscious Goal pursuit: acting in an explanatory vacuum. Journal of Experimental Social Psychology. 2006;42:668–675. http://dx.doi.org/10.1016/j.jesp.2005.10.003. [Google Scholar]
  28. Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L, et al. Assessing publication bias in meta-analyses in the presence of between-study heterogeneity. J R Stat Soc A Stat Soc. 2010;173:575–591. [Google Scholar]
  29. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nature Reviews: Drug discovery. 2011;10:712–712. doi: 10.1038/nrd3439-c1. [DOI] [PubMed] [Google Scholar]
  30. Roehrich L. Dissertation. 1992. Priming the Pump: Activation of the Alcohol Expectancy Construct Increases Drinking Behavior. [Google Scholar]
  31. Rosenberg MS. The file-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta-analysis. Evolution. 2005;59(2):464–468. http://dx.doi.org/10.1111/j.0014-3820.2005.tb01004.x. [PubMed] [Google Scholar]
  32. Rosenthal R. The “file drawer problem” and tolerance for null results. Psychological Bulletin. 1979;86:638–641. [Google Scholar]
  33. Rosenthal R. Meta-Analytic Procedures for Social Research. 1991 [Google Scholar]
  34. Rosenthal R, Rosnow R. Essentials of Behavioral Research: Methods and Data Analysis. 3. McGraw-Hill; 2008. [Google Scholar]
  35. Rothstein HR, Sutton AJ, Borenstein M. Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. New York: John Wiley & Sons; 2005. [Google Scholar]
  36. Satel SL. Primed for controversy. New York Times. 2013 http://www.nytimes.com/2013/02/24/opinion/sunday/psychology-research-control.html?_r=2& (Retrieved 10/07/14)
  37. Sela A, Shiv B. Unraveling Priming: When Does the Same Prime Activate a Goal versus a Trait? Journal of Consumer Research. 2009;36 http://dx.doi.org/10.1086/598612. [Google Scholar]
  38. Simonsohn U, Nelson LD, Simmons JP. P-curve: A Key to the File Drawer. Journal of Experimental Psychology: General. 2014 doi: 10.1037/a0033242. http://dx.doi.org/10.1037/a0033242. [DOI] [PubMed] [Google Scholar]
  39. Sullivan PF. Spurious genetic associations. Biological Psychiatry. 2007;61:1121–1126. doi: 10.1016/j.biopsych.2006.11.010. [DOI] [PubMed] [Google Scholar]
  40. Wang MC, Bushman BJ. Using the Normal Quantile Plot to Explore Meta-Analytic Data Sets. Psychological Methods. 1998;3(1):46–54. http://dx.doi.org/10.1037/1082-989X.3.1.46. [Google Scholar]
  41. Wang M, Bushman B. SAS Publishing; 1999. Integrating Results through Meta-Analytic Review Using SAS Software. [Google Scholar]
  42. Weingarten E, Chen Q, McAdams M, Yi J, Hepler J, Albarracin D. From primed concepts to action: A meta-analysis of the behavioral effects of incidentally-presented words. Psychological Bulletin. 2016 doi: 10.1037/bul0000030. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Zeigarnik B. On finished and unfinished tasks. In: Ellis WD, editor. A sourcebook of Gestalt psychology. New York, NY: Humanities Press; 1967. pp. 300–314. [Google Scholar]
