Frontiers in Psychology
. 2014 Apr 10;5:298. doi: 10.3389/fpsyg.2014.00298

It is premature to regard the ego-depletion effect as “Too Incredible”

Martin S Hagger 1,*, Nikos L D Chatzisarantis 1
PMCID: PMC3989757  PMID: 24782802

The “strength” model conceptualizes self-control as a limited resource (Baumeister et al., 1998). Individuals are able to exert self-control, but only for a limited period, after which capacity declines, leading to impaired performance on subsequent self-control tasks; a state known as ego-depletion. The model has generated a sizable literature confirming the ego-depletion effect in multiple spheres. Our meta-analysis of published ego-depletion studies computed a medium-sized effect (d = 0.62) across 198 tests (Hagger et al., 2010).

Carter and McCullough (2013) recently applied analyses aimed at testing for publication bias to our data, including Schimmack's (2012) incredibility index and two regression techniques (Egger et al., 1997; Moreno et al., 2009). The regression analyses indicated that the ego-depletion effect was substantially smaller than reported in our analysis, and may even be zero; the incredibility index indicated that statistical power was low and that the chance of observing so many significant effects was therefore improbable. They concluded that the ego-depletion effect is subject to considerable publication bias and questioned whether it is a “real” effect at all. We replicated these analyses and found similar results. We have made our analyses available to download from the open-access Dryad Digital Repository (Hagger and Chatzisarantis, 2014).
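For readers unfamiliar with the regression approach, its logic can be sketched in a few lines. The following is our own minimal illustration, not Carter and McCullough's analysis code: the function name and the simulated data are assumptions for demonstration only. In Egger-style regression, the standardized effect (effect size divided by its standard error) is regressed on study precision (the reciprocal of the standard error); an intercept that departs significantly from zero signals funnel-plot asymmetry, i.e., a “small study” effect, while the slope estimates the underlying effect.

```python
# Illustrative sketch of an Egger-style regression test for small-study
# effects. Names and simulated data are hypothetical, not from the article.
import numpy as np
from scipy import stats

def egger_test(effects, standard_errors):
    """Regress standardized effects (d / SE) on precision (1 / SE).

    Returns (intercept, slope): a non-zero intercept suggests funnel-plot
    asymmetry (Egger et al., 1997); the slope estimates the effect size.
    """
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(standard_errors, dtype=float)
    z = effects / se           # standardized effects
    precision = 1.0 / se       # study precision
    result = stats.linregress(precision, z)
    return result.intercept, result.slope

if __name__ == "__main__":
    # Simulate 50 unbiased studies with a true effect of d = 0.6:
    # the intercept should then hover near zero.
    rng = np.random.default_rng(0)
    se = rng.uniform(0.1, 0.5, size=50)
    d = rng.normal(0.6, se)
    print(egger_test(d, se))
```

Because the simulated literature above contains no selective publication, the intercept stays close to zero; rerunning the sketch after discarding small studies with non-significant results would push the intercept away from zero, which is the asymmetry the test detects.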

We thank Carter and McCullough (2013) for raising the issue of bias, and we take this opportunity to present some alternative conclusions to the ones they presented. We agree that journal editors should be more judicious in demanding bias tests in meta-analyses, but believe that this recommendation alone does not resolve the problem of interpreting the bias. An important addendum to the regression analyses is that the bias detected by a significant regression line cannot be definitively attributed to publication bias. Sterne et al. (2000, 2001) suggest that such bias could be attributed to a number of possible sources. Instead, they use the term “small study” effect: the tendency for smaller studies to report larger effect sizes. One possible source is publication bias: journals tending to favor the publication of small studies with statistically significant results and disproportionately large effect sizes. However, the findings may also be due to methodological inadequacies or to true heterogeneity in the effect. A definitive way to resolve the nature of the bias detected by these methods (i.e., whether a “small study effect” is caused by publication bias or by some other source) would be to demand that authors conducting meta-analyses be diligent in the pursuit of “fugitive literature”: unpublished studies with null findings, or findings that conflict with the commonly-accepted paradigm, which Rosenthal (1994) eloquently predicted would reside in the “file drawers” of researchers who could not get them published. In the case of ego-depletion, a unique contribution would be to identify unpublished studies, including those with null or negative effects, alongside studies that have since been published, and recalculate the meta-analytic effect size.
Such an undertaking would not only yield a more robust effect size ostensibly independent of publication bias but also be informative as to whether the “small study effect” detected in the analyses was due to publication bias, other forms of bias, or true heterogeneity. We encourage researchers to make their replications of ego-depletion studies freely available to aid future meta-analyses.

We would also like to express concerns regarding Carter and McCullough's prediction, based on their regression analyses, that the ego-depletion effect may be zero. This prediction was based on the intercept of the regression of the ego-depletion effect size on precision. However, if the true ego-depletion effect size were zero or close to it, one would expect the effect sizes in the literature to be randomly distributed about zero, in both positive and negative directions. If this is the case, then where are those negative findings? Very few ego-depletion experiments have found opposite effects (i.e., an improvement in second-task performance after engaging in an initial self-control task), let alone null effects. Given the intensity of research in this field, would it not be reasonable to expect the negative findings to have been published? The absence of these effects undermines both the claim that the true effect is zero and the credibility of the analysis.
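The logic of this argument can be made concrete with a simple simulation (our own illustration, not part of the original analyses; the standard-error range is an assumption): if the true effect were zero, sampling error alone would scatter observed effect sizes symmetrically about zero, so roughly half of all published point estimates should be negative.

```python
# Hypothetical illustration: what a literature of 198 tests would look like
# if the true ego-depletion effect were exactly zero.
import numpy as np

rng = np.random.default_rng(42)
# Assumed per-study standard errors; the count matches our meta-analytic
# database, but the values themselves are illustrative.
se = rng.uniform(0.15, 0.45, size=198)
d_hat = rng.normal(0.0, se)              # sampling error around a true d = 0
prop_negative = float(np.mean(d_hat < 0.0))
print(f"Proportion of negative observed effects: {prop_negative:.2f}")
# Roughly half of the estimates come out negative under a true null.
```

Under a true null, a literature of this size should contain on the order of a hundred negative point estimates; their near-total absence from the published ego-depletion literature is the anomaly described above.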

It could be argued that such negative effects might not have been published because their interpretation contradicts commonly-accepted theory, and that they may lie in the file drawers of the researchers who found them. However, we think such findings would likely have seen the light of day in journals precisely because they contradict the strength model and support alternative hypotheses consistent with other theories, such as adaptation or learned industriousness (Converse and DeShon, 2009). For example, one could pose the alternative hypothesis that improvement in self-control performance in ego-depletion experiments reflects a learned, transferable capacity to self-regulate. In that case one would expect statistically significant improvements in performance on a subsequent self-control task after engaging in an initial task that taxes self-control. Of course, we would have to assume that researchers were sufficiently virtuous not to turn their null results into supportive evidence through selective reporting (Francis, 2014). We contend that if the predicted effect size for ego-depletion is zero, then negative effects should be present in this literature, and we would expect such effects to be published given their pivotal role in testing alternative hypotheses based on other theories.

As a final point, while we thank Carter and McCullough for raising a notable question regarding the existence of bias in the ego-depletion meta-analysis, their analysis tells us little about its source and does not acknowledge that other effects in published meta-analyses are subject to similar bias. We think it is important to view and interpret the bias found for ego-depletion using these methods in context. For example, how does the small study effect found for ego-depletion compare with the incidence of bias in the discipline of social psychology as a whole? A useful future endeavor would be to systematically identify meta-analyses published in social psychology over a substantial period, subject each to the bias-identification analyses, and comment on the extent of the bias within the discipline.

Author contributions

Martin S. Hagger and Nikos L. D. Chatzisarantis conceived the ideas presented in the article and drafted the article.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary material

The Supplementary Material for this article can be found online at: http://www.frontiersin.org/journal/10.3389/fpsyg.2014.00298/abstract

Figure S1

Funnel plot of the ego-depletion effect size (standardized mean difference, Cohen's d) against the study precision (1/standard error).

Figure S2

Funnel plot of the ego-depletion effect size (standardized mean difference, Cohen's d) against the standard error of the effect size.

Acknowledgments

Martin S. Hagger was supported by Australian Research Council Discovery Project Grant DP130103277. The authors would like to thank members of the Laboratory of Self-Regulation, Health Psychology and Behavioural Medicine Research Group, School of Psychology and Speech Pathology at Curtin University for their comments on an earlier draft of this article.

References

  1. Baumeister R. F., Bratslavsky E., Muraven M., Tice D. M. (1998). Ego depletion: is the active self a limited resource? J. Pers. Soc. Psychol. 74, 1252–1265. doi: 10.1037/0022-3514.74.5.1252
  2. Carter E. C., McCullough M. E. (2013). Is ego depletion too incredible? Evidence for the overestimation of the depletion effect. Behav. Brain Sci. 36, 683–684. doi: 10.1017/S0140525X13000952
  3. Converse P. D., DeShon R. P. (2009). A tale of two tasks: reversing the self-regulatory resource depletion effect. J. Appl. Psychol. 94, 1318–1324. doi: 10.1037/a0014604
  4. Egger M., Davey Smith G., Schneider M., Minder C. (1997). Bias in meta-analysis detected by a simple, graphical test. Br. Med. J. 315, 629–634. doi: 10.1136/bmj.315.7109.629
  5. Francis G. (2014). The frequency of excess success for articles in Psychological Science. Psychon. Bull. Rev. [Epub ahead of print]. doi: 10.3758/s13423-014-0601-x
  6. Hagger M. S., Chatzisarantis N. L. D. (2014). Data from: it is premature to regard the ego-depletion effect as “too incredible.” Dryad Digital Repository. doi: 10.5061/dryad.23j8n. Available online at: http://datadryad.org/resource/doi:10.5061/dryad.23j8n
  7. Hagger M. S., Wood C., Stiff C., Chatzisarantis N. L. D. (2010). Ego depletion and the strength model of self-control: a meta-analysis. Psychol. Bull. 136, 495–525. doi: 10.1037/a0019486
  8. Moreno S., Sutton A., Ades A., Stanley T., Abrams K., Peters J., et al. (2009). Assessment of regression-based methods to adjust for publication bias through a comprehensive simulation study. BMC Med. Res. Methodol. 9:2. doi: 10.1186/1471-2288-9-2
  9. Rosenthal M. C. (1994). “The fugitive literature,” in The Handbook of Research Synthesis, eds Cooper H., Hedges L. V. (New York, NY: Russell Sage Foundation), 85–94
  10. Schimmack U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychol. Methods 17, 551–566. doi: 10.1037/a0029487
  11. Sterne J. A., Gavaghan D., Egger M. (2000). Publication and related bias in meta-analysis: power of statistical tests and prevalence in the literature. J. Clin. Epidemiol. 53, 1119–1129. doi: 10.1016/S0895-4356(00)00242-0
  12. Sterne J. A. C., Egger M., Davey Smith G. (2001). Investigating and dealing with publication and other biases in meta-analysis. BMJ 323:101. doi: 10.1136/bmj.323.7304.101
