Author manuscript; published in final edited form as: Health Psychol Rev. 2014 Apr 9;9(1):21–24. doi: 10.1080/17437199.2014.900722

Toward Healthy Theorizing about Health Behaviours in the Maze of Messy Reality: A Reaction to Peters, de Bruin, and Crutzen

Blair T. Johnson 1, Susan Michie 2
PMCID: PMC4372132  NIHMSID: NIHMS588796  PMID: 25793486

Peters, de Bruin, and Crutzen (2013) (1) concluded that past research syntheses have often neglected the finer points of behavioural theories, yielding dubious conclusions and ineffective health promotion efforts, and (2) offered an iterative protocol for evidence base accumulation (IPEBA) as a way to blend work from theories, experiments, and meta-analyses of experiments. In the spirit of science, we heartily agree that methods should be improved whenever it is feasible to do so, especially when the outcomes can improve public health. That said, we believe that any instantiation of IPEBA needs to account for several stubborn particulars of an all-too-often messy research reality:

  1. Pressure to do something, anything, to help. Health promotion interventions are often—and perhaps usually—confounded, throwing in “everything except the kitchen sink” as intervention content. For some behaviours, life itself is at stake for the targeted population—such as HIV prevention at times or in places where no drug therapy is available. These trials may not be the clean tests of theories that IPEBA envisions, but their results may be of considerable practical value to populations with a high need for health promotion. With enough varied replications of such trials, meta-analyses may indeed tease out which behaviour change techniques (BCTs) drive successful behaviour change (a moderator logic we sketch after this list); robustly identified BCTs can then inform the development of briefer, more efficient interventions, a greater understanding of mechanisms of action and, in turn, improved theories.

  2. What is the best theory? Taking a single theoretical approach to an intervention is no guarantee that it is the right approach to the problem, yet scholars who describe their work as “theory-driven” often act as though they have donned a cloak of theoretical invincibility; we should remain open to the possibility that a cherished theory does not quite do the job. To advance theory, it is necessary both to identify deficiencies in current models and theories and to provide guidance for future empirical tests (e.g., DiClemente, Crosby, & Kegler, 2009). For example, the scholars proposing the network-individual-resource model of HIV prevention focused on clear deficits in prominent health promotion models, such as insufficient attention to risk-related resources that individuals or their networks may possess (Johnson, Redding, DiClemente, Dodge, Mustanski, et al., 2010, p. S215). Another problem arises when investigators state that an intervention is based on a particular model yet actually follow another (as a careful inspection of their methods may reveal). Similarly, investigators may use elicitation research to shape the themes of an intervention yet not report how that research changed the details of intervention content beyond what was already planned on the basis of the pre-selected theory. Improved reporting standards and archiving of intervention content would help to rectify this situation over time. Thus, an IPEBA framework should thoroughly consider competing theories and address how future research can produce critical tests between them.

  3. Encouraging theories and tests that can accommodate big science. No matter how nuanced theories are, all of them make assumptions. Understandably, nearly all assume—tacitly or explicitly—that targeted individuals are living biological organisms subject to environmental conditions. Yet this assumption can have huge ramifications for the success of trials: after participants complete an intervention, they return to environments that vary greatly in the extent to which they support the goals of the intervention (Johnson, Redding, et al., 2010). It is quite plausible, therefore, that interventions will fail at longer follow-up intervals for vulnerable individuals living in difficult environments, such as adolescent minorities in communities with high levels of prejudice and discrimination (e.g., Reid, Dovidio, Ballester, & Johnson, in press). Such factors also directly influence health outcomes for sexual minorities (e.g., Hatzenbuehler, 2009). We should likewise consider that trials conducted in developed countries may well operate differently when replicated in developing countries. Thus, an IPEBA framework should be open to the theoretical insights that allied fields such as biology, sociology, and economics can offer.

  4. Healthy scepticism about the trappings of controlled trials. Peters et al. (2013) emphasise controlled trials of BCTs—which are a form of efficacy evidence (Flay, 1986)—but other categories of evidence are highly important to public health, especially effectiveness evidence: ultimately, the theory-related work we do in efficacy trials should lead to better public health outcomes in the community, and it is those community outcomes that constitute effectiveness evidence. As Flay argued, the infrastructure supporting controlled trials may itself be a factor in generating efficacy results, which complicates replication in community settings. Specifically, controlled trials offer incentives for participation (e.g., monetary sums), trained interventionists, professional settings, participant trackers, and other elements that may be far beyond the resources of community-level interventionists, who may not even be able to evaluate or report measures of the success of their intervention. Thus, if it is difficult to test theories with efficacy data,1 it is harder still to predict whether the results of these trials will hold up once they are diffused into the community. Similarly, as much as we are aficionados of theories to promote health, we must also recognise that methodological features of trials deserve scrutiny. Scholars are more likely to trust evidence from higher-quality trials, RCTs and the like, but it is not necessarily the case that lower-quality trials yield different results. Indeed, some meta-analyses have taken exactly this strategy, trusting the repeated-measures effects that appear in both controlled and uncontrolled trials and de-emphasising the comparison between intervention and control, recognising that control groups often receive active intervention content (e.g., Albarracín et al., 2005; de Bruin et al., 2010; Ferrer et al., 2011; Lennon, Huedo-Medina, Gerwien, & Johnson, 2012). By including such evidence, meta-analyses can explicitly examine whether the results differ by design (see the sketch following this list), and they will benefit from a far larger database. As meta-analysts increasingly examine whether methodological quality matters in relation to health promotion success, true gold standards of methodological quality ought to emerge (see Johnson & Low, 2013). In short, an IPEBA protocol should take account of the complexities that community settings and methodological realities impose.
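
To make the moderator logic in points 1 and 4 concrete, the sketch below (in Python) shows one way such a meta-analytic check could look. It is a minimal illustration only: the eight study effect sizes, the 'goal setting' BCT indicator, and the design indicator are hypothetical and are not drawn from any of the meta-analyses cited above, and the estimator (a DerSimonian-Laird between-study variance estimate followed by a simple weighted meta-regression) is a simplification of the fuller machinery a published synthesis would normally use.

# Minimal, hypothetical sketch: pool within-group (pre-post) effect sizes from
# both controlled and uncontrolled trials, then ask whether a coded BCT and the
# trial design moderate those effects. All numbers below are invented for
# illustration; they do not come from any published meta-analysis.
import math
import numpy as np

# Hypothetical per-study data: standardised mean change (d), its variance,
# whether the intervention included a given BCT, and whether the trial had a
# control group.
d = np.array([0.42, 0.15, 0.58, 0.10, 0.33, 0.51, 0.07, 0.45])
var_d = np.array([0.020, 0.015, 0.030, 0.010, 0.025, 0.018, 0.012, 0.022])
has_bct = np.array([1, 0, 1, 0, 1, 1, 0, 1])      # e.g., 'goal setting' present/absent
controlled = np.array([1, 1, 0, 1, 0, 1, 1, 0])   # 1 = controlled trial, 0 = uncontrolled

def dersimonian_laird_tau2(y, v):
    """Method-of-moments between-study variance from an intercept-only model."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

tau2 = dersimonian_laird_tau2(d, var_d)
w = 1.0 / (var_d + tau2)                          # random-effects weights

# Weighted meta-regression: effect size ~ intercept + BCT + design.
X = np.column_stack([np.ones_like(d), has_bct, controlled])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
cov = np.linalg.inv(X.T @ W @ X)                  # approximate covariance of the coefficients
se = np.sqrt(np.diag(cov))
z = beta / se
p = [math.erfc(abs(zi) / math.sqrt(2)) for zi in z]  # two-tailed normal-approximation p values

for name, b, s, zi, pi in zip(["intercept", "BCT present", "controlled design"],
                              beta, se, z, p):
    print(f"{name:18s} b = {b:+.3f}  SE = {s:.3f}  z = {zi:+.2f}  p = {pi:.3f}")

The design indicator illustrates the strategy described in point 4: rather than excluding uncontrolled trials a priori, an analyst can retain their within-group effects and test empirically whether they differ from those of controlled trials. The BCT indicator likewise shows how the question raised in point 1, namely which techniques drive successful change, can be posed as a moderator test.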

In conclusion, the maze of messy reality in health promotion research presents challenges both for the scholarly enterprise and for providing clear public health policy guidance. Nonetheless, despite Peters et al.’s (2013) statement that “a cumulative science of behaviour change can develop” (e.g., p. 1, emphasis added), we assert that one is already developing. Despite all the challenges that Peters et al. and we identify (see also Michie & Johnson, 2013), regularities in research findings have been documented. Science, like life, is a messy business. To advance science rapidly and efficiently, it is necessary to recognise strengths as well as weaknesses in what has gone before and to build on what we have on the ground, whilst keeping an eye on the stars.

Acknowledgments

The preparation of this article was facilitated by United States Public Health Service grant R01-MH58563 to Blair T. Johnson.

Footnotes

1. Although we concur with Peters et al. (2013) that there are significant challenges in testing theories appropriately, we assert that there are many ways to evaluate the efficacy of fear appeals without resorting to numerous dummy-coded contrasts.

Contributor Information

Blair T. Johnson, University of Connecticut.

Susan Michie, University College London.

References

  1. Albarracín D, Gillette JC, Earl AN, Glasman LR, Durantini MR, Ho MH. A test of major assumptions about behavior change: A comprehensive look at the effects of passive and active HIV-prevention interventions since the beginning of the epidemic. Psychological Bulletin. 2005;131(6):856–897. doi: 10.1037/0033-2909.131.6.856.
  2. De Bruin M, Viechtbauer W, Schaalma HP, Kok G, Abraham C, Hospers HJ. Standard care impact on effects of highly active antiretroviral therapy adherence interventions: A meta-analysis of randomized controlled trials. Archives of Internal Medicine. 2010;170:240–250. doi: 10.1001/archinternmed.2009.536.
  3. DiClemente RJ, Crosby RA, Kegler M, editors. Emerging theories in health promotion practice and research. New York: Wiley; 2009.
  4. Flay BR. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine. 1986;15(5):451–474. doi: 10.1016/0091-7435(86)90024-1.
  5. Hatzenbuehler ML. How does sexual minority stigma “get under the skin”? A psychological mediation framework. Psychological Bulletin. 2009;135(5):707. doi: 10.1037/a0016441.
  6. Johnson BT, Low RE. Panning for the gold in health research: Incorporating studies’ methodological quality in meta-analysis. Manuscript submitted for publication; 2013.
  7. Johnson BT, Redding CA, DiClemente RJ, Dodge BM, Mustanski BS, Sheeran P, Warren MR, Zimmerman RS, Fisher WA, Conner MT, Carey MP, Fisher JD, Stall RD, Fishbein M. A Network-Individual-Resource model for HIV prevention. AIDS and Behavior. 2010;14(Suppl 2):204–221. doi: 10.1007/s10461-010-9803-z.
  8. Johnson BT, Scott-Sheldon LAJ, Carey MP. Meta-synthesis of health behavior change meta-analyses. American Journal of Public Health. 2010;100:2193–2198. doi: 10.2105/AJPH.2008.155200.
  9. Lennon CA, Huedo-Medina TB, Gerwien DP, Johnson BT. A role for depression in sexual risk reduction for women? A meta-analysis of HIV prevention trials with depression outcomes. Social Science & Medicine. 2012;75(4):688–698. doi: 10.1016/j.socscimed.2012.01.016.
  10. Michie S, Johnson BT. On the complexities of health promotion research involving behaviour change techniques: A comment on Peters, de Bruin, and Crutzen (2013). Manuscript submitted for publication; 2013.
  11. Peters GJY, de Bruin M, Crutzen R. Everything should be as simple as possible, but no simpler: Towards a protocol for accumulating evidence regarding the active content of health behaviour change interventions. Health Psychology Review. 2013:1–14 (ahead of print). doi: 10.1080/17437199.2013.848409.
  12. Reid AE, Dovidio JF, Ballester E, Johnson BT. HIV prevention interventions to reduce sexual risk for African Americans: The influence of community-level stigma and psychological processes. Social Science & Medicine (in press). doi: 10.1016/j.socscimed.2013.06.028.
