J Clin Epidemiol. 2014 Aug;67(8):845–849. doi: 10.1016/j.jclinepi.2014.03.002

Research participation effects: a skeleton in the methodological cupboard

Jim McCambridge, Kypros Kypri, Diana Elbourne
PMCID: PMC4236591  PMID: 24766858

Abstract

Objective

For a century, there have been concerns about the impacts of various aspects of taking part in research studies. These concerns have not, however, been sufficiently well conceptualized to form traditions of study capable of defining and elaborating the nature of the problems. In this article, we present a new way of thinking about a set of issues that has long attracted attention.

Study Design and Setting

We briefly review existing concepts and empirical work on well-known biases in surveys and cohort studies and propose that they are connected.

Results

We offer the construct of “research participation effects” (RPE) as a vehicle for advancing multi-disciplinary understanding of biases. Empirical studies are needed to identify conditions in which RPE may be sufficiently large to warrant modifications of study design, analytic methods, or interpretation. We consider the value of adopting a more participant-centred view of the research process as a way of thinking about these issues, which may also have benefits in relation to research methodology more broadly.

Conclusion

Researchers may too readily overlook that research studies are unusual contexts and that people may react in unexpected ways to what we invite them to do, introducing a range of biases.

Keywords: Research participation, Bias, Research methods, Hawthorne effect, Research assessment, Mixed methods, Surveys, Cohort studies


What is new?

  • “Research participation effects” offer a new way of thinking about poorly understood sources of bias in surveys and cohort studies, and also in trials.

  • Research studies are unusual contexts, and people may react in unexpected ways to what we invite them to do.

  • Adopting the perspective of the participant suggests that existing well-known sources of bias may be connected to each other.

  • Mixed methods participant-centred research may lead to better prevention of bias.

The construct of “research participation effects” (RPE) has been proposed to better guide empirical investigation of issues previously conceptualized as the Hawthorne effect [1]. We have also elaborated overlooked implications for behavioral intervention trials, identifying mechanisms by which bias may be introduced that randomization does not prevent [2]. This discussion considers the wider implications of RPE for thinking about bias, particularly in relation to existing thinking about bias in surveys and cohort studies.

New ways of understanding biases provide platforms for important advances in research design and methods. For example, Solomon [3] identified that the discovery of “pre-test sensitisation”, whereby measuring individual psychology or behavior at one point in time biases later measurement of the same characteristics, led to the introduction of control groups in the behavioral sciences. Chalmers [4] identified allocation concealment to prevent selection bias as the primary motivation for the use of randomization in the original streptomycin trial, and has suggested that addressing biases resulting from patient preferences may provide the next historical milestone in the development of trials methodology. Just as patients may prefer allocation to one arm of a clinical trial over another, people may react to whatever it is they are requested to do in the context of research. These reactions have the potential to affect study outcomes in ways that undermine the validity of the inferences the research was designed to permit.
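
To make the logic of pre-test sensitization concrete, consider the four-group design that grew out of Solomon's work [3]. The following is a minimal simulation sketch in Python; all effect sizes, the baseline mean, and the group size are invented for illustration, not drawn from any study. It shows how crossing pretesting with treatment lets the two effects be separated.

    # A minimal sketch (invented numbers throughout) of how the Solomon
    # four-group design separates pre-test sensitization from the
    # treatment effect by crossing pretesting with treatment.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000  # participants per group (hypothetical)

    TREATMENT_EFFECT = 2.0  # assumed true intervention effect
    PRETEST_EFFECT = 0.5    # assumed sensitization from being pretested

    def posttest(treated, pretested):
        """Simulate posttest scores for one of the four groups."""
        return (10.0  # baseline mean
                + TREATMENT_EFFECT * treated
                + PRETEST_EFFECT * pretested
                + rng.normal(0.0, 3.0, n))

    means = {(t, p): posttest(t, p).mean()
             for t in (0, 1) for p in (0, 1)}

    # Averaging over the rows and columns of the 2 x 2 design isolates
    # each effect: treated vs. untreated, and pretested vs. unpretested.
    treatment_est = (means[1, 1] + means[1, 0] - means[0, 1] - means[0, 0]) / 2
    pretest_est = (means[1, 1] + means[0, 1] - means[1, 0] - means[0, 0]) / 2

    print(f"estimated treatment effect: {treatment_est:.2f}")    # ~2.0
    print(f"estimated sensitization effect: {pretest_est:.2f}")  # ~0.5

A pretest-by-treatment interaction term could be added in the same spirit to examine whether assessment amplifies or dampens the intervention itself.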

A few years after the Hawthorne effect made its debut in the scientific literature [5], the concept of “demand characteristics” was introduced to psychology [6]. This referred to the ways in which study participants responded to their perceptions of the implicit preferences of researchers, tailoring their responses so as to be good subjects. Like the Hawthorne effect, this construct, although well known, has contributed disappointingly little to the methodological literature [7]. The unintended effects of research assessments have also received attention outside the Hawthorne effect framing. Randomized evaluations of such assessment effects often show small effects, though findings are inconsistent [8], [9], [10], [11], [12].

Change due to having been assessed, holding views about the desirability of different possible research requirements, and deliberately or unwittingly trying to satisfy researchers are all consequences of research participation. The interaction of the research participant with the research process is discernible as a common thread running through these examples. The consequences of research participation may vary in strength across study designs, participants, topic areas, and the contexts in which research is done, and according to more specific features of the studies themselves.

1. Well-established biases in surveys and cohort studies

Ensuring adequate response rates, that is securing participation itself, is widely established as a key issue in survey design [13]. Evidence has accumulated over decades on how to do this [14], and in a context of falling response rates there has been extensive research on the implications of non-response for the estimation of prevalence and other parameters of interest in general household surveys [13]. There has also been much study of reporting errors made by participants in surveys, which draws attention to the sensitivity of the particular behavior or issue being enquired about [15]. This literature also distinguishes between task-related errors that are technical products of survey design, and motivated responses, for example, in the form of self-deception and impression management [16]. Thus in surveys, biases associated with research participation apply both to the decision to take part and to the accuracy of information provided. These biases may be conceptualized in many ways and often are thought about differently across disciplines and over time [17].

In a prospective cohort or longitudinal study [18], repeated data collection permits consequences of research participation to manifest themselves in altered behavior, cognitions, or emotions [12]. As Solomon [3] described, it is possible for inferences about data collected at one time point to be biased simply because of earlier data collection. This complication is more likely to occur, and is more likely to be problematic, in certain circumstances (see below). Some outcomes cannot be influenced by reactivity to evaluation, for example, where data collection is unobtrusive [19].

Asking someone how often they ride a bicycle may increase cycling in some circumstances and not others. It can only do so if the causal pathway to this outcome involves behavior that can be modified by this procedure [20]. For example, if a study participant owns a bicycle and is asked about their cycling behavior or views about cycling in a cohort study of health and lifestyle, they might think further about cycling, and might cycle more frequently as a result. This would artificially inflate levels of cycling in the cohort. If the study participant does not have access to a bicycle, this is less likely to occur unless they first acquire the means to start cycling. Asking about cycling in a different context may also reduce the likelihood of this occurring. The psychological processes involved are not important here; the point is that the more such effects occur, the more they may undermine the objectives of the study by introducing bias.
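
A minimal simulation sketch makes this arithmetic concrete. All numbers below (the ownership rate, cycling rate, and reactivity probability) are invented for illustration, not estimates from any study; the point is only that a modest question-behavior effect confined to bicycle owners inflates the cohort's mean.

    # A minimal sketch (invented numbers throughout) of how assessment
    # reactivity among bicycle owners could inflate a cohort's estimate
    # of cycling frequency.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000  # hypothetical cohort size

    owns_bike = rng.random(n) < 0.4            # assume 40% own a bicycle
    true_rides = rng.poisson(2.0 * owns_bike)  # true rides/week; 0 for non-owners

    # Assume being asked about cycling prompts 15% of owners to add one
    # ride per week; non-owners, lacking the means, are unaffected.
    reacts = owns_bike & (rng.random(n) < 0.15)
    observed_rides = true_rides + reacts

    print(f"true mean rides/week:     {true_rides.mean():.3f}")      # ~0.80
    print(f"observed mean rides/week: {observed_rides.mean():.3f}")  # ~0.86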

This problem may not emanate only from the content of data collection. Participants may have read the consent form carefully and thought about their health and lifestyle before deciding whether or not to take part. A cohort study is thus vulnerable, at both study entry and follow-up, to the reporting and participation problems previously described for cross-sectional surveys. Additionally, actual change in the behavior being investigated may have been induced. Change in the object of the evaluation that is influenced by any aspect of research participation entails bias, regardless of how it has been produced. This is so unless one assumes that such influences do not vary over time with repeated measurements, an assumption that is seldom likely to be safe. Randomized controlled trials are cohort studies with randomization, and as such are vulnerable to the previously described problems and to additional ones associated with randomization [2]. Problems in making valid inferences from research data thus afflict all study designs. Most of these problems, though not all, are very well known. What is novel about this presentation is the suggestion that they are linked, and by extension that conceptualizing them as RPE may lead to better understanding of methodological problems.
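
Why constancy of such influences matters can be sketched in a few lines (the values are invented for illustration): reactivity that is equal at every wave cancels out of a before-after comparison, whereas reactivity that fades between waves registers as spurious change.

    # A minimal sketch (invented numbers): constant reactivity cancels out
    # of a before-after comparison, but reactivity that varies across
    # measurement waves produces spurious change.
    true_score = 5.0                           # stable underlying behavior
    reactivity = {"wave1": 0.6, "wave2": 0.2}  # assumed fading reactivity

    observed_change = ((true_score + reactivity["wave2"])
                       - (true_score + reactivity["wave1"]))
    print(f"estimated change: {observed_change:+.1f}")  # -0.4, despite no real change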

2. A research participant-centred perspective

Different types of studies make different requests of, and place different demands on, their participants. There is nonetheless a core sequence of early events involving both recruitment and baseline assessment phases, as presented in Fig. 1 for a typical individually randomized trial. We have found this a useful vehicle for thinking through the potential for RPE. For those who continue to participate over time, our lack of attention to the possible impact of the research process might imply that it is inert [12], and perhaps also that participants are somehow passive in this sequence. Fig. 1 provides a brief description of what we usually do to or with the people who become our research participants, and in what order. It offers no information on participant characteristics or on how or why they may matter to RPE. We suggest there is a prima facie case that reasons for participation, severity of problems or views about the issue being investigated, susceptibility to social desirability or monitoring effects, and readiness for change can all have a bearing on whether any part of this process impacts on participants. These intrapersonal features might be expected to engage dynamically with the interpersonal process through which research participation is enacted. Research questions might address any of these targets for study.

Fig. 1. The research process. Cross-sectional surveys end with baseline assessment; cohort studies add only follow-up assessment(s); RCTs additionally involve randomization to study conditions, as described previously. RCT, randomized controlled trial.

Adopting a more participant-centred view of the research process [21] might begin by considering the nature of the decision making involved in taking part in research [22]. Altruism has long been considered the primary reason why people take part in most types of research [23]. Having no interest in the implications for oneself would appear to make RPE less likely, perhaps unless the research provides an unexpected stimulus for more personal introspection. More recent thinking has pointed toward more qualified versions of altruism, termed weak [24] or conditional [25] altruism, whereby an evaluation of the implications for oneself accompanies the motivation to help others in decisions to take part in research. Such conditionality may be more likely in some circumstances than others. Trials and other intervention studies probably also attract those seeking interventions who, for understandable reasons, are less altruistically minded. Such a spectrum of reasons for participation may have implications for the generation of RPE, with less altruistic reasons more likely to generate RPE. There is little literature on participants' reasons for continuing in research studies over time [26], and it may be profitable to pay attention to other influences on ongoing participation in cohort studies and trials [27].

Qualitative studies should be useful in identifying targets for study. Studies are available on many of the aspects of the research process already described, including, for example, how much prospective participants read and engage with the study information provided [28], [29]. The study of preferences (see [30]) is another area where qualitative methods have uncovered problems within the largely quantitative endeavor that is the randomized controlled trial. Preferences for allocation in trials have been found not only to exist but also to be quite dynamic over time and capable of being influenced by dedicated interventions [31].

There are, however, no studies that evaluate individual participant-level qualitative data and also explore the possible implications for bias at the quantitative study level [27]. This is probably because there has been no explicit effort to apply the type of conceptualization suggested here, which links qualitative with quantitative data and individual- with study-level data. Beyond investigations of the acceptability of research procedures to prospective participants, there has been no programmatic approach to studying the effects of apparently mundane aspects of taking part in research. We offer an example demonstrating that it is not difficult to do these types of studies, or for participants to discuss their engagement with the research process: a qualitative study showing how thwarted preferences for allocation to a novel intervention led to disappointment and subsequently to movements both toward and away from change in a weight-loss trial [32].

This situation is perhaps not dissimilar to the 30-year tradition of studying participants' cognitive engagement with surveys, in which much quantitative and qualitative data have been used to enhance the content of particular surveys but have yielded disappointing progress in questionnaire design methodology [33]. Our perspective suggests that unrecognized potential for bias resides in routine research practice. We acknowledge that this calls for a type of mixed methods orientation [34] in which the core concepts and issues are framed as in quantitative research, while a qualitative phenomenological approach is used to identify possible problems, which may in turn be further evaluated in quantitative studies. The post-positivist concern for bias adopted here may be unsatisfactory to some qualitative researchers who have epistemological differences with such an approach [35]. This may also be unfamiliar territory for many readers of an epidemiology journal, but we suggest it is worth exploring for new insights into the nature of biases. In Box 1 we offer some questions that may be helpful to ask in a given study and in developing this type of research more widely.

Box 1. Helpful questions for researchers.

1. Why are participants taking part in this study?

2. What does taking part in this study mean for participants?

3. Why do participants behave as they do in this study?

4. How does what participants do affect any concerns about bias?

5. How far are the most likely sources of bias connected in this study?

6. Is existing thinking about bias adequate for the methodological problems faced here?

7. How might existing thinking about bias be extended to address methodological problems not well covered?

8. What can qualitative or quantitative data contribute to a better understanding of these issues?

9. How can qualitative and quantitative data be combined to address research participation effects?

10. How can the construct of research participation effects be developed to guide more advanced study?

3. Conclusion

The potential for RPE may be intrinsic to all research designs involving humans, though there are probably many areas where RPE can safely be ignored as unlikely to threaten valid inference. There are other domains of research where they certainly cannot be ignored. The problem is that we do not know where this is the case, and further conceptual work and empirical studies elaborating these issues are therefore needed. We suggest that conventionally understood forms of bias found in cross-sectional surveys and cohort studies are also interpretable as RPE. Furthermore, this preliminary conceptualization may be fruitful for creative thinking about biases and how to minimize them when designing research studies.

RPE are unwittingly created by the decisions researchers make. Paying attention to the practices of researchers, and approaching research on the research enterprise more sociologically [36], will also be useful. Because of their origins in researchers' decisions, RPE may be amenable to control in design, or in analysis where prevention is not possible. Although we have known something of RPE for around 100 years [3], it will be disappointing if future progress is as slow as in the past. Perhaps this is partly because RPE call attention to unresolved, and difficult to resolve, issues concerning the relationship between quantitative and qualitative research approaches and data. RPE are nonetheless a skeleton in the methodological cupboard that deserves a decent burial.

Footnotes

This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/3.0/).

Conflict of interest: There are no competing interests to declare.

Funding: J.M. is supported by a Wellcome Trust Research Career Development fellowship in Basic Biomedical Science (WT086516MA) to investigate the issues addressed in this article.

References

1. McCambridge J., Witton J., Elbourne D. Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. J Clin Epidemiol. 2014;67:267–277. doi: 10.1016/j.jclinepi.2013.08.015.
2. McCambridge J., Kypri K., Elbourne D. In randomization we trust? There are overlooked problems in experimenting with people in behavioral intervention trials. J Clin Epidemiol. 2014;67:247–253. doi: 10.1016/j.jclinepi.2013.09.004.
3. Solomon R.L. An extension of control group design. Psychol Bull. 1949;46(2):137–150. doi: 10.1037/h0062958.
4. Chalmers I. Comparing like with like: some historical milestones in the evolution of methods to create unbiased comparison groups in therapeutic experiments. Int J Epidemiol. 2001;30:1156–1164. doi: 10.1093/ije/30.5.1156.
5. French J.R.P. Experiments in field settings. In: Festinger L., Katz D., editors. Research methods in the behavioral sciences. Holt, Rinehart & Winston; New York, NY: 1953.
6. Orne M.T. The nature of hypnosis: artifact and essence. J Abnorm Soc Psychol. 1959;58:277–299. doi: 10.1037/h0046128.
7. McCambridge J., de Bruin M., Witton J. The effects of demand characteristics on research participant behaviours in non-laboratory settings: a systematic review. PLoS One. 2012;7(6):e39116. doi: 10.1371/journal.pone.0039116.
8. McCambridge J., Kypri K. Can simply answering research questions change behaviour? Systematic review and meta analyses of brief alcohol intervention trials. PLoS One. 2011;6(10):e23748. doi: 10.1371/journal.pone.0023748.
9. McCambridge J., Butor-Bhavsar K., Witton J., Elbourne D. Can research assessments themselves cause bias in behaviour change trials? A systematic review of evidence from Solomon 4-group studies. PLoS One. 2011;6(10):e25223. doi: 10.1371/journal.pone.0025223.
10. Dholakia U.M. A critical review of question-behavior effect research. Rev Market Res. 2010;7:147–199.
11. Willson V.L., Putnam R.R. A meta-analysis of pre-test sensitization effects in experimental design. Am Educ Res J. 1982;19:249–258.
12. French D.P., Sutton S. Reactivity of measurement in health psychology: how much of a problem is it? What can be done about it? Br J Health Psychol. 2010;15(Pt 3):453–468. doi: 10.1348/135910710X492341.
13. Dillman D.A., Smyth J.D., Christian L.M. Internet, mail, and mixed-mode surveys: the tailored design method. Wiley & Sons; Hoboken, NJ: 2009.
14. Edwards P.J., Roberts I., Clarke M.J., Diguiseppi C., Wentz R., Kwan I. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;(3):MR000008. doi: 10.1002/14651858.MR000008.pub4.
15. Tourangeau R., Yan T. Sensitive questions in surveys. Psychol Bull. 2007;133(5):859–883. doi: 10.1037/0033-2909.133.5.859.
16. Davis C.G., Thake J., Vilhena N. Social desirability biases in self-reported alcohol consumption and harms. Addict Behav. 2010;35(4):302–311. doi: 10.1016/j.addbeh.2009.11.001.
17. Chavalarias D., Ioannidis J.P. Science mapping analysis characterizes 235 biases in biomedical research. J Clin Epidemiol. 2010;63:1205–1215. doi: 10.1016/j.jclinepi.2009.12.011.
18. Rothman K.J., Greenland S. Modern epidemiology. 2nd ed. Lippincott-Raven Publishers; Philadelphia, PA: 1998.
19. Webb E.J., Campbell D.T., Schwartz R.D., Sechrest L. Unobtrusive measures: nonreactive research in the social sciences. Rand McNally; Oxford, UK: 1966.
20. Hernán M.A. A definition of causal effect for epidemiological research. J Epidemiol Community Health. 2004;58:265–271. doi: 10.1136/jech.2002.006361.
21. Scott C., Walker J., White P., Lewith G. Forging convictions: the effects of active participation in a clinical trial. Soc Sci Med. 2011;72(12):2041–2048. doi: 10.1016/j.socscimed.2011.04.021.
22. Rosnow R.L., Rosenthal R. People studying people: artifacts and ethics in behavioral research. Freeman; New York, NY: 1997.
23. Gabbay M., Thomas J. When free condoms and spermicide are not enough: barriers and solutions to participant recruitment to community-based trials. Control Clin Trials. 2004;25:388–399. doi: 10.1016/j.cct.2004.06.004.
24. Canvin K., Jacoby A. Duty, desire or indifference? A qualitative study of patient decisions about recruitment to an epilepsy treatment trial. Trials. 2006;7:32. doi: 10.1186/1745-6215-7-32.
25. McCann S.K., Campbell M.K., Entwistle V.A. Reasons for participating in randomised controlled trials: conditional altruism and considerations for self. Trials. 2010;11:31. doi: 10.1186/1745-6215-11-31.
26. Wendler D., Krohmal B., Emanuel E.J., Grady C., ESPRIT Group. Why patients continue to participate in clinical research. Arch Intern Med. 2008;168:1294–1299. doi: 10.1001/archinte.168.12.1294.
27. Marcellus L. Are we missing anything? Pursuing research on attrition. Can J Nurs Res. 2004;36(3):82–98.
28. Snowdon C., Garcia J., Elbourne D. Making sense of randomization: responses of parents of critically ill babies to random allocation of treatment in a clinical trial. Soc Sci Med. 1997;45(9):1337–1355. doi: 10.1016/s0277-9536(97)00063-4.
29. Snowdon C., Elbourne D., Garcia J. Declining enrolment in a clinical trial and injurious misconceptions: is there a flipside to the therapeutic misconception? Clin Ethics. 2007;2:193–200.
30. Bower P., King M., Nazareth I., Lampe F., Sibbald B. Patient preferences in randomised controlled trials: conceptual framework and implications for research. Soc Sci Med. 2005;61(3):685–695. doi: 10.1016/j.socscimed.2004.12.010.
31. Donovan J.L., Lane J.A., Peters T.J., Brindle L., Salter E., Gillatt D. Development of a complex intervention improved randomization and informed consent in a randomized controlled trial. J Clin Epidemiol. 2009;62:29–36. doi: 10.1016/j.jclinepi.2008.02.010.
32. McCambridge J., Sorhaindo A., Quirk A., Nanchahal K. Patient preferences and performance bias in a weight loss trial with a usual care arm. Patient Educ Couns. 2014;95:243–247. doi: 10.1016/j.pec.2014.01.003.
33. Schwarz N. Cognitive aspects of survey methodology. Appl Cognit Psychol. 2007;21:277–287.
34. Teddlie C., Tashakkori A. Foundations of mixed methods research: integrating quantitative and qualitative approaches in the social and behavioral sciences. Sage; Thousand Oaks, CA: 2009.
35. Guba E.G., Lincoln Y.S. Paradigmatic controversies, contradictions, and emerging influences. In: Denzin N.K., Lincoln Y.S., editors. The SAGE handbook of qualitative research. 3rd ed. Sage; Thousand Oaks, CA: 2005. pp. 191–215.
36. Barata P.C., Gucciardi E., Ahmad F., Stewart D.E. Cross-cultural perspectives on research participation and informed consent. Soc Sci Med. 2006;62(2):479–490. doi: 10.1016/j.socscimed.2005.06.012.
