Abstract
Background
Open Science is ‘the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society’.
In the spirit of the Open Science movement, advance publication of protocols for clinical trials is now being advocated by BioMed Central, BMJ Open and others. Simultaneously, participants are becoming increasingly active in their pursuit and sharing of trial- and health- related information. Whilst access to protocols alongside published trial findings has clear benefits, advance publication of trial protocols is potentially problematic for trials of complex behavioural interventions. In this article we explain, with examples, how this could lead to unblinding, ‘contamination’ between intervention and control groups and deliberate biasing of assessment outcomes by participants. We discuss potential solutions and demonstrate the need for public debate about how this issue is best managed.
Conclusion
Triallists may still be underestimating participants’ interest in information. This needs to change: joint and open discussions with the public are needed to inform how we should proceed.
Keywords: Open Science, Complex interventions, Protocols, Behaviour change, Therapy interventions, Clinical trials, E-Health, Advance publication, Trial registration, Codesign
Background
Open Science is ‘the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society’ [1]. In the spirit of the Open Science movement, advance publication of protocols for clinical trials is now being advocated by BioMed Central, BMJ Open and others. Simultaneously, participants are becoming increasingly active in their pursuit and sharing of trial- and health-related information. For the vast majority of trials this is a good thing: access to well-written protocols alongside published trial findings has clear benefits [2, 3], including the provision of more information than that available in trial registries. Advance publication of protocols also promotes transparency, so that post-trial publications are consistent (in process and analysis) or have to account for protocol deviations.
A good protocol must specify all interventions in detail, so that they are replicable [4]. For novel or expensive drug interventions or those involving devices, lack of access to the intervention outside the trial arm limits the possibility of contamination even if the details are fully available in the protocol. However, a published protocol makes other interventions, such as a fully described therapy or self-administered lifestyle change, accessible outside of the trial context. The resulting problems could include unblinding, ‘contamination’ between intervention and control groups and deliberate biasing of assessment outcomes by participants. We discuss examples and demonstrate the need for public debate about how this issue is best managed.
Potential for unblinding and contamination
An interesting example of the potential difficulties posed by published protocols is that of Eliasson et al. [5]. Their paper sets out the protocol for a randomised controlled trial comparing a form of modified constraint therapy (baby-CIMT) versus massage for infants at risk of developing unilateral cerebral palsy. Both approaches are described in detail. The hypotheses are that the baby-CIMT group will develop manual ability in the involved hand faster than the massage group and that these differences will remain at the age of 2 years; additionally, that parents in the baby-CIMT group will feel more competent about parenting. The protocol also states that parents will be blinded to the study hypotheses but not to the group allocation – clearly this will not be the case for parents who have accessed the published protocol, which may alter their engagement with the trial. This could be a particular problem for participants who discover in this way that they are in the control group. Parents may be provoked to change their behaviour and seek the active intervention for their child, given that clinical benefit is a major item on parental agendas when entering trials [6–8]. Unblinding and contamination due to patients allocated to placebo becoming aware of the nature of a therapy intervention being received by other nearby patients are an ongoing problem [9]. Though we are not aware of examples where this has occurred due to patients accessing the published protocol, it is a theoretical risk. Furthermore, parents of newborn infants allocated to the control group in a recent unblinded randomised controlled trial showed an ‘almost universal experience of disappointment’, with a few admitting either an attempt or success in obtaining the treatment outside of the trial context [10]. This latter behaviour has been described as ‘compensatory rivalry’ by Cook and Campbell [11].
We contacted the authors of the baby-CIMT paper [5]; they were not aware of any parents having accessed the online protocol but this was not a focus of their investigation. Interestingly, the protocol was published relatively late into the data collection period (trial start date September 2012; protocol publication date June 2014; study completion date December 2015); this may have been a protective factor.
Biasing of assessment outcomes
Access to the published protocol also gives participants the opportunity to prepare for assessments in order to influence their outcome. We first identified this as a genuine risk when we undertook a focus group to discuss the details of a planned pilot feasibility study. One of the suggested questionnaires (Parenting Sense of Competence Scale [12]) required the parent to indicate on a five-point scale the extent to which they agreed or disagreed with each of a series of statements. One parent commented that it reminded her of her experiences with the Edinburgh Postnatal Depression Scale [13] after she had had her second child. Specifically, she had ‘Googled’ the scale and prepared her answers prior to her visit from the health visitor in order to avoid being identified as having possible depression (see Table 1). She was not the only parent who commented that they might give misleading responses to questions in order to avoid exposing vulnerabilities. Whilst not singled out by the Cochrane Collaboration as a specific form of bias [14], we think that manipulation of assessment outcomes by participants merits further investigation. The Cochrane Collaboration describes detection bias, in which there are differences between the intervention and control groups with respect to outcome assessment due to knowledge of the assessor about treatment assignment. We describe a different situation, in which knowledge of the nature of the assessment by the participant could lead to a deliberate alteration of their response. In essence it represents a potential form of response bias [15], i.e. ‘a systematic tendency to respond to a range of questionnaire items on some basis other than the specific item content’.
Table 1
Emma: ‘To be honest, like, I, from my experience, and I had a really hard time after Edgar was born, my second one. And, erm, had to fill in Edinburgh thing.’
Anna: ‘Yeah, the depression thing, yeah.’
Emma: ‘Yeah, and I totally, like, filled it in ‘cause I didn’t want to have depression (laughter), so like, I like, I Googled it and everything, you know, to see what score you needed. (Laughter) and, erm.’
Anna: ‘Wow.’
Emma: ‘Well no, I Goo- ‘cause you can do it online can’t you? And then I came up with like 12.5 or something and you needed 11 to not have depression, so I was like… And then the health visitor came round so I was thinking, “Right, 11” …’
(Participant names have been changed to respect confidentiality)
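The mechanism Emma describes is straightforward to model: screening questionnaires of this kind are typically scored by summing item responses and comparing the total against a cutoff, so a participant who has looked up the items and the threshold in advance can choose answers that keep the total on the ‘safe’ side. The sketch below is purely illustrative: the items, response coding and cutoff are hypothetical assumptions and are not those of the Edinburgh Postnatal Depression Scale or the Parenting Sense of Competence Scale.

```python
# Illustrative sketch only: a generic screening questionnaire scored by summing
# item responses and comparing the total to a cutoff. The number of items, the
# response coding (0-3) and the cutoff are hypothetical, NOT the real EPDS scoring.

from typing import List

CUTOFF = 11  # hypothetical threshold: totals at or above this flag a "possible case"

def total_score(responses: List[int]) -> int:
    """Sum the item responses (each assumed to be coded 0-3)."""
    return sum(responses)

def flagged(responses: List[int], cutoff: int = CUTOFF) -> bool:
    """Return True if the total meets or exceeds the screening cutoff."""
    return total_score(responses) >= cutoff

# Honest responses might cross the cutoff...
honest = [2, 1, 2, 1, 2, 1, 1, 2, 0, 1]    # total = 13 -> flagged
# ...but a participant who has looked up the items and the cutoff in advance
# can select answers that keep the total just below it, defeating the screen.
prepared = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]  # total = 10 -> not flagged

print(flagged(honest), flagged(prepared))   # True False
```

Any assessment that reduces to such a deterministic score-and-threshold rule is, in principle, open to this kind of preparation once the scoring details are publicly accessible.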
Is this really a new problem?
Some of the issues we describe have been seen in other contexts for a long time. It comes as no surprise that study participants might alter behaviours which they know or suspect are being monitored. In the field of experimental psychology, such effects are covered by the term ‘demand characteristics’, defined as ‘the totality of cues and mutual expectations which inhere in a social context … which serve to influence the behaviour and/or self-reported experience of the research receiver’ [16]; though caution has been advocated in using this term outwith the laboratory setting [17], with the broader term ‘research participation effects’ preferred [18]. What is new is the extent to which these problems scale up in the shift towards increasingly accessible information, increasingly early in the timeline of ongoing research. It is now easy to seek out detailed information on the nature of many assessment tools on the Internet, potentially influencing performance in those assessments.
How big could the problem be?
It is easy to look up a study protocol; many are published in open access journals and are freely available. In 2015 over 80% of the developed world (and over one third of the developing world) had Internet access at home [19]. Use of the Internet to obtain health-related information is now the norm [20], so there is nothing to stop participants from accessing published protocols online. Such information is likely to be shared more widely through the use of social media [21] and online patient communities [22]. Information about previous similar trials (either from academic journals or reports in the news or social media) may also influence the understanding and behaviour of participants.
Participants can potentially determine which trial arm they are in, and choose to follow instructions related to items from any trial arm (or none), based on their understanding of the stated hypotheses. Whilst Information Sheets could also prompt similar behaviours, the information in the published protocol is more detailed, yet often appears to be written with the assumption that it will not be read by participants. This adds to the problem because there is an additional risk that the accessed information may be misinterpreted in a wide range of ways. However, we simply do not know the degree to which these factors currently impact on clinical trials of behavioural change interventions. This compounds the existing problem that adherence to behavioural change interventions is often low even in the absence of other considerations [23].
To declare that advance publication of protocols for certain trial types could bias outcomes may seem a step in the wrong direction, but to deny the potential problem is foolish. At the moment, the cast of characters who might engage with published protocols is still seen as editors, funders, researchers and reviewers, all actively involved for good scientific reasons. Two other more opaque figures remain, namely readers and patients. Currently, patients are configured as users of this information in a specific way, as ‘prospective volunteers’ seeking information about potential trials to take part in. If we also see them as part of the category of reader, they may be ‘citizen scientists’ who are now empowered to discover deviations in protocols. However, they can also engage with this information in alternative ways, as ‘participants in trials’ or as relatives or friends of those taking part in trials. This last form of engagement is in some ways the most problematic for ongoing trials. If patients are prospective volunteers then the protocol could supplement the Information Sheet and help them to decide whether to enter the trial. It could also influence their conduct when they enter the trial. If, instead, they are already taking part in the trial and subsequently access the protocol and change their behaviour, the trial data will be affected. Participants could also be concerned or even angry that certain information was withheld. Maximising the quality of the Information Sheets provided to participants may avoid this to some extent. However, with full transparency it becomes impossible to undertake randomised controlled trials with blinding of participants for certain types of complex intervention.
A way forward?
It would be interesting to ask participants to what extent they or their significant others sought information regarding their trial beyond that provided by the research team, what they gleaned and whether it influenced their behaviour in relation to the trial. Assuming that participants have good memories and a willingness to share such information, this could help us to understand the current scale of the problem.
If published protocols are found to influence the behaviour of participants in complex intervention trials, trial preregistration with embargoed publication of the detailed protocol until trial completion may be the best option. This solution is already available – protocols registered on the Open Science Framework can be embargoed for up to 4 years before they are made public. The embargo limit is key in avoiding publication bias or even covert registration of multiple protocols.
Another potential solution is the Registered Reports initiative [24]. This provides the opportunity to submit a detailed research proposal to a journal for peer review, including methods and planned analyses, prior to commencing the research. The journal can then provisionally agree in advance to publish the final study. This practice could encourage high standards of research integrity whilst keeping the protocol under wraps until the study is complete.
Trial preregistration, which requires declaration of the primary outcome measure in advance, has probably contributed to a recent increase in reported null effects in clinical trials [25] and could also help reduce duplication of research efforts even without a full published protocol. The complex intervention descriptions on trial preregistration websites probably lack the detail needed to allow replication of trial behaviours – but the problems of unblinding and biasing of assessment outcomes could still occur. Trial preregistration also helps protect the integrity of published research: one of the authors (AB) has had experience of discovering serious unexplained deviations from the protocol in a submitted paper based on trial registration details alone. However, published reports of trials have not infrequently deviated in significant and unexplained ways from published registry entries, though similar deviations from published protocols are also reported [26, 27].
What advantages of advance protocol publication do we risk losing in this field? Looking at the statement from BMJ Open [28], advance publication is about ‘enabling researchers and funding bodies to stay up to date in their fields’, preventing duplication of work and facilitating collaboration. Arguably, trial preregistration data would suffice for these purposes, whilst reducing the contamination risk and still allowing potential participants to search for studies for which they may be eligible. Alternatively, published protocols will have to be written with the expectation that they may be accessed by participants (or members of their social circle), in which case we can expect to encounter the problems described above, making it extremely hard or impossible to undertake certain types of trial. Given the calls for more comprehensive reporting of all the stages of the development of complex interventions [29] alongside the development of reporting guidelines for these stages [30], similar problems can be predicted with papers describing the development of the content and delivery of interventions.
The onus is now on researchers to smarten up experimental designs to deal with these potential problems. Objective tracking of adherence, for example, through smartphone data, is one approach. Researchers could also draw up contracts with participants whereby they agree not to attempt to unblind themselves through the use of the Internet or social media. This approach could of course backfire by providing a more explicit list of unblinding options for those hungry for information.
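As a minimal sketch of what objective adherence tracking might look like, suppose (hypothetically) that a trial smartphone app exports timestamped practice-session records; adherence could then be summarised from the logs rather than from self-report. The log format, prescribed daily ‘dose’ and threshold below are assumptions for illustration only, not a description of any particular app or trial.

```python
# Minimal sketch, under stated assumptions: a hypothetical trial app exports
# practice-session records as (ISO timestamp, duration in minutes). The format,
# the prescribed daily dose and the adherence definition are illustrative only.

from datetime import datetime
from collections import defaultdict
from typing import Dict, List, Tuple

PRESCRIBED_MINUTES_PER_DAY = 20  # hypothetical target dose

def daily_minutes(sessions: List[Tuple[str, int]]) -> Dict[str, int]:
    """Aggregate logged session durations per calendar day."""
    totals: Dict[str, int] = defaultdict(int)
    for timestamp, minutes in sessions:
        day = datetime.fromisoformat(timestamp).date().isoformat()
        totals[day] += minutes
    return dict(totals)

def adherence_rate(sessions: List[Tuple[str, int]], n_days: int) -> float:
    """Fraction of study days on which logged practice met the prescribed dose."""
    per_day = daily_minutes(sessions)
    met = sum(1 for minutes in per_day.values() if minutes >= PRESCRIBED_MINUTES_PER_DAY)
    return met / n_days

# Example: three logged sessions across a hypothetical 7-day window.
log = [("2016-05-02T09:15", 10), ("2016-05-02T18:30", 15), ("2016-05-04T10:00", 25)]
print(adherence_rate(log, n_days=7))  # 2 of 7 days met the 20-minute target -> ~0.29
```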
Conclusions
The Open Science movement has emerged alongside the rise of more active, engaged patients and is here to stay. It has furthered the drive for increased research transparency, which is overall extremely beneficial. As part of this drive, advance publication of trial protocols is advocated, with a full description of interventions. This causes difficulties for trials of complex behavioural change interventions, such as unblinding, contamination and biasing of questionnaire-based assessments. Triallists may still be underestimating the public’s, and especially participants’, interest in information [22]. This needs to change: joint and open discussions with the public are needed to inform how we should proceed, as there is currently no optimal solution.
Availability of data and materials
The full transcripts of the intervention design workshops are not relevant to the manuscript and are not shared.
Authors’ contributions
This idea emerged from codesign workshops with parents of children with hemiplegia led by AB and JP, when working with them on developing an intervention prototype. AB and JP conceptualised the article. AB wrote the first draft, to which TR and JP added their contributions. All authors read and approved the final manuscript.
Authors’ information
AB trained as a National Institute for Health Research (NIHR) clinical trials fellow. AB and JP have experience with clinical trials of complex therapy interventions and TR has expertise in implementation science. The paper has been reviewed by one of the parents in the codesign workshop. AB is guarantor of the article.
Competing interests
AB is funded by a Career Development Fellowship award from the NIHR, was previously funded by an NIHR Clinical Trials Fellowship, and works at Newcastle upon Tyne Hospitals NHS Foundation Trust. The views expressed in this publication are those of the author and not necessarily those of the NHS, the NIHR, or the Department of Health. Dr. Rapley reports grants from the NIHR during the conduct of the study; grants from Pfizer and grants from Biomarin Europe Ltd. outside the submitted work.
Consent for publication
This has been granted by the contributors to the codesign workshop whose quotes have been used.
Ethics approval and consent to participate
Ethical approval for the codesign workshops (focus groups around the design of the intervention) was provided by the North of Scotland Research Ethics Committee (ref. 14/NS/1027) for the study ‘eTIPS: Early Therapy in Perinatal Stroke’.
Contributor Information
Anna Purna Basu, Email: anna.basu@ncl.ac.uk.
Janice Elizabeth Pearse, Email: janice.pearse@ncl.ac.uk.
Tim Rapley, Email: tim.rapley@ncl.ac.uk.
References
- 1. FOSTER: Facilitate Open Science Training for European Research. https://www.fosteropenscience.eu/. Accessed 28 Apr 2016.
- 2. Summerskill W, Collingridge D, Frankish H. Protocols, probity, and publication. Lancet. 2009;373(9668):992. doi: 10.1016/S0140-6736(09)60590-0.
- 3. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013;158(3):200–7. doi: 10.7326/0003-4819-158-3-201302050-00583.
- 4. Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687. doi: 10.1136/bmj.g1687.
- 5. Eliasson AC, Sjostrand L, Ek L, Krumlinde-Sundholm L, Tedroff K. Efficacy of baby-CIMT: study protocol for a randomised controlled trial on infants below age 12 months, with clinical signs of unilateral CP. BMC Pediatr. 2014;14:141. doi: 10.1186/1471-2431-14-141.
- 6. Woolfall K, Shilling V, Hickey H, et al. Parents’ agendas in paediatric clinical trial recruitment are different from researchers’ and often remain unvoiced: a qualitative study. PLoS One. 2013;8(7):e67352. doi: 10.1371/journal.pone.0067352.
- 7. Tarrant C, Jackson C, Dixon-Woods M, McNicol S, Kenyon S, Armstrong N. Consent revisited: the impact of return of results on participants’ views and expectations about trial participation. Health Expect. 2015;18(6):2042–53. doi: 10.1111/hex.12371.
- 8. Snowdon C, Garcia J, Elbourne D. Making sense of randomization; responses of parents of critically ill babies to random allocation of treatment in a clinical trial. Soc Sci Med. 1997;45(9):1337–55. doi: 10.1016/S0277-9536(97)00063-4.
- 9. Pandian JD, Felix C, Kaur P, et al. Family-led rehabilitation after stroke in India: the ATTEND pilot study. Int J Stroke. 2015;10(4):609–14. doi: 10.1111/ijs.12475.
- 10. Meinich Petersen S, Zoffmann V, Kjaergaard J, Graff Stensballe L, Greisen G. Disappointment and adherence among parents of newborns allocated to the control group: a qualitative study of a randomized clinical trial. Trials. 2014;15:126. doi: 10.1186/1745-6215-15-126.
- 11. Cook TD, Campbell DT. Quasi-experimentation: design and analysis issues in field settings. Chicago: Rand McNally; 1979.
- 12. Gibaud-Wallston J, Wandersman LP. Development and utility of the Parenting Sense of Competence Scale. Toronto: American Psychological Association; 1978.
- 13. Cox JL, Holden JM, Sagovsky R. Detection of postnatal depression. Development of the 10-item Edinburgh Postnatal Depression Scale. Br J Psychiatry. 1987;150:782–6. doi: 10.1192/bjp.150.6.782.
- 14. Higgins JP, Altman DG, Gotzsche PC, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928. doi: 10.1136/bmj.d5928.
- 15. Paulhus DL. Measurement and control of response bias. In: Robinson JP, Shaver PR, Wrightsman LS, editors. Measures of personality and social psychological attitudes. San Diego: Academic Press; 1991. pp. 17–59.
- 16. Orne MT, Whitehouse WG. Demand characteristics. In: Kazdin AE, editor. Encyclopaedia of psychology. Washington, DC: American Psychological Association and Oxford University Press; 2000. pp. 469–70.
- 17. McCambridge J, de Bruin M, Witton J. The effects of demand characteristics on research participant behaviours in non-laboratory settings: a systematic review. PLoS One. 2012;7(6):e39116. doi: 10.1371/journal.pone.0039116.
- 18. McCambridge J, Kypri K, Elbourne D. Research participation effects: a skeleton in the methodological cupboard. J Clin Epidemiol. 2014;67(8):845–9. doi: 10.1016/j.jclinepi.2014.03.002.
- 19. ITU: Committed to connecting the world. Statistics. https://www.itu.int/en/ITUD/Statistics/Documents/facts/ICTFactsFigures2015.pdf. Accessed 28 Apr 2016.
- 20. Fox S. The Social Life of Health Information 2011. http://www.pewinternet.org/files/old-media/Files/Reports/2011/PIP_Social_Life_of_Health_Info.pdf. Accessed 28 Dec 2016.
- 21. Chou WYS, Hunt YM, Beckjord EB, Moser RP, Hesse BW. Social media use in the United States: implications for health communication. J Med Internet Res. 2009;11(4):e48. doi: 10.2196/jmir.1249.
- 22. Thompson MA. Social media in clinical trials. Am Soc Clin Oncol Educ Book. 2014;e101–5. doi: 10.14694/EdBook_AM.2014.34.e101.
- 23. Eysenbach G. The law of attrition. J Med Internet Res. 2005;7(1):e11. doi: 10.2196/jmir.7.1.e11.
- 24. Open Science Framework: Registered Reports. https://osf.io/8mpji/wiki/home/. Accessed 28 Apr 2016.
- 25. Kaplan RM, Irvin VL. Likelihood of null effects of large NHLBI clinical trials has increased over time. PLoS One. 2015;10(8):e0132382. doi: 10.1371/journal.pone.0132382.
- 26. Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev. 2011;1:MR000031. doi: 10.1002/14651858.MR000031.pub2.
- 27. COMPARE: Tracking Switched Outcomes in Clinical Trials. http://compare-trials.org/. Accessed 28 Dec 2016.
- 28. BMJ Open: Instructions for Authors. http://bmjopen.bmj.com/site/about/guidelines.xhtml#studyprotocols. Accessed 28 Dec 2016.
- 29. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: new guidance. London: Medical Research Council; 2008.
- 30. Mohler R, Kopke S, Meyer G. Criteria for reporting the development and evaluation of complex interventions in healthcare: revised guideline (CReDECI 2). Trials. 2015;16:204. doi: 10.1186/s13063-015-0709-y.