What is post-randomization bias?
Marriage is a potentially wonderful union between two individuals in love. Interestingly, almost 50% of all marriages in the United States end in divorce or separation, with 41% of first marriages ending in divorce. Logically, picking the right partner increases the likelihood of success; however, it is quite clear that most of the success of a marriage depends on what happens after the marriage ceremony.
Randomized controlled trials (RCTs) are quite similar to marriages. Within an RCT, pre-treatment randomization (the marriage ceremony) is designed to minimize differences among groups by distributing people with particular characteristics evenly across the trial arms. Methodologically, randomization should improve our ability to determine whether long-term changes reflect the interventions provided rather than the characteristics of the groups in the trial arms. Yet success with an RCT also depends on post-randomization efforts (the efforts that occur after the marriage ceremony), and post-randomization bias (or post-randomization confounding) is more prevalent than one might think. We define post-randomization bias as a confusion of effects that occurs after the pre-treatment randomization process within a randomized clinical trial. The goal of this editorial is to outline selected post-randomization biases that can markedly influence a publication’s results.
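As a minimal sketch of the ceremony itself, simple random allocation can be expressed in a few lines (illustrative code, not drawn from the editorial; the equal-arm-size guarantee comes from round-robin assignment after shuffling):

```python
# Hypothetical sketch of pre-treatment randomization: shuffle the
# participant list, then deal participants round-robin so every arm
# ends up the same size and characteristics balance out on average.
import random

def randomize(participant_ids, n_arms=2, seed=42):
    """Return a dict mapping each arm index to its list of participants."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {arm: ids[arm::n_arms] for arm in range(n_arms)}

arms = randomize(range(100), n_arms=2)
print(len(arms[0]), len(arms[1]))  # 50 50
```

With a large enough sample, any measured or unmeasured characteristic tends toward balance across arms; the biases discussed below arise only after this step.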
Conscious post-randomization biases
Conscious post-randomization bias involves willfully and knowingly fabricating or falsifying research results. Fabrication involves making up data or results and reporting them as fact. Falsification includes knowingly manipulating research processes, tests, or outcomes to improve one’s results.
Data smudging/data altering
Sadly, fraud and data fabrication, which occur along a continuum, are more prevalent in orthopedics-related research today than ever [1]. Fraud, including data fabrication or smudging, occurs when a researcher invents all or part of the presented results or modifies current findings to improve an outcome [2].
Why it’s a problem
Most clinical interventions in rehabilitation have minimal to moderate effects that are often estimated from small sample sizes. Smudging or altering data even slightly can notably shift findings in a desired direction.
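To see how little alteration it takes, consider a hypothetical two-arm trial with ten patients per group (all numbers invented for illustration):

```python
# Hypothetical illustration: in small samples, "smudging" just two data
# points can move the standardized between-group effect noticeably.
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

control   = [50, 52, 48, 51, 49, 53, 47, 50, 52, 48]
treatment = [51, 53, 49, 52, 50, 54, 48, 51, 53, 49]

smudged = treatment.copy()
smudged[0] += 3   # nudge two scores upward by a few points
smudged[1] += 3

print(round(cohens_d(treatment, control), 2))  # 0.5
print(round(cohens_d(smudged, control), 2))    # 0.68
```

Altering two values out of twenty inflates the apparent effect by more than a third, which is often the difference between a "moderate" and a "large" reported effect.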
Solutions
Sharing datasets, which some journals require, provides an opportunity to evaluate the statistical findings and review the data for normality. Having a separate biostatistician who is not involved in the study interventions reduces the likelihood of biased analyses.
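One simple screen an independent biostatistician might run on a shared dataset (a hypothetical sketch, not any journal's actual workflow) is a sample-skewness check of the normality assumption behind many parametric analyses:

```python
# Quick normality screen for a shared dataset: sample skewness should be
# near zero for roughly symmetric (normal-looking) data.
from statistics import mean, stdev

def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness."""
    m, s, n = mean(xs), stdev(xs), len(xs)
    return sum(((x - m) / s) ** 3 for x in xs) * n / ((n - 1) * (n - 2))

print(round(skewness([1, 2, 3, 4, 5]), 3))   # symmetric data: ~0.0
print(round(skewness([1, 1, 1, 2, 10]), 3))  # right-skewed: well above 0
```

Markedly non-normal or implausibly uniform distributions do not prove misconduct, but they flag datasets that deserve a closer look.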
Cherry-picking results
Selective inclusion and reporting of outcomes can occur in various ways post-randomization. Examples include reporting only positive results, failing to report data for all time points, relabeling ‘primary’ and ‘secondary’ outcomes between the protocol, trial registry, and publication, and failing to report measures of variation for non-significant outcomes [3].
Why it’s a problem
Cherry-picking results can misrepresent the available evidence and mislead both researchers (who may spend time and resources investigating outcomes ‘not yet reported’) and healthcare professionals (who will not have access to the complete evidence to implement in clinical practice).
Solutions
Journals should not accept RCTs for review if they have not been prospectively registered. Further, comparing the registration/protocol with the trial gives journals an opportunity to identify whether the outcomes reported in the methods match those reported in the results, and whether non-significant results are mentioned but not reported adequately. If the trial is already published, readers should make the same comparison, and a risk-of-bias assessment should be applied if the article is included in a systematic review [4].
Poor intervention fidelity
Intervention fidelity refers to the degree to which an intervention study is delivered as planned (i.e., adherence to the intervention, dose or exposure, quality of delivery, participant responsiveness and program differentiation) [5].
Why it’s a problem
Poor intervention fidelity reflects a failure of treatment integrity within the study. It is associated with decreased treatment retention, increased attrition bias, misrepresentation of the true effects of the tested intervention, and poorer treatment outcomes [6,7].
Solutions
The Template for Intervention Description and Replication (TIDieR) checklist [8] was designed for careful and comprehensive reporting of the specifics of interventions. The checklist covers who provided the care, how and where it was provided, when and how much, and whether the care was tailored or modified. A consequence of using TIDieR is the ability to make more conclusive statements about treatment effects [5,7].
Unconscious biases
In clinical trials, most departures from the truth are associated with unconscious biases, which reflect carelessness more than malfeasance [9]. Unconscious bias in research [10] is present in all forms of data capture, across a wide range of patient populations.
Lack of therapist equipoise (nonspecific effects)
Therapist allegiance is another term for a lack of equipoise; both refer to favoritism toward one intervention and the consequent contamination or distortion of the outcome by the therapist’s theoretical or treatment preferences [11].
Why it’s a problem
The performance of the intervention and consequently patient outcomes can be affected by the therapist’s interests/beliefs, therapist’s experience/skills, and therapist’s conscious or unconscious placement of enthusiasm, relevance, or confidence in one specific intervention versus another [11,12].
Solutions
Ideally, each treatment group would be delivered by a well-trained therapist with no specific allegiance to either intervention. Conflict-of-interest statements should be completed and disclosed for each study. Other solutions include designs such as the expertise-based RCT, the equipoise-stratified design, and the clinician’s-choice design [12]. Further, an understanding of effect sizes (the magnitude of the effect), numbers needed to treat (the number of patients who must be treated before one additional patient benefits relative to the comparison group), and the logical boundaries of these values for typical interventions may give readers perspective on which studies to believe and which to question.
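The two benchmarks mentioned above can be made concrete with a small worked example (all numbers hypothetical, not drawn from any cited trial):

```python
# Cohen's d (magnitude of effect on a continuous outcome) and the number
# needed to treat (NNT = 1 / absolute risk reduction, binary outcome).
def cohens_d(mean_tx, mean_ctrl, pooled_sd):
    """Standardized mean difference between treatment and control."""
    return (mean_tx - mean_ctrl) / pooled_sd

def nnt(event_rate_ctrl, event_rate_tx):
    """Number of patients to treat for one additional good outcome."""
    return 1 / (event_rate_ctrl - event_rate_tx)

# A 6-point extra pain reduction against a pooled SD of 12 points:
print(cohens_d(24.0, 30.0, 12.0))   # -0.5, a moderate effect
# Cutting the rate of a poor outcome from 40% to 30%:
print(round(nnt(0.40, 0.30)))       # 10
```

A trial of a typical conservative intervention reporting a d of 2.0 or an NNT of 2 sits well outside these logical boundaries and deserves extra scrutiny.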
Mode of administration bias
Mode of administration bias occurs when results collected from self-reported questionnaires are influenced by the ‘how’, ‘when’, and ‘manner’ that they are collected. Although mode of administration bias can be intentional or unintentional, most forms are unintentional [13].
Why it’s a problem
Patients are less inclined to report their own thoughts and considerations regarding their outcomes when they know a clinician is directly involved in the interpretation of the findings and when they feel the results may negatively influence the patient-therapist relationship [14]. It is well documented that a patient’s willingness to admit complaints is lessened in a face-to-face interview [14].
Solutions
For RCTs, questionnaires and data collection should be standardized. Further, where possible, clinicians should remain unaware of the questionnaire findings, and patients should be informed that this is the case.
What else can we do as consumers of randomized controlled trials, especially if we suspect post-randomization bias?
For the consumer of research who reads an RCT and considers implementing its results in clinical practice, there are several points worth keeping in mind. First, be cognizant that post-randomization bias is likely extremely prevalent. Pay close attention to the fidelity of the care provided: if it doesn’t match the care you give, the results are likely dissimilar to what you are seeing in clinical practice.
Second, consider whether the authorship group that published the data or the funding source that supported the study is conflicted to the point that the results are worth discounting. Continuing-education providers will likely have therapeutic allegiances toward a particular philosophy, so it is not surprising to see large treatment effects in trials favoring their approach. Further, industry funding sources have long demonstrated publication bias supporting their investigated products [15].
Lastly, approximately 2.5 million papers are published every year [16]. Among them, there are bound to be contradictory results across similar publications that use similar randomization schemes. The dissimilarities may be related to post-randomization bias, since this period of the trial encompasses a greater scope of variability than any other element. Be comfortable with contradictory findings, and learn to evaluate the merit of the post-randomization phase before supporting or refuting trial results.
References
- [1] Yan J, MacDonald A, Baisi LP, et al. Retractions in orthopaedic research: a systematic review. Bone Joint Res. 2016;5(6):263–268.
- [2] Jaffer U, Cameron AE. Deceit and fraud in medical research. Int J Surg. 2006;4(2):122–126.
- [3] Page MJ, McKenzie JE, Kirkham J, et al. Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions. Cochrane Database Syst Rev. 2014;MR000035. doi:10.1002/14651858.MR000035.pub2
- [4] Higgins JP, Green S. Cochrane handbook for systematic reviews of interventions. Chichester, UK: John Wiley & Sons; 2011.
- [5] Carroll C, Patterson M, Wood S, et al. A conceptual framework for implementation fidelity. Implement Sci. 2007;2(1):40.
- [6] Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41(3–4):327–350.
- [7] Borrelli B. The assessment, monitoring, and enhancement of treatment fidelity in public health clinical trials. J Public Health Dent. 2011;71:S52–S63.
- [8] Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.
- [9] Stewart WW, Feder N. The integrity of the scientific literature. Nature. 1987;325:207–214.
- [10] Brodie MA, Pliner EM, Ho A, et al. Big data vs accurate data in health research: large-scale physical activity monitoring, smartphones, wearable devices and risk of unconscious bias. Med Hypotheses. 2018;119:32–36.
- [11] Wilson GT, Wilfley DE, Agras WS, et al. Allegiance bias and therapist effects: results of a randomized controlled trial of binge eating disorder. Clin Psychol (New York). 2011;18(2):119–125.
- [12] Cook C, Sheets C. Clinical equipoise and personal equipoise: two necessary ingredients for reducing bias in manual therapy trials. J Man Manip Ther. 2011;19(1):55–57.
- [13] Cook C. Mode of administration bias. J Man Manip Ther. 2010;18(2):61–63.
- [14] Floyd J, Fowler J. Quality of life and pharmacoeconomics in clinical trials. Chapel Hill, NC: Lippincott Williams & Wilkins; 1998.
- [15] Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA. 2003;289(4):454–465.
- [16] Ware M, Mabe M. The STM report: an overview of scientific and scholarly journal publishing. The Netherlands: International Association of Scientific, Technical and Medical Publishers; 2015.