Dear David Taylor, Editor-in-Chief of Therapeutic Advances in Psychopharmacology,
The call for rigor by van Elk and Fried 1 is a welcome contribution to psychedelic research. The authors emphasized the well-known principles of internal, external, and construct validity, highlighting that conclusions should not extend beyond robust statistical inferences. Their examples of previous inadequacies and mistakes regarding some of these principles, including oversights by peer reviewers, are valuable, and future research can indeed benefit from greater rigor.
However, the 21 studies scrutinized amount to a small portion of the 320 clinical trials and over 5800 studies retrieved on PubMed for ‘psychedelics OR hallucinogens NOT cannabis’ in the past 10 years. Therefore, the judgment that there are ‘serious doubts on the inferences that have been drawn in research carried out in the last decade’ must be interpreted with care. Furthermore, problems were classified as easy, moderate, or hard to solve, but it is difficult to understand why conflicts of interest were rated as easy. Pharmaceuticals constitute one of the most lucrative industries on the planet, and bad practices have become pervasive in medicine, to the detriment of the scientific literature. These include ‘regulatory capture’, through which the industry’s vested interests critically influence norms, shaping many of the problems rated as easy or moderate, such as standards for adverse event reporting, sample size estimations, the lack of long-term research, and restricted access to clinical trial data.
At the other end of their spectrum, the placebo effect, causal mechanisms, and unblinding were rated as hard problems, based on an unqualified claim that randomized controlled trials (RCTs) ‘are considered the gold standard’ and are ‘usually double-blind’. Such strong statements demand rigorous epistemological justification and support from empirical data. However, among 200 randomly selected RCTs in five of the top medical and four of the top psychiatric journals, only 7% and 9%, respectively, reported blinding assessments. 2 In the broader literature, blinding assessments were reported in between 2% 3 and 8% 4 of hundreds of randomly selected studies. Psychedelic studies are above this range, at 17%, 5 which is unfortunately still low, likely because blinding assessment recommendations were removed from guidelines such as CONSORT in 2010. While it is plausible that actual unblinding is higher with psychoactive drugs, these pervasive failures across biomedicine undermine claims for the method’s epistemic authority. It is thus important to ponder whether we are facing a gold standard or a double standard: many studies are accepted despite unblinding, while others are rejected because of it. Regarding psychedelics, clinical benefits were disqualified in the 1960s precisely because of unblinding. 6 More than half a century later, history repeated itself.
But is this kind of skepticism epistemically warranted? Does unblinding invalidate reported improvements, or does it raise questions and reveal some of the many causal factors involved in the multiple levels of mechanisms implicated in psychiatric disorders and treatments? Hypothetically, how could an experimental intervention, compared to a control, cause considerably larger improvements in subjective feelings without patients eventually breaking the blind by correctly guessing their treatment allocation based on those feelings? The situation is acutely problematic in psychiatry and doubly serious in psychopharmacology: because changing subjectivities is a specific drug effect, the attempt to separate it from an unspecific ‘placebo effect’ is epistemically unwarranted. Moreover, RCTs are not neutral devices, and double-blinding introduces other biases: ambivalence, passivity, confusion, resentful demoralization, and voluntary submission. 7 Notably, double-blinding is intended to increase internal validity and thereby strengthen causal inferences. At the same time, it decreases external validity, because clinical practice does not resemble the passivity, frustrations, and uncertainties imposed on patients and clinicians by double-blinding. Yet external validity, usually neglected in research, arguably matters most to patients, clinicians, and policymakers, who need to know whether what worked in clinical trials will also work outside RCTs. 8 In the case of psychedelic therapy, blinding experimental groups becomes nearly impossible because the drugs specifically affect self-awareness and agential stances, which are directly related to the purported psychological mechanisms of psychedelic therapy. Furthermore, almost all methods suggested for improving blinding, such as the use of low doses, fewer drug administrations, minimal support conditions, incomplete disclosure, deception, and even general anesthesia, are likely to result in less benefit to patients and may also entail ethical violations and unintended harms.
A solution to this conundrum is to acknowledge that in psychotherapeutic interventions, including psychoactive drugs in psychiatry, expectancy, therapeutic alliance, self-narratives, interpersonal behavior, and even beliefs, poorly defined as ‘placebo effect’, are not biases (noise) but treatment factors (signal). By definition, biases, or confounders, are unrelated to the therapeutic intervention, whereas factors are related to it. All should be measured and reported, but always considering whether each one derails or supports the multicausal pathways underlying clinical improvements in psychiatry. Misclassifying supportive factors as biases would lead, logically, to poorer treatment outcome estimates. Thus, epistemological concerns have important consequences.
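To illustrate the statistical logic of that last point, the minimal sketch below simulates a hypothetical trial in which treatment raises expectancy and expectancy, in turn, contributes to improvement; the variable names, effect sizes, and sample size are invented for illustration only and are not drawn from any study. ‘Adjusting away’ expectancy as if it were a confounder removes the portion of benefit that flows through it and thereby underestimates the total treatment effect.

```python
# Minimal illustrative sketch (hypothetical numbers, not data from any trial):
# if expectancy is a treatment factor (a mediator of benefit), "controlling" for
# it as if it were a confounder removes part of the true treatment effect.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
treatment = rng.integers(0, 2, n)                    # 0 = control, 1 = active drug
expectancy = 0.8 * treatment + rng.normal(0, 1, n)   # treatment raises expectancy
improvement = 1.0 * treatment + 0.5 * expectancy + rng.normal(0, 1, n)

# Total effect: the simple difference in mean improvement between arms (~1.4)
total_effect = improvement[treatment == 1].mean() - improvement[treatment == 0].mean()

# "Adjusted" effect: regress improvement on treatment AND expectancy, i.e.
# misclassify the supportive factor as a bias to be removed (~1.0)
X = np.column_stack([np.ones(n), treatment, expectancy])
coefs, *_ = np.linalg.lstsq(X, improvement, rcond=None)

print(f"Total treatment effect:              {total_effect:.2f}")
print(f"Effect after 'adjusting' expectancy: {coefs[1]:.2f}")
```

Under these invented assumptions, the estimate shrinks from roughly 1.4 to 1.0 standardized units, exactly the kind of poorer outcome estimate described above.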
The really hard problem, then, is for biomedical researchers to acknowledge that the epistemic authority attributed to the double-blind standard rests on a philosophical bias. 9 This introduces an epistemic bias into biomedical research: the outright rejection of clinical improvements requiring patients’ agency, insight, and knowledge. 10 It originates in a problematic reductionism, ill-suited for psychiatry and for understanding how self-aware human beings purposefully modify their own lived experiences, especially when using psychoactive drugs and psychotherapy. Rigorous research also requires avoiding common misunderstandings about what can be achieved with RCTs, recognizing that methodological appropriateness depends on the phenomena under investigation and that ‘the gold standard or “truth” view does harm when it undermines the obligation of science to reconcile RCTs results with other evidence in a process of cumulative understanding’. 8 For example, other methodological and statistical approaches can be used to incorporate ‘Real-World Evidence’ into decision-making and regulatory approvals in addition to RCTs. 11 In the case of complex and multifaceted treatments such as psychedelic-assisted therapy, the supposed rigor of blinding is too narrow in scope and thus inadequate to study the dynamic cognitive processes involving patients’ agency and self-awareness during non-ordinary states of consciousness.
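As one deliberately simplified example of the Bayesian treatment of real-world evidence advocated in reference 11, the sketch below updates a flat prior with hypothetical open-label response counts and reports the posterior probability that the response rate exceeds a clinically relevant threshold. The counts, the threshold, and the Beta-Binomial model itself are illustrative assumptions, not results from any study.

```python
# Hedged sketch of a Beta-Binomial summary of hypothetical real-world outcomes,
# in the spirit of Bayesian real-world-evidence analyses (reference 11).
from scipy import stats

responders, patients = 62, 100   # hypothetical real-world counts (invented)
threshold = 0.40                 # hypothetical minimally relevant response rate

# A flat Beta(1, 1) prior updated with the observed counts yields a Beta posterior
posterior = stats.beta(1 + responders, 1 + patients - responders)

prob_above = 1 - posterior.cdf(threshold)
low, high = posterior.interval(0.95)
print(f"P(response rate > {threshold:.0%} | data) = {prob_above:.3f}")
print(f"95% credible interval: {low:.2f} to {high:.2f}")
```

Posterior statements of this kind are meant to be weighed alongside RCT results rather than to replace them, consistent with the cumulative-understanding view quoted above.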
Acknowledgments
None.
Footnotes
ORCID iD: Eduardo Ekman Schenberg https://orcid.org/0000-0001-7111-9891
Declarations
Ethics approval and consent to participate: Not applicable.
Consent for publication: Not applicable.
Author contributions: Eduardo Ekman Schenberg: Conceptualization; Writing – original draft; Writing – review & editing.
Funding: The author received no financial support for the research, authorship, and/or publication of this article.
The author declares that there is no conflict of interest.
Availability of data and materials: Not applicable.
References
1. van Elk M, Fried EI. History repeating: guidelines to address common problems in psychedelic science. Ther Adv Psychopharmacol 2023; 13: 20451253231198466.
2. Fergusson D, Glass KC, Waring D, et al. Turning a blind eye: the success of blinding reported in a random sample of randomised, placebo controlled trials. BMJ 2004; 328: 432.
3. Hróbjartsson A, Forfang E, Haahr MT, et al. Blinded trials taken to the test: an analysis of randomized clinical trials that report tests for the success of blinding. Int J Epidemiol 2007; 36: 654–663.
4. Bello S, Moustgaard H, Hróbjartsson A. The risk of unblinding was infrequently and incompletely reported in 300 randomized clinical trial publications. J Clin Epidemiol 2014; 67: 1059–1069.
5. Nayak SM, Bradley MK, Kleykamp BA, et al. Control conditions in randomized trials of psychedelics: an ACTTION systematic review. J Clin Psychiatry 2023; 84: 22r14518.
6. Oram M. Efficacy and enlightenment: LSD psychotherapy and the drug amendments of 1962. J Hist Med Allied Sci 2014; 69: 221–250.
7. Kaptchuk TJ. The double-blind, randomized, placebo-controlled trial: gold standard or golden calf? J Clin Epidemiol 2001; 54: 541–549.
8. Deaton A, Cartwright N. Understanding and misunderstanding randomized controlled trials. Soc Sci Med 2018; 210: 2–21.
9. Andersen F, Anjum RL, Rocca E. Philosophical bias is the one bias that science cannot avoid. Elife 2019; 8: e44929.
10. Sullivan MD. Placebo controls and epistemic control in orthodox medicine. J Med Philos 1993; 18: 213–231.
11. Szigeti B, Phillips LD, Nutt D. Bayesian analysis of real-world data as evidence for drug approval: remembering Sir Michael Rawlins. Br J Clin Pharmacol 2023; 89: 2646–2648.
