Canadian Journal of Psychiatry / Revue Canadienne de Psychiatrie
Editorial. 2015 Dec;60(12):531–533. doi: 10.1177/070674371506001202

Introduction to Special Section on Pseudoscience in Psychiatry

Scott O Lilienfeld
PMCID: PMC4679160; PMID: 26720820

As Nobel prize–winning physicist Richard Feynman reminded us, “the first principle is that you must not fool yourself, and you are the easiest person to fool.”1, p 12 One crucial principle in psychiatry is that all of us, no matter how intelligent or well-trained, are susceptible to being duped by specious claims. Research reveals, at best, modest and often negligible correlations between measures of intelligence and critical thinking skills, suggesting that these 2 domains are largely distinct.2 Nevertheless, because of a phenomenon known as bias blind spot, whereby most of us are keenly aware of others’ mental shortcomings yet largely oblivious to our own,3 we may overestimate our capacities to distinguish dubious from well-supported psychiatric claims (for reviews of widespread biases and other errors in psychiatry, see Croskerry4 and Crumlish and Kelly5).

A careful consideration of errors in thinking is germane to psychiatry and related fields because of the continuing insinuation of pseudoscientific claims into myriad domains of mental health practice.6 Pseudoscientific claims display the superficial trappings of science but lack its substance. As a consequence, they can readily fool nonspecialists—and even specialists, on occasion—into believing that they are well supported by evidence. In contrast to developed sciences, pseudosciences tend to lack methodological and procedural safeguards against confirmation bias, the deeply entrenched tendency to seek out evidence consistent with one’s hypotheses and to deny, dismiss, or distort evidence that is not.7 Such safeguards include randomization to conditions in the case of experimental designs; placebo controls; blinded designs; pre- and post-test measures with demonstrated reliability, construct validity, norms, and standardization; and rigorous peer review.8 These safeguards are far from foolproof and do not eliminate all sources of medical error.9 Nevertheless, they are widely accepted as desiderata in psychiatric research and are crucial bulwarks against commonplace errors in clinical inference.
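To make two of these safeguards concrete, the minimal sketch below (in Python, with hypothetical names and data, not drawn from any study cited here) illustrates how randomization to conditions and blinded outcome rating can be operationalized.

```python
import random

# A minimal, purely illustrative sketch of two safeguards named above:
# randomization to conditions and blinding of outcome raters.
# All names and data here are hypothetical.

def randomize(participants, seed=42):
    """Assign each participant to 'treatment' or 'placebo' at random, so
    that neither prognosis nor clinician preference drives assignment."""
    rng = random.Random(seed)  # seeded only to make the demo reproducible
    return {p: rng.choice(["treatment", "placebo"]) for p in participants}

def blind(assignments):
    """Replace condition labels with opaque codes so that outcome raters
    cannot, even unconsciously, score in line with their expectations."""
    codes = {"treatment": "A", "placebo": "B"}
    return {p: codes[c] for p, c in assignments.items()}

participants = [f"P{i:03d}" for i in range(1, 9)]
masked = blind(randomize(participants))
print(masked)  # raters see only 'A'/'B' until the blind is broken
```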

Although the boundaries separating pseudoscience from science are fuzzy,10 pseudosciences are characterized by several warning signs—fallible but useful indicators that distinguish them from most scientific disciplines. Such warning signs include an emphasis on confirmation rather than refutation of hypotheses (weighing hits more than misses), overuse of ad hoc hypotheses (after-the-fact escape hatches or loopholes) for explaining away negative findings, absence of self-correction in the face of repeated negative findings, placing the burden of proof on skeptics rather than on proponents of assertions, expansive claims that greatly outstrip the available research evidence, overreliance on anecdotal evidence (anecdata), evasion of systematic peer review, and the use of scientific-sounding but largely vacuous terminology (for example, “receptors of the neuro networks with progressively lower valences”11, p 318).12,13 In contrast to most accepted medical interventions, which are prescribed for a circumscribed number of conditions, many pseudoscientific techniques lack boundary conditions of application. For example, some proponents of Thought Field Therapy, an intervention that purports to correct imbalances in unobservable energy fields, using specified bodily tapping algorithms, maintain that it can be used to treat virtually any psychological condition, and that it is helpful not only for adults but also for children, dogs, and horses.14

No indicator of pseudoscience should be used in isolation to disqualify a claim, because some genuine scientific research programs exhibit these features as well. For example, ad hoc hypotheses play a legitimate role in science, especially when invoked judiciously. In most mature sciences, such hypotheses tend to enhance a theory’s content, predictive power, or both. In contrast, in pseudosciences, ad hoc hypotheses are typically introduced as desperate measures to explain away contrary findings, and rarely enhance the theory’s substance or capacity to generate successful predictions.15

Pseudoscience poses a serious threat to psychiatric practice and consumers of mental health services. Whether they be assertions concerning the use of the Rorschach Inkblot Test to detect anxiety or depression, energy therapies for trauma, gluten-free diets for autism spectrum disorder (ASD), or sensory-motor integration therapy for attention-deficit hyperactivity disorder, numerous mental health claims are contradicted or inadequately supported by research evidence (see Lilienfeld et al16 for a reasonably comprehensive overview of pseudoscientific assertions in clinical psychology and psychiatry). Moreover, many unsubstantiated psychological and psychiatric techniques are widely used. For example, although controlled evidence demonstrates that facilitated communication, sometimes called supported typing, is ineffective for ASD, survey data indicate that it continues to be widely administered in many quarters, with up to 10 per cent of people with ASD receiving it.17

Some may be tempted to dismiss pseudoscientific techniques as harmless and as best ignored. This approach would be misguided for 2 major reasons. First, growing evidence suggests that at least some psychological treatments are iatrogenic. For example, crisis debriefing (critical incident stress debriefing) for trauma-exposed victims and Scared Straight interventions for adolescents with conduct disorder have been associated with negative effect sizes in several controlled trials.18,19 Second, even techniques that are themselves innocuous can exact substantial indirect harm because of what economists term opportunity costs. Specifically, the substantial time, effort, energy, and money expended in seeking out and undergoing ineffective treatments may leave mental health consumers with few resources to obtain effective treatments.20
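For readers unfamiliar with the metric, the brief sketch below uses Cohen’s d, one common standardized effect size, with purely hypothetical numbers to show the sense in which an effect size can be negative: the treated group fares worse than untreated controls.

```python
import statistics

# Hypothetical symptom scores after treatment (higher = worse), purely
# for illustration. A negative standardized effect size here means the
# treated group ended up worse off than controls, the sense in which
# interventions such as those above have been called iatrogenic.

treated  = [22, 25, 24, 27, 23, 26]   # post-treatment symptom scores
controls = [20, 21, 19, 22, 20, 21]

def cohens_d(group, comparison):
    """Cohen's d with a pooled standard deviation. Sign convention:
    positive = treated group improved relative to the comparison group."""
    n1, n2 = len(group), len(comparison)
    s1, s2 = statistics.stdev(group), statistics.stdev(comparison)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    # Lower symptom scores are better, so improvement = comparison - group.
    return (statistics.mean(comparison) - statistics.mean(group)) / pooled

print(round(cohens_d(treated, controls), 2))  # about -2.6: treated fared worse
```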

Unsubstantiated techniques have a lengthy history in psychiatry. Spinning chairs, tranquilizing chairs, the Utica crib, bloodletting, blistering, purging, leeching, dental removal, and prefrontal lobotomy, among scores of other treatments, were once believed by many physicians to be effective for psychiatric disorders, but are now recognized as useless or harmful.21 Although the field of psychiatry has learned from many of the painful mistakes of the past, a continued attitude of healthy self-criticism and vigilance will be essential to minimize errors in clinical practice and diminish risk to clients. Although controlled data are lacking, some authors have proposed that teaching psychiatry and psychology students about the well-intentioned but disastrous treatment mistakes of previous eras—and the ways in which such mistakes were corrected—may be an effective didactic tool for debiasing them against overconfidence in their clinical judgments.22 This educational practice underscores the ubiquitous fallibility of human reasoning processes and the necessity of rigorous research designs to minimize errors in inference.

Fortunately, the recent movement to embrace evidence-based practice (EBP), now widely accepted in most areas of medicine, has at last begun to gain traction in psychiatry and psychology.23 As Lee and Hunsley24 discuss in their article in this In Review, EBP and the accompanying push to develop clinical practice guidelines are crucial steps toward assisting practitioners with the challenging task of sifting pseudoscience from science in psychotherapy and assessment (the latter discussed in their online supplemental material). Some practitioners recoil at practice guidelines on the grounds that they place constraints on clinical judgment in the selection of techniques. Nevertheless, data consistently indicate that unbridled clinical judgment regarding the effectiveness of treatments tends to be inferior to carefully collected data on their effectiveness. The latter data should be overruled sparingly, and only when there is a clear-cut reason to believe that a given treatment is unlikely to work for a given client (for example, a past history of repeated failures to respond to well-delivered cognitive-behavioural therapy for a client with a mood disorder).25

Psychiatry’s past mistakes are a reminder that even well-intentioned techniques can do harm. Perhaps nowhere is this sobering lesson more evident than in the domain of recovered memory techniques, which have been found in numerous laboratory studies to be associated with a heightened risk of false recollections in a large proportion of people.26 As Lynn et al27 point out in their “survey of surveys” in this In Review, beliefs have consequences. Lynn and coauthors review a large corpus of data demonstrating that sizeable proportions of clinicians hold poorly supported beliefs about human memory, such as the belief that memory works like a video camera or tape recorder, or that memories are stored permanently in the brain. Many of these beliefs are shared by current and would-be clients in psychotherapy. As Lynn and colleagues note briefly in their concluding section, such beliefs may contribute to the adoption of unsupported and even dangerous clinical methods, such as suggestive techniques (for example, repeated prompting of memories, hypnosis, journalling, body work, and so-called truth serum) to recover purported memories of childhood trauma.

These 2 In Review articles24,27 underscore a key point. Science, being a human endeavour, is necessarily imperfect. Nevertheless, science—which is a set of finely honed tools designed to prevent us from fooling ourselves28—remains psychiatry’s best hope of winnowing out errors in clinical inference and improving patient care.

Acknowledgments

Dr Lilienfeld has no conflicts of interest or funding sources to declare.

The Canadian Psychiatric Association proudly supports the In Review series by providing an honorarium to the authors.

References

1. Feynman R. Cargo cult science. Cal Tech commencement address. Pasadena (CA): California Institute of Technology; 1974.
2. Stanovich KE. What intelligence tests miss: the psychology of rational thought. New Haven (CT): Yale University Press; 2010.
3. Pronin E, Lin DY, Ross L. The bias blind spot: perceptions of bias in self versus others. Pers Soc Psychol Bull. 2002;28:369–381.
4. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–780. doi: 10.1097/00001888-200308000-00003.
5. Crumlish N, Kelly BD. How psychiatrists think. Adv Psychiatr Treat. 2009;15:72–79.
6. Gambrill E. Critical thinking in clinical practice: improving the quality of judgments and decisions. New York (NY): John Wiley & Sons; 2006.
7. Nickerson RS. Confirmation bias: a ubiquitous phenomenon in many guises. Rev Gen Psychol. 1998;2:175–220.
8. Lilienfeld SO, Ritschel LA, Lynn SJ, et al. Why many clinical psychologists are resistant to evidence-based practice: root causes and constructive remedies. Clin Psychol Rev. 2013;33:883–900. doi: 10.1016/j.cpr.2012.09.008.
9. Ioannidis JP. Why most published research findings are false. Chance. 2005;18:40–47. doi: 10.1371/journal.pmed.0020124.
10. Pigliucci M, Boudry M, editors. Philosophy of pseudoscience: reconsidering the demarcation problem. Chicago (IL): University of Chicago Press; 2013.
11. Shapiro F, Solomon RM. Eye movement desensitization and reprocessing. New York (NY): John Wiley & Sons; 1995.
12. Bunge M. What is pseudoscience? Skept Inq. 1984;9(1):36–46.
13. Lilienfeld SO, Ammirati R, David M. Distinguishing science from pseudoscience in school psychology: science and scientific thinking as safeguards against human error. J Sch Psychol. 2012;50:7–36. doi: 10.1016/j.jsp.2011.09.006.
14. Callahan RJ. Thought field therapy: response to our critics and a scrutiny of some old ideas of social science. J Clin Psychol. 2001;57:1251–1260.
15. Herbert JD, Lilienfeld SO, Lohr JM, et al. Science and pseudoscience in the development of eye movement desensitization and reprocessing: implications for clinical psychology. Clin Psychol Rev. 2000;20:945–971. doi: 10.1016/s0272-7358(99)00017-3.
16. Lilienfeld SO, Lynn SJ, Lohr JM, editors. Science and pseudoscience in clinical psychology. 2nd ed. New York (NY): Guilford Books; 2014.
17. Lilienfeld SO, Marshall J, Todd JT, et al. The persistence of fad interventions in the face of negative scientific evidence: facilitated communication for autism as a case example. Evid Based Commun Assess Interv. 2014;8:62–101.
18. Lilienfeld SO. Psychological treatments that cause harm. Perspect Psychol Sci. 2007;2:53–70. doi: 10.1111/j.1745-6916.2007.00029.x.
19. Dimidjian S, Hollon SD. How would we know if psychotherapy were harmful? Am Psychol. 2010;65:21–33. doi: 10.1037/a0017299.
20. Lohr JM, Devilly GJ, Lilienfeld SO, et al. First do no harm, and then do some good: science and professional responsibility in the response to disaster and trauma. Behav Ther. 2006;29:131–135.
21. Lieberman JA, Ogas O. Shrinks: the untold story of psychiatry. New York (NY): Little, Brown and Company; 2015.
22. Lilienfeld SO, Ritschel LA, Lynn SJ, et al. Why ineffective psychotherapies appear to work: a taxonomy of causes of spurious therapeutic effectiveness. Perspect Psychol Sci. 2014;9:355–387. doi: 10.1177/1745691614535216.
23. Barlow DH, Bullis JR, Comer JS, et al. Evidence-based psychological treatments: an update and a way forward. Annu Rev Clin Psychol. 2013;9:1–27. doi: 10.1146/annurev-clinpsy-050212-185629.
24. Lee CM, Hunsley J. Evidence-based practice: separating science from pseudoscience. Can J Psychiatry. 2015;60(12):534–540. doi: 10.1177/070674371506001203.
25. Grove WM, Meehl PE. Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: the clinical–statistical controversy. Psychol Public Policy Law. 1996;2:293–323.
26. Loftus EF. Memory distortion and false memory creation. Bull Am Acad Psychiatry Law. 1996;24:281–295.
27. Lynn SJ, Evans J, Laurence J-R, et al. What do people believe about memory? Implications for the science and pseudoscience of clinical practice. Can J Psychiatry. 2015;60(12):541–547. doi: 10.1177/070674371506001204.
28. Tavris C, Aronson E. Mistakes were made (but not by me): why we justify foolish beliefs, bad decisions, and hurtful acts. Boston (MA): Houghton Mifflin Harcourt; 2008.
