Epidemiology and Psychiatric Sciences
Letter. 2019 Jan 15;28(3):356–357. doi: 10.1017/S204579601800080X

Is psychotherapy effective? Pretending everything is fine will not help the field forward

Pim Cuijpers 1, Eirini Karyotaki 1, Mirjam Reijnders 1, David D Ebert 1,2
PMCID: PMC6998904  PMID: 30642408

When we wrote our paper on the question of whether psychotherapies are effective (Cuijpers et al., 2018), some colleagues said the paper was unnecessary. Our claims that waiting-list control groups are problematic, that the quality of many trials is low overall and that many negative studies are not published are, they argued, well documented and well known among researchers. So why describe them again? Our response was that many in the field of psychotherapy research are not aware of these fundamental problems, and are very much internally oriented, with little knowledge of the major methodological developments in the broader biomedical field. Our paper was, of course, meant to stimulate discussion and awareness of these problems in the field.

Unfortunately, Munder and colleagues, in their re-analysis of our study (Munder et al., 2018), chose to ignore the main message of our paper. Instead of acknowledging the major problems in the field, the authors act as if nothing is wrong and as if both the effects of psychotherapy and the state of psychotherapy research are fine.

Waitlist as the ‘natural course of disease’?

It is not possible to reply to all points raised in their paper, so we have selected the most important ones. One important point is the use of waiting-list control groups. In our paper, we argued that this comparison condition may overestimate the effects of therapy because waiting lists discourage patients from seeking alternative treatment, an effect strengthened by the increased expectation of receiving treatment in the future. This is, incidentally, exactly what reviews of control conditions for behavioural interventions conclude (Mohr et al., 2009; Gold et al., 2017).

Munder and colleagues consider the waitlist to reflect the ‘natural course of the disease’. One important reason they give is that a substantial number of patients recover while they are on a waitlist. This is obviously true. However, the question is whether that recovery rate differs from the natural recovery rate. In one study, remission rates in untreated cases of depression were estimated at 23% within 3 months, 32% within 6 months and 53% within a year (Whiteford et al., 2012). The assumption that ‘patients may improve as a function of being included in the trial’ may well be true, but it cannot be made simply on the basis that some patients recover while waiting. It is quite possible, as we argued in our paper, that being on a waiting list actually reduces the spontaneous recovery rate.

There is a clear difference between waitlist and other control groups (see our paper): the effects of a therapy are significantly larger when it is compared with a waitlist than when it is compared with care-as-usual. From a public health perspective, care-as-usual is much more interesting, because it shows what an intervention adds to what already exists. Psychotherapy may be effective when compared with a waitlist, but if it is not more effective than care-as-usual it has very little meaning for public health.

Ignoring the low quality of trials on psychotherapy for depression

But suppose we agree with Munder and colleagues that the waiting list is an acceptable comparison condition. We are then still left with the problem of overestimation due to low study quality and publication bias. Munder and colleagues offer an interesting argument for why they did not use the risk-of-bias estimates in their re-analysis. In brief, their reasoning is as follows: apart from the sources of bias assessed in our paper, there are other issues that should be considered potential sources of bias, and therefore risk of bias was not included in the analyses at all. In our paper we, too, did not use all items of the risk-of-bias tool, because hardly any study would then remain.

Overall, only 23% of the studies in our meta-analysis met the criteria for low risk of bias. If we follow the reasoning of Munder and colleagues, this percentage is even smaller, because we did not rate all potential sources of bias. So how many studies would then remain? 5%? 10%? Munder and colleagues do not seem to worry about this finding, as it is not mentioned anywhere in their paper.

They did offer some other, smaller arguments for not including risk of bias. Apart from some differences of opinion about how particular studies should be rated, they found some errors in the online appendix (not in the data that we used for the analyses). As requested by Dr Wampold, we provided the correct data to the authors (as indicated in footnote 1 of their paper). But apparently they were still reluctant to use these data. Had they used them, they would have seen the major problem with their reasoning.

Even if one accepts the waitlist as an acceptable control condition, the other problems we raised remain. Our calculations show that the effect of therapy compared with waitlist is g = 0.89 (95% confidence interval (CI) 0.80–0.98; N = 159). But if only the studies with low risk of bias are selected, the effect drops to g = 0.62 (95% CI 0.52–0.73; N = 31). And if that estimate is adjusted for publication bias, it drops further to g = 0.50 (95% CI 0.38–0.63; seven imputed studies). It is therefore convenient for Munder and colleagues to disregard risk of bias, because otherwise they would have seen that, even if one accepts the waitlist as a good control condition, the other problems still result in a considerable overestimation of the effects. Only 19% of studies have a low risk of bias, and there are strong and significant indications of publication bias.
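To illustrate how such pooled effects and their sensitivity to study selection can be computed, below is a minimal sketch of a DerSimonian–Laird random-effects pooling in Python. The effect sizes, standard errors and risk-of-bias flags are purely hypothetical and do not reproduce the data behind the figures above; a full analysis would additionally adjust for publication bias, for example with a trim-and-fill procedure, which is omitted here.

    import numpy as np

    # Hypothetical per-study Hedges' g values, standard errors and
    # low-risk-of-bias flags (illustrative only; not our actual data).
    g = np.array([0.95, 0.60, 1.10, 0.40, 0.85, 0.55, 0.75])
    se = np.array([0.20, 0.15, 0.25, 0.18, 0.22, 0.16, 0.19])
    low_rob = np.array([False, True, False, True, False, True, True])

    def dersimonian_laird(y, se):
        """Pool effect sizes with a DerSimonian-Laird random-effects model."""
        v = se ** 2
        w = 1.0 / v                               # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)       # fixed-effect pooled estimate
        q = np.sum(w * (y - y_fixed) ** 2)        # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance
        w_star = 1.0 / (v + tau2)                 # random-effects weights
        pooled = np.sum(w_star * y) / np.sum(w_star)
        se_pooled = np.sqrt(1.0 / np.sum(w_star))
        return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

    print("All studies:      g = %.2f (95%% CI %.2f-%.2f)" % dersimonian_laird(g, se))
    print("Low risk of bias: g = %.2f (95%% CI %.2f-%.2f)" % dersimonian_laird(g[low_rob], se[low_rob]))

Restricting the pooling to the low-risk-of-bias subset in this sketch mirrors the sensitivity analysis described above: the pooled estimate typically shrinks when only the more rigorous trials are retained.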

Pretending everything is fine will not help the field forward

Munder and colleagues give all kinds of other, smaller arguments for why our paper is wrong. We have answers to all of them, but no space to present them here. The most important issue, however, is that they simply ignore the main problems of psychotherapy research that we tried to describe in our paper. It is clear that Munder and colleagues are far from accepting some of these major problems. We certainly hope that other readers have understood the main points raised in our paper, that they see the major problems the field is currently facing, and that they will help in advancing the field. Pretending everything is fine will not help the field forward.

Author ORCIDs

Mirjam Reijnders 0000-0002-4272-2576

References

  1. Cuijpers P, Karyotaki E, Reijnders M and Ebert DD (2018) Was Eysenck right after all? A reassessment of the effects of psychotherapy for adult depression. Epidemiology and Psychiatric Sciences, epub ahead of print. doi: 10.1017/S2045796018000057.
  2. Gold SM, Enck P, Hasselmann H, Friede T, Hegerl U, Mohr DC and Otte C (2017) Control conditions for randomised trials of behavioural interventions in psychiatry: a decision framework. The Lancet Psychiatry 4, 725–732.
  3. Mohr DC, Spring B, Freedland KE, Beckner V, Arean P, Hollon SD, Ockene J and Kaplan R (2009) The selection and design of control conditions for randomized controlled trials of psychological interventions. Psychotherapy and Psychosomatics 78, 275–284.
  4. Munder T, Flückiger C, Leichsenring F, Abbass AA, Hilsenroth MJ, Luyten P, Rabung S, Steinert C and Wampold BE (2018) Is psychotherapy effective? A re-analysis of treatments for depression. Epidemiology and Psychiatric Sciences, epub ahead of print. doi: 10.1017/S2045796018000355.
  5. Whiteford HA, Harris MG, McKeon G, Baxter A, Pennell C, Barendregt JJ and Wang J (2012) Estimating remission from untreated major depression: a systematic review and meta-analysis. Psychological Medicine 43, 1569–1585.
