British Journal of Clinical Psychology. 2020 Aug 16;60(1):38–41. doi: 10.1111/bjc.12264

Ensuring that the Improving Access to Psychological Therapies (IAPT) programme does what it says on the tin

Michael J. Scott
PMCID: PMC7891596  PMID: 32803761

I welcome the opportunity to comment on the recent paper ‘Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta‐analysis of 10‐years of practice‐based evidence’ by Wakefield et al. (2020), published in the Journal. This paper is of considerable importance because the authors conclude that IAPT’s results approach the 50% recovery rate found in randomized controlled trials (RCTs) of cognitive behaviour therapy for depression and the anxiety disorders. Taken at face value, this meta‐analysis provides justification for current IAPT services, which have cost the taxpayer £4 billion. Further, the study could fuel not only the funding of services for more clients but also an accelerated expansion of IAPT services without any diagnostic boundary. There can be no doubt that improving access to psychological therapies is an important societal aim, as only a minority of those with mental health problems currently benefit from such therapies. But this is a far cry from legitimizing the tangible expression of this goal in the guise of the IAPT service.

Allegiance bias and real‐world outcome

In the Wakefield et al. (2020) paper, all the authors declare ‘no conflict of interest’. But the corresponding author of the study, Stephen Kellett, is an IAPT Programme Director. The study is therefore open to a charge of allegiance bias. It is consequently not surprising that Wakefield et al. (2020) fail to make the distinction between IAPT’s studies and IAPT studies. By definition, the former have a vested interest, akin to a drug manufacturer espousing the virtues of its own psychotropic drug, whilst an IAPT study is conducted by a body or individual without a vested interest. In this connection, Wakefield et al. (2020) have implicitly misclassified this author’s IAPT study, Scott (2018). In their review, Wakefield et al. (2020) refer to the Scott (2018) study with a focus on a subsample of 29 clients (from the 90 IAPT clients) for whom psychometric test results were available in the GP records. But Scott (2018) made clear that concluding anything from such a subsample was extremely hazardous. The bigger picture was that 90 IAPT clients were independently assessed using a ‘gold standard’ diagnostic interview, either before or after their personal injury (PI) claim. Independent of their PI status, only the tip of the iceberg lost their diagnostic status as a result of IAPT treatment. Wakefield et al. (2020) were strangely mute on this point. They similarly failed to acknowledge that IAPT’s studies involved no independent assessment of IAPT clients’ functioning and no use of a ‘gold standard’ diagnostic interview.

The failure to demonstrate added value

The Wakefield et al. (2020) study did not compare IAPT’s claimed outcomes with an appropriate counterfactual. For a new service to warrant continued funding, it must demonstrate outcomes better than those that would have been achieved had the service never existed. But data from psychological services that pre‐dated IAPT suggest that IAPT has conferred no added value. Mullin et al. (2006) examined the effects of counselling/therapy in more than 11,000 clients and concluded that between 5 and 6 clients out of every 10 met the criterion for recovery. These authors used the same criterion with regard to the reliable change index (Jacobson & Truax, 1991) as used by IAPT, but used the CORE‐OM self‐report measure (Barkham et al., 2006) rather than the PHQ‐9 (Kroenke et al., 2001) and GAD‐7 (Spitzer et al., 2006).
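For readers unfamiliar with this criterion, a minimal sketch of the Jacobson and Truax (1991) reliable change index follows; the notation is generic and the 1.96 threshold is the conventional choice rather than a figure taken from the papers under discussion. For pre‐ and post‐treatment scores \(x_1\) and \(x_2\) on a measure with baseline standard deviation \(s_1\) and reliability \(r_{xx}\),

\[ RC = \frac{x_2 - x_1}{S_{\mathrm{diff}}}, \qquad S_{\mathrm{diff}} = \sqrt{2\,S_E^{2}}, \qquad S_E = s_1\sqrt{1 - r_{xx}}. \]

Change is deemed reliable when \(|RC| > 1.96\); clinically significant recovery, in the Jacobson and Truax framework, additionally requires the client to move from above to below the measure’s clinical cut‐off.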

IAPT’s studies fail to clear methodological bars for evidence‐supported treatments (ESTs)

The past decade has witnessed a refinement of the criteria necessary for psychological interventions to be regarded as evidence‐supported. This has included (1) more detailed examination of the risk of bias (Higgins et al., 2011), (2) the need for comparisons with active control conditions (Carpenter et al., 2018; Guidi et al., 2018), (3) the need for independent blind assessment, a combination of observer and patient‐reported outcome measures, and a determination of the duration of recovery (Guidi et al., 2018), and (4) the need for measures of treatment fidelity and for testing a supposed EST in real‐world settings with evaluators independent of those who developed the protocols (Tolin et al., 2015). Over the same decade, IAPT has greatly expanded, yet Wakefield et al. (2020) fail to acknowledge that IAPT’s studies are largely invalidated by these considerations. These authors seem unaware that the methodological bar for interventions to be regarded as ESTs has been raised.

IAPT’s treatment infidelity

As Wakefield et al. (2020) acknowledge, IAPT does not utilize any measure of treatment fidelity, but they appear not to appreciate the gravity of this. IAPT espouses NICE‐approved treatments, but there can be no certainty that these are delivered at the coal face, for example that an IAPT clinician’s claimed trauma‐focussed CBT (TFCBT) actually took place. Further, IAPT eschews diagnosis (National Collaborating Centre for Mental Health, 2019), making it impossible to compare the effectiveness of TFCBT delivered in the service with the efficacy found in RCTs of PTSD in which TFCBT was employed.

Dubious points of reference

More generally, IAPT’s client population is so heterogeneous that no meaningful comparisons can be made with the results of RCTs. A similar criticism can be made of the Stewart and Chambless (2009) meta‐analysis of effectiveness studies, which Wakefield et al. (2020) used as a reference point for IAPT’s studies. For example, Stewart and Chambless (2009) cite the Westbrook and Kirk (2005) effectiveness study when making comparisons with RCT efficacy studies, but the Westbrook and Kirk (2005) population was of unknown diagnostic status, making for a doubtful comparison. Wakefield et al. (2020) also cite a study by Thimm and Antonsen (2014) as a reference point for IAPT’s studies, but the latter used a standardized diagnostic interview to establish that their clients were depressed; no such yardstick has been employed by IAPT, making it uncertain whether IAPT’s claimed depressed clients are comparable to those in that study. Comparison is doubly problematic because the Thimm and Antonsen (2014) study focused exclusively on clients who underwent group CBT.

IAPT’s studies are of completers, despite most clients dropping out

Whilst Wakefield et al. (2020) acknowledge the importance of an intention‐to‐treat analysis, they fail to highlight how its absence undermines their review. IAPT’s studies focus primarily on completers, defined as those attending two or more sessions. Notwithstanding this strange definition of completion, the rate of dropout was 62.5% (Richards & Borglin, 2011). This pattern of engagement is identical to that found in Scott (2018). Given that clients completing treatment in IAPT are atypical, the conclusions of Wakefield et al. (2020) are of doubtful real‐world significance. Wakefield et al. (2020) seek to buttress the status of IAPT’s studies by comparison with the Stewart and Chambless (2009) meta‐analysis, but 52 of the 56 studies included in the latter were completer analyses. This is an inappropriate yardstick given the ‘haemorrhaging’ of IAPT clients reported by Richards and Borglin (2011).
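To make the consequences of omitting an intention‐to‐treat analysis concrete, a minimal illustrative calculation can be sketched from the figures cited above, assuming purely for illustration that dropouts do not recover and that roughly 50% of completers do. With a dropout rate of 62.5%, only 37.5% of those entering treatment complete it, so the recovery rate on an intention‐to‐treat basis would be of the order of

\[ 0.375 \times 0.50 \approx 0.19, \]

that is, fewer than one in five of those who start treatment, a far less flattering figure than a completer‐only analysis suggests.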

Towards achieving outcomes that matter to clients

The Wakefield et al. (2020) study serves to legitimize current IAPT practice. These authors are remiss in not pointing out that IAPT’s studies reveal no evidence of enduring loss of diagnostic status. As such, they display an indifference to what clients would regard as evidence of treatment making a real‐world difference. There is a pressing need for a publicly funded, independent evaluation of IAPT’s work, one that takes appropriate account of the methodological rigour refined over the last decade. Until such time, the burden of proof is on IAPT to provide credible evidence of effectiveness. Although the author’s own study, Scott (2018), is not without its limitations (the IAPT clients had been PI litigants at some stage), it was at least independent, assessed outcome using a ‘gold standard’ diagnostic interview, and described clients’ reported experiences of the service. There is also a need for an independent qualitative study of clients’ and clinicians’ experiences of IAPT.

Conflicts of interest

The author declares no conflict of interest.

References

  1. Barkham, M., Mellor‐Clark, J., Connell, J., & Cahill, J. (2006). A core approach to practice‐based evidence: A brief history of the origins and applications of the CORE‐OM and CORE System. Counselling and Psychotherapy Research, 6, 3–15. https://doi.org/10.1080/14733140600581218
  2. Carpenter, J. K., Andrews, L. A., Witcraft, S. M., Powers, M. B., Smits, J. A. J., & Hofmann, S. G. (2018). Cognitive behavioral therapy for anxiety disorders: A meta‐analysis of randomised placebo‐controlled trials. Depression and Anxiety, 35, 502–514. https://doi.org/10.1002/da.22728
  3. Guidi, J., Brakemeier, E.‐L., Bockting, C. L. H., & Fava, G. A. (2018). Methodological recommendations for trials of psychological interventions. Psychotherapy and Psychosomatics, 87, 276–284. https://doi.org/10.1159/000490574
  4. Higgins, J. P. T., Altman, D. G., Gøtzsche, P. C., Jüni, P., Moher, D., Oxman, A. D., … Sterne, J. A. C. (2011). The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ, 343, d5928. https://doi.org/10.1136/bmj.d5928
  5. Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19. https://doi.org/10.1037/0022-006X.59.1.12
  6. Kroenke, K., Spitzer, R. L., & Williams, J. B. W. (2001). The PHQ‐9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16, 606–613. https://doi.org/10.1046/j.1525-1497.2001.016009606.x
  7. Mullin, T., Barkham, M., Mothersole, G., Bewick, B. M., & Kinder, A. (2006). Recovery and improvement benchmarks for counselling and the psychological therapies in routine primary care. Counselling and Psychotherapy Research, 6, 68–80. https://doi.org/10.1080/14733140600581515
  8. National Collaborating Centre for Mental Health (2019). The improving access to psychological therapies manual.
  9. Richards, D. A., & Borglin, G. (2011). Implementation of psychological therapies for anxiety and depression in routine practice: Two year prospective cohort study. Journal of Affective Disorders, 133, 51–60. https://doi.org/10.1016/j.jad.2011.03.024
  10. Scott, M. J. (2018). Improving access to psychological therapies (IAPT) – The need for radical reform. Journal of Health Psychology, 23, 1136–1147. https://doi.org/10.1177/1359105318755264
  11. Spitzer, R. L., Kroenke, K., Williams, J. B. W., & Löwe, B. (2006). A brief measure for assessing generalized anxiety disorder: The GAD‐7. Archives of Internal Medicine, 166, 1092–1097. https://doi.org/10.1001/archinte.166.10.1092
  12. Stewart, R. E., & Chambless, D. L. (2009). Cognitive‐behavioural therapy for adult anxiety disorders in clinical practice: A meta‐analysis of effectiveness studies. Journal of Consulting and Clinical Psychology, 77, 595–606. https://doi.org/10.1037/a0016032
  13. Thimm, J. C., & Antonsen, L. (2014). Effectiveness of cognitive behavioural group therapy for depression in routine practice. BMC Psychiatry, 14, 292. https://doi.org/10.1186/s12888-014-0292-x
  14. Tolin, D. F., McKay, D., Forman, E. M., Klonsky, E. D., & Thombs, B. D. (2015). Empirically supported treatment: Recommendations for a new model. Clinical Psychology: Science and Practice, 22(4), 317–338. https://doi.org/10.1111/cpsp.12122
  15. Wakefield, S., Kellett, S., Simmonds‐Buckley, M., Stockton, D., Bradbury, A., & Delgadillo, J. (2020). Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta‐analysis of 10‐years of practice‐based evidence. British Journal of Clinical Psychology. https://doi.org/10.1111/bjc.12259
  16. Westbrook, D., & Kirk, J. (2005). The clinical effectiveness of cognitive behaviour therapy: Outcome for a large sample of adults treated in routine practice. Behaviour Research and Therapy, 43, 1243–1261. https://doi.org/10.1016/j.brat.2004.09.006
