In 2005, three Editorials published in Epidemiologia e Psichiatria Sociale (Geddes, 2005; Stroup, 2005; Walwyn & Wessely, 2005) stated the need to improve the quality of clinical trials in psychiatry. The general conclusion was that – although, 60 years after Bradford Hill's pioneering work, the randomised controlled trial (RCT) remained the major method for evaluating treatment effectiveness – it had become necessary to develop larger-scale, easier to conduct and more realistic RCTs capable of detecting smaller but still clinically important effects. Thus, the issue of how best to deal with the complexity of interventions in the mental health care field was already appearing on the horizon of epidemiological psychiatric research.
The two Editorials published in the present issue of EPS examine the most recent methodological and practical issues facing clinicians and researchers who conduct psychosocial intervention trials in routine clinical practice. A particular challenge is that of how to accommodate these trials’ complexity, while concurrently respecting the key requisites of the RCT approach, so as to ensure the scientific reliability of trial research findings.
The mental health literature frequently compares exploratory (efficacy) trials and pragmatic (effectiveness) trials from a dichotomous perspective (Harrington et al. 2002). This distinction derives from the classic procedure of first validating a specific treatment under controlled conditions and then verifying whether the observed efficacy of this treatment generalizes to routine clinical practice. This dichotomous model, however, is better suited to drug treatment development and may be of less conceptual value in developing and testing complex mental health interventions. Complex interventions, for example, typically involve the specific clinical context under examination as part of the object of study, and the generally high costs of this type of RCT can render the model's sequential planning stages impractical.
Briefly, efficacy trials must have high internal validity, i.e., a study based on this type of design must treat the most homogeneous population group possible, so as to limit the variance in its sample. It must therefore also attempt to restrict comorbidity. From the treatment perspective, an efficacy trial must ensure the highest possible level of fidelity and consistency in treatment administration (Ruggeri & Tansella, 2011), and it must pose precise questions, with hypotheses developed a priori. Yet efficacy designs of this type are extremely difficult to achieve, because obtaining pure samples and ensuring consistency of treatment intervention are unrealistic expectations in the real world of everyday psychiatric practice. Even when it is possible to achieve these aims, efficacy trial findings can never thoroughly answer practical questions, owing to their lack of external validity. The result is that a given RCT's topic of inquiry may bear little relation to what happens in everyday clinical practice; moreover, the organizational culture and underlying assumptions of the contexts examined are rarely acknowledged in efficacy trials (Roy, 2012).
At the other end of the spectrum, effectiveness trials are expected to have high external validity. Specifically, they should test – insofar as possible – the ways in which treatments are actually delivered in clinical practice. Although this approach represents the strong point of effectiveness trials, conducting them represents a major challenge for researchers and clinicians. Indeed, they are frequently left wondering whether a complex intervention trial's failure to detect treatment effects might be due to suboptimal design, the small number of patients randomly assigned to treatment groups, a brief follow-up period or a lack of robust evidence of difference in the effects of the treatments compared (Fowler et al. 2011; Hodgson et al. 2007).
Moreover, testing and implementing novel psychosocial interventions can involve not only different procedures for individual patient management but also organizational restructuring. Furthermore, if staff members, who determine the research environment's cultural norms, are not fully convinced of the proposed method's value or of the need for the research initiative itself, the considerable energy, time and resources required for the trial to be successfully conducted (including a collaborative atmosphere) will be unavailable.
The most prominent example of the difficulty of applying traditional RCT methodology to real-world practice is indeed that of psychosocial interventions, whose procedures must be ‘forced’ into the context of an RCT. They therefore present a number of problems, such as difficulty in designing controls; the impossibility of maintaining double-blind conditions; the challenge of standardizing interventions; the unpredictability of therapist–patient fit; the heterogeneity of outcomes; and, inevitably, reduced external validity.
No valid alternatives, however, currently exist for producing generalizable and reliable evidence. Thus, despite its limitations, RCT methodology should still be considered the gold standard for proving efficacy. The conceptualization of pragmatic RCTs has represented a milestone in the wider field of mental health service research: a development that builds on conventional RCT methodology to make it more feasible for capturing the complexity of effectiveness studies, especially those testing psychosocial interventions. Pragmatic trials are large-scale trials that focus on maximizing internal validity and avoiding bias while concurrently ensuring the greatest possible adherence of trial procedures to the ‘real world routine’, especially in terms of simplified patient inclusion criteria and trial procedures (Purgato & Adams, 2012).
This issue's Editorials by Graham Dunn and by Ruggeri et al. illustrate a series of rationales for improving the design and analysis strategies of pragmatic trials, which aim to respond to practical management-oriented questions; they can also be useful in answering explanatory questions of scientific interest. Moreover, complex intervention trials could thereby constitute sophisticated clinical experiments designed to test the theories motivating the intervention they are testing; they could also help researchers understand the underlying nature of the clinical problems being treated, in the context of patient- and service-level characteristics.
We therefore examine two issues below, which represent the main challenges to measuring complex interventions in real-world mental health services: (1) defining the most appropriate control intervention and (2) identifying the key ‘ingredients’ of a complex intervention.
Difficulty of control intervention. The placebo procedure cannot be used in pragmatic RCTs as a control condition. Therefore, when studying complex interventions, the key variables to be tested in the control arm should be precisely defined. In wholly pragmatic trials, there is a general consensus that the best control condition is the treatment currently being practised – i.e., treatment as usual (TAU). In fact, pragmatic RCTs for psychosocial interventions generally aim to address this practical question: ‘Does the test treatment confer additional benefit over best current practice treatment?’
The choice of control intervention is a critical issue for pragmatic RCTs testing psychosocial interventions in routine settings. For example, trials evaluating the effectiveness of assertive community treatment (ACT) in the UK (Thornicroft et al. 1998; Burns et al. 1999) showed minimal advantages over TAU. Burns (2008) commented on these findings by suggesting that the observed lack of effect could have been due to the control condition being ‘too good’ or ‘too similar’ to the experimental intervention. More recently, Burns (2009) went further, proposing that the main problem lies in thinking of TAU as a control at all, since TAU may itself be a highly active, and a highly variable and potent, comparator.
Identifying key complex intervention ‘ingredients’. As discussed in the preceding paragraph, both the experimental treatment and TAU can be complex interventions, strongly influenced by the organizational and environmental contexts of the research facility. This view sheds light on the possible interference of non-specific treatment aspects with both the control and experimental arms of this type of RCT. In fact, one cannot be sure whether an observed treatment effect is due to the intervention's specific properties or to some other, non-specific therapeutic effect (Green, 2006; Emsley et al. 2010). Thus, the topic of treatment process variables has become one of growing investigative interest. In their Editorial, Ruggeri et al. discuss the difficulties and implications involved in identifying variables that might actually modulate observed changes (mediators), as well as the influence of crucial pre-treatment factors (moderators) on treatment effects.
Moreover, all too frequently, complex interventions end up being reduced, in the RCT study protocol, to their constituent parts, so as to ‘fit’ them to the design's strict methodological requirements. This approach, however, fails to acknowledge the reality that any complex intervention has the potential to be much more than the sum of its parts. Hawe et al. (2004) therefore proposed that inconclusive trials could be avoided if the function of an intervention, rather than its form, were standardized. They further proposed that this type of approach would make it possible to tailor an intervention to its context level and to the local environment, which could potentially improve its efficacy.
Given the above-mentioned considerations and the detailed issues discussed in the following two Editorials, it is clear that the mental health research field's forthcoming challenge is to develop new trial designs that focus on both efficacy and process evaluation. What is needed is a ‘new generation’ of pragmatic trials for psychosocial interventions. Care should be taken, however, in future research of this type, to avoid major methodological biases while increasing these trials’ potential for capturing ‘real world’ complexity. This modified RCT approach – which can more fully develop Bradford Hill's immensely valuable pioneering contributions from the middle of the last century – will certainly help foster the advancement of mental health service research and bridge the gap between research and clinical practice.
Financial Support
This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
Conflict of Interest
None.
References
- Burns T (2008). Case management and assertive community treatment: what is the difference? Epidemiologia e Psichiatria Sociale 17, 99–105.
- Burns T (2009). End of the road for treatment-as-usual studies? British Journal of Psychiatry 195, 5–6.
- Burns T, Creed F, Fahy T, Thompson S, Tyrer P, White I (1999). Intensive versus standard case management for severe psychotic illness: a randomised trial. Lancet 353, 2185–2189.
- Emsley R, Dunn G, White IR (2010). Mediation and moderation of treatment effects in randomised controlled trials of complex interventions. Statistical Methods in Medical Research 19, 237–270.
- Fowler D, Rollinson R, French P (2011). Adherence and competence assessment in studies of CBT for psychosis: current status and future directions. Epidemiology and Psychiatric Sciences 20, 121–126.
- Geddes JR (2005). Large simple trials in psychiatry: providing reliable answers to important clinical questions. Epidemiologia e Psichiatria Sociale 14, 122–126.
- Green J (2006). The evolving randomised controlled trial in mental health: studying complexity and treatment process. Advances in Psychiatric Treatment 12, 268–279.
- Harrington RC, Cartwright-Hatton S, Stein A (2002). Annotation: randomised trials. Journal of Child Psychology and Psychiatry 43, 695–704.
- Hawe P, Shiell A, Riley T (2004). Complex interventions: how ‘out of control’ can a randomised controlled trial be? British Medical Journal 328, 1561–1563.
- Hodgson R, Bushe C, Hunter R (2007). Measurement of long-term outcomes in observational and randomised controlled trials. British Journal of Psychiatry 191(Suppl. 50), S78–S84.
- Purgato M, Adams C (2012). Heterogeneity: the issue of apples, oranges and fruit pie. Epidemiology and Psychiatric Sciences 21, 27–29.
- Roy J (2012). Randomized treatment-belief trials. Contemporary Clinical Trials 33, 172–177.
- Ruggeri M, Tansella M (2011). New perspectives in the psychotherapy of psychoses at onset: evidence, effectiveness, flexibility, and fidelity. Epidemiology and Psychiatric Sciences 20, 107–111.
- Stroup S (2005). Practical clinical trials for schizophrenia. Epidemiologia e Psichiatria Sociale 14, 132–136.
- Thornicroft G, Wykes T, Holloway F, Johnson S, Szmukler G (1998). From efficacy to effectiveness in community mental health services. PRiSM Psychosis Study 10. British Journal of Psychiatry 173, 423–427.
- Walwyn R, Wessely S (2005). RCTs in psychiatry: challenges and the future. Epidemiologia e Psichiatria Sociale 14, 127–131.
