CMAJ: Canadian Medical Association Journal. 2009 Apr 16;180(10):998–1000. doi: 10.1503/cmaj.082007

What kind of randomized trials do we need?

Merrick Zwarenstein 1,, Shaun Treweek 1
PMCID: PMC2679816  PMID: 19372438

In 1967, Daniel Schwartz and Joseph Lellouch, 2 French statisticians, and their British colleague and translator Michael Healy wrote “[M]ost therapeutic trials are inadequately formulated, and this from the earliest stages of their conception.”

The seminal paper1 from which this dramatic assertion is drawn is reprinted in the May 2009 issue of the Journal of Clinical Epidemiology as part of a joint focus with CMAJ on making randomized controlled trials (RCTs) more useful.

Schwartz and Lellouch argued that there are 2 kinds of randomized trials embodying radically different attitudes to the evaluation of treatment, which they named “pragmatic” and “explanatory.” They went on to say that these 2 attitudes require different approaches to the design of a randomized trial. The pragmatic attitude seeks to directly inform real-world decisions among alternative treatments. Schwartz and Lellouch showed that this purpose is satisfied by trials that select typical participants, settings and comparator care to widen real-world applicability. In contrast, the explanatory attitude seeks to understand a biological process by testing the hypothesis that a specified biological response is explained by exposure to a particular treatment. Tight restrictions on participants, treatment, control and setting maximize the contrast with the control group and increase the ability to test this kind of hypothesis.

Their assertion of inadequate formulation relates to the mismatch between the use we make of most trials (which is to inform decisions on therapy) and the design of these trials (which generally takes the opposite form, best suited to testing causal hypotheses). This mismatch between the clinical context in which clinicians must make decisions and the clinical context of the randomized trials that they must use for evidence means that health professionals (and, writ large, health care funders) are left without direct evidence upon which to base most of the patient care decisions (and funding decisions) that each must make. Because information from an explanatory trial is unlikely to answer a pragmatic question, and vice versa, Schwartz and Lellouch proposed that investigators should explicitly specify the purpose of their trial and design it to match that purpose.

There are few trials whose purpose and design choices match. The only review of this subject2 identified fewer than 100 pragmatically designed randomized trials among the quarter million or so RCTs listed by the US National Library of Medicine, which suggests that existing RCTs are mostly explanatory in design and thus not directly applicable to choosing between treatment options. This is ironic, since the very first published randomized trial was pragmatic in purpose and in many of its design choices. It showed clear benefits for patients receiving streptomycin and usual care (bed rest) over the control group receiving only usual care.3 The decline of tuberculosis in high-income countries is thus due in part to the pragmatic trial.

Why so few pragmatic trials? Because of the size of the market, US pharmaceutical licensing regulations are the principal stimulus for the conduct of RCTs of treatments and the main influence on their design. The requirement that pharmaceutical manufacturers demonstrate efficacy of their products was first legislated in the 1962 Kefauver–Harris amendments to the US Federal Food, Drug, and Cosmetic Act, passed in the wake of the thalidomide tragedy, from which the United States had been largely spared by caution on the part of the US Food and Drug Administration (FDA).4 Perhaps because of their focus on safety, these licensing regulations5 devote most of their attention to preparatory studies in animals, safety issues in humans and documentation. The little guidance there is argues against trials with a pragmatic attitude: “One problem [with active-control trials] is that there are numerous ways of conducting a study that can obscure differences between treatments, such as poor diagnostic criteria, poor methods of measurement, poor compliance, medication errors, or poor training of observers. As a general statement, carelessness of all kinds will tend to obscure differences between treatments. Where the objective of a study is to show a difference, investigators have powerful stimuli toward assuring study excellence.”6 Much of what the FDA labels “careless” or “poor” is typical in usual care. The FDA thus equates pragmatic design choices aimed at increasing applicability with carelessness and poor study design, which results in trials that lack the attributes needed to directly support decisions about the real-world usefulness of a treatment.

FDA regulations and guidance are influential, but 2 other factors cannot be ignored. First, after spending many millions of dollars on the development of a therapeutic drug or device, no corporation wants to give such an investment less than an ideal setting for displaying its benefits; hence the emphasis on starkly contrasting comparison groups (often placebo controlled), enhanced adherence, and highly selected patients, centres and clinicians. Second, the shared desire of the FDA and industry for trials conducted under ideal conditions with strong contrasts is reinforced by the preference of the US National Institutes of Health for trials that elucidate clear physiologic hypotheses. This triangle of actors contributes to the flood of stringently conducted and internally valid randomized trials of doubtful applicability to most patients, most settings and most clinicians.

Among third-party funders such as Medicare in the United States, there is disquiet about the remoteness of regulatory randomized trials from real-world decision-making.7 These funders are concerned that the potentially lower real-world benefits of a treatment might be outweighed by potentially higher risks, which would leave decisions on use and funding unclear. As a consequence, there is rising interest in the design of trials that would avoid these misleading design attributes and provide, as Schwartz and Lellouch long ago suggested, direct decision-making information to those who must choose whether or not to prescribe, use, pay for or promote particular treatments.

Several papers take up this critically important theme. All appear in the May 2009 issue of the Journal of Clinical Epidemiology. The article by Thorpe and colleagues,8 which also appears in this issue of CMAJ (page 1025), offers us a first draft of a means for classifying design choices as to their degree of pragmatism, important because this is not an all-or-none phenomenon. Karanicolas and colleagues9,10 argue that a trial has a fundamental “point of view” that changes the relationship between the decision-making purpose of a trial and its design. Oxman and colleagues11,12 argue the contrary, namely that patients and clinicians benefit most from having more pragmatic evidence on which to base their decisions. Finally, Maclure,13 whose article also appears in this issue of CMAJ (page 1001), points to the close match between decision-makers’ preferences for directly applicable evidence and the pragmatic attitude favoured by Schwartz and Lellouch. Some of these authors have also recently published an extension to the CONSORT (Consolidated Standards of Reporting Trials) statement for pragmatic trials (Table 1),14 with recommendations on reporting of trials whose aim is to inform decisions. This CONSORT extension encourages authors to include in their trial publications information to help readers judge the applicability of the results to their own settings, the better to decide whether or not to implement the tested intervention.14

Table 1. Key differences between trials with explanatory and pragmatic attitudes*

Feature | Explanatory attitude | Pragmatic attitude
Question | Efficacy: Can the intervention work? | Effectiveness: Does the intervention work when used in normal practice?
Setting | Tightly controlled, well resourced, “ideal” setting | Normal practice
Participants | Highly selected; poorly adherent participants and those with conditions that might dilute the effect are often excluded | Little or no selection beyond the clinical indication of interest
Intervention | Strictly enforced; adherence is monitored closely | Applied flexibly as it would be in normal practice
Comparator | Strictly enforced; adherence is monitored closely | Often usual care, with usual variation; applied flexibly as it would be in normal practice
Outcomes | Often short-term surrogates or process measures | Directly relevant to participants, funders, communities and health care practitioners
Relevance to practice | Indirect: little effort is made to match the design of the trial to the decision-making needs of those in the usual setting in which the intervention will be implemented | Direct: the trial is designed to meet the needs of those making decisions about treatment options in the setting in which the intervention will be implemented

*Adapted, with permission, from Zwarenstein M, Treweek S, Gagnier J, et al.; CONSORT and Pragmatic Trials in Healthcare (Practihc) groups. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390. The table in BMJ was adapted from a table presented by Marion Campbell, University of Aberdeen, at the 2008 Society for Clinical Trials meeting.

The expansion of interest in the design of pragmatic randomized trials to support decision-making is now much needed. In Canada, for example, the regulatory authority (Health Canada) is currently considering a progressive licensing model for therapeutic agents.15 Foremost in its consideration is speeding the process of licensing while maintaining high standards for the evaluation of safety and efficacy. Health Canada might consider revisiting its regulations and encouraging the use of pragmatic designs to support its decision-making. It might be argued that this would lengthen an already prolonged licensing process, reducing the ability of inventors to profit from their discoveries. There is a counterargument: if the initial randomized trial of a treatment were pragmatic in purpose and design, funders of the treatment could immediately use that information to make decisions on usefulness in their setting and patient group, eliminating what are today entirely separate and sequential processes for regulatory approval and formulary inclusion. Combining the trials that collect information for regulatory and formulary inclusion processes could simplify both and make more transparent the reasons for decisions on reimbursement. And since large public funders demanding information for decision-making are the world’s most important markets for new pharmaceuticals, this could lengthen the period of profitable patent protection, rather than shorten it.

Schwartz and Lellouch ended their paper with a devastating conclusion: “Most trials done hitherto have adopted the explanatory approach without question; the pragmatic approach would often have been more justifiable.” Forty years and hundreds of thousands of randomized trials later, this remains true. It is time to shift our design choices so that they match our usual purpose in conducting a trial, most often to directly inform the decisions of real-world patients, clinicians and third-party funders.

Footnotes

This article was published simultaneously in the May 2009 issue of the Journal of Clinical Epidemiology (www.jclinepi.com).

Competing interests: None declared.

Contributors: Both of the authors were involved in the preparation of the manuscript and approved the final version submitted for publication.

REFERENCES

1. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Chronic Dis 1967;20:637–48. doi: 10.1016/0021-9681(67)90041-0. [Reprinted in J Clin Epidemiol 2009;62:499–505.]
2. Vallvé C. A critical review of the pragmatic clinical trial [Spanish]. Med Clin (Barc) 2003;27:384–8. doi: 10.1016/s0025-7753(03)73957-8.
3. Fox W, Hill A. Streptomycin treatment of pulmonary tuberculosis. BMJ 1948;2:769–82.
4. A brief history of the Center for Drug Evaluation and Research. Rockville (MD): Center for Drug Evaluation and Research, US Food and Drug Administration; 1997. p. 32. Available: www.fda.gov/cder/about/history/Page32.htm (accessed 2009 Feb. 12).
5. Investigational new drug application. Code of federal regulations. Title 21, Pt 312 (revised 2008 Apr. 1). Available: www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRsearch.cfm?CFRPart=312 (accessed 2009 Feb. 12).
6. Drug study designs [information sheet]. Drugs and biologics: guidance for institutional review boards and clinical investigators, 1998 update. Rockville (MD): US Food and Drug Administration; 1998. Available: www.fda.gov/oc/ohrt/irbs/drugsbiologics.html#study (accessed 2009 Feb. 12).
7. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA 2003;290:1624–32. doi: 10.1001/jama.290.12.1624.
8. Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic–explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol 2009;62:464–75. doi: 10.1016/j.jclinepi.2008.12.011. [Also in CMAJ 2009;180:1025–32.]
9. Karanicolas PJ, Montori VM, Devereaux PJ, et al. A new “mechanistic–practical” framework for designing and interpreting randomized trials. J Clin Epidemiol 2009;62:479–84. doi: 10.1016/j.jclinepi.2008.02.009.
10. Karanicolas PJ, Montori VM, Devereaux PJ, et al. The practicalists’ response. J Clin Epidemiol 2009;62:489–94. doi: 10.1016/j.jclinepi.2008.08.013.
11. Oxman AD, Lombard C, Treweek S, et al. Why we will remain pragmatists: four problems with the impractical mechanistic framework and a better solution. J Clin Epidemiol 2009;62:485–8. doi: 10.1016/j.jclinepi.2008.08.015.
12. Oxman AD, Lombard C, Treweek S, et al. A pragmatic resolution. J Clin Epidemiol 2009;62:495–8. doi: 10.1016/j.jclinepi.2008.08.014.
13. Maclure M. Explaining pragmatic trials to pragmatic policymakers. J Clin Epidemiol 2009;62:476–8. doi: 10.1016/j.jclinepi.2008.06.021. [Also in CMAJ 2009;180:1001–3.]
14. Zwarenstein M, Treweek S, Gagnier J, et al.; CONSORT and Pragmatic Trials in Healthcare (Practihc) groups. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390. doi: 10.1136/bmj.a2390.
15. Progressive licensing model. Ottawa (ON): Health Canada; 2007. Available: www.hc-sc.gc.ca/dhp-mps/homologation-licensing/model/index-eng.php (accessed 2009 Feb. 12).
