Before 1950 the practice of medicine was based largely on observations of outcomes in one individual or in small groups of patients. With the development of the randomized controlled trial (RCT) around the middle of the 20th century, medicine was rapidly transformed, as inferences were increasingly made on the basis of this experimental method.1 The first of these trials were conducted mainly to investigate infectious diseases. Because large effects were expected (e.g., a reduction in mortality rate or disease outcomes by more than half), small trials were sufficient and could often be conducted by a single investigator. However, for most chronic diseases (e.g., cardiovascular disease or cancer) treatments are likely to have at best only a moderately beneficial effect (although potential harm from treatments could be much larger). This realization transformed RCTs and led to the conduct of large, multicentre trials often involving tens of thousands of subjects.2 Such trials have provided conclusive evidence for the efficacy, lack of efficacy or harm of numerous therapies in many diseases;3 as such, they have led to the widespread use of proven effective treatments, which has prevented several tens of millions of premature deaths and much suffering. RCTs and their resulting discoveries should probably rank among the most important milestones in the history of medicine.
The first multicentre RCTs were generally organized by committed academics with either little external support or only modest financial support from governmental bodies (such as medical research councils or their equivalents in various countries) or, occasionally, the pharmaceutical industry. These trials were straightforward, had few bureaucratic hurdles to overcome and were relatively inexpensive. The value of large RCTs was increasingly recognized by health regulatory bodies (such as the US Food and Drug Administration) in the 1980s and 1990s, and these bodies came to require that such trials be performed before new drugs were approved. As a result, the number of large trials increased dramatically over the next 10 to 15 years. Previously done by select investigators in a few academic centres, RCTs came to involve large numbers of nonacademic and community centres. Indeed, well over three-quarters of patients currently entered in trials of cardiovascular diseases are recruited from nonacademic centres, and large numbers of patients from Eastern Europe, Asia and South America are now being included in such trials.
The large multicentre trials conducted in the 1980s and early 1990s were generally simple. They relied on randomization and unbiased evaluation of outcomes (based on placebo controls and “hard outcomes” such as patient death) to minimize errors.2 Variations between centres in the inclusion criteria for patients with a particular condition, differences in ancillary treatments and other variations in clinical approaches within a trial did not matter, because randomization (to the active treatment or control group within a centre) and inclusion of large numbers of patients ensured that these variations were similarly distributed among the treatment groups being compared. The very small marginal costs of doing these trials were absorbed by investigators and their institutions or necessitated only modest external funding. This situation facilitated the performance of very large trials at very low cost. For example, for the International Studies of Infarct Survival mega-trials4,5,6,7 there was only one page of data collection per subject, no sites were monitored, no events were “adjudicated” and no investigators or institutions received a fee for the very small additional work that the trials entailed. Integrity of the trial results was ensured because of randomization of large numbers of patients and unbiased outcomes evaluation. Even an occasional patient who did not quite meet the entry criteria or differences in judgements between sites regarding whether certain nonprimary events were to be reported could hardly influence the overall results of a blinded randomized trial conducted in several hundred centres.
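The balancing property described above — that within-centre randomization of large numbers of patients distributes between-centre differences almost identically across the treatment groups — can be illustrated with a minimal simulation. All numbers here (400 centres, 50 patients each, the range of baseline risks) are hypothetical and are not drawn from any of the trials cited:

```python
import random

random.seed(1)

# Hypothetical multicentre trial: 400 centres, each enrolling 50 patients.
# Centres differ widely in baseline risk (a stand-in for variations in
# inclusion criteria, ancillary treatments and clinical practice), but each
# centre randomizes its own patients to treatment or control.
n_centres, patients_per_centre = 400, 50

treatment_risk, control_risk = [], []
for _ in range(n_centres):
    centre_baseline_risk = random.uniform(0.05, 0.30)  # varies by centre
    for _ in range(patients_per_centre):
        arm = random.choice(["treatment", "control"])  # within-centre randomization
        (treatment_risk if arm == "treatment" else control_risk).append(centre_baseline_risk)

mean_t = sum(treatment_risk) / len(treatment_risk)
mean_c = sum(control_risk) / len(control_risk)

# With large numbers, the two arms end up with nearly identical baseline risk,
# so any outcome difference can be attributed to the treatment itself.
print(f"mean baseline risk, treatment arm: {mean_t:.4f}")
print(f"mean baseline risk, control arm:   {mean_c:.4f}")
print(f"difference: {abs(mean_t - mean_c):.4f}")
```

Running this repeatedly with different seeds shows the between-arm difference in baseline risk shrinking toward zero as the number of patients grows, which is why heterogeneity of populations and practices "did not matter" in the simple mega-trials.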
This key inherent strength of the RCT — that randomization provides the “controls” to overcome heterogeneity of both populations and practices — seems to have been forgotten as the organization of trials has become more complex. This unfortunate step backward has resulted in recent RCTs becoming unnecessarily complicated, overly bureaucratic and substantially more expensive.
Several forces have led to this change. First, as trials became more complicated and data collection more onerous, fees were paid to centres for each patient recruited. These fees became increasingly larger as companies “competed” to conduct trials. Second, the attitudes of institutions changed, and trials came to be seen as a source of additional revenue; as a result, some institutions began to charge substantial overheads, which at times appeared to be far in excess of the real costs incurred. Third, the environment in universities and hospitals shifted to a business model, with little support available to cover researchers' salaries (not only to run the specific clinical trial, but also to cover unremunerated time spent on other research) and those of their highly skilled staff. This changing environment fuelled increases in the costs of clinical trials and forced investigators to consider participating in trials only if they were “financially viable.”
Given that a successful trial completed rapidly in patients with a common condition could lead to increases in income of several tens or hundreds of millions of dollars a year for a pharmaceutical company, such companies were willing to invest significant sums in drug studies. New “business” opportunities also arose. Companies organizing and providing services for trials (known as contract research organizations or CROs), but with little scientific interest in the questions being addressed, sprang up and now represent a multibillion-dollar industry. CROs are first and foremost businesses, making profits for their shareholders or owners, with little reinvestment in research or health. Thus, clinical and other forms of research were transformed from a quest largely for knowledge to a quest in which generating profit became a key goal. Because of this shift, everyone involved (investigators, institutions, companies and, at times, even noncommercial sponsors) now wants a piece of the pie. With the increasing amounts of funds involved in clinical trials, concerns about conflicts of interest were raised and the number of guidelines multiplied. A stream of regulations was put into place with the intention of preventing or detecting fraud, increasing patient safety and ensuring the validity of trial results. These regulations required parallel documentation (and, consequently, much additional work) in data collection, especially in industry-sponsored trials (some of which have forms that are a few hundred pages long), and further increased the costs.8
But those drafting the guidelines and regulations seem to have lost sight of the fact that investigator fraud (as opposed to errors) has been extremely rare and that in the instances where fraud has been detected, it has generally not affected the main conclusions of large multicentre trials. This should not be a surprise, given that randomization and blinded evaluation of outcomes provide inherent safeguards against the types of errors or fraud that could affect the overall trial results. Although fraud should not be condoned, the overwhelming majority of current checks by monitors (now constituting another mega-industry) and regulatory bodies at clinical sites (e.g., verification of source data, checks on a large number of regulatory “approvals” and “amendments” and certification of laboratories for usual and routine tests) could be substantially minimized without compromising the validity of trials.9
Nonetheless, regulatory bodies in several countries have signed into law many bureaucratic practices that are expensive, unnecessary and thus wasteful and have applied them indiscriminately to both industry-supported and government-supported (or even unfunded) trials. These practices may even be counterproductive, in that honest investigators who are primarily interested in furthering knowledge find the bureaucracy so burdensome that they increasingly decline to participate in trials. For example, in the Czech Republic new regulations (internal hospital rules that led to a large proportion — up to 100% in some cases — of the study fees being taken as “hospital overhead”) recently became so onerous that investigators declined involvement in new trials. Experienced cardiologists were less and less willing to take the position of “principal investigator” because of the large administrative workload. This led to the revoking of the regulations in most hospitals, and participation in trials has now resumed (P. Widimsky, University Hospital Vinohrady, Prague, Czech Republic, personal communication, 2004). Similarly, there has been a widespread outcry in the United Kingdom among clinical researchers in response to the burdensome procedures imposed by recent European directives.10,11,12,13,14
Despite the increasingly recognized value of RCTs, government funding for such research has been disproportionately small or nonexistent in most countries. For example, for several decades, less than 5% of the budget of the Medical Research Council of Canada and later the Canadian Institutes of Health Research has been devoted to randomized trials. At times in the late 1990s, this figure was as low as 2%; it has improved somewhat in recent years but still remains inadequate. Other bodies (e.g., the Canadian Foundation for Innovation) exclude clinical trials from the portfolio of research infrastructure they are willing to fund; yet others (e.g., the Heart and Stroke Foundation of Canada and its provincial bodies) cap the amount of funds per grant and have rules that place multiple barriers to the funding of interprovincial collaborative studies. These rules in effect prevent the conduct of large collaborative studies, especially studies of important public health questions in which there is no commercial interest.
All of these built-in barriers have restricted substantially the ability of investigators to address important questions independent of the pharmaceutical industry and have potentially hindered evaluation of generic therapies (e.g., new applications for old drugs, nutritional supplements, lifestyle changes, diagnostic algorithms, surgical procedures). On the rare occasion that such trials have been funded by government bodies, the funding generally has not covered the full costs of the research.9 To make up for these shortfalls, investigators are forced to seek direct additional funding from industry (which is usually difficult to obtain for questions about generic therapies) or, much more frequently, to internally cross-subsidize trials of generic questions by generating overages from the industry-funded trials in which they participate.
In this issue (see page 883) Lorraine Ferris and David Naylor propose additional safeguards.15 They suggest that all clinical trials have a standardized budget, that each study budget be disclosed to a research ethics board (REB) and that some financial information be disclosed in the patient consent form. The purported reason for their proposal is to avoid conflicts of interest. However, it is hard to see how having a standardized budget would prevent conflicts of interest in the setting of multicentre trials, especially where study contracts or budgets are already reviewed by REBs or other officials at the institutions. Furthermore, disclosure of financial arrangements to potential participants by means of the consent form would likely confuse, rather than inform (especially in the case of patients presenting with acute illnesses), since patients have no benchmark against which to assess whether any particular level of payment is justified. Informed consent forms are already too long and too complex, and patients probably understand only part of the information provided. Adding a further level of complexity to these forms might hinder discussion about key issues relevant to the patient's decision to participate in the RCT (such as potential risks and benefits). On the other hand, it is reasonable for an institution to request budget details for trials done on its premises, and this practice is already quite common.
Companies tend to have standardized formats for contracts (and budgets) in multicentre trials, but these vary between companies and across countries, reflecting differences in local regulations and prevailing laws. Hence, I believe that further standardization of budget formats would be both difficult and unnecessary. I support Ferris and Naylor's recommendation regarding disclosure of budgets to the REB that reviews the trial. However, legitimate costs vary between sites and between trials, and it could be difficult for REBs (which are already overburdened) to assess what budget level is appropriate for a particular trial, unless the proposed budget is substantially out of line (e.g., several times higher than usual for other similar trials). In my experience, it is unusual for budgets to be excessively large, given that every $1000 extra per patient in a trial of 5000 patients adds $5 million to the budget — a situation that would not be acceptable to the sponsors of most trials. Moreover, few REBs would be in a position to realistically assess the “overhead costs” that an investigator incurs (e.g., for subsidizing unfunded but legitimate activities, supporting staff, and covering his or her own time).
Calls for additional rules should not be accepted without clear evidence of significant and widespread problems, as well as evidence that the new rules would prevent such problems without further undermining the conduct of good clinical trials. In this regard, I know of no evidence that the current web of bureaucracy and regulations has improved the integrity of trials, improved patient safety or helped to minimize real conflicts of interest. In our experience of direct or indirect involvement with several hundred investigators in numerous trials, we have never witnessed an occasion when financial payments led to improper conduct that affected the trial results. In contrast, errors or even sloppiness occur not infrequently, but because of their random nature they do not systematically affect only one of the treatments being compared.
Large RCTs have served science and our patients well. They are among the most reproducible and valid forms of research. They have an extraordinary number of built-in safeguards, external oversights and audits, all of which protect against serious biases. I believe that most reputable clinical trials investigators are careful to ensure the integrity and credibility of their work and to avoid or minimize conflicts of interest. Numerous regulatory procedures already exist, some of which are redundant, wasteful and expensive. Independent clinical trials of important questions are threatened with extinction because of constrictive and overly bureaucratic procedures. This, coupled with the relatively low level of funding from government and other nonindustry sources for such research, means that some important trials of questions that could improve health may not be done.11
Instead of new rules, what we really need is an assessment of whether the existing regulations are helpful or harmful. Indeed, there is an urgent need to streamline regulations to sensible levels, remove existing barriers and significantly increase government spending on clinical trials (and other forms of clinical research), so that large trials of important questions can be conducted independent of pharmaceutical companies. With these changes, advances in all forms of biomedical research might be more rapidly translated into better clinical practice. Will the next decade bring reform, with balanced and sensible rules, or will we see escalating bureaucracy, limited government funding and the extinction of independent clinical trials? I hope good sense and balance will prevail.
See related articles pages 883 and 892
Acknowledgments
I thank several colleagues (Drs. G. Dagenais, K. Teo, H. Gerstein and M. Gupta) for their helpful comments. However, I take full responsibility for the article. My thanks to Marie Nikkanen for secretarial assistance.
Footnotes
Competing interests: None declared.
Correspondence to: Dr. Salim Yusuf, Population Health Research Institute, McMaster University, 252-237 Barton St. E, Hamilton ON L8L 2X2; 905 521-1166; yusuf@ccc.mcmaster.ca
References
1. D'Arcy Hart P. A change in scientific approach: from alternation to randomised allocation in clinical trials in the 1940s. BMJ 1999;319:572-3.
2. Yusuf S, Collins R, Peto R. Why do we need some large, simple randomized trials? Stat Med 1984;3:409-20.
3. Yusuf S. Randomised controlled trials in cardiovascular medicine: past achievements, future challenges. BMJ 1999;319:564-8.
4. ISIS-1 (First International Study of Infarct Survival) Collaborative Group. Randomised trial of intravenous atenolol among 16 027 cases of suspected acute myocardial infarction: ISIS-1. Lancet 1986;2(8498):57-66.
5. ISIS-2 (Second International Study of Infarct Survival) Collaborative Group. Randomised trial of intravenous streptokinase, oral aspirin, both, or neither among 17 187 cases of suspected acute myocardial infarction: ISIS-2. Lancet 1988;2(8607):349-60.
6. ISIS-3 (Third International Study of Infarct Survival) Collaborative Group. ISIS-3: a randomised comparison of streptokinase vs tissue plasminogen activator vs anistreplase and of aspirin plus heparin vs aspirin alone among 41 299 cases of suspected acute myocardial infarction. Lancet 1992;339(8796):753-70.
7. ISIS-4 (Fourth International Study of Infarct Survival) Collaborative Group. ISIS-4: a randomised factorial trial assessing early oral captopril, oral mononitrate, and intravenous magnesium sulphate in 58 050 patients with suspected acute myocardial infarction. Lancet 1995;345(8951):669-85.
8. Peto J, Fletcher O, Gilham C. Data protection, informed consent, and research. BMJ 2004;328:1029-30.
9. Califf RM, Morse MA, Wittes J, Goodman SN, Nelson DK, DeMets DL, et al. Toward protecting the safety of participants in clinical trials. Control Clin Trials 2003;24:256-71.
10. Researchers ask European Parliament to repeal clinical trials directive [press release]. Available: www.saveeuropeanresearch.org (accessed 2004 Aug 23).
11. Mayor S. Squeezing academic research into a commercial straitjacket. BMJ 2004;328:1036.
12. Strengthening clinical research: a report from the Academy of Medical Sciences. London: Academy of Medical Sciences; 2003 Oct. Available: www.acmedsci.ac.uk/p_scr.pdf (accessed 2004 Aug 23).
13. Singer EA, Müllner M. Implications of the EU directive on clinical trials for emergency medicine. BMJ 2002;324:1169-70.
14. Who's afraid of the European clinical trials directive? [editorial]. Lancet 2003;361(9376):2167.
15. Ferris LE, Naylor CD. Physician remuneration in industry-sponsored clinical trials: the case for standardized clinical trial budgets [editorial]. CMAJ 2004;171(8):883-6.