Editorial

BMJ 2004 Jul 3;329(7456):2–3. doi: 10.1136/bmj.329.7456.2

Benefits and harms of drug treatments

Observational studies and randomised trials should learn from each other

Jan P Vandenbroucke 1

However international medical science has become, communicating electronically at the speed of light, some fields are still worlds apart. The movement subsumed under the banner of evidence based medicine, with its sister movements such as the Cochrane Collaboration and the BMJ's Clinical Evidence, aims to evaluate whether the hoped-for benefits of treatments actually exist. This evaluation relies almost exclusively on randomised controlled trials, particularly in the study of drug interventions. In a world apart lies the field of pharmacoepidemiology, devoted mainly to the detection and systematic study of the adverse effects of the very same treatments. Adverse drug effects are often unanticipated and are investigated predominantly by observational studies, for example by using large databases that link routine prescriptions to the occurrence of unexpected disease.

The protagonists of these fields barely know each other: they publish in different journals, write and read different books, and work in different departments. They are even suspicious of each other's methods. Adherents of evidence based medicine doubt whether anything sound can come from observational research, giving the impression that they believe its methods lag far behind. Pharmacoepidemiologists have hitherto made little use of systematic reviews of randomised trials in their much broader job of assessing causation of harm from a variety of pharmacological, clinical, and observational data.

Yet intellectually both sides can and should coexist and learn from each other.1 w1 Assessment of small treatment effects will always need randomised trials as its yardstick. At the other end of the research spectrum, among all fields of observational research, the study of adverse drug effects offers the best chance of being as unbiased as a randomised study.2,3 Each medical question should be approached with the appropriate research tools4; this effectively precludes a single grading of levels of evidence for all types of research questions.2

Individual randomised controlled trials often do not suffice to detect adverse effects, especially if the effects are rare and late.5 w2 Systematic reviews of randomised trials have offered little solace so far, even for early and relatively common adverse effects, because adverse events are not described in comparable ways across individual trials and therefore cannot be compared directly for the purpose of a systematic review.6 In addition, most systematic reviews shun observational research. Although there are exceptions,w3-w5 even established adverse effects are often not assessed in systematic reviews, presumably because no randomised evidence exists. However, a balance of benefits and harms is the only reasonable way to evaluate interventions.

Randomised trials are good at sorting out what works and what doesn't, but even when they show a benefit they often give insufficient insight into harms. A properly balanced review should include a systematic evaluation of adverse effects by the best methods of observational pharmacoepidemiology. For their part, pharmacoepidemiologists should abandon their exclusive preoccupation with one side of the question and one type of method: they should explore collaboration with people who conduct systematic reviews or randomised trials. Hidden nuggets might be found by scraping the randomised barrel, as happened in a review that looked at untoward events in smaller trials of hormone replacement therapy.7 Long term follow up of randomised trials might also be a powerful tool for detecting late but relatively common adverse effects, as with the effects of diethylstilbestrol.8 Most importantly, better reporting of harms in randomised trials would make systematic reviews of such trials better able to quantify adverse effects. The call for more and better routine reporting of potential adverse events in randomised trials is an old one: as long ago as 1977 Skegg and Doll proposed recording all medical events in randomised trials.9

In the field of medical history, the two worlds have already met. The James Lind Library (www.jameslindlibrary.org/) traces the history of ideas on the fair evaluation of treatments; it has a place for the history of “casting of lots” as well as for the history of investigating adverse drug reactions. Still, actual practice at the coalface remains different.

The solution might lie in education. Just imagine that a select group of people properly trained in systematically reviewing randomised trials of drug treatment were to receive training from academic pharmacoepidemiologists, so as to learn how to wrestle answers from large observational datasets. They would shed their fears of case-control and retrospective cohort studies and value these studies for what they can do in elucidating the frequency and causality of adverse effects.10 w6 They would use case reports and systematic reviews of case series.11 At the same time, suppose that some pharmacoepidemiologists trained in Cochrane style systematic reviews, learning how to mine pooled data from randomised trials for adverse effects. That would benefit their own trade and, in turn, might have a salutary influence on the reporting of harms in future randomised trials. To carry this dream to its ultimate conclusion: imagine a world in which they wrote reviews together. That world would marry the best evidence on benefits with the best evidence on harms in a single balanced review, to assist doctors and benefit patients.

Supplementary Material

Additional references w1-w6 are on bmj.com

Competing interests: None declared.

References

1. Cuervo LG, Clarke M. Balancing benefits and harms in health care. BMJ 2003;327:65-6.
2. Glasziou P, Vandenbroucke J, Chalmers I. Assessing the quality of research. BMJ 2004;328:39-41.
3. Vandenbroucke JP. When are observational studies as credible as randomised trials? Lancet 2004;363:1728-31.
4. Sackett DL, Wennberg JE. Choosing the best research design for each question. BMJ 1997;315:1636.
5. Jick H. The discovery of drug-induced illness. N Engl J Med 1977;296:481-5.
6. Loke YK, Derry S. Reporting of adverse drug reactions in randomised controlled trials - a systematic survey. BMC Clin Pharmacol 2001;1:3.
7. McPherson K, Hemminki E. Synthesising licensing data to assess drug safety. BMJ 2004;328:518-20.
8. Grant A, Chalmers I. Some research strategies for investigating aetiology and assessing the effects of clinical practice. In: Macdonald RR, ed. Scientific basis of obstetrics and gynaecology. 3rd ed. London: Churchill Livingstone, 1985:49-84.
9. Skegg DCG, Doll R. The case for recording events in clinical trials. BMJ 1977;ii:1523-4.
10. Vandenbroucke JP. Observational research and evidence-based medicine: what should we teach young physicians? J Clin Epidemiol 1998;51:467-72.
11. Loke YK, Derry S, Aronson JK. A comparison of three different sources of data in assessing the frequencies of adverse reactions to amiodarone. Br J Clin Pharmacol 2004;57:616-21.
