Journal of the Royal Society of Medicine
. 2016 Apr 26;109(7):264–268. doi: 10.1177/0141076816643954

All health researchers should begin their training by preparing at least one systematic review

Kamal R Mahtani 1,
PMCID: PMC4940997  PMID: 27118697

One of the founding principles of evidence-based medicine is to use the best available evidence to inform decisions made by patients and clinicians.1 Systematic reviews have made significant contributions to the pool of best available evidence by systematically gathering, appraising and summarising evidence to answer a given healthcare question.2 Indeed, the current UK Chief Medical Officer, Dame Sally Davies, has emphasised that ‘by removing uncertainties in science and research, systematic reviews ensure that only the most effective and best value interventions are adopted by the NHS and social care providers’.3

The value of systematic reviews in healthcare

One of the largest collections of systematic reviews can be found in the Cochrane Library, where reviews are periodically updated to reflect new research.4 The well-known Cochrane logo depicts a real example of the value of systematically pooling data for meta-analysis, in this case demonstrating the clear benefit of corticosteroids in accelerating maturation in preterm babies.5,6

As well as providing benefits, systematic reviews can protect patients from harm. Despite growing evidence of harm from the 1960s onwards, a significant proportion of parents continued to place their babies to sleep on their front well into the 1990s. In 2005, a systematic review of observational studies showed a more than four-fold increase in deaths associated with the prone position compared with sleeping supine.6 This conclusion could (and should) have been reached, with statistical certainty, 35 years earlier had a systematic search and pooling of the available evidence been carried out; it has been estimated that tens of thousands of cot deaths could have been prevented had this been done.7 As another example, individual studies of rosiglitazone, used to treat type 2 diabetes, failed to pick up an increased risk of myocardial infarction associated with the drug. This risk became apparent from a systematic review, which ultimately led to withdrawal of the drug from the European market, even though it had been available for over 10 years by then.8

Systematic reviews reduce research waste

It has been estimated that as much as 85% of research investment is being avoidably ‘wasted’.9 This estimate was based on the knowledge that about 50% of clinical trials are never published. Of the remaining 50%, at least half are not sufficiently clear, complete and accurate for others to interpret, use or replicate. Of the final 25%, only about half will have been designed and executed well enough to have confidence in using their results in making clinical decisions. A recent series of articles has highlighted the steps researchers can take to reduce this waste and increase the value of their work.10 One of the recommendations was that new research should not be undertaken before systematic assessment of what is already known or being researched. If the research question can be answered adequately using existing evidence, a comparatively inexpensive evidence synthesis should take the place of an expensive new clinical trial that is simply not needed.11
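
The cascade behind the 85% figure is simple compounding arithmetic. The sketch below uses the approximate fractions quoted above (each stage losing about half of what remains); the exact proportions are estimates from Chalmers and Glasziou,9 not precise data.

```python
# Illustrative arithmetic behind the ~85% waste estimate.
# The fractions are the approximate figures quoted in the text.
published = 0.50          # ~50% of trials are ever published
usable_reporting = 0.50   # of those, ~half are reported clearly enough to use
well_designed = 0.50      # of those, ~half are well enough designed/executed

informative = published * usable_reporting * well_designed
wasted = 1 - informative
print(f"Informative: {informative:.1%}, wasted: {wasted:.1%}")
# 0.5 * 0.5 * 0.5 = 12.5% informative, i.e. ~85-90% avoidably wasted
```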

Unnecessary research not only wastes resources but, more importantly, can harm patients. This is powerfully illustrated by the technique of cumulative meta-analysis. Whereas a traditional forest plot may order studies alphabetically or chronologically, a cumulative meta-analysis generates a new summary effect size and confidence interval each time a new study is added to the pool.12,13 Lau et al.12 applied this technique to clinical trials of streptokinase treatment for acute myocardial infarction, antibiotic prophylaxis to reduce perioperative mortality in colorectal surgery and endoscopic treatment of upper gastrointestinal bleeding; in each case, they showed that evidence for efficacy would have been apparent, through systematic assessment, years before it was suspected. Their examples illustrate how patients enrolled in clinical trials after efficacy could have been demonstrated were being denied potentially lifesaving interventions. To prevent this, the authors recommended that a new meta-analysis be conducted each time data from a new trial become available. This would be the ‘best way to utilize the information that can be obtained from clinical trials to build evidence for exemplary medical care’. The point was further emphasised by Antman et al.,14 who demonstrated that recommendations made by experts, e.g. in medical textbooks, frequently lagged behind meta-analytical evidence from pooling randomised controlled trials.
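
The mechanics of recomputing a pooled estimate after each new trial can be sketched with a standard fixed-effect (inverse-variance) model. The trial figures below are hypothetical, purely to show the recalculation step; they are not the streptokinase data analysed by Lau et al.

```python
import math

def cumulative_meta_analysis(studies):
    """Fixed-effect (inverse-variance) cumulative meta-analysis.

    `studies` is a list of (log_odds_ratio, standard_error) tuples in
    chronological order. After each new study, the pooled odds ratio
    and its 95% confidence interval are recomputed.
    """
    results = []
    sum_w = 0.0   # running sum of inverse-variance weights
    sum_we = 0.0  # running sum of weight * effect
    for log_or, se in studies:
        w = 1.0 / se ** 2
        sum_w += w
        sum_we += w * log_or
        pooled = sum_we / sum_w
        se_pooled = math.sqrt(1.0 / sum_w)
        lo = math.exp(pooled - 1.96 * se_pooled)
        hi = math.exp(pooled + 1.96 * se_pooled)
        results.append((math.exp(pooled), lo, hi))
    return results  # pooled OR and 95% CI after each successive study

# Hypothetical log odds ratios and standard errors, not real trial data:
trials = [(-0.7, 0.5), (-0.5, 0.4), (-0.6, 0.3), (-0.55, 0.2)]
for i, (or_, lo, hi) in enumerate(cumulative_meta_analysis(trials), 1):
    print(f"after study {i}: OR={or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

As studies accumulate, the confidence interval narrows; the moment it excludes 1.0 is the point at which efficacy (or harm) could have been declared, often years before individual trials made it obvious.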

Nearly 10 years after that recommendation, there are still examples of how failure to heed that advice has repeatedly led to patient harm. Rofecoxib (Vioxx), originally marketed as a safer alternative to existing non-steroidal anti-inflammatory drugs, was withdrawn from the market in 2004 after concerns emerged of an increased risk of cardiovascular events, notably myocardial infarction. A systematic review of published clinical studies of rofecoxib, conducted before the September 2004 withdrawal, identified 18 randomised controlled trials, all sponsored by the manufacturer.15 Cumulative meta-analysis of these trials showed that had a systematic review and meta-analysis of the accumulating evidence been conducted by the end of 2000, it would have been clear that rofecoxib was associated with a higher incidence of myocardial infarction (Figure 1). Several thousand participants in studies conducted after 2000 were randomised into trials after a clear harm could (and should) already have been detected.

Figure 1.

When should trials of Vioxx have stopped? Cumulative meta-analysis of randomised trials comparing rofecoxib with control.

Source: Image reproduced from reference 15 with permission from publisher.

Clinical trials should begin and end with a systematic review

Identifying, or carrying out, a systematic review before embarking on any new primary research is increasingly seen by research funders as an essential early step. Prospective applicants for National Institute for Health Research (NIHR) funding are now advised to ensure that all proposals for primary research are supported by the findings of systematic reviews of the relevant existing literature. This may include identifying relevant existing systematic reviews or carrying out an appropriate review and summarising the findings for the application.16 Researchers who identify a clear need for new studies should use information gained from their systematic review to inform the design, analysis and conduct of their study. This is an essential part of the ‘Adding Value in Research Framework’ (Figure 2), which builds on previous work to reduce research waste.9

Figure 2.

National Institute for Health Research adding value in research framework (Reproduced with permission from http://www.nets.nihr.ac.uk/about/adding-value-in-research).

An analysis of trials funded during 2013 by the NIHR Health Technology Assessment (HTA) programme showed that all of them had been informed by one or more systematic reviews.17 The reasons for using a systematic review varied, but by far the most common was to justify treatment comparisons. Other reasons included obtaining information about adverse events, defining outcomes, and informing other aspects of study design, such as recruitment and consent (Table 1).

Table 1.

The range of reasons why researchers use systematic reviews in prospective trial design.17

Reasons
 • Assess the risk of possible adverse events
 • Choice of frequency/dose
 • Duration of follow-up
 • Estimating the control group event rate
 • Estimating the difference to detect or margin of equivalence
 • Inform standard deviation
 • Intensity of interventions
 • Justification of prevalence
 • Justification of treatment comparison
 • Recruitment and consent
 • Route of intervention
 • Selection of definition or outcome
 • Withdrawal rate

While the prevalence of authors referring to a systematic review in the rationale for a new clinical trial has improved, the same cannot be said for integration of new trial data in updated systematic reviews. Clarke et al. identified randomised trials in five general medical journals in May 1997 (n = 26), May 2001 (n = 33), May 2005 (n = 18) and May 2009 (n = 29) and found that there had been no notable improvement over time in the extent to which authors interpreted their new data in the context of up-to-date systematic reviews.18

Opportunities for improvement

There are scientific, ethical and economic reasons for considering a systematic review before and after embarking on further primary research. While there has been some progress in universally adopting these principles, there are still large opportunities for improvement. For this to be accelerated, the following recommendations should be considered.

First, all health researchers should be encouraged and supported to complete at least one relevant systematic review at the start of their training, subject to certain conditions. The review should seek to answer a relevant and needed question, and it should be entered into an international prospective register of systematic reviews, such as PROSPERO.19 Researchers conducting their first systematic review should be supervised by more experienced reviewers and information specialists, so that the process of conducting the review is not done in isolation and instead acts as a high-quality training opportunity. Furthermore, the type of review should ideally reflect the researcher's future planned work, whether quantitative or qualitative. These steps alone should ensure that there is value, rather than waste, in this recommendation. Obvious candidates are those embarking on doctoral training schemes, and it should be the responsibility of the funders of these schemes and their host institutions to support this activity. There are several learning advantages to providing this training early on. A systematic review offers inexperienced researchers the opportunity to gain transferable research skills that will provide significant value throughout their career. These skills include formulating a relevant research question; searching for evidence; familiarity with a variety of study designs; critical appraisal skills that tease out the internal and external validity of a study and assess quality and bias; data synthesis; and the ability to discuss the implications of the findings for both future research and clinical practice. Researchers keen to carry out a prospective primary study can then use the skills, and the results, gained from their review to design and inform the future study. Subsequent skills, such as recruiting participants into a trial or completing an ethics application, are likely to be better informed by first having a systematic understanding of the existing literature. Supporting this training will build capacity and capability in a field short of systematic reviewers, a vision shared by funders such as the NIHR.3

Second, funders of clinical trials should make it a prerequisite of awards that investigators not only use a systematic review to inform their trial but also complete their trial with a demonstration of how the new data add to the existing evidence. This would not necessarily have to be a full systematic review (although that would be advantageous) but could take the form of a brief review, something that may become easier as automated technology responds to this need.

Finally, journals, and more specifically peer reviewers (i.e. the research community), must give greater attention to ensuring that authors of newly published clinical trials follow recommendations to provide readers with sufficient information to assess how the new evidence fits with the existing evidence. Although the CONSORT checklist recommends that authors discuss whether their ‘interpretation [is] consistent with [their] results, balancing [the] benefits and harms, and considering other relevant evidence’, the need to place new data in the context of a systematic review may need to be made more explicit. Despite the many improvements in the reporting of randomised controlled trials that the CONSORT statement has brought, completeness of reporting remains suboptimal.20

Conclusions

Judging the point at which justified replication is needed before it becomes wasteful duplication will often be challenging. And like many aspects of scientific research, there are no guarantees that using a systematic review to inform or contextualise a new trial will lead to better trials.

Nevertheless, the history of clinical research contains numerous examples of failures to consider, conduct and use systematic reviews, which have caused patients to be exposed to potential harms and wasted resources on unnecessary clinical trials. Equipped with the correct training, researchers should ensure that a systematic review informs any new primary study, both when applying for funding and on completion of their work. It would be ‘ethically, scientifically and economically indefensible’ not to.11

Declarations

Competing interests

The views expressed are those of the author and not necessarily those of the NHS, the NIHR or the Department of Health. I have also provided training to individuals and groups of individuals on the conduct of a systematic review.

Funding

No specific funding was sought for this essay. KRM is supported through an NIHR Clinical Lecturer award.

Guarantor

KRM.

Ethical approval

Not applicable

Contributorship

Sole authorship.

Acknowledgements

I am grateful for helpful comments from Iain Chalmers, Jeffrey Aronson and Meena Mahtani.

Provenance

Not commissioned; peer-reviewed by Penny Whiting.

References

  • 1.Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ 1996; 312: 71–72. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Egger M, Smith GD and Altman D. Systematic Reviews in Health Care: Meta-Analysis in Context. Chichester: John Wiley & Sons, 2008.
  • 3.National Institute for Health Research (NIHR). Systematic Reviews Knowledge to Support Evidence-Informed Health and Social Care. See www.nihr.ac.uk/documents/about-NIHR/NIHR-Publications/NIHR-Systematic-Reviews-Infrastructure.pdf (last checked 24 March 2016).
  • 4.Cochrane Trusted evidence. Informed Decisions. Better Health. See www.cochrane.org/ (last checked 28 January 2016).
  • 5.Roberts D and Dalziel SR. Antenatal corticosteroids for accelerating fetal lung maturation for women at risk of preterm birth. Cochrane Database Syst Rev 2006; (3): Art. No.: CD004454. DOI: 10.1002/14651858.CD004454.pub2. [DOI] [PubMed]
  • 6.Gilbert R, Salanti G, Harden M, See S. Infant sleeping position and the sudden infant death syndrome: systematic review of observational studies and historical review of recommendations from 1940 to 2002. Int J Epidemiol 2005; 34: 874–887. [DOI] [PubMed] [Google Scholar]
  • 7.Evans I, Thornton H, Chalmers I and Glasziou P. Testing Treatments. See www.ncbi.nlm.nih.gov/books/NBK66204/ (2011, last checked 30 January 2016).
  • 8.Cohen D. Rosiglitazone: what went wrong? BMJ 2010; 341: c4848. [DOI] [PubMed] [Google Scholar]
  • 9.Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009; 374: 86–89. [DOI] [PubMed] [Google Scholar]
  • 10.Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JPA, et al. Biomedical research: increasing value, reducing waste. Lancet 2014; 383: 101–104. [DOI] [PubMed] [Google Scholar]
  • 11.Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet 2014; 383: 156–165. [DOI] [PubMed] [Google Scholar]
  • 12.Lau J, Schmid CH, Chalmers TC. Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care. J Clin Epidemiol 1995; 48: 45–57. [DOI] [PubMed] [Google Scholar]
  • 13.Cochrane Handbook for systematic reviews of interventions. See http://handbook.cochrane.org/chapter_11/11_3_2_1_forest_plots_in_revman.htm (last checked 29 January 2016).
  • 14.Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts: treatments for myocardial infarction. JAMA 1992; 268: 240–248. [PubMed] [Google Scholar]
  • 15.Jüni P, Nartey L, Reichenbach S, Sterchi R, Dieppe PA, Egger M. Risk of cardiovascular events and rofecoxib: cumulative meta-analysis. Lancet 2004; 364: 2021–2029. [DOI] [PubMed] [Google Scholar]
  • 16.National Institute for Health Research (NIHR) Research design service London. Conducting a Brief Systematic Style Review in Support of a Primary Research Application. See www.rds-london.nihr.ac.uk/RDSLondon/media/RDSContent/files/PDFs/Systematic-Reviews-in-Support-of-Primary-Research-Applications.pdf (last checked 24 March 2016).
  • 17.Bhurke S, Cook A, Tallant A, Young A, Williams E, Raftery J. Using systematic reviews to inform NIHR HTA trial planning and design: a retrospective cohort. BMC Med Res Methodol 2015; 15: 108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet 2010; 376: 20–21. [DOI] [PubMed] [Google Scholar]
  • 19.PROSPERO – International Prospective Register of Systematic Reviews. See http://www.crd.york.ac.uk/PROSPERO/ (last checked 19 March 2016).
  • 20.Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev 2012; 1: 60. [DOI] [PMC free article] [PubMed] [Google Scholar]
