BMJ. 2001 Jan 13;322(7278):98–101. doi: 10.1136/bmj.322.7278.98

Systematic reviews from astronomy to zoology: myths and misconceptions

Mark Petticrew 1
PMCID: PMC1119390  PMID: 11154628

Systematic literature reviews are widely used as an aid to evidence based decision making. For example, reviews of randomised controlled trials are regularly used to answer questions about the effectiveness of healthcare interventions. The high profile of systematic reviews as a cornerstone of evidence based medicine, however, has led to several misconceptions about their purpose and methods. Among these is the belief that systematic reviews are applicable only to randomised controlled trials and that they are incapable of dealing with other forms of evidence, such as evidence from non-randomised studies or qualitative research.

The systematic literature review is a method of locating, appraising, and synthesising evidence. The value of regularly updated systematic reviews in the assessment of effectiveness of healthcare interventions was dramatically illustrated by Antman and colleagues, who showed that review articles failed to mention advances in treatment identified by an updated systematic review.1

It is nearly a quarter of a century since Gene Glass coined the term “meta-analysis” to refer to the quantitative synthesis of the results of primary studies.2 The importance of making explicit efforts to limit bias in the review of literature, however, has been emphasised by social scientists at least since the 1960s.3 In recent years systematic reviews have found an important role in health services research, and the growing interest in evidence based approaches to decision making makes it likely that their use will increase. Not everybody accepts that systematic reviews are necessary or desirable, and as one moves further away from the clinical applications of systematic reviews cynicism about their utility grows. Several arguments are commonly used to reject a wider role for systematic reviews, and these arguments are often based on major misconceptions about the history, purpose, methods, and uses of systematic reviews. I have examined eight common myths about systematic reviews.

Summary points

  • The use of systematic reviews is growing outside health care

  • There are still many common myths about their methods and utility

  • Some common misconceptions are that systematic reviews can include only randomised controlled trials; that they are of value only for assessing the effectiveness of healthcare interventions; that they must adopt a biomedical model; and that they have to entail some form of statistical synthesis

  • Systematic reviews have always included a wide range of study designs and study questions, have no preferred “biomedical model,” and have methodologies that are more flexible than is sometimes realised

  • Many of the common criticisms of systematic reviews are fallacious

Systematic reviews are the same as ordinary reviews, only bigger

There is a common but erroneous belief that systematic reviews are just the same as traditional reviews, only bigger; in other words, you just search more databases. Systematic reviews are not just big literature reviews, and their main aim is not simply to be “comprehensive” (many biased reviews are “comprehensive”) but to answer a specific question, to reduce bias in the selection and inclusion of studies, to appraise the quality of the included studies, and to summarise them objectively. As a result, they may actually be smaller, not bigger, partly because they apply more stringent inclusion criteria to the studies they review. They also differ in the measures they typically take to reduce bias, such as using several reviewers working independently to screen papers for inclusion and assess their quality, and even “small” systematic reviews are likely to involve several reviewers screening thousands of abstracts. As a result of these measures, systematic reviews commonly require more time, staff, and money than traditional reviews. Systematic reviews are not simply “bigger”; they are qualitatively different.
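
To make the independent screening step concrete, here is a minimal sketch (in Python) of how two reviewers' independent inclusion decisions might be reconciled. The record identifiers and the simple reconciliation rule are invented for illustration; they are assumptions, not a prescribed procedure.

```python
# Minimal sketch of dual independent screening: each reviewer independently
# produces the set of record IDs they would include. All IDs are invented.

reviewer_a = {"rec001", "rec002", "rec005", "rec009"}
reviewer_b = {"rec002", "rec005", "rec007", "rec009"}

agreed_inclusions = reviewer_a & reviewer_b   # included by both reviewers
disagreements = reviewer_a ^ reviewer_b       # included by only one reviewer

print("Included by both reviewers:", sorted(agreed_inclusions))
print("To resolve by discussion or a third reviewer:", sorted(disagreements))
```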

Systematic reviews include only randomised controlled trials

There is a widespread belief that systematic reviews are capable of summarising the results only of randomised controlled trials, and that they cannot be used to synthesise studies of other designs. This belief is prevalent in subjects in which randomised controlled trials are not common and perhaps reflects a concern among some researchers that the studies they consider most relevant will not “count” as evidence. There is, however, no logical reason why systematic reviews of study designs other than randomised controlled trials cannot be carried out. Systematic reviews of non-randomised studies are common, and qualitative studies, for example, can be (and often are) included in systematic reviews. UK guidelines for carrying out systematic reviews do not exclude qualitative research,4 and criteria have been developed to aid in reviewing qualitative studies.5 Even reviews of the effectiveness of interventions do not confine themselves solely to randomised controlled trials; such reviews commonly include other study designs, including non-randomised studies and case reports.6 In short, there is simply no basis for the belief that systematic reviews can be applied only to randomised controlled trials. The systematic review is simply a methodology that aims to limit bias; the choice of which study designs to include is made by the reviewers, not imposed by the methodology.

Systematic reviews require the adoption of a biomedical model of health

This common myth holds that systematic reviews intrinsically adopt a biomedical model that is of relevance only to medicine and that should not be applied to other domains. Related to this is a belief that, because health is more than an “absence of illness,” other important outcomes of interventions (such as social impacts) need to be considered, and that these are somehow inappropriate for inclusion in systematic reviews. Many health and non-health outcomes, however, are regularly defined, measured, and summarised in both qualitative and quantitative primary studies, and these studies can be (and are) included in systematic reviews. Reviews on the Cochrane Database of Systematic Reviews, for example, commonly include “quality of life” as an outcome alongside clinical indicators of the effects of interventions. The argument that it is somehow inappropriate to do systematic reviews of broader health (or non-health) outcomes is simply fallacious. Systematic reviews do not have any preferred “biomedical model,” which is why there are systematic reviews in such diverse topics as advertising, agriculture, archaeology, astronomy, biology, chemistry, criminology, ecology, education, entomology, law, manufacturing, parapsychology, psychology, public policy, and zoology.7–13 A recent paper even adopted systematic review methods to summarise eyewitness accounts of the Indian rope trick.14 In short, the systematic review is an efficient technique for testing hypotheses, for summarising the results of existing studies, and for assessing consistency among previous studies; these tasks are clearly not unique to medicine.15,16

Systematic reviews are of no relevance to the real world

Systematic reviews have been portrayed as being obsessed solely with disease outcomes and with randomised controlled clinical trials carried out in simple, closed healthcare systems, which are of no relevance to the complex social world outside evidence based medicine. In fact, researchers have been carrying out systematic reviews of policy and other social interventions since the 1970s. For example, there have been at least a dozen systematic reviews investigating the effectiveness of delinquency and correctional programmes for the treatment of offenders, one of which reviewed 400 studies and detected a 10% reduction in delinquency, whereas previous (non-systematic) reviews had been unable to discern any positive effect of correctional treatments.17

Systematic reviews have also been widely used to examine an array of contemporary and often contentious “real world” issues. These range from reviews of the effectiveness of policy and other interventions to systematic reviews of social issues. Complex “real world” issues are not beyond the remit of systematic reviews. This is highlighted by a recent report that summarised systematic reviews of both randomised and non-randomised studies of issues such as prevention of vandalism, crime deterrence, drug misuse, domestic violence, child abuse, and many others.18 These and many other examples show that systematic reviews can provide a credible evidence base to support policymaking.

Systematic reviews necessarily involve statistical synthesis

This myth derives from a misunderstanding of the different methods used in systematic reviews. Some reviews summarise the primary studies by describing their methods and results narratively. Others take a statistical approach (meta-analysis), converting the data from each study into a common measurement scale and combining the studies statistically. The myth assumes that statistical pooling is the only way such reviews can be done. Many systematic reviews, however, do not use meta-analytic methods, and some of those that do probably should not; for example, it is common practice to pool studies without taking into account variations in study quality, which can bias a review's conclusions. It has been pointed out that one of the allures of meta-analysis is that it always gives an answer, whether or not the studies are being combined meaningfully.19 Systematic reviews should therefore not be seen as automatically involving statistical pooling; narrative synthesis of the included studies is often more appropriate and is sometimes all that is possible. A recent methodological review provides clear guidance on when and how to carry out meta-analyses of randomised and non-randomised studies.19
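
As an illustration of what converting results to a common scale and combining them statistically can involve, the sketch below computes a standardised mean difference (Cohen's d) for three invented studies and pools them with fixed-effect inverse-variance weights. This is a minimal sketch under those assumptions, not the method of any particular review; it deliberately ignores study quality and heterogeneity, which is precisely the kind of simplification warned against above.

```python
# Minimal fixed-effect meta-analysis sketch: standardised mean differences
# pooled with inverse-variance weights. Study data are invented, and no
# quality weighting or heterogeneity assessment is attempted.
from math import sqrt

# (mean_treatment, mean_control, pooled_sd, n_treatment, n_control)
studies = [
    (12.0, 10.0, 4.0, 40, 40),
    (11.5, 10.5, 5.0, 60, 55),
    (13.0, 10.0, 6.0, 25, 30),
]

weights = []
weighted_effects = []
for m1, m2, sd, n1, n2 in studies:
    d = (m1 - m2) / sd                                        # Cohen's d
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))  # approximate variance of d
    weights.append(1 / var_d)
    weighted_effects.append(d / var_d)

pooled_d = sum(weighted_effects) / sum(weights)
se = sqrt(1 / sum(weights))
print(f"Pooled d = {pooled_d:.2f} "
      f"(95% CI {pooled_d - 1.96 * se:.2f} to {pooled_d + 1.96 * se:.2f})")
```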

Systematic reviews have to be done by experts

Although expert practitioners are often involved in systematic reviews, most systematic reviewers are not expert practitioners. Even among those carrying out reviews of healthcare interventions, clinical experts are often in the minority. This is not to suggest that clinical input is irrelevant in systematic reviews of clinical interventions. Clearly, such input is invaluable in the location and interpretation of the evidence, and expert opinion is particularly valuable when evidence is sparse.20 Systematic reviews, however, are not the sole preserve of expert practitioners (such as clinical experts). For example, potential users of systematic reviews, such as consumers and policymakers, can be involved in the process. This can help to ensure that reviews are well focused, ask relevant questions, and are disseminated effectively to appropriate audiences.21

Systematic reviews can be done without experienced information/library support

Systematic reviews can indeed be carried out without proper information or library support, but researchers are typically not experienced in information retrieval, and their searches are likely to be less sensitive, less specific, and slower than those carried out by information professionals.22 Improvements in information technology are likely to make it easier to retrieve and filter information from electronic databases, but at present this remains a challenging task.23 Producing a good systematic review requires skill in the design of search strategies and benefits from professional advice on the selection of sources of published and unpublished studies.
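
To illustrate what “sensitive” and “specific” mean for a search strategy, here is a minimal sketch that checks a set of retrieved records against a hand-assembled “gold standard” of known relevant studies. The record IDs are invented, and precision is used as a practical stand-in for specificity, since true specificity would require knowing the size of the whole database searched.

```python
# Minimal sketch: evaluating a search strategy against a known set of
# relevant studies. All record IDs are invented for illustration.

retrieved = {"r01", "r02", "r03", "r04", "r05", "r06", "r07", "r08"}
known_relevant = {"r02", "r05", "r08", "r11", "r12"}

found = retrieved & known_relevant
sensitivity = len(found) / len(known_relevant)  # share of relevant studies retrieved
precision = len(found) / len(retrieved)         # share of retrieved records that are relevant

print(f"Sensitivity: {sensitivity:.0%}, precision: {precision:.0%}")
```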

Systematic reviews are a substitute for doing good quality individual studies

It would be comforting to think that systematic reviews were a sort of panacea, producing final definitive answers and precluding the need for further primary studies. Yet they do not always provide definitive answers and are not intended to be a substitute for primary research. Rather, they often identify the need for additional primary studies, as they are an efficient method of identifying where research is currently lacking. Systematic reviews can therefore lead to more, not less, primary research. They can also prevent unnecessary new primary studies from being carried out, for example when a meta-analysis pooling many existing primary studies has already demonstrated the effectiveness of an intervention.

Conclusion

I have covered a selection of some of the more common myths and misunderstandings about systematic reviews. There are others (such as the myth that systematic reviews are not research but are something that researchers should be expected to do anyway without particular skills, training, or funding). Awareness of the non-clinical applications of systematic reviews is increasing, and the establishment of the Campbell Collaboration, a sibling of the Cochrane Collaboration, will contribute to this by preparing, maintaining, and disseminating systematic reviews of the effects of social and educational policies and practices.24 There are undoubtedly many methodological challenges to be faced in the application of systematic reviews outside clinical specialties. For example, there may be difficulties in incorporating appropriate contextual information and in incorporating the results of relevant qualitative research; and there may be problems of implementation and dissemination. There may also be considerable problems relating to the identification of unpublished studies and “grey” literature. These problems are also common in reviews of healthcare interventions and do not themselves preclude the use of systematic review methods.

In conclusion, I suggest that many criticisms of systematic reviews are ill founded. In particular, systematic reviews are commonly and erroneously perceived solely to be aids to clinical decision making, and this underestimates their wider uses. Despite methodological and other challenges, systematic reviews are already helping to identify “what works” beyond the world of evidence based medicine, and their potential role is more wide ranging than is often realised.


Table 1.

Systematic reviews and traditional narrative reviews compared

Deciding on review question
  Good quality systematic reviews: start with a clear question to be answered or hypothesis to be tested.
  Traditional narrative reviews: may also start with a clear question to be answered, but more often involve a general discussion of the subject with no stated hypothesis.

Searching for relevant studies
  Good quality systematic reviews: strive to locate all relevant published and unpublished studies to limit the impact of publication and other biases.
  Traditional narrative reviews: do not usually attempt to locate all relevant literature.

Deciding which studies to include and exclude
  Good quality systematic reviews: involve explicit description of what types of studies are to be included to limit selection bias on behalf of the reviewer.
  Traditional narrative reviews: usually do not describe why certain studies are included and others excluded.

Assessing study quality
  Good quality systematic reviews: examine in a systematic manner the methods used in primary studies, and investigate potential biases in those studies and sources of heterogeneity between study results.
  Traditional narrative reviews: often do not consider differences in study methods or study quality.

Synthesising study results
  Good quality systematic reviews: base their conclusions on those studies which are most methodologically sound.
  Traditional narrative reviews: often do not differentiate between methodologically sound and unsound studies.

Table 2.

Examples of systematic reviews in the “real world”

Does spending more money on schools improve educational outcomes?
  Methods: meta-analysis of effect sizes from 38 publications (w1).
  Authors' conclusions: systematic positive relation between resources and student outcomes.

Do women or men make better leaders?
  Methods: review of organisational and laboratory experimental studies of the relative effectiveness of women and men in leadership and managerial roles (w2).
  Authors' conclusions: aggregated over the organisational and laboratory experimental studies in the sample, male and female leaders were equally effective.

Does the sexual orientation of the parent matter?
  Methods: review investigating the impact that having homosexual, as opposed to heterosexual, parents has on the emotional wellbeing and sexual orientation of the child (w3).
  Authors' conclusions: results show no differences between heterosexual and homosexual parents in terms of parenting styles, emotional adjustment, and sexual orientation of the child(ren).

Are fathers more likely than mothers to treat their sons and daughters differently?
  Methods: review of 39 published studies (w4).
  Authors' conclusions: fathers' treatment of boys and girls differed most in the areas of discipline and physical involvement and least in affection or everyday speech; few differences for mothers.

Is job absenteeism an indicator of job dissatisfaction?
  Methods: review of 23 research studies (w5).
  Authors' conclusions: yes; a stronger association was observed between job satisfaction and frequency of absence than between satisfaction and duration of absence.

Are jurors influenced by defendants' race?
  Methods: meta-analytic review of experimental studies (w6).
  Authors' conclusions: results are consistent in finding that race influences sentencing decisions.

Is there a relation between poverty, income inequality, and violence?
  Methods: review of 34 studies reporting on violent crime, poverty, and income inequality (w7).
  Authors' conclusions: results suggest that homicide and assault may be more closely associated with poverty or income inequality than rape or robbery.

References in this table are given on the BMJ's website. 

Acknowledgments

I thank Iain Chalmers, Sally Macintyre, and Trevor Sheldon for comments and for suggesting myths.

Footnotes

Funding: Chief Scientist Office of the Scottish Executive Department of Health

Competing interests: None declared.

Extra references can be found on the BMJ's website

References

1. Antman E, Lau J, Kupelnick B, Mosteller F, Chalmers T. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA. 1992;268:240–248.
2. Glass G. Primary, secondary, and meta-analysis of research. Educ Res. 1976;10:3–8.
3. Chalmers I, Hedges L. A brief history of research synthesis. Eval Health Prof (in press).
4. NHS Centre for Reviews and Dissemination. Undertaking systematic reviews of research on effectiveness: CRD report No 4. York: University of York; 1996.
5. Popay J, Rogers A, Williams G. Rationale and standards for the systematic review of qualitative literature in health services research. Qual Health Res. 1998;8:341–351. doi: 10.1177/104973239800800305.
6. Petticrew M, Song F, Wilson P, Wright K. Quality-assessed reviews of health care interventions and the database of abstracts of reviews of effectiveness (DARE). Int J Technol Assess Health Care. 1999;15:671–678.
7. Gartrell C, Gartrell J. Social status and agricultural innovations: a meta-analysis. Rural Sociol. 1985;50:38–50.
8. US General Accounting Office. Head start: report to the chairman, committee on the budget, house of representatives. Washington, DC: United States General Accounting Office; 1997 (GAO/HEHS-97-59).
9. Deneve K, Cooper H. The happy personality: a meta-analysis of 137 personality traits and subjective well-being. Psychol Bull. 1998;124:197–229. doi: 10.1037/0033-2909.124.2.197.
10. Fiske P, Rintamaeki P, Karvonen E. Mating success in lekking males: a meta-analysis. Behav Ecol. 1998;9:328–338.
11. Forza C, DiNuzzo F. Meta-analysis applied to operations management: summarizing the results of empirical research. Int J Prod Res. 1998;36:837–861.
12. Milton J, Wiseman R. Does Psi exist? Lack of replication of an anomalous process of information transfer. Psychol Bull. 1999;125:387–391. doi: 10.1037/0033-2909.125.4.387.
13. Grewal D, Kavanoor S, Fern E, Costley C, Barnes J. Comparative versus noncomparative advertising: a meta-analysis. J Marketing. 1997;61:1–15.
14. Wiseman R, Lamont P. Unravelling the Indian rope-trick. Nature. 1996;383:212–213.
15. Mulrow C. Rationale for systematic reviews. In: Chalmers I, Altman D, editors. Systematic reviews. London: BMJ Publishing; 1995.
16. Davies P. What is evidence-based education? Br J Educ Stud. 1999;47:108–121.
17. Lipsey M. What do we learn from 400 research studies on the effectiveness of treatment with juvenile delinquents? In: McGuire J, editor. What works: reducing re-offending. Chichester: Wiley; 1995.
18. Contributors to the Cochrane Collaboration and the Campbell Collaboration. Evidence from systematic reviews of research relevant to implementing the ‘wider public health’ agenda. NHS Centre for Reviews and Dissemination; August 2000. http://www.york.ac.uk/inst/crd/wph.htm
19. Sutton A, Abrams K, Jones D, Sheldon T, Song F. Systematic reviews of trials and other studies. Health Technol Assess. 1998;2:1–276.
20. McManus R, Wilson S, Delaney B, Fitzmaurice D, Hyde C, Tobias R, et al. Review of the usefulness of contacting other experts when conducting a literature search for systematic reviews. BMJ. 1998;317:1562–1563. doi: 10.1136/bmj.317.7172.1562.
21. Bero L, Jadad A. How consumers and policymakers can use systematic reviews for decision making. Ann Intern Med. 1997;127:37–42. doi: 10.7326/0003-4819-127-1-199707010-00007.
22. Dickersin K, Scherer R, Lefebvre C. Identifying relevant studies for systematic reviews. BMJ. 1994;309:1286–1291. doi: 10.1136/bmj.309.6964.1286.
23. Glanville J, Haines M, Auston I. Getting research findings into practice: finding information on clinical effectiveness. BMJ. 1998;317:200–203. doi: 10.1136/bmj.317.7152.200.
24. Campbell Collaboration website: http://campbell.gse.upenn.edu/ (accessed 21 Nov 2000).
