American Journal of Public Health
Editorial
2017 Jan;107(1):97–99. doi: 10.2105/AJPH.2016.303557

Systematic Reviews for Policymaking: Muddling Through

Trisha Greenhalgh, Kirsti Malterud
PMCID: PMC5397017; PMID: 27925823

Fox and Bero have reviewed two issues of pivotal importance to health policy: the poor fit between systematic reviews and policy needs, and the poor quality of many systematic reviews. We address a third issue: the question of what policymaking is.

We agree with Fox and Bero that there is a problem (systematic reviews currently inform policy only to a limited extent). But we believe that their proposed solutions (involve policymakers earlier in the systematic review process and improve the methodological standards for systematic reviews) will only ever be partial ones. We suggest that their implicit linear (“reviews-into-policy”) model be replaced by an acknowledgment that the world is—and always will be—messier and less rational.

THE “KNOW-DO” GAP

Scientists, especially those raised in the evidence-based health care tradition, tend to view policymaking as—broadly speaking—an exercise in decision science. Prioritize the problems, feed in the (methodologically robust, peer-reviewed, critically appraised, synthesized, summarized) evidence, and the preferred course of action will emerge from the data. Such a conception treats the “know-do” gap between scientific facts and policymaking as a simple pipeline to be bridged, in which incoming evidence underpins decisions.

In reality, policymaking is a battle of ideas and values. Policymakers search for courses of action that are possible, acceptable, and reasonable in a particular set of circumstances (constraints, competing priorities, vested interests, and so on). The policymaking process is a messy struggle over how problems should be conceptualized, categorized, prioritized, and addressed.1

Should we consider obesity to be the result merely of modifiable individual behaviors (diet and exercise)—or should we depict it as the product of complex interactions in a complex system (featuring an obesogenic environment, corporate lobbying by the big food industry, social determinants and genetic—hence, ethnic—predispositions)? Only in the former framing will conventional systematic reviews of experimental interventions generate simple and broadly transferable answers.

Policymaking is all about framing. It is a rhetorical argumentation game in which language and drama play crucial roles.2 In this game, systematic reviews—along with primary research evidence, testimony, routinely collected data, legal judgments, anecdote, clinical wisdom, myth, and more—are all used instrumentally (to bring issues onto the agenda and frame them in particular ways), rhetorically (to emphasize a point or depict something as moral or immoral), and tactically (to stall a decision until more evidence has been collected).2,3 Often, the chief task is prioritization based on values and political goals, to which science may contribute little.4

DEPTH AND BREADTH

Science tends to define quality in terms of methods (such things as study designs, instruments, analytic approaches). Policy views quality more in terms of values (justice, fairness, accountability, timeliness). Scientists want to discover the truth (what works). Policymakers are mired in practicalities (deadlines, budgets, expectations). Science and policy thus represent two profoundly different cultures, classically characterized by mutual misunderstanding, mistrust, and sometimes disrespect. Interact they must—but we should not expect harmony.

One key area of divergence is the trade-off between depth and breadth. Systematic reviews often focus on a single, precise question, defined in terms of abstracted variables (e.g., population-intervention-comparison-outcome), with a view to producing a definitive answer (perhaps a transferable effect size). Policy questions are framed more broadly and concretely: what should we do about here-and-now problem X, given budget Y, timescale Z, vested interest V, and person-with-wrecking-power P? Realist review, described by Bero, may offer some potential to generate the kind of evidence that takes context into account. But the question of whether and how this approach can be applied prospectively to inform policy remains largely unanswered.

KNOWLEDGE BASE AND ACTION

Another area of divergence, illustrated by Malterud et al.’s recent case study of the Norwegian Knowledge Centre,5 is the tension between building the knowledge base and informing action. Although systematic reviews are methodologically designed to answer questions about intervention effects assumed to be standardized, they are usually applied to much more complex issues and contexts. They may therefore conclude (not surprisingly) that adequate evidence does not exist. High-quality systematic reviews that address broad, policy-relevant topic areas tend to increase the uncertainty around a topic by illuminating its complexity and identifying numerous areas where the evidence is limited or contested. In other words, a much-awaited systematic review may show neither that an intervention works nor that it unequivocally does not, but that we still do not know whether it works.

Given that systematic reviews so often prove unhelpful for current policy decisions, should we abandon the effort? In an article titled “What makes an academic paper useful for health policy?,” Christopher Whitty (a UK health services researcher who has worked extensively with policymakers) answers with an emphatic “no”: he considers that most policy questions require several analytic lenses, from different scientific disciplines, to be brought to bear on them, including economics and the qualitative social sciences.6 He suggests that the single most important contribution an academic can make to the policy process is the accurate synthesis of information from these disparate sources—what we might call a scoping review.

The pragmatic gold standard for a policy-useful systematic review might thus be a timely, mixed-method, broad-scope review that embraces multiple disciplinary perspectives and gives a comprehensive (though not exhaustive) summary of the state of knowledge, ignorance, and uncertainty in a field.

Whitty also warns reviewers against feeling obliged to spell out policy implications. Policymaking is a professional skill (one that, by and large, academics do not have); otherwise excellent scientific reviews may be let down by simplistic, grandiose, or unrealistic policy recommendations.6

LEARN FROM HISTORY

Fox proposes that questions raised by policymakers should to some extent drive the systematic review process. He also invites us to learn from history. It is worth considering what is (to our knowledge) the only independently researched, large-scale historical case study of a national program of scientific research that was explicitly and proactively driven by policymakers’ questions: the United Kingdom’s failed Rothschild experiment (see the box below).7

The UK Rothschild experiment

Rothschild, a politician, had recommended establishing a rational (planned, structured, efficient) system of policymakers (“commissioners”) asking questions of university scientists (“contractors”), who would undertake research to answer them. To that end, each government department appointed a Chief Scientist, a named controller of research and development, and topic-themed intersectoral liaison groups. To fund this, a quarter of UK Research Council funding was passed (to vocal protests from scientists) directly to government departments.

Rothschild’s infrastructure was set up in 1972. By 1978, it had been disbanded. Kogan and Henkel’s classic analysis of what went wrong7 is highly relevant to the contemporary question of how to optimize the policy process through the use of targeted systematic reviews. In sum, despite strong political backing and generous resources for the Rothschild experiment:

  • policymakers and scientists interacted awkwardly and did not adopt the clear “customer”–“contractor” roles expected of them;

  • priority research topics did not prove readily identifiable;

  • the (long) research commissioning cycle failed to align with the (short) policy cycle; and

  • the quality and value of research was frequently questioned.7

Rothschild’s dream of systematically commissioned scientific research efficiently serving the policy process in what Weiss would later call the problem-solving mode of research utilization3 was never realized. Research commissioned through the Rothschild budget consisted mostly of primary studies, but the case offers important lessons for those who would envision a rational, efficient systematic review industry working in the service of (national or local) policymaking.

In the decades since the Rothschild experiment, the science–policy relationship has become increasingly interdependent—and hence increasingly problematic. On the one hand, policymakers face overriding budget and time constraints, and they may be unable to accommodate scientific findings that intensify uncertainty or challenge prevailing ways of working. On the other hand, systematic reviewers have limited tolerance for requests for “quick and dirty” studies or for questions or conceptualizations that run counter to their own definitions of rigor.

COCHRANE COLLABORATION

In Bero’s example of the Cochrane Collaboration’s “advocating for evidence” program, we suggest not only that the World Health Organization is becoming more accepting of Cochrane reviews but also that the Cochrane Collaboration is learning a great deal about the political, financial, geographical, and regulatory constraints that limit the possibilities for the World Health Organization and the programs it funds. As a result, it is shaping its processes and outputs accordingly.

MUDDLING THROUGH

Perhaps counterintuitively, effective utilization of systematic reviews by policymakers may be best achieved through awkward compromises, hammered out over time through two-way dialogue (“muddling through,” or what Weiss called the interactional mode of research utilization3). We reject the possibility of an easy fix; the best we can hope for is that, as these intersectoral relationships develop and mature, systematic reviewers and policymakers will become progressively more enlightened about one another’s worlds and hence better able to negotiate compromises acceptable in both cultures.

REFERENCES

1. Fischer F, Forester J. The Argumentative Turn in Policy Analysis and Planning. Durham, NC: Duke University Press; 1993.
2. Greenhalgh T, Russell J. Reframing evidence synthesis as rhetorical action in the policy making drama. Healthc Policy. 2006;1(2):34–42.
3. Weiss CH. The many meanings of research utilization. Public Adm Rev. 1979;39(5):426–431.
4. Contandriopoulos D, Lemire M, Denis JL, Tremblay É. Knowledge exchange processes in organizations and policy arenas: a narrative systematic review of the literature. Milbank Q. 2010;88(4):444–483. doi: 10.1111/j.1468-0009.2010.00608.x
5. Malterud K, Bjelland AK, Elvbakken KT. Evidence-based medicine—an appropriate tool for evidence-based health policy? A case study from Norway. Health Res Policy Syst. 2016;14:15. doi: 10.1186/s12961-016-0088-1
6. Whitty CJ. What makes an academic paper useful for health policy? BMC Med. 2015;13(1):301. doi: 10.1186/s12916-015-0544-8
7. Kogan M, Henkel M. Government and Research: The Rothschild Experiment in a Government Department. London, UK: Heinemann Educational Books; 1983.
