'I may observe that in my service I have never followed any one... but have deduced my conclusions... solely from close and important collection of evidence.' [1]
So wrote Edwin Chadwick more than a hundred years ago. Not only was he one of the pioneers of the public health movement; he was equally the 19th-century father of much 20th-century social inquiry. To cite him is to underline that there is nothing new—except, possibly, the phrase itself—about evidence-based policy (EBP). As Royal Commissions and government committees of inquiry have demonstrated over the decades, policy makers have always sought evidence. The extent to which subsequent policy decisions were actually based on that evidence, as distinct from using such evidence to legitimate them, is another question: Chadwick himself has often been criticized for manipulating evidence to support his preconceived ideas, notoriously so in the case of the 1834 Poor Law Report.
So why the sudden rush of enthusiasm for EBP today? One answer must be that it reflects the success of the evidence-based medicine (EBM) movement (and this is a 'movement' insofar as it has prophets, missionaries and zealots). If medicine can be based on evidence, so surely can policy. The proposition seems self-evident. The trouble is that a sleight of hand is involved in making the transition. EBM is distinguished by the fact that it privileges particular kinds of evidence—'scientific' evidence, with a strong emphasis on randomized controlled trials and systematic reviews. It is not at all self-evident that this model is appropriate for, or indeed relevant to, the making of policy. In the case of policy, evidence tends to be something of a Delphic oracle—difficult to decipher and apt to be misinterpreted.
Much has been written about the pitfalls of, and the delusive hopes held out by, EBP. So this paper asks whether it is possible to plot out a sensible course between a platitude and a nonsense. The platitude is that policy should be informed by evidence. Who could possibly disagree? The nonsense is that policy should be based on scientific evidence. This is to misunderstand the nature of both the policy process and the role of evidence in it [2–4]. The way forward is to disaggregate the notions of both policy and evidence: different stages of the policy process may call for different types of evidence.
Consider, first, the different types of evidence or knowledge that are relevant for the policy process. There does exist scientific evidence—i.e. research-based, usually peer-reviewed, evidence. Next there is what might be called organizational evidence—in the case of health policy, the experience of those actually working in the NHS. As a Cabinet Office paper on policy-making [5] has put it:
'There is a tendency to think of evidence as something that is only generated by major pieces of research. In any policy area there is a great deal of critical evidence held in the minds of both front-line staff in departments, agencies and local authorities and those to whom the policy is directed. Very often they will have a clearer idea than the policy makers about why a situation is as it is and why previous initiatives have failed. Gathering that evidence through interviews or surveys can provide a very valuable input to the policy making process and can often be done much more quickly than more conventional research.'
Finally, there is the evidence provided by the media and feedback from the public—for example, through the complaints brought to the surgeries of MPs. If organizational experience can provide clues about the feasibility of different policy options, media and public reactions can provide evidence about political acceptability.
Next, consider different types of policy. Policies involving structural change (for example, in the organization of the NHS) will call for different types of evidence from those which are a reaction to new shocks, such as AIDS or BSE. And, complicating the analysis still further, there are different stages or steps in the policy process. First, there is the need to delineate and, if possible, quantify more precisely the nature of the problem once it has been put on the policy agenda. Second, there is a need to identify the range of policy instruments that are available and their likely effectiveness. Third, there is a need to map out the implementation process by exploring what financial, managerial and organizational resources are required.
Different types of policy and different stages in the process call for different types of evidence. Contrast, for example, the kinds of evidence that are available (or can be sought) in the case of structural change in the NHS, on the one hand, and of coping with new phenomena such as AIDS or BSE, on the other. In the former case, little or no scientific evidence in the strict sense is available: at best, there will be inferential knowledge drawn from previous attempts to change the structure of the NHS or from the experience of other countries. Nor does introducing organizational change on an experimental basis provide a way out of this dilemma: social experiments tend to evolve in the course of being implemented, and it is seldom clear precisely what is being evaluated—the original or the evolved model (only think of how the 'internal market' changed from conception to implementation). With AIDS or BSE, however, there could be an immediate appeal to scientific evidence about causation and prevalence. Note that even in such cases there is a need for a different kind of evidence when designing policies. In the case of AIDS, for example, the decision to target the Government's messages at the population as a whole rather than at high-risk groups was driven not by scientific evidence, which might have pointed in the opposite direction, but by ministerial values and the desire to avoid the stigmatization of those groups [6].
It is perhaps at the stage of delineating and quantifying problems that scientific evidence comes into its own. Witness, here, the Acheson Report on Inequalities in Health [7]. The Acheson Report was indeed able to draw on, and synthesize, a mass of scientific evidence about the extent of inequalities in health. But the interpretation of that evidence sparked controversy, as did the report's policy recommendations [8]. Lack of knowledge about the effectiveness of different policy instruments, disputes about the causes of inequalities in health status and arguments about the very concept of inequality all meant that the report started a debate rather than resolving the issues.
The Acheson Report illustrates a larger point. This is that evidence, even scientific evidence, rarely speaks with a single clear voice about complex public issues (back to the Delphic oracle). Policy conclusions cannot simply be read off from the evidence; they are the product of interrogation, interpretation and debate. As the Government's Chief Scientific Adviser has put it, in devising guidelines about scientific advice and policy-making [9]:
'As all experts will come to issues with views shaped to some extent by their own interests and experience, departments should also consider how to avoid unconscious bias, by ensuring that there is a good balance in terms of the type of institutions and organisations from which experts are sought. Experts from other disciplines, not necessarily scientific, should also be invited to contribute, to ensure that the evidence is subjected to a sufficiently questioning review from a wide-ranging set of viewpoints.'
Scientific evidence may have very little to say about the implementability or political acceptability of a policy. Take one of the great policy fiascos of recent times, the Poll Tax [10]. The policy-makers searched out all the available evidence about the incidence, distribution and impact of different types of local taxation. But they failed completely to test out the feasibility of the tax by consulting those who would have to implement it: organizational knowledge (in local authorities) was neglected. And the same was true of another fiasco, the Child Support Scheme for getting absent fathers to contribute towards the cost of bringing up their children [11]. Policy-makers scoured the globe for evidence about how such schemes worked in other countries, but their scheme failed the test of both feasibility (the computers did not deliver as expected) and political acceptability (the fathers mobilized the media very effectively).
Does all this suggest a negative, thumbs-down conclusion about EBP? Not necessarily. It argues, rather, for a more nuanced approach and strategy. It suggests the importance of recognizing the complexity of the policy process and the diversity of the kinds of relevant evidence. Above all, it underlines the importance of recognizing that policy itself provides most of the evidence. If we see policy as experiment [12], if we acknowledge that policy is largely a trial-and-error process, then it follows that the scientific community can make a crucial contribution not by deriving policy prescriptions from the research it produces (the delusional vanity of some members of that community) but by providing rigorous and fast evaluations.
To make this point is also to conclude that history may be one of the best sources of evidence for policy-making. This is not to argue that politicians should—as they did until the decline of classical studies—go back to Thucydides or Tacitus for their policy exemplars. Nor is it to imply that history gives clear messages: for instance, misreading and misapplication of the lesson of Munich (that dictators must not be appeased) led to a series of policy disasters, starting with the Suez adventure. It is, however, to suggest that recent history provides a series of case studies whose systematic review can provide highly relevant evidence for policy-makers. A start on this has been made by the Economic and Social Research Council's Centre for Evidence Based Policy and Practice, which has, for example, produced a review of what we know about 'naming and shaming' strategies [13].
In summary, then, provided that we do not simplistically apply the EBM model, there remains a modest case for EBP. The kinds of techniques used in EBM—notably randomized controlled trials—are not applicable in the case of policy-making. However, the intellectual rigour that EBM applies to systematic reviews of evidence is transferable. If we enlarge the meaning of evidence, there is indeed scope for bringing more intellectual edge to the analysis of what we can learn from the past. But, equally important, if we remember that evidence speaks with many voices—and that our values drive facts [14] and shape the conclusions we draw from them—we will also conclude that any such exercise will be no more, and should be no more, than one contribution to the process of policy-making.
Note: This article is based on a lecture given at the London School of Hygiene and Tropical Medicine, where Rudolf Klein is a Visiting Professor.
References
1. Flinn MW, ed. Introduction to Edwin Chadwick, Report on the Sanitary Condition of the Labouring Population of Great Britain (1842). Edinburgh: Edinburgh University Press, 1965
2. Klein R. From evidence-based medicine to evidence-based policy? J Health Serv Res Policy 2000;5:65-6
3. Black N. Evidence-based policy: proceed with care. BMJ 2001;323:275-9
4. Hunter DJ. Evidence-based policy and practice: riding for a fall? J R Soc Med 2003;96:194-6
5. Strategic Policy Making Team. Professional Policy-Making for the Twenty-First Century. London: Cabinet Office, 1999
6. Day P, Klein R. Interpreting the unexpected: the case of AIDS policy making in Britain. J Publ Policy 1989;9:337-53
7. Independent Inquiry into Inequalities in Health (Chairman: Sir Donald Acheson). Report. London: Stationery Office, 1998
8. Oliver A, Cookson R, McDaid D, eds. The Issues Panel for Equity in Health: The Discussion Papers. London: Nuffield Trust, 2001
9. Chief Scientific Adviser. Guidelines: Scientific Advice and Policy-Making. London: Office of Science & Technology, 2000
10. Butler D, Adonis A, Travers T. Failure in British Government: The Politics of the Poll Tax. Oxford: Oxford University Press, 1994
11. Barnes H, Day P, Cronin N. Trial and Error: A Review of UK Child Support Policy (Occasional Paper No. 24). London: Family Policy Studies Centre/Nuffield Foundation, 1998
12. Majone G. Evidence, Argument and Persuasion in the Policy Process. New Haven: Yale University Press, 1989
13. Pawson R. Evidence, Policy and Naming and Shaming (Working Paper No. 5). London: ESRC UK Centre for Evidence Based Policy and Practice, 2001
14. Wildavsky A, Tenenbaum E. The Politics of Mistrust. London: Sage Publications, 1981