BMJ. 2006 Jan 14;332(7533):109–112. doi: 10.1136/bmj.332.7533.109

Evaluating and implementing new services

Ann McDonnell 1, Richard Wilson 2, Steve Goodacre 2
PMCID: PMC1326943  PMID: 16410590

Short abstract

Evidence based health care should apply to the way that services are delivered as much as it does to treatments


Changes to the delivery and organisation of health services should be evaluated before they are widely implemented. Evaluation should be sequential, moving from theory to modelling, explanatory trials, pragmatic trials, and ultimately long term implementation.1 However, this sequence is rarely followed. New services are often implemented, or existing services are changed, before evaluation can take place. Any subsequent evaluation will have to use unreliable methods (such as an uncontrolled before and after design) and is, of course, too late to influence implementation. We use three examples from the NHS to show how enthusiasm can overtake evidence and the benefits of a more considered approach.
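To illustrate why an uncontrolled before and after design is unreliable, the minimal simulation below (our illustrative sketch, not drawn from the article; the outcome scale and all figures are invented) shows how a background secular trend in outcomes is misattributed to a new service, while a comparison against a concurrent control group correctly finds no effect.

```python
# A minimal sketch (illustrative only) of why an uncontrolled before and
# after comparison misleads: outcomes improve over time for reasons
# unrelated to the new service, and the uncontrolled design credits that
# secular trend to the service. All numbers are invented.
import random

random.seed(1)

TRUE_EFFECT = 0.0     # the hypothetical new service genuinely does nothing
SECULAR_TREND = 2.0   # outcomes drift upwards over time regardless

def outcome(time, treated):
    """Patient outcome score: baseline + secular trend + (null) service effect + noise."""
    return 50 + SECULAR_TREND * time + TRUE_EFFECT * treated + random.gauss(0, 5)

before = [outcome(time=0, treated=0) for _ in range(500)]    # before the service launched
after = [outcome(time=1, treated=1) for _ in range(500)]     # after launch, service users
control = [outcome(time=1, treated=0) for _ in range(500)]   # concurrent non-users

def mean(xs):
    return sum(xs) / len(xs)

# Uncontrolled before and after: the secular trend masquerades as a ~+2 "benefit".
print(f"before and after estimate:   {mean(after) - mean(before):+.2f}")
# Concurrent control at the same time point: correctly estimates ~0.
print(f"concurrent control estimate: {mean(after) - mean(control):+.2f}")
```

Run on these invented numbers, the before and after comparison reports a spurious benefit of roughly two points while the controlled comparison reports an effect near zero, which is precisely why a staged framework asks for controlled evaluation before wide implementation.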

Changing the organisation of services

Implementing organisational change in health services requires substantial effort and typically needs to be driven by enthusiastic groups and individuals. There are many examples of delays in getting existing evidence into practice. The slow pace of organisational change is often seen as problematic in the drive towards an evidence based health service. However, sometimes the converse is true. Too much momentum may lead to inappropriate implementation of change before evaluation is complete. Managing this momentum offers the key to rational evaluation and implementation of changes in service organisation and delivery.

The drive for change in the way services are delivered can spring from various sources, including political imperatives, policy drivers, and enthusiasm from clinicians. Enthusiasm for improved services is desirable but can blind enthusiasts to the possible downsides of an intervention. Evidence based care may mean delaying the introduction of new treatments until robust evidence of their effectiveness exists. This approach is well suited to simple interventions aimed at individual patients, such as drugs, where momentum is often driven primarily by commercial imperatives. Although political and professional influences are brought to bear in the introduction of new drugs, as shown by recent controversies over treatments for Alzheimer's disease and multiple sclerosis, current regulatory frameworks attempt to ensure that new drugs cannot be prescribed before they have been thoroughly evaluated.

Figure 1: Would this service exist if it had been evaluated first? (Credit: ITV/REX)

When an existing device or operative technique is modified for a new purpose, the intervention is more complex. But even here, a framework of regulation helps to curb the enthusiasm of pioneers and ensure that use of the new technique is based on evidence as well as passion and commitment. Since 2003, the National Institute for Health and Clinical Excellence (NICE) interventional procedures programme and the Review Body for Interventional Procedures have been assessing the safety and efficacy of new procedures. They gather evidence by systematic review and formulate guidelines. Use of a new procedure may be restricted to certain circumstances or to specific healthcare facilities. This review process has attracted criticism from some people who believe it will stifle change and innovation.

Achieving a balance between controlling the momentum for change and maintaining enthusiasm is more difficult for complex innovations such as new clinical services. We have selected three examples which show the importance of managing momentum as part of a planned framework for the development, implementation, and evaluation of new clinical services. In the first two, the pace of implementation outstrips the emergence of evidence. Both are top-down innovations, one driven by professional bodies and one by policy makers. The third is a bottom-up approach, where the pace of implementation and evidence are more evenly matched.

Acute pain teams

Acute pain teams were introduced in the United Kingdom in response to concerns from many professional groups that postoperative pain control was unacceptably poor and that new techniques such as patient controlled analgesia and epidural techniques should be used with appropriate safeguards. In 1990, a report by the royal colleges of surgeons and anaesthetists recommended the introduction of acute pain teams in every hospital that did inpatient surgery.2 Although rigorous evidence of the effectiveness of these teams was lacking,3 84% of acute hospitals in England had an acute pain team by 2000, and surveys reported wide variation in terms of membership and activities.4,5

Currently, many teams are experiencing difficulties with funding, which is hampering development of the service.5 In cases such as this, where momentum for change overtakes the search for evidence, it may be difficult in the future to maintain established services in the face of competing financial pressures. This is also likely to affect staff morale.

NHS Direct

NHS Direct was set up in December 1997 as a telephone advice line run by nurses to provide “easier and faster advice and information for people about health, illness and the NHS so that they are better able to care for themselves and their families.” It was not primarily intended to reduce demand on other services, but the chief medical officer hoped that it would “help reduce or limit the demand” on immediate care services.6

An observational study in the three areas where NHS Direct was first established found that it did not reduce pressure on immediate care services but may have restrained increasing demand on general practitioners' out of hours services.7 However, by the time this study was published the service had been extended to cover large parts of the country.

Audit of NHS Direct estimated that about half the £90m ($159m; €133m) annual cost of NHS Direct was offset by encouraging more appropriate use of NHS services.8 This raises questions about the value of the remaining £45m spent on NHS Direct each year. NHS Direct is associated with high consumer satisfaction9 but so are most health services. It is underused by older people, ethnic minorities, and other disadvantaged groups.

NHS Direct now covers the whole of England and Wales, and it would be difficult to withdraw the service without substantial reorganisation and disruption of other services. Yet we are still uncertain whether the resources currently used to support NHS Direct are being well spent.

Examples of organisational changes without robust evidence

NHS diagnostic and treatment centres13
Rapid access chest pain clinics14
Critical care outreach services15
Emergency department "see and treat"16
Advanced access in general practice17
NHS walk-in centres18
Modern matrons19
Nurse consultant roles20
The internal market in the NHS21

Stroke units

The development of stroke units in the United Kingdom has been slower and more organic. Stroke units began to appear in the 1950s, in the early days of the NHS. The underlying premise was that care of stroke patients could be improved if it was delivered in a more organised fashion. Only a few of these units were established in the 1950s, and one in Northern Ireland published an observational study of its performance before the end of the decade.10 Randomised controlled trials were first done in 1962, and in the years up to the 1980s a few formal trials were reported. Results from the initial studies suggested that stroke units produced benefits. However, the growth of stroke units remained slow and uneven, even into the 1990s. Further randomised controlled trials, with increasingly rigorous designs, continued to show benefits in outcome.11 Recent systematic reviews have also confirmed the effectiveness of stroke units.12

Overall, the pattern here has been one of innovation followed by a period of evaluation and reflection. Development and implementation have been incremental and supported, at least latterly, by rigorous evaluation of the benefits.

Power of momentum

Although introduction of acute pain teams was clinically driven whereas NHS Direct was politically driven, the process by which implementation overtook evaluation was similar. In both cases there was a perceived imperative to take prompt action based on clinical need or perceptions of public demand. Evaluation, to determine whether implementation would be effective, was an afterthought. The goal of action seemed to be service innovation itself, so the outcomes of any subsequent evaluation were poorly defined and could potentially be redefined in the light of negative evaluation. The box lists other examples in which delivery of services has been changed without robust evidence.

Conclusion

Health services are constantly changing. It is not always clear why change happens and how the tipping point is reached.22 Greenhalgh and colleagues have identified the key role that opinion leaders and champions have in organisational innovation.23 These champions may be politicians or professionals, but if they value action (or the appearance of action) over effective change it is not surprising that evaluation will be a low priority.

Evaluation should precede implementation and follow a staged approach, as recommended by the Medical Research Council.1 Explicit strategies to manage the pace of change need to be developed at an early stage and should involve both the organisations responsible for changing service delivery in the NHS and those responsible for health services research. It should be explicitly recognised, particularly when change is driven by politicians or professional groups, that implementation of change is not an end in itself but should have clearly defined goals that are measured as part of a planned strategy for evaluation.

Although changing the way in which an existing clinical service is delivered may seem to present little risk, our preconceptions about what works in practice can often be wrong. For example, the use of air ambulances makes intuitive sense. However, formal evaluation showed that the benefits are limited and the costs substantial.24

We have focused on the role of politicians and professionals in driving implementation before proper evaluation, but in future, with increasing commercialisation of health services and the development of public-private partnerships, other players may be involved. If the health service community fails to develop explicit strategies to manage momentum, we risk being swept along by a tide of change driven in part by the need to improve profit margins rather than patient care.

Summary points

Changes to the delivery and organisation of health services should be evaluated before they are widely implemented

Too much momentum may lead to inappropriate implementation of change before evaluation is complete

A regulatory framework has been established to assess the safety and efficacy of therapeutic interventions

A similar approach needs to be taken for the development of new clinical services

Editorial by Gabbay and Walley and pp 107, 112

We thank James Munro for his helpful comments.

Contributors and sources: The idea for this article came originally from AM and RW. The article was written jointly by AM, RW, and SG. AM is the guarantor.

Competing interests: RW is project manager of the Review Body for Interventional Procedures and, as such, his salary is reimbursed to the University of Sheffield. AM and SG work on Department of Health funded research into the delivery and organisation of health services.

References

1. Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D, et al. Framework for design and evaluation of complex interventions to improve health. BMJ 2000;321:694-6.
2. Royal College of Surgeons of England, College of Anaesthetists. Commission on the provision of surgical services. Report of the working party on pain after surgery. London: RCS, CoA, 1990.
3. McDonnell A, Nicholl J, Read SM. Acute pain teams and the management of postoperative pain: a systematic review and meta-analysis. J Adv Nurs 2003;41:261-73.
4. McDonnell A, Nicholl J, Read S. Acute pain teams in England: current provision and their role in postoperative pain management. J Clin Nurs 2003;12:387-93.
5. Clinical Standards Advisory Group. Services for patients with pain. London: DoH, 1999.
6. Calman K. Developing emergency services in the community. The final report. London: NHS Executive, 1997.
7. Munro J, Nicholl J, O'Cathain A, Knowles E. Impact of NHS Direct on demand for immediate care: observational study. BMJ 2000;321:150-3.
8. Comptroller and Auditor General. NHS Direct in England. London: Stationery Office, 2002.
9. O'Cathain A, Munro JF, Nicholl JP, Knowles E. How helpful is NHS Direct? Postal survey of callers. BMJ 2000;320:1035.
10. Langhorne P, Dennis M. Stroke units: an evidence based approach. London: BMJ Books, 1998.
11. Sinha S, Warburton EA. The evolution of stroke units: towards a more intensive approach? QJM 2000;93:633-8.
12. Stroke Unit Trialists' Collaboration. Organised inpatient (stroke unit) care for stroke. Cochrane Database Syst Rev 2002;(1):CD000197.
13. Dash P. New providers in UK health care. BMJ 2004;328:340-2.
14. Wood D, Timmis A, Halinen M. Rapid assessment of chest pain. BMJ 2001;323:586-7.
15. Department of Health, NHS Modernisation Agency. The national outreach report. London: DoH, 2003.
16. Leaman AM. See and Treat: a management driven method of achieving targets or a tool for better patient care? One size does not fit all. Emerg Med J 2003;20:118.
17. Salisbury C. Does advanced access work for patients and practices? Br J Gen Pract 2004;54:330-1.
18. Chalder M, Sharp D, Moore L, Salisbury C. Impact of NHS walk-in centres on the workload of other local healthcare providers: time series analysis. BMJ 2003;326:532-6.
19. Read S, Ashman M, Savage J, Scott C. Evaluation of the modern matron role in a sample of NHS trusts. University of Sheffield School of Nursing and Midwifery and Royal College of Nursing Institute, 2004. http://www.shef.ac.uk/snm/research/modern_matron_evaluation.html (accessed 15 Dec 2005).
20. Guest D, Peccei R, Rosenthal P, Redfern S, Wilson-Barnett J, Dewe P, et al. An evaluation of the impact of nurse, midwife and health visitor consultants. London: King's College, 2004.
21. Propper C, Burgess S, Gossage D. Competition and quality: evidence from the NHS internal market 1991-1999. Bristol: University of Bristol, 2003. (Centre for Market and Public Organisation working paper series No 03/077.)
22. Gladwell M. The tipping point: how little things can make a big difference. London: Abacus, 2001.
23. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581-629.
24. Nicholl JP, Brazier JE, Snooks HA. Effects of London helicopter emergency medical service on survival after trauma. BMJ 1995;311:217-22.
