Yes, but we do not know why
The need to improve the quality of care is well recognised. Yet accomplishing this is complicated, messy, and uncertain, requiring that researchers tackle technical (science) and adaptive (emotional, social, cultural, and political) challenges.1 Tension exists between those who say “just do something” to improve quality and those who say “science should be the guide.”2
The two linked studies (doi:10.1136/bmj.d195; doi:10.1136/bmj.d199) suggest that more science is needed. Benning and colleagues evaluated a large patient safety programme (the Safer Patients Initiative; SPI) in the United Kingdom, led by the Institute for Healthcare Improvement.3 4 The Health Foundation initiated and supported the initiative and, laudably, an independent evaluation, grounded in theory and conducted by experts in epidemiology, biostatistics, medical sociology, health services research, and clinical medicine. They performed a quantitative and qualitative evaluation at organisational and patient levels. The evaluation included five substudies that looked at whether the interventions worked and why. In addition to using a rigorous research design, the authors conducted a state of the art analysis, including the use of different approaches to evaluate changes over time in treatment and comparison hospitals. This evaluation will serve as a model for the field. It required, however, an interdisciplinary team of experts and appropriate research funding, both of which are rare.
The study’s findings are partly encouraging and partly worrying. On the encouraging side, the study provides convincing evidence that safety and quality improved in NHS hospitals in England over the study period (about 18 months). This should reassure UK citizens, the NHS, and parliament: patients are less likely to be harmed by the care they receive. The NHS should try to understand why these improvements occurred and how they can be strengthened and replicated broadly across the UK.
For those who hoped SPI would transform care, the findings are disconcerting. The authors found that the initiative had no discernible additional effect on patient safety; care improved to the same extent in treatment and comparison hospitals, underscoring the need for robust evaluation with concurrent controls. It is, of course, difficult to measure the impact of patient safety interventions, especially diffuse interventions like SPI. The initiative may have provided benefits that were not measured or that may emerge over time. It is also difficult in these types of large scale evaluations to find an appropriate comparison group, and in areas where intervention hospitals were already performing well at baseline there was little room to show improvement.
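The value of concurrent controls is easy to see in a minimal numerical sketch. The harm rates below are purely hypothetical, not figures from the studies: an uncontrolled before-after analysis would have credited SPI with the whole improvement, whereas subtracting the secular trend observed in the comparison hospitals reveals no additional effect.

```python
# Minimal sketch of why concurrent controls matter.
# All numbers are hypothetical harms per 1000 admissions, not study data.
spi_before, spi_after = 200, 150    # intervention (SPI) hospitals
ctrl_before, ctrl_after = 210, 160  # comparison hospitals

# Uncontrolled before-after comparison: credits SPI with the whole change.
naive_effect = spi_after - spi_before
print(f"Before-after change in SPI hospitals: {naive_effect}")  # -50

# Difference-in-differences: subtract the secular trend seen in controls.
did_effect = (spi_after - spi_before) - (ctrl_after - ctrl_before)
print(f"Additional effect of SPI over secular trend: {did_effect}")  # 0
```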
The study should be a wake-up call to those implementing patient safety programmes. Too many patients in the UK, and the rest of the world, continue to experience preventable harm. The quality improvement field needs to embrace science, favour evidence over anecdote, and move beyond using only one generic framework for improvement (the plan, do, study, act cycle).5 Different types of patient safety challenges exist, such as translating evidence into practice, improving teamwork and organisational culture, identifying and mitigating hazards, and reducing diagnostic errors. Each type of problem should be informed by specific theories, methods, and measures.6
Although the SPI was well intentioned, it is not surprising that it had less of an impact than the investigators anticipated. It was not grounded in a theory of organisational change.7 It asked hospitals to implement 43 interventions when most hospitals would find it difficult to implement three. Clinicians thought that many of the interventions were supported by weak evidence and that some measures were not valid. The initiative was largely top down, with limited input from local clinicians. Moreover, it did not target areas where teams performed poorly; in many of the areas, teams were performing nearly flawlessly before the initiative. The interventions and measures were not sufficiently pilot tested, and quality control over the quality improvement data collected by the local teams was virtually non-existent.
Clinicians who push back against patient safety interventions are often viewed as “knaves.”8 This study suggests that some of that resistance may be warranted. Some interventions focused on areas that were not problematic and used evidence and measures that doctors did not always perceive as valid, potentially souring clinicians’ attitudes towards efforts to improve patient safety. We need clinicians to lead patient safety efforts. For this to happen, they must believe that interventions and measures are based on science and that their patients will benefit.
Yet when interventions tackle both technical and adaptive challenges, broad scale improvement in patient safety is possible. Several patient safety programmes have achieved significant improvements in patient outcomes by having clinicians and researchers collaborate in developing and pilot testing the programme. Such programmes combine centralised collection of performance measures and evidence summaries by researchers with local innovation by clinicians in how the programme is implemented.9 10 11 12
These studies offer three important lessons. Firstly, patient safety studies require robust design and evaluation.13 Funding agencies need to support the development and implementation of patient safety programmes that include rigorous evaluation. Such programmes should be grounded in change theory and should include evidence based interventions, valid measures, and data quality control. Although theory and interventions evolve over time, patient safety programmes should be developed in collaboration with clinicians and pilot tested, and measures should be validated before broad implementation. Secondly, care in the UK is improving; we should understand how and why. Thirdly, quality improvement efforts must themselves improve, embracing rather than running from science. The science of quality improvement differs from basic and clinical research: it requires input from clinicians, health services researchers, social scientists, and human factors and systems engineers, and it uses change theory, mixed methods, and robust evaluation. The linked studies provide a model.
Competing interests: The authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; PJP and SMB have received grants or had contracts with the Agency for Healthcare Research and Quality, the World Health Organization, the Michigan Health and Hospital Association Keystone Center for Patient Safety and Quality, NIH/NHLBI, the Centers for Disease Control and Prevention, JHPIEGO, and the Robert Wood Johnson Foundation in the past three years; PJP has been a paid consultant for the Association for Professionals in Infection Control and Epidemiology, received speaking honorariums and travel expenses from various hospitals and the Leigh Speaker’s Bureau, and received royalties from his book Safe Patients, Smart Hospitals in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.
Provenance and peer review: Commissioned; not externally peer reviewed.
Cite this as: BMJ 2011;342:c6646
References
1. Heifetz RA. Leadership without easy answers. Belknap Press of Harvard University Press, 1994.
2. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med 2007;357:608-13.
3. Benning A, Dixon-Woods M, Ghaleb M, Suokas A, Dawson J, Barber N, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ 2011;342:d195.
4. Benning A, Dixon-Woods M, Nwulu U, Ghaleb M, Dawson J, Barber N, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ 2011;342:d199.
5. Berwick DM. A primer on leading the improvement of systems. BMJ 1996;312:619-22.
6. Pronovost PJ, Goeschel CA, Marsteller JA, Sexton JB, Pham JC, Berenholtz SM. A framework for patient safety research and improvement. Circulation 2009;119:330-7.
7. Weiner BJ. A theory of organizational readiness for change. Implement Sci 2009;4:67.
8. Jain SH, Cassel CK. Societal perceptions of physicians: knights, knaves, or pawns? JAMA 2010;304:1009-10.
9. Pronovost PJ, Goeschel CA, Colantuoni E, Watson S, Lubomski LH, Berenholtz SM, et al. Sustaining reductions in catheter related bloodstream infections in Michigan intensive care units: observational study. BMJ 2010;340:c309.
10. Miller MR, Griswold M, Harris JM, Yenokyan G, Huskins WC, Moss M, et al. Decreasing PICU catheter-associated bloodstream infections: NACHRI’s quality transformation efforts. Pediatrics 2010;125:206-13.
11. Pronovost PJ, Freischlag JA. Improving teamwork to reduce surgical mortality. JAMA 2010;304:1721-2.
12. Neily J, Mills PD, Young-Xu Y, Carney BT, West P, Berger DH, et al. Association between implementation of a medical team training program and surgical mortality. JAMA 2010 (forthcoming).
13. Berenholtz SM, Needham DM, Lubomski LH, Goeschel CA, Pronovost PJ. Improving the quality of quality improvement projects. Jt Comm J Qual Patient Saf 2010;36:468-73.