Editorial. Journal of Infection Prevention 2017 Dec 5; 19(1): 3–4. doi: 10.1177/1757177417746732

Rethinking the use of audit to drive improvement

Jennie Wilson

Audit of clinical practice is considered a cornerstone of assuring the quality of infection prevention and control (IPC) practice, and yet how often do practitioners stop to consider the purpose of the audit and its efficacy in achieving this goal? One of the key challenges is exemplified by a comment made to me recently by an Infection Prevention and Control Practitioner (IPCP): 'the audit scores completed by clinical staff were always 100%', although the IPC team were well aware that this misrepresented the true situation. This drive to achieve results that avoid negative attention rather than reflect reality not only damages efforts to improve the quality of care but also wastes valuable resources in generating fictional data.

This is not to say that audit is not a useful tool, just that sometimes the underpinning principles are forgotten. Clinical audit is a 'quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change' (National Institute for Clinical Excellence, 2002). Rather than designing audit to address and improve specific processes in a planned way, all too often it becomes a routine 'monitoring tool' that is not explicitly linked to driving improvement.

A key feature of designing an effective audit is the definition of explicit criteria that can be measured. While there may be many aspects of IPC practice that we consider important, unless they can be measured they are not suitable criteria for audit. In addition, measurement must be linked to a clear standard that defines the level of care to be achieved. For example, best practice suggests that when a patient has a urinary catheter, there must be a documented indication for it. This makes for a clear and measurable criterion, as the indication is either documented or not. A criterion that expects a management plan for the urinary catheter is more problematic, as there is a range of possible components to a plan and the presence of a plan does not mean that it is an appropriate one. If a management plan is to be audited, it is likely that several different criteria will have to be measured in order to determine the level of care achieved.
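To make the distinction concrete, the sketch below (in Python, with entirely hypothetical field names) contrasts a single, directly measurable criterion with a composite one that must first be decomposed into explicit yes/no checks before it can be audited.

```python
# Hypothetical catheter audit record; all field names are illustrative only.
record = {
    "indication_documented": True,    # a single, directly measurable criterion
    "plan_review_date_set": True,     # possible components of a 'management
    "plan_removal_criteria": False,   # plan', each audited as its own
    "plan_care_actions": True,        # explicit yes/no criterion
}

# 'Indication documented' can be audited as-is: it is either present or not.
indication_met = record["indication_documented"]

# 'A management plan exists' is not measurable by itself; the plan must be
# broken into separate criteria and each one checked against the standard.
plan_criteria = ["plan_review_date_set", "plan_removal_criteria", "plan_care_actions"]
plan_results = {c: record[c] for c in plan_criteria}

print(f"Indication documented: {indication_met}")
print(f"Management plan components: {plan_results}")
```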

The paper by Upadhyaya et al. (2018) in this issue of JIP is a good example of some of the issues surrounding audit. The authors describe the introduction of a 'sticker' that enabled clinicians to document the criteria of the High Impact Intervention (HII) care bundle for peripheral venous catheters (PVCs). These criteria would not otherwise be routinely documented, and audit would depend on observing practice. This also highlights another aspect of audit: the reliability of self-reporting. The sticker developed by the authors provides a useful prompt to the good practice outlined in the HII and does encourage documentation of criteria useful for audit. However, whether a clinician inserting a PVC would document non-compliance with any of the criteria listed on the sticker is less certain.

Developing criteria that are measurable and methods of measuring adherence that are reliable are essential components of effective audit. Just because a particular IPC practice is considered important does not make it amenable to reliable measurement. The purpose of the measurement is to generate data that can be used to provide feedback on performance. If feedback is to be effective, the clinicians concerned must have confidence that the data are a reliable representation of practice. Many healthcare workers will have a high level of cynicism about the reliability of routine infection control audit data such as hand hygiene compliance, and this immediately reduces its impact as a quality improvement tool.

A systematic review of the effectiveness of audit and feedback on professional practice concluded that although it was associated with important improvements in practice, the effect was small and most marked when baseline performance was low. Interestingly, audit and feedback was most likely to be effective when it included explicit targets and an action plan, and when feedback was communicated regularly by a more senior colleague (Jamtvedt et al., 2012). In the case of IPC audit, an additional factor is that reports are generally directed at wards or units rather than individuals, and the impact of such data on the behaviour of individuals may be further reduced.

Measurement is both challenging and resource intensive, and in the context of limited resources, the efficiency and effectiveness of specific audit programmes in achieving clearly defined improvement goals are of paramount importance. This is where the concept of care bundles developed by the Institute for Healthcare Improvement (IHI) is helpful. They define a care bundle as a 'small set of evidence-based interventions for a defined patient population and care setting that, when implemented together, will result in significantly better outcomes than when implemented individually' (Resar et al., 2012). The key features of a care bundle are that:

  • they include three to five interventions (elements), with strong evidence/clinician agreement;

  • the bundle elements are relatively independent;

  • they are used with a defined patient population in one location;

  • the bundle is developed by the multidisciplinary team;

  • compliance with bundles is measured using all-or-none measurement, with a goal of 95% or greater.

A central assumption is that all components are required and equally important. The bundle cannot include elements that are not specific, cannot be easily measured or are optional, otherwise it is not possible to generate meaningful compliance scores. Care bundles therefore need to be focused on a small number of elements for which there is robust evidence for a strong link between compliance and improved outcomes, rather than a long list of expected practice or the whole process of care. Since care bundles measure care received by the patient, the patient must be the denominator for each element of the bundle. General IPC procedures such as hand hygiene are not appropriate for care bundles as they are not specific to single processes.
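As a concrete illustration of all-or-none scoring, the short Python sketch below (with invented bundle elements and audit data, not taken from any of the papers cited here) shows why the bundle compliance rate is typically lower than any individual element's rate.

```python
# Illustrative all-or-none compliance scoring for a care bundle.
# Elements and audit records are hypothetical, for demonstration only.

BUNDLE_ELEMENTS = ["hand_hygiene", "skin_prep", "dressing_intact", "indication_documented"]

# One record per patient: element -> True (met) / False (not met)
audit_records = [
    {"hand_hygiene": True, "skin_prep": True, "dressing_intact": True, "indication_documented": True},
    {"hand_hygiene": True, "skin_prep": True, "dressing_intact": False, "indication_documented": True},
    {"hand_hygiene": True, "skin_prep": True, "dressing_intact": True, "indication_documented": False},
]

def bundle_compliance(records, elements):
    """All-or-none: a patient counts as compliant only if every element is met."""
    compliant = sum(all(r[e] for e in elements) for r in records)
    return compliant / len(records)

def element_compliance(records, element):
    """Per-element rate, for comparison with the all-or-none score."""
    return sum(r[element] for r in records) / len(records)

print(f"Bundle (all-or-none): {bundle_compliance(audit_records, BUNDLE_ELEMENTS):.0%}")
for e in BUNDLE_ELEMENTS:
    print(f"  {e}: {element_compliance(audit_records, e):.0%}")
# Here each element scores 67-100%, yet the all-or-none bundle score is 33%,
# which is why the IHI goal of 95% or greater applies to the bundle as a whole.
```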

The power of the simple, targeted approach to clinical audit provided by well-designed care bundles is clearly demonstrated by the work of Daniel et al. (2015) on reducing the incidence of ventilator-associated pneumonia (VAP) in two intensive care units in Scotland. A specific goal was defined: to reduce VAP incidence to zero, or to achieve at least 300 calendar days between events. A VAP prevention bundle, together with a VAP surveillance system, was implemented to drive improvement towards achieving this goal. Adherence to the bundle increased from 35% to 80% and the incidence of VAP fell from approximately 7 to 1 per 1000 ventilation days. This study clearly illustrates the power of audit used effectively: clearly communicated purpose and goals; adaptation of evidence to the local context; simple, defined standards of practice; and effective and reliable systems for measuring and feeding back outcomes with clear relevance to patients and clinicians.
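For readers less familiar with the denominators used here, a minimal sketch (with invented numbers, not the study's data) of how the two measures reported by Daniel et al. (2015) are calculated: incidence density per 1000 ventilation days, and calendar days between events.

```python
from datetime import date

# Incidence density: events divided by total ventilator days, scaled to 1000.
# The counts below are invented for illustration.
vap_events = 7
ventilator_days = 1000
incidence = vap_events / ventilator_days * 1000
print(f"{incidence:.1f} VAP per 1000 ventilation days")  # 7.0

# The alternative goal, 300+ calendar days between events, is simply the
# elapsed time since the last VAP. Dates here are hypothetical.
last_vap = date(2014, 1, 15)
audit_date = date(2015, 1, 15)
days_between = (audit_date - last_vap).days  # 365
print(f"Goal met: {days_between >= 300}")    # True
```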

The fundamental mechanism by which the implementation of care bundles improves performance is not the measurement itself but the impact that it has on changing how work is done through collaboration and teamwork (Resar et al., 2012). Perhaps it is time to evaluate IPC audit programmes to ensure that they are more clearly located within the Model for Improvement (Nolan and Berwick, 2006) and constructed around the three key questions: What are we trying to accomplish? How will we know a change is an improvement? And what changes can we make that will result in improvement?

References

  1. Daniel M, Booth M, Ellis K, Maher S, Longmate A. (2015) Details behind the dots: how different intensive care units used common and contrasting methods to prevent ventilator associated pneumonia. BMJ Quality Improvement Reports 4: u207660.w3069.
  2. Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, O’Brien MA, Johansen M, Grimshaw J, Oxman AD. (2012) Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database of Systematic Reviews 6: CD000259.
  3. National Institute for Clinical Excellence. (2002) Principles for best practice in clinical audit. London: NICE. Available at: https://www.nice.org.uk/media/default/About/what-we-do/Into-practice/principles-for-best-practice-in-clinical-audit.pdf
  4. Nolan T, Berwick DM. (2006) All-or-none measurement raises the bar on performance. Journal of the American Medical Association 295: 1168–1170.
  5. Resar R, Griffin FA, Haraden C, Nolan TW. (2012) Using Care Bundles to Improve Health Care Quality. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement. Available at: www.IHI.org
  6. Upadhyaya K, Hendra H, Wilson N. (2018) A high impact intervention for a high impact intervention: improving documentation of peripheral venous access insertion in theatre. Journal of Infection Prevention.
