Abstract
Audit and feedback are widely used in quality improvement. Robbie Foy and colleagues argue that their full potential to improve patient care could be realised through a more evidence based and imaginative approach
Key messages.
Clinical audit and feedback entail reviewing clinical performance against explicit standards and delivering feedback to enable data driven improvement
The impact of audit could be maximised by applying implementation science, considering the needs of clinicians and patients, and emphasising action over measurement
Embedding research on how to improve audit and feedback in large scale programmes can further enhance their effectiveness and efficiency
Healthcare systems face challenges in tackling variations in patient care and outcomes.1 2 Audit and feedback aim to improve patient care by reviewing clinical performance against explicit standards and directing action towards areas not meeting those standards.3 They are a widely used, foundational component of quality improvement, included in around 60 national clinical audit programmes in the United Kingdom.
Ironically, there is currently a gap between what audit and feedback can achieve and what they actually deliver, whether led locally or nationally. Several national audits have been successful in driving improvement and reducing variations in care, such as for stroke and lung cancer, but progress is also slower than hoped for in other aspects of care (table 1).4 5 Audit and feedback have a chequered past.6 Clinicians might feel threatened rather than supported by top-down feedback and rightly question whether rewards outweigh efforts invested in poorly designed audit. Healthcare organisations have limited resources to support and act on audit and feedback. Dysfunctional clinical and managerial relationships undermine effective responses to feedback, particularly when it is not clearly part of an integrated approach to quality assurance and improvement. Unsurprisingly, the full potential of audit and feedback has not been realised.
Table 1.
Examples of national clinical audit programmes and randomised trials evaluating audit and feedback
| Objective | Methods | Illustrative findings |
|---|---|---|
| National clinical audit programmes | | |
| To measure and improve the structure, processes, and outcomes of stroke care | The National Clinical Audit for Stroke operates a prospective, continuous audit of the processes and outcomes of NHS funded stroke care and rehabilitation in acute and post-acute settings in England and Wales. It also reviews care at six months and beyond to assess how longer term needs are met | Stroke unit performance in key aspects of care improved over five years; eg, the proportion of patients assessed by a stroke specialist consultant physician within 24 hours rose from 74% to 83%, and the proportion of applicable patients screened for nutrition and seen by a dietitian by discharge rose from 66% to 81%.4 However, significant gaps in provision remain; eg, fewer than one in three patients receive a six month review |
| To measure and improve care and outcomes for lung cancer | For the National Lung Cancer Audit, secondary and tertiary care NHS hospitals in England and Wales submit data via the National Cancer Registration and Analysis Service as part of the Cancer Outcomes and Services Dataset. The data are linked to Hospital Episode Statistics, the National Radiotherapy Dataset, the Systemic Anti-Cancer Dataset, pathology reports, and death certificate data | The proportion of patients alive at least one year after diagnosis rose from 31% in 2010 to 37% in 2017.5 However, almost a third of patients still lack access to the benefits of specialist nursing support |
| Randomised trials of audit and feedback | | |
| To assess the effect of adding an action implementation toolbox to electronic audit and feedback targeting quality of pain management in intensive care units16 | 21 Dutch intensive care units were randomly assigned to receive usual electronic feedback only or feedback with an implementation toolbox suggesting practical actions staff could take to improve pain management | Over six months, the proportion of patient shifts with adequate pain management increased by 14.8% in the toolbox group compared with 4.8% in the feedback only group |
| To assess the effects of feedback including “social norm” persuasive messaging and patient focused information on antibiotic prescribing in higher prescribing general practices7 26 | 1581 English general practices whose prescribing rate for antibiotics was in the top 20% for their locality were randomly assigned to receive feedback including a letter from England’s chief medical officer highlighting the higher rate of antibiotic prescribing or to receive no communication. They were then randomly assigned to receive patient focused information promoting reduced use of antibiotics or no communication | Over six months, the rate of antibiotic items dispensed per 1000 population was 127 in the feedback intervention group and 131 in the control group, representing an estimated 73 406 fewer antibiotic items dispensed. The patient focused intervention did not significantly affect prescribing |
Clinical, patient, and academic communities might need to have more sophisticated conversations about audit and feedback to achieve substantial, data driven, continuous improvement. They can also act now. There are ways to maximise returns from the considerable resources, including clinician time, invested in audit programmes. These include applying what is already known, paying attention to the whole audit cycle, getting the right message to the right recipients, making more out of less data, embedding research to improve impact, and harnessing public and patient involvement.
Apply what is already known
Audit and feedback generally work. A Cochrane review of 140 randomised trials found that they produced a median 4.3% absolute improvement (interquartile range 0.5% to 16%) in healthcare professionals’ compliance with desired practice, such as recommended investigations or prescribing.3 This is a modest effect, but cumulative incremental gains through repeated audit cycles can deliver transformative change. Audit and feedback can also achieve population level reach through scaled up national programmes, which other quality improvement approaches (such as financial incentives or educational outreach visits) might not match with similar resources; for example, social norm feedback (presenting information to show that individuals are outliers in their behaviour) from a high profile messenger can reduce antibiotic prescribing in primary care at low cost and at national scale (table 1).7
The interquartile range in the Cochrane review indicates that a quarter of audit and feedback interventions had a relatively large, positive effect of up to 16% on patient care, whereas a quarter had a negative or null effect. The effects of feedback can be amplified by ensuring that it is given by a supervisor or colleague, provided more than once, delivered in both verbal and written formats, and includes both explicit targets for change and action plans.3 A synthesis of expert interviews and systematic reviews identified 15 “state of the science,” theory informed suggestions for effective feedback (box 1).8 These are practical ways to maximise the impact and value of existing audit programmes.
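To make the arithmetic of cumulative incremental gains concrete, the following back-of-envelope sketch projects compliance if each audit cycle delivered the Cochrane median 4.3 percentage point absolute improvement. The 70% baseline is hypothetical, and the review does not claim that gains repeat identically cycle after cycle; this is an illustration of compounding, not an analysis from the review.

```python
def project_compliance(baseline: float, per_cycle_gain: float, cycles: int) -> list[float]:
    """Project compliance with a care standard over repeated audit cycles, capped at 100%."""
    levels = [baseline]
    for _ in range(cycles):
        levels.append(round(min(100.0, levels[-1] + per_cycle_gain), 1))
    return levels

# The Cochrane median absolute improvement (4.3 points) applied to a
# hypothetical 70% baseline over five audit cycles.
print(project_compliance(70.0, 4.3, 5))
# [70.0, 74.3, 78.6, 82.9, 87.2, 91.5]
```

Five cycles of a "modest" median effect move a hypothetical service from 70% to over 90% compliance, which is the sense in which repeated cycles can be transformative.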
Box 1. Questions for audit programmes and healthcare organisations to consider in designing, implementing, and responding to audit and feedback8
Nature of the desired action
Can you recommend actions that are consistent with established goals and priorities?
Can you recommend actions that can improve and are under the recipient’s control?
Can you recommend specific actions?
Nature of the data available for feedback
Can you provide multiple instances of feedback?
Can you provide feedback as soon as possible, at a frequency informed by the number of new patient cases?
Can you provide individual rather than general data?
Can you choose comparators that reinforce desired behaviour change?
Feedback display
Can you closely link the visual display and summary message?
Can you provide feedback in more than one way?
Have you minimised extraneous cognitive load for feedback recipients?
Delivering feedback
Have you addressed barriers to feedback use?
Can you provide short, actionable messages followed by optional detail?
Have you addressed credibility of the information?
Can you prevent defensive reactions to feedback?
Can you construct feedback through social interaction?
Pay attention to the whole cycle
The audit and feedback process comprises one or more cycles of establishing best practice criteria, measuring current practice, feeding back findings, implementing changes, and further monitoring. This chain is only as strong as its weakest link. Feedback effects can be weakened by information-intention gaps (feedback fails to convince recipients that change is necessary), intention-behaviour gaps (intentions are not translated into action), or behaviour-impact gaps (actions do not yield the desired effect on patient care).9 The success of national audit programmes depends on local arrangements that promote action as well as measurement.10
A synthesis of 65 qualitative evaluations proposed ways of designing audit programmes to better align with local capacity, identity, and culture and to promote greater changes in clinical behaviour.11 Healthcare organisations have finite capacity, so audit programmes should be designed so that they require less work, make best use of limited local resources, and clearly state why any investment is justified. Clinician beliefs about what constitutes best practice can influence how they respond to feedback, so audit programmes need to consider these while also challenging the status quo. All aspects of audit programmes should be designed with a focus on the desired changes in behaviour by recipients to achieve better outcomes; for example, feedback tackling unnecessary blood transfusions could include suggested alternative approaches to minimise blood loss during surgery.12 Because the purpose of an audit programme is not measurement alone but using data to inform quality improvement, we need to understand existing barriers to desired change and have a plan for how feedback helps to tackle those barriers.
Without functioning local networks and systems, national audit programmes can become echo chambers, where good intentions and blame for limited progress reverberate. Audit and feedback will flounder if local quality improvement is based on repeated, unconnected, and inappropriately delegated projects conducted in isolation from mainstream pursuits and if any learning is dissipated in collective amnesia. Clinical and managerial leaders should ask questions about their organisational performance in response to feedback (box 2)13 and set clear goals, mobilise resources, and promote continuous improvement.14 Audit and feedback by themselves cannot solve ingrained deficiencies but can emphasise priorities for change, inform focused actions, and evaluate progress.
Box 2. Questions that healthcare organisations can ask themselves about performance13
Do we know how good we are?
Do we know where we stand relative to the best?
Do we know where and understand why variation exists in our organisation?
Over time, where are the gaps in our practice that indicate a need for change?
In our efforts to improve, what’s working?
Get the right message to the right recipients
Feedback comparing performance among different healthcare organisations and clinicians can leverage competitive instincts. This might not always work as intended. Nobody likes being told they are getting it wrong, repeatedly. Yet this is how clinicians and organisations often experience feedback suggesting suboptimal performance. Low baseline performance is associated with greater improvement after feedback3 but can elicit defensive reactions (“I don’t believe these data”), especially if feedback does not align with recipient perceptions (“My patients are different”). Such responses are not uncommon given that clinicians tend to overestimate their own performance.15 Continued negative feedback perceived as punitive can also be demotivating and risk creating burnout (“What else can I do?”).
Giving feedback to professionals who take pride in their work requires careful thought. Consider, for example, providing feedback to high performers—will positive feedback lead to reduced effort or increase motivation? Should audit programmes switch attention to new topics where performance is poorer, at risk of inducing fatigue in higher performers? Given the law of diminishing returns, attempts to improve already high levels of performance might be less fruitful than switching attention to other priorities. Many clinical actions have a “ceiling” beyond which improvement is restricted because healthcare organisations or clinicians are functioning at or near their maximum capabilities.
A range of approaches can help tailor feedback to recipients’ needs. First, feedback can include comparators that show like for like (such as similar types of organisations with similar case mixes) and set realistic goals for change relative to performance levels (such as lower but more achievable targets for poorer performers). Second, feedback can be delivered alongside a range of tangible action plans to support improvement; for example, an implementation toolbox improved pain management in intensive care units.16 17 Third, new audit criteria need to be convincing, based on robust evidence and with scope for patient and population benefit.
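A minimal sketch of the first two ideas follows. All names and numbers are illustrative, and the rule used here (an interim target at the peer median for lower performers, the peer top quartile otherwise) is an assumption for demonstration, not an established methodology.

```python
from statistics import median

def feedback_message(org: str, score: float, peer_scores: list[float]) -> str:
    """Compare an organisation with its peer group and suggest a stepped target.

    Lower performers get an achievable interim target (the peer median)
    rather than being benchmarked straight against the top quartile.
    """
    peer_median = median(peer_scores)
    top_quartile = sorted(peer_scores)[int(len(peer_scores) * 0.75)]
    target = peer_median if score < peer_median else top_quartile
    return (f"{org}: {score:.0f}% vs peer median {peer_median:.0f}%; "
            f"suggested target {target:.0f}%")

# Hypothetical organisation and peer group scores.
print(feedback_message("Trust A", 58, [55, 62, 70, 74, 81, 85, 90]))
```

The point of the stepped target is the one made above: goals for change should be realistic relative to current performance, so the comparator and the target move together.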
Make more out of less data
Healthcare organisations and clinicians need to juggle competing priorities and therefore struggle to act on all feedback from national and local audit programmes. A 2012 snapshot identified 107 National Institute for Health and Care Excellence clinical guidelines relevant to primary care, resulting in 2365 recommendations.18 Audit programmes can help to identify which recommendations have the greatest potential to benefit patients and populations.
One of the highest costs associated with audit programmes is the time and effort involved in data collection, particularly the manual review of patient records. The burden of this data collection can be compounded by temptations to add in more variables for analyses that marginally improve precision.19 The resulting feedback might reinforce the credibility of data and enable recipients to explore associations in the data. Providing larger amounts of complex data, however, risks cognitive overload and distracting recipients from key messages. The diminishing returns of continuing efforts to perfect data come at the expense of focusing energy on improvement.19
The increasing availability of electronic patient record systems and routinely collected data on quality of care offer opportunities for large scale, efficient feedback programmes. Such approaches offer greater population coverage, which can overcome risks of biased sampling associated with manual review, such as the loss of records of patients with poorer outcomes. Routine data can also be collected and analysed in real time, thereby enabling faster, continuous feedback and countering objections voiced by clinicians (“These data are out of date”).
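As a minimal sketch of how routinely collected data can drive continuous feedback, the fragment below computes one process indicator (specialist review within 24 hours of admission, echoing the stroke audit example) per organisation. The field names and records are synthetic assumptions, not any real audit dataset; in a live system the same calculation would simply rerun as new records arrive.

```python
from datetime import datetime, timedelta

# Synthetic records standing in for routinely collected data.
records = [
    {"org": "Unit 1", "admitted": datetime(2020, 1, 1, 9, 0),
     "specialist_review": datetime(2020, 1, 1, 20, 0)},
    {"org": "Unit 1", "admitted": datetime(2020, 1, 2, 9, 0),
     "specialist_review": datetime(2020, 1, 3, 12, 0)},
    {"org": "Unit 2", "admitted": datetime(2020, 1, 1, 9, 0),
     "specialist_review": datetime(2020, 1, 1, 15, 0)},
]

def reviewed_within_24h(rows: list[dict]) -> dict[str, float]:
    """Proportion of admissions reviewed by a specialist within 24 hours, by organisation."""
    hits: dict[str, list[bool]] = {}
    for r in rows:
        within = r["specialist_review"] - r["admitted"] <= timedelta(hours=24)
        hits.setdefault(r["org"], []).append(within)
    return {org: sum(v) / len(v) for org, v in hits.items()}

print(reviewed_within_24h(records))  # {'Unit 1': 0.5, 'Unit 2': 1.0}
```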
Data quality is only as good as coding at the point of care. Validity checks and quality control of the data might compound the burden on clinical teams. Data linkage and extraction across different information systems must also comply with data protection and information governance requirements. Even with all this in place, we must acknowledge Einstein’s advice that not everything that counts can be counted, and not everything that can be counted counts.
Embed research to improve impact
Poor research design, conduct, and dissemination contribute to “research waste.”20 Implementation science aims to translate research evidence into routine practice and policy but is also affected by research waste. A cumulative meta-analysis of the Cochrane review of audit and feedback indicated that the effect size stabilised in 2003 after 30 trials.21 By 2011, 47 more trials of audit and feedback versus control were published that did not substantially advance knowledge, many omitting feedback features likely to enhance effectiveness. This indicated a growing literature but “stagnant science.”
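The logic of a cumulative meta-analysis can be shown with a toy calculation using synthetic effect sizes and variances (not the Cochrane data): trials are pooled one at a time with inverse variance weights, and the running estimate settles well before the final trials are added, which is the sense in which later, similar trials add little knowledge.

```python
def cumulative_pool(effects: list[float], variances: list[float]) -> list[float]:
    """Fixed effect, inverse variance pooled estimate after each successive trial."""
    pooled, w_sum, wx_sum = [], 0.0, 0.0
    for effect, var in zip(effects, variances):
        w = 1.0 / var            # inverse variance weight
        w_sum += w
        wx_sum += w * effect
        pooled.append(round(wx_sum / w_sum, 2))
    return pooled

# Synthetic trial results: % absolute improvement and assumed variances.
effects = [8.0, 2.0, 5.0, 4.0, 4.5, 4.2, 4.4]
variances = [4.0, 4.0, 2.0, 1.0, 1.0, 1.0, 1.0]
print(cumulative_pool(effects, variances))
```

After the early trials the running estimate barely moves; each further trial merely narrows the confidence interval around an answer that is already known.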
Implementation laboratories offer a means of enhancing the impact of audit and feedback while also producing generalisable knowledge about how to optimise effectiveness.22 A “radical incrementalist” approach entails making serial, small changes, supported by tightly focused evaluations to cumulatively improve outcomes.23 It is already used in public policy and in business. Amazon and eBay randomly assign potential customers to see different presentations of their products online to understand what drives purchases. It is also applicable to healthcare24 and can help answer many questions about how best to organise and deliver feedback (such as, does feedback on performance indicating an organisation’s position against top performing peers stimulate more improvement than showing its position against average performance? What is the effect of shorter versus longer feedback reports? Does adding additional persuasive messages have any effect?). Embedding sequential head-to-head trials testing different feedback methods in an audit programme provides a robust empirical driver for change. Modifications identified as more effective than the current standard become the new standard; those that are not are discarded.
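The core loop of such an embedded head-to-head trial can be sketched as follows. Everything here is synthetic and assumed for illustration (the unit names, the fixed random seed, and the toy outcome model in which the modified feedback adds two percentage points); a real implementation laboratory would measure improvement from audit data rather than simulate it.

```python
import random

def run_head_to_head(units: list[str], outcome) -> str:
    """Randomise units between two feedback variants and return the better performer."""
    rng = random.Random(42)                 # fixed seed so the allocation is reproducible
    shuffled = units[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    arms = {"standard": shuffled[:half], "modified": shuffled[half:]}
    mean = {arm: sum(outcome(u, arm) for u in members) / len(members)
            for arm, members in arms.items()}
    # The better performing variant becomes the new standard for the next cycle.
    return max(mean, key=mean.get)

# Toy outcome model: every practice improves by roughly 4 points under
# standard feedback, and the modified feedback adds a further 2 points.
def toy_outcome(unit: str, arm: str) -> float:
    base = 4.0 + (len(unit) % 3) * 0.1      # small deterministic unit-to-unit variation
    return base + (2.0 if arm == "modified" else 0.0)

units = [f"practice_{i}" for i in range(40)]
print(run_head_to_head(units, toy_outcome))  # modified
```

Run repeatedly with successive small modifications, this is the radical incrementalist loop: each winning variant is retained as the comparator for the next test.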
Harness public and patient involvement
Healthcare providers and researchers are still learning how to work meaningfully with patients and the public, and there are opportunities in audit programmes. This means moving beyond current models of involvement—typically advisory group roles to ensure accountability and contribute to strategy—towards active participation in feedback and service improvement.
Patients and the public are often surprised by the extent of unwarranted variations in healthcare delivery, which is the core business of audit programmes.25 They express frustration at the difficulties in routinely measuring less technical aspects of care, such as consultation skills and patient centredness. Involving patients and the public, including seldom heard communities, early in the process of developing indicators is important. Audit programmes can be at the forefront of innovating and evaluating different approaches to involvement, asking questions such as, does incorporating the patient voice in feedback lead to greater improvement? Can feedback reports be better designed to improve understanding for both lay and professional board members of healthcare organisations? Patients and the public represent an underexplored and untapped force for change, which audit programmes can learn to harness.
Conclusion
Audit and feedback are widely used, sometimes abused, and often under-realised in healthcare. More imaginative design and responses are overdue; these require evidence informed conversations between clinicians, patients, and academic communities. It is time to fully leverage national audits to accelerate data guided improvement and reduce unwarranted variations in healthcare. The status quo is no longer ethical.
Contributors and sources: RF, SA, and NMI are general practitioners and implementation researchers with international experience of designing and evaluating large scale audit and feedback programmes. MS, JS, JI, and DK work for the Healthcare Quality Improvement Partnership, a charity led by the Academy of Medical Royal Colleges, the Royal College of Nursing, and National Voices. MS has experience in local, regional, and national delivery of quality improvement programmes, including commissioning of national clinical audits. BM is a service user with involvement, activation, and empowerment expertise in quality improvement and health equality programmes. JS has operational expertise and leadership in the design of national clinical audit programmes in the UK and abroad and is a non-executive director at the Mid Essex Hospital Trust. JI has leadership and strategic management expertise of healthcare providers and charities in quality improvement, including clinical audit. DK has expertise in leading clinical, regulatory, executive participation in national clinical audit and patient outcome programmes driving local quality improvement in acute hospitals in the UK. RF and MS drafted the initial manuscript. All authors contributed to and commented on subsequent drafts and approved the final manuscript. RF is the guarantor.
NMI is supported by the Department of Family and Community Medicine at the University of Toronto and by a Canada Research Chair in Implementation of Evidence Based Practice.
Patient involvement: BM coauthored the manuscript and emphasised the need to focus on tackling unwarranted variations in healthcare delivery and involve a diverse range of patients and members of the public in improving national audit programmes.
This article is one of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ, including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ’s quality improvement editor post are funded by the Health Foundation.
Competing interests: We have read and understood BMJ policy on declaration of interests and have the following interests to declare: JI, JS, DK, and MS declare that they commission the NCAPOP on behalf of NHS England and Welsh government. The other authors declare no competing interests.
References
1. Majeed A, Allwood D, Foley K, Bindman A. Healthcare outcomes and quality in the NHS: how do we compare and how might the NHS improve? BMJ 2018;362:k3036. doi:10.1136/bmj.k3036
2. Levine DM, Linder JA, Landon BE. The quality of outpatient care delivered to adults in the United States, 2002 to 2013. JAMA Intern Med 2016;176:1778-90. doi:10.1001/jamainternmed.2016.6217
3. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2012;6:CD000259. doi:10.1002/14651858.CD000259.pub3
4. Sentinel Stroke National Audit Programme. Annual public report April 2013-March 2018. 2019. https://www.hqip.org.uk/resource/sentinel-stroke-national-audit-programme-annual-report-2019
5. Royal College of Physicians. National Lung Cancer Audit annual report 2018. 2019. https://www.rcplondon.ac.uk/projects/outputs/national-lung-cancer-audit-nlca-annual-report-2018
6. Johnston G, Crombie IK, Davies HTO, Alder EM, Millard A. Reviewing audit: barriers and facilitating factors for effective clinical audit. Qual Health Care 2000;9:23-36. doi:10.1136/qhc.9.1.23
7. Hallsworth M, Chadborn T, Sallis A, et al. Provision of social norm feedback to high prescribers of antibiotics in general practice: a pragmatic national randomised controlled trial. Lancet 2016;387:1743-52. doi:10.1016/S0140-6736(16)00215-4
8. Brehaut JC, Colquhoun HL, Eva KW, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med 2016;164:435-41. doi:10.7326/M15-2248
9. Gude WT, van Engen-Verheul MM, van der Veer SN, de Keizer NF, Peek N. How does audit and feedback influence intentions of health professionals to improve practice? A laboratory experiment and field study in cardiac rehabilitation. BMJ Qual Saf 2017;26:279-87. doi:10.1136/bmjqs-2015-004795
10. Wagner DJ, Durbin J, Barnsley J, Ivers NM. Measurement without management: qualitative evaluation of a voluntary audit & feedback intervention for primary care teams. BMC Health Serv Res 2019;19:419. doi:10.1186/s12913-019-4226-7
11. Brown B, Gude WT, Blakeman T, et al. Clinical Performance Feedback Intervention Theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci 2019;14:40. doi:10.1186/s13012-019-0883-5
12. National Blood Transfusion Committee. Patient blood management: an evidence-based approach to patient care. 2014. https://www.transfusionguidelines.org/uk-transfusion-committees/national-blood-transfusion-committee/patient-blood-management
13. Lloyd R. Quality health care: a guide to developing and using indicators. Jones and Bartlett Publishers, 2017.
14. Dixon-Woods M, Martin GP. Does quality improvement improve quality? Future Hosp J 2016;3:191-4. doi:10.7861/futurehosp.3-3-191
15. Eccles M, Ford GA, Duggan S, Steen N. Are postal questionnaire surveys of reported activity valid? An exploration using general practitioner management of hypertension in older people. Br J Gen Pract 1999;49:35-8.
16. Roos-Blom MJ, Gude WT, de Jonge E, et al. Impact of audit and feedback with action implementation toolbox on improving ICU pain management: cluster-randomised controlled trial. BMJ Qual Saf 2019;28:1007-15. doi:10.1136/bmjqs-2019-009588
17. Gude WT, Roos-Blom MJ, van der Veer SN, et al. Facilitating action planning within audit and feedback interventions: a mixed-methods process evaluation of an action implementation toolbox in intensive care. Implement Sci 2019;14:90. doi:10.1186/s13012-019-0937-8
18. Rushforth B, Stokes T, Andrews E, et al. Developing ‘high impact’ guideline-based quality indicators for UK primary care: a multi-stage consensus process. BMC Fam Pract 2015;16:156. doi:10.1186/s12875-015-0350-6
19. Dixon-Woods M. How to improve healthcare improvement: an essay by Mary Dixon-Woods. BMJ 2019;367:l5514. doi:10.1136/bmj.l5514
20. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009;374:86-9. doi:10.1016/S0140-6736(09)60329-9
21. Ivers NM, Grimshaw JM, Jamtvedt G, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med 2014;29:1534-41. doi:10.1007/s11606-014-2913-y
22. Grimshaw JM, Ivers N, Linklater S, et al; Audit and Feedback MetaLab. Reinvigorating stagnant science: implementation laboratories and a meta-laboratory to efficiently advance the science of audit and feedback. BMJ Qual Saf 2019;28:416-23. doi:10.1136/bmjqs-2018-008355
23. Halpern D, Mason D. Radical incrementalism. Evaluation 2015;21:143-9. doi:10.1177/1356389015578895
24. Horwitz LI, Kuznetsova M, Jones SA. Creating a learning health system through rapid-cycle, randomized testing. N Engl J Med 2019;381:1175-9. doi:10.1056/NEJMsb1900856
25. Ivers NM, Maybee A; Ontario Healthcare Implementation Laboratory team. Engaging patients to select measures for a primary care audit and feedback initiative. CMAJ 2018;190(Suppl):S42-3. doi:10.1503/cmaj.180334
26. Elouafkaoui P, Young L, Newlands R, et al; Translation Research in a Dental Setting (TRiaDS) Research Methodology Group. An audit and feedback intervention for reducing antibiotic prescribing in general dental practice: the RAPiD cluster randomised controlled trial. PLoS Med 2016;13:e1002115. doi:10.1371/journal.pmed.1002115