Journal of Infection Prevention. 2015 Jun 24;16(6):248–254. doi: 10.1177/1757177415592010

A case study of healthcare professional views on the meaning of data produced by hand hygiene auditing

Carolyn H Dawson 1,2,
PMCID: PMC5074167  PMID: 28989439

Abstract

Background:

Measurement of hand hygiene (HH), crucial for patient safety, has acknowledged flaws stemming from methods available. Even direct observation, the World Health Organization gold standard, may lead to behaviour changes which can affect outcome validity. However, it remains important to understand current levels of HH to allow targeted interventions to be developed. This has resulted in wider adoption of auditing processes.

Aim:

This study addressed how healthcare professionals perceive data generated by HH auditing processes.

Methods:

Qualitative study involving participatory observation and semi-structured interviews with 30 healthcare professionals recruited from a large National Health Service (NHS) two-hospital site in England.

Findings:

Healthcare professionals perceived two main problems with HH measurement, both associated with feedback: (1) lack of clarity with regard to feedback; and (2) lack of association between training and measurement. In addition, concerns about data accuracy led the majority of participants (22/30) to conclude audit feedback is often ‘meaningless’.

Conclusion:

Healthcare professionals require meaningful data on compliance with HH to engender change, as part of a multimodal strategy. Currently healthcare professionals perceive that data lack meaning, and are not seen as drivers to improve HH performance. Potential opportunities to change practice and improve HH are being missed.

Keywords: audit, hand hygiene (HH), healthcare professional involvement, infection control, patient safety, qualitative research, WHO 5 Moments

Introduction

Hand hygiene (HH) is considered a fundamental infection prevention strategy. It can prevent cross-transmission, both endogenous and exogenous, which could cause healthcare-associated infections (HCAI) (Sax et al., 2007; Creamer et al., 2010). While ensuring accurate and efficient measurement of HH is challenging (Gould et al., 2007; Haas and Larson, 2007), auditing tools offer a way to standardise measurement, allowing output data to be compared against targets to monitor progress (Benjamin, 2008). Kilpatrick (2008) comments on the commonality of auditing for monitoring HH, with multiple tools developed explicitly for this purpose (e.g. ICNA, 2004; Kilpatrick, 2008; Sax et al., 2009). Technological advancements have seen a rise in electronic monitoring systems, which may offer efficiency benefits over manual auditing methods; however, system limitations require further exploration (WHO, 2013; Dawson and Mackrill, 2014).

The ICNA Hand Hygiene Audit Tool is part of the ICNA (2004) Audit Tools for Monitoring Infection Control Standards 2004 document. This provides 40 criteria (32 alphabetised questions) enabling assessment of environmental factors (e.g. availability of HH equipment, including paper towels, soap and alcohol-based hand rub), observational factors (e.g. whether HH is performed at key moments of patient care) and knowledge factors (e.g. whether healthcare professionals are aware of when to perform HH). Data are then translated into compliance scores: Compliant (≥85%), Partial Compliance (76–84%) and Minimal Compliance (≤75%).
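As a minimal illustrative sketch only (the function name and edge handling are assumptions of this illustration, not part of the ICNA document, which implies whole-number percentages), the compliance bands above can be expressed as:

```python
def icna_compliance_band(score_percent: float) -> str:
    """Map a HH audit score (%) to an ICNA (2004) compliance band.

    Bands as stated in the tool: Compliant (>=85%),
    Partial Compliance (76-84%), Minimal Compliance (<=75%).
    """
    if score_percent >= 85:
        return "Compliant"
    elif score_percent >= 76:
        return "Partial Compliance"
    else:
        return "Minimal Compliance"

print(icna_compliance_band(90))  # Compliant
print(icna_compliance_band(80))  # Partial Compliance
print(icna_compliance_band(70))  # Minimal Compliance
```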

The aim of the study was to determine how healthcare professionals at a large NHS two-hospital site in England perceive data generated by an auditing process based on the ICNA (2004) tool.

This research was motivated by two issues, highlighted at scoping meetings held prior to data collection between senior Infection Prevention and Control Team (IPCT) members and the researcher:

  1. Proposals to consider technology for HH auditing, based on: (1) the potential to increase audit efficiency; and (2) a preconception among senior staff that frontline staff would prefer technology over current manual methods.

  2. Study site concerns regarding the validity and reliability of data currently being produced compared to commonly observed behaviour.

As individuals experiencing a process are acknowledged as valuable information sources (Hovenga et al., 2005), understanding the process as perceived by those involved was identified as a critical first step, prior to identifying what, if any, improvements were needed.

Method

Study design

The study was conducted over a 10-month period, employing a qualitative approach, using semi-structured interviews and participatory observation. As with Randle and Bellamy’s (2011) semi-structured interview-based research, the rationale for focusing on perceptions of healthcare professionals stemmed from the explicit wish to understand the experiences of those involved in a particular phenomenon – here HH auditing. Approval was granted by the NHS research ethics committee board and the site’s Research and Development department. Informed consent was obtained from all participants.

Thirty participants were recruited from wards recommended by the IPCT Matron and Chief Nursing Officer of the study site, informed by literature regarding the influence of ward type on HH (Pittet et al., 1999; Hugonnet et al., 2002) (Table 1). While the study site used the same auditing process regardless of ward, the amount of feedback required depended on the compliance level measured. This work therefore attempted to recruit staff from wards likely to have both higher and lower levels of compliance, and correspondingly lesser or greater feedback requirements.

Table 1.

Participant details, including the Audit Process Involvement (API) group, interview format and gender split.

Participant API group            Participants (n)        Interviews (n)
Generators of Data (GoD)         7 (5 women, 2 men)      7
Recipients of Feedback (RoF)     7 (6 women, 1 man)      6 (including 1 group interview)
Subjects of Observation (SoO)    14 (13 women, 1 man)    4 (including 3 group interviews)
Additional sources (AS)          2 (1 woman, 1 man)      2
Totals                           30 (25 women, 5 men)    19 (15 individual, 4 group)

Participant selection was purposeful, to ensure representation of the following areas: Cardio-Thoracics, Critical Care, Elderly Care, General Medical, Infection Prevention and Control, Maternity, Renal Medicine, and Trauma and Orthopaedics.

Healthcare professional categories

The scoping meetings identified that healthcare professionals held three different roles within the Audit Process Involvement (API):

  1. Generators of Data (GoD): those responsible for measuring HH, and managing the audit process.

  2. Recipients of Feedback (RoF): those for whom data were prepared.

  3. Subjects of Observation (SoO): any individual whose behaviour may be assessed during the audit process (e.g. Doctor, Nurse).

Any healthcare professional in a role explicitly identified by the GoD as being the destination for audit results was classified as a RoF, even if their HH was also observed during audit processes.

The IPCT were identified as having individual responsibilities; therefore, all were invited to participate. The RoF included Modern Matrons (Lawrence and Richardson, 2012) and Ward Managers from a diverse range of wards. Recruitment of SoO occurred via Modern Matrons and Ward Managers, and through personal recommendation from the IPCT. Sampling for RoF and SoO participants ceased when theoretical saturation was obtained, with no new themes being raised in two subsequent interviews (Cavazos et al., 2008). Two additional participants (one responsible for data management, one from the Trust Board) were also recruited for interview (Additional Sources; AS).

Data collection

Data were collected between April and October 2012. Involvement of participants from the three API groups allowed the current method of HH measurement (ICNA, 2004) to be explored from the perspectives of those involved.

An interview schedule was developed based on study aims, with questions allowing participants to share views on the current audit process. Each interview included discussion of a Current State Map, a visual tool shown to participants outlining the current audit process as described to the researcher in scoping meetings.

Participants were invited to comment on their interpretation of this process. The visual tool enabled discussion of how data generation occurred, with focus placed on areas of strength, weakness and uncertainty. As data feedback had not been covered in the scoping meetings, it was deliberately left unclear on the Current State Map to encourage discussion.

Interviews took place within participant workplaces, in individual or group format (Table 1). Participants from the same clinical area and API group chose whether to be interviewed individually or as a group. Interviews were recorded and transcribed verbatim unless unfeasible, in which case detailed notes were taken.

To observe the audit process in operation, the researcher undertook participatory observation sessions with three members of the IPCT as they carried out audits across various settings. This provided the researcher with experience of the practical challenges faced in measuring HH, and an understanding of how data were generated, analysed and fed back.

Data analysis

Thematic analysis of interview transcripts followed the approach outlined by Boyatzis (1998) whereby initial deductive themes are ‘looked for’ in the data, while inductive themes are noted as they emerge. A coding schedule was developed by the researcher. Each interview question became an initial deductive theme, with inductive themes noted and added. Portions of interview text were assigned categories relating either to initial deductive themes or new, inductive themes. Abbreviated codes representing these categories were used during this annotation phase.

Data rigour

To ensure representativeness of data, verification (Patton, 2002) and member checking (Baxter and Jack, 2008) were conducted with a sample of participants (2 GoD, 2 RoF, 1 AS).

Results

Data were captured from a total of 30 participants in 19 interviews (15 individual and four group; Table 1). A total of 351 min of participant observation was conducted over 10 sessions.

Following thematic analysis (Table 2), axial coding was used to explore links between categories, leading to the creation of three main emergent themes:

Table 2.

Inductive and deductive themes used as part of a coding schedule during data analysis, resulting in emergence of three themes.

Initial theme
Deductive
 1. Identify tools used.
 2. Understand/Portray how HH compliance is currently measured/monitored within an NHS acute setting.
 3. Clarify whether healthcare professionals consider this process to be a ‘burden’ AND whether they think it has the potential to be improved.
 4. Clarify whether healthcare professionals consider the tool being used (ICNA) is exacerbating the ‘burden’, e.g. would a change of tool help?
 5. Clarify whether healthcare professionals have concerns over data accuracy.
Inductive
 1. Overuse of gloves.
 2. Lack of education and feedback association (linked to Deductive Theme 3) (link: Not seen as tool per se, but the content of tool not being linked to educational/training priorities i.e. 5 Moments).
 3. Role models.
 4. Workload.
 5. Need for public education (view of SoO).
 6. Concepts of HH (simplicity, difficulty).
Emergent
 1. Lack of clarity with regard to feedback.
 2. Lack of association between training and feedback.
 3. Data accuracy.
  1. Lack of clarity with regard to feedback.

  2. Lack of association between training and feedback.

  3. Data accuracy.

As some categories related to more than one emergent theme, each portion of text was assessed to identify which emergent theme best represented it.

A number of other inductive themes were identified, such as overuse of gloves, but these were not followed up in this analysis, which focused on HH measurement.

Causes of problems

The emergent themes identified two main problems, both associated with feedback:

  1. Lack of clarity with regard to feedback.

  2. Lack of association between training and measurement.

The manual aspect of the measurement process, while acknowledged by GoD and RoF as time-consuming, did not emerge as a major cause of concern. This contradicted the perception raised in the scoping meetings, that staff would prefer a move away from manual methods.

There was a perception held by participants across all API groups that data generated by the current measurement process were ‘meaningless’. However, three sub-categories emerged, reflecting the concerns of the participants within each API group.

a) Generators of Data: Where does data go?

Those in the GoD group raised concerns about how data generated by the audit process were used to engender change, and found it hard to perceive any change stemming from the audit process. One example highlights the frustration felt:

‘You fail it, you fail it, you fail it, you fail it. There’s no answer to it. No one has to … answer why it still continues.’ (GoD1)

Concern about the concept of ‘closing the loop’ was expressed:

‘…although we’re carrying out the audits, and we do get the results, they are disseminated back through the Key Performance Indicators … to the Modern Matrons, but how they take that back to the wards we never get fed back on – what they’ve done, or how they’ve actioned it, to remedy the shortfalls…’ (GoD2)

‘…we know what needs to be done, we go and audit it, we get the results, [but] we don’t ever sort of, really, complete the cycle.’ (GoD3)

b) Recipients of feedback: How to use data?

Participants in the RoF group also highlighted a lack of focus on whether actions based on feedback data were carried out:

‘…I think there is an Action Plan … normally what I do is action them, but we just keep them for our own records … I don’t know whether all Matrons are the same … but if we’ve done alright then they [IPCT] don’t seem to come back, but I do seem to find that even if we haven’t done alright they don’t always come and talk to us…’ (RoF1)

Personal interpretation of HH audit data was also described as being necessary and the feedback process could sometimes feel ‘incomplete’:

[Researcher: ‘You don’t get a copy of that actual report (ICNA tool)?’]

‘No. That would be useful actually … [I] like to see where you’ve gone wrong … rather than just getting feedback I like to see for myself what we did…’ (RoF2)

Perceptions of incompleteness led participants to reveal concerns, clarified as ‘lack of meaning’ in audit feedback. They felt unable to relate received data to clinical practice in their setting. Therefore planning how to move forward (e.g. interventions, training plans) using the data was unclear. The received data had no meaning to them.

c) Subjects of Observation: What does data mean?

While SoO agreed they had access to HH audit data, the meaning of the data was a prime concern. This was related to a lack of association between what was measured by the audit tool and how they were trained since the latter was based on the WHO 5 Moments.

While the ICNA (2004) tool used at the study site covers environmental, observational and knowledge factors, its use in relation to training priorities appeared to cause puzzlement. SoO felt a tool with no explicit reference to the 5 Moments was confusing. Frustration and a sense of unfairness about the audit process was also voiced:

‘I just can’t see the point really – everything is 5 Moments, 5 Moments, 5 Moments – but if that’s not what we’re measured on, then how does anyone know if we’re doing it right? I haven’t even seen that [ICNA] so I feel like I’m being cheated…’ (SoO1)

GoD were highly supportive of a move towards a measurement tool reflecting recent training based on the WHO 5 Moments:

‘I think … if we’re implementing WHO and the 5 Moments, we need to be reflecting that in our audit … While this is relevant [ICNA, 2004, points 32a–g], very relevant, [it] doesn’t reflect what now we teach.’ (GoD2)

This support occurred despite acknowledgment that a changeover would involve disruption.

Data accuracy

Issues around data accuracy added to the perceptions that the current measurement process produced meaningless data. While data management was perceived positively because of the involvement of a data analyst, direct observation was highlighted as a weakness:

‘They’re corrupt. These HH audits … there’s nothing valid about them.’ (GoD1)

‘People definitely perform differently when we’re auditing them … we know that, but you have to accept it.’ (RoF3)

Study output

Data obtained from the interviews and participatory observation sessions were used to construct a New Current State Map, providing visual clarity to the role of feedback. Data were perceived to travel ‘up’ to the management of the Trust, and ‘down’ to the areas from which they were collected, with participants noting that Modern Matrons were the chief gatekeepers (Figure 1).

Figure 1.

New Current State Map: produced using data from interviews and participatory observation. Provides a visual representation of perceived causes of problems relating to the current auditing process. It shows Feedback Loops 1 and 2, described as representing HH measurement data flowing ‘up’ and ‘down’, respectively. Incomplete feedback loops are shown using dashed lines, with question marks included to indicate the lack of clarity expressed by participants regarding how feedback was used.

The New Current State Map has formed the basis for process improvement work for the IPCT, with emphasis placed on making feedback data meaningful and applicable to the recipient.

Discussion

Meaningful data

Meaningful feedback has been identified as a key component in successful audit and quality improvement (Ivers et al., 2012; Larson et al., 2013). Therefore this study’s findings, that healthcare professionals perceived the data generated from the audit process as effectively meaningless, are of interest.

The GoD perceived that a current focus on data collection was at the expense of following up actions highlighted by the measurement process (i.e. where do the data go?). This has previously been noted as an issue where audit occurs in isolation from other quality management activities (Powell et al., 2009).

While RoF also felt the feedback loop was not being closed, they highlighted a lack of usefulness of the data, citing difficulties relating data to specific clinical practice and opportunities for change. Elsewhere individualised feedback has been positively received. In a study on the role of audit feedback within Surgical Site Infections, participants perceived being able to link actions with specific outcomes as useful, along with having an objective performance measure to identify potential areas prime for change (Nessim et al., 2012).

Strong motivations for the use of audit and feedback for improvement are the assumptions that data recipients are willing and able to change their behaviour, and agree with those generating audit data regarding necessary improvement goals (Larson et al., 2013). Here, data from RoF indicate that ‘willingness’ is present; however, the perception that feedback loops are incomplete does not support their ability to change behaviour. Audit feedback does not provide healthcare professionals with meaningful instruction regarding how they should change their behaviours to achieve infection prevention goals.

Feedback relevance is seen as vital for recipients to be able to understand how to interpret and act on data received (Larson et al., 2013). Both RoF and SoO highlighted this as a problem linked to variation between training and measurement criteria. While the SoO noted difference in tools, the RoF alluded to lack of association through their inability to integrate feedback data with clinical practice. During interviews RoF displayed knowledge of the role of HH within clinical practice, using the WHO 5 Moments as a reference point. This indicated that initial training around these moments had been successful. The 5 Moments may therefore provide the basis for future, meaningful measurement criteria at the study site.

Limitations

While individuals from all API groups participated, the sample contained only one Consultant and no Doctors; the remaining participants were nurses, healthcare support staff, IPCT members or persons with managerial/administrative roles. As past studies have demonstrated the effect of role on HH (WHO, 2009), the lack of representation from these groups may be a limitation. Additionally, it is acknowledged that IPCT participant backgrounds may have influenced their perceptions; however, this was not explored in this study.

Conclusion

In this study healthcare professionals viewed data generated by current HH auditing as meaningless. Individual roles within the measurement process (i.e. GoD, RoF, SoO) provided further details as to specific perceptions of weakness regarding audit data. Participants raised neither the need for HH measurement, nor the fact that the process was manual, as particular problems. The requirement for a process that could provide an output which would support change was seen as the primary aim, and there was a frustration that this was absent in the current approach to HH audit.

Acknowledgments

With thanks to the Infection Prevention and Control team and staff at University Hospitals Coventry and Warwickshire NHS Trust for their help and participation in this study.

Footnotes

Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was conducted as part of a fully funded PhD, financed by an EPSRC grant through a WIMRC award from WMG, University of Warwick, and through further funding from the Institute of Digital Healthcare (IDH), WMG, University of Warwick.

References

  1. Baxter P, Jack S. (2008) Qualitative Case Study Methodology: Study Design and Implementation for Novice Researchers. The Qualitative Report 13(4): 544–559. [Google Scholar]
  2. Benjamin A. (2008) Audit: how to do it in practice. British Medical Journal 336: 1241–1245. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Boyatzis RE. (1998) Transforming Qualitative Information: Thematic Analysis and Code Development. London: Sage Publications. [Google Scholar]
  4. Cavazos JM, Naik AD, Woofter A, et al. (2008) Barriers to physician adherence to nonsteroidal anti-inflammatory drug guidelines: a qualitative study. Alimentary Pharmacology & Therapeutics 28(6): 789–798. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Creamer E, Dorrian S, Dolan A, et al. (2010) When are the hands of healthcare workers positive for meticillin-resistant Staphylococcus aureus? Journal of Hospital Infection 75(2): 107–111. [DOI] [PubMed] [Google Scholar]
  6. Dawson CH, Mackrill JB. (2014) Review of technologies available to improve hand hygiene compliance – are they fit for purpose? Journal of Infection Prevention 15(6): 222–228. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Gould DJ, Chudleigh J, Drey NS, et al. (2007) Measuring handwashing performance in health service audits and research studies. Journal of Hospital Infection 66(2): 109–115. [DOI] [PubMed] [Google Scholar]
  8. Haas JP, Larson EL. (2007) Measurement of compliance with HH. Journal of Hospital Infection 66(1): 6–14. [DOI] [PubMed] [Google Scholar]
  9. Hovenga E, Garde S, Heard S. (2005) Nursing constraint models for electronic health records: A vision for domain knowledge governance. International Journal of Medical Informatics 74(11–12): 886–898. [DOI] [PubMed] [Google Scholar]
  10. Hugonnet S, Perneger TV, Pittet D. (2002) Alcohol-based handrub improves compliance with HH in intensive care units. Archives of Internal Medicine 162(9): 1037–1043. [DOI] [PubMed] [Google Scholar]
  11. ICNA. (2004) Audit tools for monitoring infection control standards 2004. London: Department of Health. [Google Scholar]
  12. Ivers N, Jamtvedt G, Flottorp S, et al. (2012) Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database of Systematic Reviews 6: CD000259. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Kilpatrick C. (2008) The development of a minimum dataset audit tool for Scotland’s NHS HH Campaign. British Journal of Infection Control 9(2): 8–11. [Google Scholar]
  14. Larson EL, Patel SJ, Evans D, et al. (2013) Feedback as a strategy to change behaviour: the devil is in the details. Journal of Evaluation in Clinical Practice 19(2): 230–234. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Lawrence N, Richardson J. (2012) To explore and understand the leadership experiences of modern matrons, within an acute NHS Trust. Journal of Nursing Management 22: 70–79. [DOI] [PubMed] [Google Scholar]
  16. Nessim C, Bensimon CM, Hales B, et al. (2012) Surgical site infection prevention: a qualitative analysis of an individualized audit and feedback model. Journal of the American College of Surgeons 215(6): 850–857. [DOI] [PubMed] [Google Scholar]
  17. Patton MQ. (2002) Qualitative research & evaluation methods (3rd edn). London: Sage Publications Ltd. [Google Scholar]
  18. Pittet D, Mourouga P, Perneger TV. (1999) Compliance with handwashing in a teaching hospital Infection Control Program. Annals of Internal Medicine 130(2): 126–130. [DOI] [PubMed] [Google Scholar]
  19. Powell AE, Rushmer RK, Davies HTO. (2009) A systematic narrative review of quality improvement models in quality health care. Scotland: Social Dimensions of Health Institute at The Universities of Dundee and St Andrews/NHS Quality Improvement Scotland. [Google Scholar]
  20. Sax H, Allegranzi B, Uçkay I, et al. (2007) ‘My five moments for HH’: a user-centred design approach to understand, train, monitor and report HH. Journal of Hospital Infection 67(1): 9–21. [DOI] [PubMed] [Google Scholar]
  21. Sax H, Allegranzi B, Chraïti M-N, et al. (2009) The World Health Organization HH observation method. American Journal of Infection Control 37(10): 827–834. [DOI] [PubMed] [Google Scholar]
  22. World Health Organization. (2009) WHO Guidelines on HH in Health Care: First Global Patient Safety Challenge Clean Care is Safer Care. Geneva: WHO Press. [PubMed] [Google Scholar]
  23. World Health Organization. (2013) HH monitoring and feedback. Geneva: WHO Press. Available at: http://www.who.int/gpsc/5may/automated-hand-hygiene-monitoring.pdf?ua=1 (accessed 14 May 2014).
