Abstract
Background
The Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) surveillance definitions are the most widely used criteria for health care-associated infection (HAI) surveillance. NHSN participants agree to conduct surveillance in accordance with the NHSN protocol and criteria. To assess the application of these standardized surveillance specifications and offer infection preventionists (IPs) opportunities for ongoing education, a series of case studies with questions related to NHSN definitions and criteria was published.
Methods
Beginning in 2010, case studies with multiple-choice questions based on standard surveillance criteria and protocols were written and published in the American Journal of Infection Control with a link to an online survey. Participants anonymously submitted their responses before receiving the correct answers.
Results
The 22 case studies had 7,950 respondents who provided 27,790 responses to 75 questions during the first 6 years. Correct responses were selected 62.5% of the time (17,376 out of 27,790), but accuracy ranged widely by question (16%-87%). In a subset analysis, 93% of participants self-identified as IPs (3,387 out of 3,640), 4.5% as public health professionals (163 out of 3,640), and 2.5% as physicians (90 out of 3,640). IPs responded correctly (62%) more often than physicians (55%) (P = .006).
Conclusions
Among a cohort of voluntary participants, accurate application of surveillance criteria to case studies was suboptimal, highlighting the need for continuing education, competency development, and auditing.
Keywords: Surveillance, Competency, Health care-associated infection reporting
Surveillance is “the ongoing, systematic collection, analysis, and interpretation of health data. . .integrated with the timely dissemination of these data to those who need to know.”1 Health care-associated infection (HAI) surveillance metrics used for public reporting and Centers for Medicare and Medicaid Services incentive-based programs, as well as many surveillance programs worldwide, rely on the National Healthcare Safety Network (NHSN) Patient Safety Component Manual for surveillance definitions and criteria for reportable HAIs.2 One characteristic of optimal surveillance definitions is that they are consistently applied by the individuals conducting the surveillance. In response to NHSN user feedback and changes in diagnostic tests and practices, the Centers for Disease Control and Prevention periodically revises the NHSN HAI surveillance definitions. In recent years, there have been reports of inconsistent application of the surveillance criteria among infection preventionists (IPs), arising from state-level audits,3 Veterans Affairs facilities,4 the Society for Healthcare Epidemiology of America Research Network,5 and comparisons of IP surveillance efforts with the performance of a computer algorithm.6
To provide an opportunity for education and to assess the application of NHSN criteria by IPs, a series of case studies was published in American Journal of Infection Control beginning in 2010 and continuing today. Case studies included a link to a set of questions for anonymous use by readers seeking to test their knowledge of the NHSN definitions and criteria relevant to the case. Readers who submitted responses received the correct answers. This article aims to summarize the first 6 years of this project and describe the accuracy of volunteer participants in applying NHSN definitions to case studies developed by the authors.
METHODS
Based on questions submitted to the NHSN user-support mailbox and on pertinent definitional changes in 2013 (eg, HAI and mucosal barrier injury) and 2015 (eg, infection window period, repeat infection timeframe, and exclusion of fungal organisms in urinary tract infections) or new surveillance modules (eg, ventilator-associated events and laboratory-identified events), case studies with multiple-choice questions were developed by the authors. Each question and correct response required a detailed explanation, with citations from the current NHSN manual for specific justification. Once the subject matter experts reached agreement on the draft of the case study, it was reviewed by staff members at NHSN for accuracy before being submitted and published in American Journal of Infection Control with a link to an online survey (SurveyMonkey Inc, San Mateo, CA). A June 2012 supplement offered continuing education credits and was hosted on the Centers for Disease Control and Prevention Training and Continuing Education Online Web site (http://www.cdc.gov/tceonline). Beginning with the 18th case study, published in October 2013, the introduction included a specific recommendation to use the appropriate NHSN manual section(s), with an external link to the particular section of the manual needed to answer the questions.
Demographic questions pertaining to professional role (IP, physician director of infection prevention, or public health sector) and board certification in infection control, as provided by the Certification Board of Infection Control and Epidemiology, were added to select case studies. Participants were advised that responses to these questions were completely voluntary and did not affect their ability to complete the case study. Individuals volunteered anonymously and submitted their responses through the online survey; participants who completed the case study were then provided with the correct responses and explanations, with citations from the current Patient Safety Component Manual supplying the rationale for each appropriate response. Surveys remained open for varying periods of time, but were closed in advance of any pertinent modifications to the NHSN modules to avoid discordance between the rationale developed for the case study and the current NHSN specifications outlined in the manual.
The total number of participants per case study represents the minimum number of persons who completed all questions. Although multiple responses from the same Internet protocol address were not accepted, incomplete submissions, in which the participant partially completed the case study, were accepted and included with the complete submissions in the analysis. Correct responses are presented as proportions with rate ratios, confidence intervals (CIs), and Pearson χ2 P values for significance testing. Statistical analysis was performed using WinPEPI 8.1 (http://www.brixtonhealth.com/pepi4windows.html).7
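The comparative statistics reduce to standard 2 × 2 contingency table methods. The following is a minimal illustrative sketch only; the published analysis used WinPEPI, the function names below are ours, and the interval uses the conventional log-based (Katz) approximation:

```python
import math

# Illustrative sketch only: the published analysis used WinPEPI.
# Standard 2 x 2 table methods for comparing two groups' proportions
# of correct answers.

def rate_ratio(correct1, total1, correct2, total2):
    """Rate ratio of group 1 vs group 2, with a log-based 95% CI."""
    rr = (correct1 / total1) / (correct2 / total2)
    # Katz standard error of the log rate ratio
    se = math.sqrt(1 / correct1 - 1 / total1 + 1 / correct2 - 1 / total2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

def pearson_chi2_p(correct1, total1, correct2, total2):
    """Uncorrected Pearson chi-square P value (1 df) for a 2 x 2 table."""
    a, b = correct1, total1 - correct1  # group 1: correct, incorrect
    c, d = correct2, total2 - correct2  # group 2: correct, incorrect
    n = total1 + total2
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, P(X > chi2) = erfc(sqrt(chi2 / 2))
    return math.erfc(math.sqrt(chi2 / 2))
```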
RESULTS
The case studies, in chronological order, with brief descriptions and the proportion of correct responses by question and overall, are presented in Table 1.
Table 1.
Case studies, date of publication in American Journal of Infection Control, topics covered, and proportions of responses answered correctly by participants
Case | AJIC Publication | Reference | Participants | Topics | Question 1 | Question 2 | Question 3 | Question 4 | Question 5 | Question 6 | Question 7 | Question 8 | Total
---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | June-10 | Am J Infect Control. 2010 Jun;38(5):416-8. | 811 | CLABSI, SSI | 607/811 | 447/811 | 682/811 | 472/811 | | | | | 2208/3244 (68.1%)
2 | September-10 | Am J Infect Control. 2010 Sep;38(7):557-8. | 807 | CLABSI, Skin Contaminants, UTI | 657/807 | 555/807 | 735/807 | | | | | | 1947/2421 (80.4%)
3 | October-10 | Am J Infect Control. 2010 Oct;38(8):642-3. | 524 | VAP | 335/524 | 301/524 | | | | | | | 636/1048 (60.7%)
4 | February-11 | Am J Infect Control. 2011 Feb;39(1):64-65. | 705 | UTI, CLABSI, Organ Space SSI | 445/705 | 615/705 | 115/705 | 403/705 | | | | | 1578/2820 (56.0%)
5 | June-11 | Am J Infect Control. 2011 Jun;39(5):431-2. | 727 | SSI, Onset Date, Reporting Month | 567/753 | 208/727 | 574/727 | 275/727 | | | | | 1624/2934 (55.4%)
6 | August-11 | Am J Infect Control. 2011 Aug;39(6):515-6. | 374 | SSI, Device-associated meningitis | 258/374 | 202/374 | 81/374 | | | | | | 541/1122 (48.2%)
7 | Supplement June 2012 | Am J Infect Control. 2012 Jun;40(5 Suppl):S32-40. | 297 | Non-NHSN procedures, SSI, Endoscope reporting | 181/297 | 151/297 | 142/297 | 245/297 | 230/297 | | | | 949/1485 (63.9%)
8 | Supplement June 2012 | ibid | 297 | Skin and Soft Tissue Infection, CLABSI | 165/297 | 152/297 | 157/297 | | | | | | 474/891 (53.2%)
9 | Supplement June 2012 | ibid | 297 | SSI, 2° CLABSIs | 232/297 | 211/297 | 165/297 | | | | | | 608/891 (68.2%)
10 | Supplement June 2012 | ibid | 297 | SSI | 228/297 | 125/297 | | | | | | | 353/594 (59.4%)
11 | Supplement June 2012 | ibid | 297 | Wound Class Changes, Procedure Duration, Event Date, SSI | 203/297 | 257/297 | 220/297 | 185/297 | 175/297 | | | | 1040/1485 (70.0%)
12 | Supplement June 2012 | ibid | 297 | UTI, ABUTI | 223/297 | 206/297 | 179/297 | | | | | | 608/891 (68.2%)
13 | Supplement June 2012 | ibid | 297 | CLABSI, SSI | 177/297 | 149/297 | | | | | | | 326/594 (54.9%)
14 | Supplement June 2012 | ibid | 297 | LCBI to PIV, CLABSI, Unit Attribution | 176/297 | 246/297 | 232/297 | | | | | | 654/891 (73.4%)
15 | Supplement June 2012 | ibid | 297 | UTI, SSI | 210/297 | 174/297 | 171/297 | | | | | | 555/891 (62.3%)
16 | August-12 | Am J Infect Control. 2012 Aug;40(6):554-5. | 117 | CLABSI, Organism Susceptibility Sameness | 91/117 | 94/117 | 44/117 | | | | | | 229/351 (65.2%)
17 | September-12 | Am J Infect Control. 2012 Sep;40(7):670-1. | 124 | CLABSI, Phlebitis, SSI | 96/124 | 94/124 | 38/124 | | | | | | 228/372 (61.3%)
18 | October-13 | Am J Infect Control. 2013 Oct;41(10):916-7. | 85 | LabID, Transfer Rule, Community-Onset, RIT | 64/87 | 29/85 | 67/86 | 47/87 | 55/87 | | | | 262/432 (60.6%)
19 | November-13 | Am J Infect Control. 2013 Nov;41(11):1085-6. | 30 | VAE, Possible VAP | 27/30 | 20/30 | | | | | | | 47/60 (78.3%)
20 | September-15 | Am J Infect Control. 2015 Sep 1;43(9):987-8. | 247 | UTI, 2° BSI, CLABSI | 219/247 | 169/247 | | | | | | | 380/484 (78.5%)
21 | October-15 | Am J Infect Control. 2015 Oct 1;43(10):1099-101. | 418 | CLABSI, MBI, RIT, ABUTI | 263/466 | 324/436 | 305/418 | | | | | | 886/1308 (67.7%)
22 | July-16 | Am J Infect Control. 2016 Jul 1;44(7):761-3. | 308 | LabID, Observation Units, Transfer Rule, RIT | 236/327 | 55/327 | 156/327 | 234/327 | 163/327 | 178/308 | 91/308 | 115/308 | 1228/2559 (48.0%)
2°, secondary; ABUTI, asymptomatic bacteremic urinary tract infection; AJIC, American Journal of Infection Control; BSI, bloodstream infection; CLABSI, central line-associated bloodstream infection; LabID, laboratory-identified event; LCBI, laboratory-confirmed bloodstream infection; MBI, mucosal barrier injury; NHSN, National Healthcare Safety Network; PIV, peripheral intravascular catheter; RIT, repeat infection timeframe; SSI, surgical site infection; UTI, urinary tract infection; VAE, ventilator-associated event; VAP, ventilator-associated pneumonia.
There were 7,950 respondents who completed the 22 case studies published between June 2010 and July 2016. This total includes 297 participants in each of the 9 case studies published as a supplement in June 2012; participation ranged from 30 respondents for the ventilator-associated events case study in November 2013 to 811 respondents for the first case study in 2010. Of the 27,790 answers provided, 17,376 (62.5%) were correct. Correct responses varied widely between case studies overall (48.0% for case #22 to 80.4% for case #2) as well as between questions within the same case study (16.0%-87.0% for case #4). Of the 75 questions, the one with the lowest proportion of correct responses was the third question in case #4. The question presented a scenario in which a patient’s maximum temperature was 38°C, and the 705 respondents were asked whether the patient, whose blood and urine cultures were positive for Providencia stuartii, had a urinary tract infection. Nearly 32% of respondents indicated that the patient had a symptomatic urinary tract infection despite the fever criterion in the NHSN definition, which specifies that the temperature must be greater than, and not simply equal to, 38°C. Another 32% cited the lack of symptoms to conclude that the patient had no HAI at all, despite the organism being a recognized pathogen for bloodstream infection surveillance. The remaining 20% of incorrect responses classified the condition as an asymptomatic bacteremic urinary tract infection despite the colony count of the urine culture being too low. The most successfully answered question came in case #2, in which all but 9% of respondents recognized that the growth of an organism on a catheter tip culture is irrelevant in determining whether a patient had central line-associated bacteremia.
Participants were asked to self-identify as an IP, medical director of infection prevention, or employee in the public health sector for 12 of the case studies. These demographic questions were optional, but 82% of participants (3,640 out of 4,466) volunteered responses. IPs constituted the largest group at 93.0% (3,387 out of 3,640), followed by public health professionals (163 out of 3,640 [4.5%]) and physicians (90 out of 3,640 [2.5%]). Both IPs (7,375 out of 11,861 answers [62%]; rate ratio [RR], 1.14; 95% CI, 1.03–1.26) and public health professionals (346 out of 578 [60%]; RR, 1.09; 95% CI, 0.97–1.24) answered more accurately than program medical directors (168 out of 308 [55%]), although the difference was statistically significant for IPs (P = .006) but not for public health professionals (P = .13). IPs were no more likely to respond correctly than public health professionals (RR, 1.04; 95% CI, 0.97–1.11; P = .262). In the most recent case study, participants were asked to volunteer whether they were board certified in infection prevention, and 83% (256 out of 308) responded. The majority of respondents (168 out of 256 [65.6%]) were board certified, yet certification was not associated with an increased rate of correct responses (653 out of 1,344 vs 352 out of 704; RR, 0.97; 95% CI, 0.89–1.07; P = .54).
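As a spot check, the illustrative helper functions sketched in the Methods section (our own naming, not part of the original WinPEPI analysis) reproduce the comparisons reported above:

```python
# IPs (7,375/11,861 correct) vs program medical directors (168/308 correct)
print(rate_ratio(7375, 11861, 168, 308))      # -> approx. (1.14, 1.03, 1.26)
print(pearson_chi2_p(7375, 11861, 168, 308))  # -> approx. .006

# Board certified (653/1,344 correct) vs not certified (352/704 correct)
print(rate_ratio(653, 1344, 352, 704))        # -> approx. (0.97, 0.89, 1.07)
print(pearson_chi2_p(653, 1344, 352, 704))    # -> approx. .54
```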
DISCUSSION
With more than 7,900 participants who provided nearly 28,000 answers over 6 years, the case study respondents constitute the largest cohort of individuals assessed on their collective ability to apply the NHSN surveillance criteria to case studies. Most previously published reports, involving smaller cohorts, have focused on interrater reliability: the extent to which 2 or more persons reviewing the same case agree with one another, regardless of whether either is correct, thereby emphasizing reliability and reproducibility over validity.4,5,8–13 Some studies have assessed whether participants answered the question(s) correctly, but it is less clear how the vignette itself was assessed for concordance with NHSN criteria. The use of vignettes and tests developed by active NHSN staff affords a high degree of confidence in the findings obtained and results in highly useful data for educational planning. In a recent report,14 Australian researchers assessed the performance of IPs in applying standardized surveillance definitions to clinical vignettes for surgical site infections and bloodstream infections. Participants responded correctly 64.9% of the time, which was not significantly different from the results in this report among all respondents (P = .36). The authors identified larger bed size, geographic locale, and full-time employment as being associated with a propensity to answer correctly, none of which were assessed in this report. Keller et al5 recruited participants from the Society for Healthcare Epidemiology of America Research Network and similarly included a broad array of HAI types, with clinical vignettes that underwent review by NHSN-affiliated subject matter experts. Participants were asked a single yes-or-no question: whether the case met the criteria for a specific declared HAI type. Their proportion of correct responses (282 out of 436 [64.7%] after excluding positive and negative controls) did not differ significantly from the proportion reported here (P = .36). However, they found no significant difference between IPs and hospital epidemiologists, whereas this report found that IPs answered correctly more often. Similarly, they found no significant association between board certification in infection prevention and accuracy in applying the NHSN definitions, but did find a positive association with having a clinical background.5 This report lacks the demographic data to assess the value of a clinical background. However, the finding of this project that IPs answered more accurately than program medical directors suggests that adjudication of surveillance findings by physicians or committee, as is required in some health care facilities, may not result in more accurate data. A report limited to pediatric hematology/oncology and intensive care units also evaluated central line-associated bloodstream infection (CLABSI) outcomes, but compared participant responses to an NHSN staff member’s response used as the gold standard. Although 78% concordance was found, determining presence on admission and distinguishing primary versus secondary sources of infection were more commonly associated with incorrect responses.15 Previous reports are not limited to CLABSI surveillance.
A cohort of randomly selected hospitals in the United States demonstrated near-random decision making in applying surveillance criteria to potential ventilator-associated pneumonia cases.13 In a European study, physicians working in infection prevention had higher rates of interrater reliability than surgeons in determining surgical site infections among case studies, and their agreement improved after reading the surveillance criteria, whereas no such improvement was observed among surgeons.8
IPs devote slightly more than 25% of their professional time to conducting surveillance.16 The results of this report and others suggest that although IPs accurately apply surveillance criteria the majority of the time, there remains substantial opportunity for improvement. An ethnographic study of applying the Michigan Keystone project principles for CLABSI prevention in England reported a widespread perception among health care providers that the surveillance criteria used (which reflected NHSN criteria) were more subjective than objective and prone to unfair application by persons conducting surveillance.17 According to results recently published from the Association for Professionals in Infection Control and Epidemiology MegaSurvey, IPs self-assessed their own competency in surveillance and epidemiologic detection (including application of NHSN criteria) as proficient (48.5%) or expert (34.9%).18 This self-assessment is somewhat contradicted by the results presented here. Previous reports suggest that automation via computer algorithm detection would outperform IPs by reducing inter- and intrafacility variability and, in effect, level the playing field for reimbursement.6,11 Such automation would require some degree of information technology infrastructure development or standardization and would, in effect, require a federal mandate. Gains in reduced variability from algorithmic detection may also be offset by reductions in clinical correlation. Although this may be the future of national surveillance data upon which reimbursement levels are determined, that future is not likely to come quickly. Perla et al10 proposed developing a system of ongoing mandatory proficiency testing for IPs as a condition of participation in NHSN. In light of the primary goal of mandated public reporting, which is to reduce HAIs, and the financial implications for health care facilities, auditing of IP proficiency may be an enforceable strategy to drive more accurate application of the surveillance criteria. However, this would require a significant commitment of resources that may not be readily available. Until that time, the findings of this project can be, and have been, used to identify the educational needs of IPs. Such information is key to planning and developing training opportunities for NHSN users on the correct application of HAI surveillance definitions.
This report is unique in several aspects. The content of each case study and the associated questions and answers were written and reviewed in a joint effort that included multiple personnel actively working at NHSN at the time of their development, whereas previous reports relied on individuals working in the field of infection prevention or former NHSN staff members. Participation was not limited to select institutions, states, or engagement networks, and the degree of participation exceeds that of all published accounts combined. Previous reports, when clinical vignettes were made available, often used dichotomous scenarios in which the participant was asked a single question as to whether the patient did or did not have a particular declared NHSN-defined HAI.5 The questions in this series of case studies offered a range of responses, from no HAI being present to multiple scenarios of select HAIs with or without secondary sources in the same question, and thus were more comparable to the practice of conducting HAI surveillance. Furthermore, questions were not limited to whether a scenario represented an NHSN-defined HAI. They also included various aspects of NHSN reporting, such as event date, community or hospital onset, organism sameness, the transfer rule, and the repeat infection timeframe, all of which are essential to accurate surveillance and reporting. Participants were provided with not only the correct responses, but also a detailed explanation for each response citing the current Patient Safety Component Manual, thus expanding an assessment of surveillance competency into a learning exercise.
The findings presented in this article are subject to several limitations of study design. The case studies collected minimal demographic data, and participation in those particular questions was voluntary. Case studies were developed by the authors and often preferentially selected topics believed to be areas of difficulty for IP surveillance, recurring themes from the NHSN user-support mailbox, or recent definitional changes. As such, the majority of cases may be described as challenging, may not reflect the entirety of the scenarios experienced by IPs performing surveillance, and may underestimate HAI surveillance accuracy. Participants were anonymous, all responses were self-reported, and it is unknown whether the respondents reflect a representative sample of IPs working in the field today. Therefore, it would be inappropriate to apply any conclusions derived from these results to the competency of IPs overall. These case studies assessed competency in applying the NHSN definitions. They did not measure the accuracy of reporting or the practice of gaming, which can arguably best be assessed through frequent external validation of real data. The specter of gaming looms over the practice of HAI surveillance, and the current reimbursement structure in the United States accentuates this unease. Horowitz19 noted in a recent commentary that “a destructive triangulation has arisen between hospital administrators, clinicians and infection control departments that has led to consequences beyond those intended by monitoring agencies.” These case studies assessed participants’ ability to apply the NHSN criteria to theoretical cases, thereby avoiding outcome bias, and provided information important for the ongoing education and training of NHSN users.
CONCLUSIONS
This article summarizes IPs’ abilities in applying standardized NHSN criteria to case studies. Overall, participants were correct 62.5% of the time, reinforcing the need for ongoing education and training, as well as external validation, to improve the accuracy and consistency with which these metrics are applied and assessed as quality indicators and determinants of reimbursement.
Acknowledgments
The authors thank Captain Teresa Horan for her initial support of the project and contributions as a coauthor to the early case studies; Dr. Elaine Larson, who supported the project from the beginning and facilitated publication of the case studies through the editorial office at American Journal of Infection Control; Gloria C. Morrell, who coauthored several of the early case studies; the staff at the National Healthcare Safety Network, who provided valuable insight and edits to all case studies during the review process, including Angela Anttila, Janet Brooks, Cindy Gross, Denise Leaptrot, Georganne Ryan, Eileen Scalise, and Henrietta Smith; and the participants for whom these studies were created and who generously volunteered their time in completing the tasks.
Footnotes
Conflicts of interest: None to report.
References
- 1. CDC. Comprehensive plan for epidemiologic surveillance: Centers for Disease Control, August 1986. Atlanta, GA: U.S. Department of Health and Human Services, CDC; 1986.
- 2. Division of Healthcare Quality Promotion, National Center for Emerging and Zoonotic Infectious Diseases, Centers for Disease Control and Prevention. NHSN Patient Safety Component Manual. 2017. Available from: https://www.cdc.gov/nhsn/pdfs/pscmanual/pcsmanual_current.pdf. Accessed December 23, 2016.
- 3. Backman LA, Melchreit R, Rodriguez R. Validation of the surveillance and reporting of central line–associated bloodstream infection data to a state health department. Am J Infect Control. 2010;38:832–8. doi: 10.1016/j.ajic.2010.05.016.
- 4. Mayer J, Greene T, Howell J, Ying J, Rubin MA, Trick WE, et al. Agreement in classifying bloodstream infections among multiple reviewers conducting surveillance. Clin Infect Dis. 2012;55:364–70. doi: 10.1093/cid/cis410.
- 5. Keller SC, Linkin DR, Fishman NO, Lautenbach E. Variations in identification of healthcare-associated infections. Infect Control Hosp Epidemiol. 2013;34:678–86. doi: 10.1086/670999.
- 6. Lin MY, Hota B, Khan YM, Woeltje KF, Borlawsky TB, Doherty JA, et al. Quality of traditional surveillance for public reporting of nosocomial bloodstream infection rates. JAMA. 2010;304:2035–41. doi: 10.1001/jama.2010.1637.
- 7. Abramson JH. WINPEPI updated: computer programs for epidemiologists, and their teaching potential. Epidemiol Perspect Innov. 2011;8:1. doi: 10.1186/1742-5573-8-1.
- 8. Birgand G, Lepelletier D, Baron G, Barrett S, Breier AC, Buke C, et al. Agreement among healthcare professionals in ten European countries in diagnosing case-vignettes of surgical-site infections. PLoS ONE. 2013;8:e68618. doi: 10.1371/journal.pone.0068618.
- 9. Lepelletier D, Ravaud P, Baron G, Lucet JC. Agreement among health care professionals in diagnosing case vignette-based surgical site infections. PLoS ONE. 2012;7:e35131. doi: 10.1371/journal.pone.0035131.
- 10. Perla RJ, Peden CJ, Goldmann D, Lloyd R. Health care-associated infection reporting: the need for ongoing reliability and validity assessment. Am J Infect Control. 2009;37:615–8. doi: 10.1016/j.ajic.2009.03.003.
- 11. Klompas M, Kleinman K, Khan Y, Evans RS, Lloyd JF, Stevenson K, et al. Rapid and reproducible surveillance for ventilator-associated pneumonia. Clin Infect Dis. 2012;54:370–7. doi: 10.1093/cid/cir832.
- 12. McBryde ES, Brett J, Russo PL, Worth LJ, Bull AL, Richards MJ. Validation of statewide surveillance system data on central line-associated bloodstream infection in intensive care units in Australia. Infect Control Hosp Epidemiol. 2009;30:1045–9. doi: 10.1086/606168.
- 13. Stevens JP, Kachniarz B, Wright SB, Gillis J, Talmor D, Clardy P, et al. When policy gets it right: variability in U.S. hospitals’ diagnosis of ventilator-associated pneumonia. Crit Care Med. 2014;42:497–503. doi: 10.1097/CCM.0b013e3182a66903.
- 14. Russo PL, Barnett AG, Cheng AC, Richards M, Graves N, Hall L. Differences in identifying healthcare associated infections using clinical vignettes and the influence of respondent characteristics: a cross-sectional survey of Australian infection prevention staff. Antimicrob Resist Infect Control. 2015;4:29. doi: 10.1186/s13756-015-0070-7.
- 15. Gaur AH, Miller MR, Gao C, Rosenberg C, Morrell GC, Coffin SE, et al. Evaluating application of the National Healthcare Safety Network central line-associated bloodstream infection surveillance definition: a survey of pediatric intensive care and hematology/oncology units. Infect Control Hosp Epidemiol. 2013;34:663–70. doi: 10.1086/671005.
- 16. Landers T, Davis J, Crist K, Malik C. APIC MegaSurvey: methodology and overview. Am J Infect Control. 2017. doi: 10.1016/j.ajic.2016.12.012. In press.
- 17. Dixon-Woods M, Leslie M, Bion J, Tarrant C. What counts? An ethnographic study of infection data reported to a patient safety program. Milbank Q. 2012;90:548–91. doi: 10.1111/j.1468-0009.2012.00674.x.
- 18. Kalp EL, Marx J, Davis J. Understanding the current state of infection preventionists through competency, role, and activity self-assessment. Am J Infect Control. 2017;45:589–96. doi: 10.1016/j.ajic.2017.03.021.
- 19. Horowitz HW. Infection control: public reporting, disincentives, and bad behavior. Am J Infect Control. 2015;43:989–91. doi: 10.1016/j.ajic.2015.02.033.