Abstract
Background
The nature of infection prevention and control is changing; however, little is known about current staffing and structure of infection prevention and control programs.
Methods
Our objectives were to provide a snapshot of the staffing and structure of hospital-based infection prevention and control programs in the United States. A Web-based survey was sent to 441 hospitals that participate in the National Healthcare Safety Network.
Results
The response rate was 66% (n = 289); data were examined on 821 professionals. Infection preventionist (IP) staffing was significantly negatively related to bed size, with higher staffing in smaller hospitals (P < .001). Median staffing was 1 IP per 167 beds. Forty-seven percent of IPs were certified, and 24% had less than 2 years of experience. Most directors or hospital epidemiologists were reported to have authority to close beds for outbreaks always or most of the time (n = 225, 78%). Only 32% (n = 92) reported using an electronic surveillance system to track infections.
Conclusion
This study is the first to provide a comprehensive description of current infection prevention and control staffing, organization, and support in a select group of hospitals across the nation. Further research is needed to identify effective staffing levels for various hospital types as well as examine how the IP role is changing over time.
Over 30 years ago, the Centers for Disease Control and Prevention (CDC) undertook the national Study on the Efficacy of Nosocomial Infection Control (SENIC) in which staffing of infection control programs and intensity of surveillance, prevention, and control activities were measured in US hospitals.1 SENIC established a connection between structure and process elements of infection control programs and provided strong evidence that hospitals with better infection control professional staffing and those programs headed by physicians dedicated to hospital epidemiology had more intense infection prevention and control activities and lower health care–associated infection (HAI) rates.1-5 The investigators from SENIC recommended that hospitals have at least 1 full-time equivalent (FTE) infection control professional for every 250 occupied beds. In the 1990s, participation in the CDC's National Nosocomial Infections Surveillance System (NNIS) required 1 infection control professional FTE for the first 100 beds and then 1 FTE for each additional 250 beds.6 Currently, the CDC's National Healthcare Safety Network (NHSN) requires a trained infection control professional or hospital epidemiologist (HE) to be in charge of the program. In addition, the NHSN requires that data reporters complete online training courses related to the methods and definitions used in the surveillance protocols.
The nature of infection surveillance, prevention, and control is rapidly changing, and the roles and responsibilities of those working in the field are expanding. Infection control professionals are increasingly expected not only to monitor infection rates and provider behaviors but also to intervene, implement, and/or lead other clinicians in the implementation of processes aimed at reducing rates of infections.7 Even their title has recently changed: they are now referred to as infection preventionists (IPs), the nomenclature used in the remainder of this report.8 Specifically, our aims were to describe the staffing, organization, and support of infection prevention and control programs across the nation; to describe the current spectrum of IP roles and responsibilities and how IPs spend their time; and to examine differences in staffing across these select hospitals.
Methods
The results reported here are from phase I of a larger research project designed to examine the cost-effectiveness of infection prevention and control practice (Prevention of Nosocomial Infection and Cost Effectiveness Analysis, National Institutes of Health, R01NR010107). The SENIC study design was an important blueprint for the present study. Although some SENIC methods are no longer applicable because more reliable, valid, and efficient data sources now exist (eg, identification of HAIs through NHSN rather than chart review), the overall approach of surveying hospitals and then obtaining a sample of these hospitals to link processes to the prevention of HAIs comes directly from this CDC study. In phase I, a national survey of select acute care hospital infection control programs was conducted. The process of developing, validating, and conducting the survey is described below.
Survey development
Building on the questionnaire used in the SENIC study, a revised survey was developed.1 The original SENIC survey had 3 sections, which included questions on (1) infection control staff; (2) surveillance of hospital infections; and (3) infection control programs. In the present survey, questions about staff and their qualifications were essentially unchanged from the original CDC survey except for updated terms (eg, use of the term infection control professional instead of infection control nurse). Questions about activities and organizational support were updated to reflect current practice and are described more fully below. All survey content was developed and examined by an expert panel that included the full research team and members of our advisory board (listed in the acknowledgments section). The panel included experts in psychometrics (A.D., N.K., M.P., P.S.), physician hospital epidemiologists (R.H., E.P., L.S.), and experienced IPs (T.C.H., J.H., E.L.). A paper copy of the survey is available upon request.
Survey content
Staffing
For each hospital, we assessed the numbers of professional infection prevention and control program staff (ie, IP, HE, and director/coordinator) and their highest qualifications (eg, physician, nurse, master's degree in public health, or other qualifications). For each staff member, we inquired about the years of infection control experience, certification, and membership in professional organizations. The number of hours per week each staff member dedicated to the program was measured.
Activities
Similar to the SENIC study method, we estimated how the IPs spent their time in the following manner. Based on the average hours per week each IP devoted to the program, the respondent was asked to estimate the average percentage of time per week spent on the following activities during the past 6 months: (1) collecting, analyzing, and interpreting data on the occurrence of infections; (2) teaching infection prevention and control policies and procedures; (3) activities related to outbreaks; (4) daily isolation issues; (5) policy development and meetings; and (6) other (eg, product evaluation, employee health, and emergency preparedness). The categorization of activities was based on the practice analysis published by the Certification Board of Infection Control and Epidemiology (CBIC).9
Organizational support
Organizational support for the program was estimated in several ways. Questions addressed the availability of support staff, including data managers, secretaries, and statisticians. Items related to access to key decision makers, the authority of the HE or director to close beds in the event of an outbreak, funds for continuing education, and use of electronic surveillance systems were also included. Additionally, respondents were asked to report how long their institution had been part of a CDC network of hospitals (ie, NHSN or its precursor, NNIS) using the following categories: less than 1 year, 1 to 3 years, or more than 3 years.
Pilot testing
A paper version of the survey was piloted in 13 different settings and estimated to take on average 27 minutes to complete (standard deviation [SD] ± 11). Test-retest reliability was assessed by asking these respondents to retake the survey after a 2-week interval, and kappa (κ) coefficients for each item were computed.10 The responses had adequate agreement (mean κ = 0.88, SD ± 0.24). Criterion-referenced validity was assessed by conducting site visits and comparing the institutional policies and data to survey responses. No discrepancies were found. An electronic Web-based version of the survey was then developed and further piloted by 3 IPs and 2 nursing doctoral students.
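To make the reliability computation concrete, the following is a minimal sketch of how a test-retest κ could be calculated for a single categorical survey item. The helper function and the responses are hypothetical illustrations, not the study's actual analysis code; in the study, item-level κ coefficients were computed and then summarized across items.

```python
from collections import Counter

def cohen_kappa(ratings1, ratings2):
    """Chance-corrected agreement between two administrations of one item."""
    n = len(ratings1)
    observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    counts1, counts2 = Counter(ratings1), Counter(ratings2)
    # Expected agreement if the two administrations were independent
    expected = sum(counts1[c] * counts2[c] for c in counts1.keys() | counts2.keys()) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical responses from the 13 pilot sites at baseline and 2 weeks later
time1 = ["yes", "yes", "no", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes", "no"]
time2 = ["yes", "yes", "no", "yes", "yes", "yes", "no", "yes", "yes", "no", "yes", "yes", "no"]

print(f"item kappa = {cohen_kappa(time1, time2):.2f}")
```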
Sample
To participate in the study, (1) a hospital must have conducted NHSN device-associated surveillance of health care–associated infections in an adult medical, medical/surgical, or surgical intensive care unit (ICU) in 2007 according to the module protocol11; and (2) the ICU must have had a minimum of 500 device-days (central intravascular line, ventilator, or indwelling urinary catheter). These inclusion criteria were needed for the larger study. There were 441 hospitals eligible to participate.
Recruitment and study procedures
Although some states have recently mandated membership in NHSN, lists of NHSN hospitals are not public information. To protect the confidentiality of the hospitals, our NHSN expert (T.H.) developed a list of eligible hospitals and e-mailed them directly inviting them to participate by accessing the Web-based survey.
Dillman developed techniques to increase survey response rates and suggested multiple contacts, including an initial invitation, reminders, and last-chance communications.12 A modified Dillman technique was used: reminders were sent weekly for 5 weeks, and a final letter was sent after 6 weeks. Weekly lotteries with four $100 prizes were held to encourage participation.
In the communications, we asked that only 1 person complete the survey for their institution. Hospital demographics of respondents were examined to check for duplicates. If a duplicate response was found for a single institution, the surveys were examined for completeness of data and role of the respondent. Those surveys completed by directors of departments and/or those in which responses were most complete were used. All procedures were reviewed and approved by institutional review boards at Columbia University, CDC, and RAND Corporation.
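A minimal sketch of this deduplication rule follows, assuming the responses sit in a pandas DataFrame; the column names and sample values are illustrative assumptions, not the study's actual data structure.

```python
import pandas as pd

# Hypothetical survey extract: two responses for hospital H01, one for H02
surveys = pd.DataFrame({
    "hospital_id": ["H01", "H01", "H02"],
    "respondent_role": ["IP", "director", "IP"],
    "q1": [1, 1, 0],
    "q2": [None, 3, 2],
    "q3": [4, None, 1],
})

# Prefer director-completed surveys, then the most complete response
surveys["is_director"] = (surveys["respondent_role"] == "director").astype(int)
surveys["n_answered"] = surveys[["q1", "q2", "q3"]].notna().sum(axis=1)

deduped = (
    surveys.sort_values(["is_director", "n_answered"], ascending=False)
    .drop_duplicates(subset="hospital_id", keep="first")  # best-ranked row per hospital
    .drop(columns=["is_director", "n_answered"])
)
print(deduped)
```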
Statistical analysis
Distributions and descriptive statistics (ie, proportions or medians, means, SDs, and 95% confidence intervals [CI]) were computed. Missing data were not imputed. Based on the total number of IP hours per week, we computed the number of FTEs per 100 beds, assuming a 40-hour work week. To allow comparability with previous literature, we also calculated the mean and median number of beds for which 1 IP FTE provided service. Nonparametric correlations were computed using Spearman rho (ρ) statistics. Regression analyses were used to examine variation in IP staffing level and proportion of certified staff by hospital size. To examine the generalizability of the survey results, the teaching status and bed size of the responding hospitals were compared with those of all hospitals in the Centers for Medicare and Medicaid Services database in 2005 as well as with the previously published report describing NNIS hospitals in 1999 (the only description available).6
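The sketch below illustrates the staffing metrics: IP FTEs per 100 beds and the equivalent beds per FTE, plus a Spearman correlation between staffing and size. The data are fabricated for illustration, and scipy's spearmanr stands in for whatever statistical package was actually used.

```python
import pandas as pd
from scipy.stats import spearmanr

# Fabricated hospital-level data for illustration only
hospitals = pd.DataFrame({
    "beds": [150, 320, 540, 210, 760],
    "ip_hours_per_week": [60, 80, 120, 40, 140],  # summed across all IPs
})

# 1 FTE = 40 hours per week, as assumed in the analysis
hospitals["ip_fte"] = hospitals["ip_hours_per_week"] / 40
hospitals["fte_per_100_beds"] = hospitals["ip_fte"] * 100 / hospitals["beds"]
# Reciprocal form used for comparability with earlier literature:
# eg, 0.60 FTE per 100 beds corresponds to 100 / 0.60 = 167 beds per IP
hospitals["beds_per_fte"] = hospitals["beds"] / hospitals["ip_fte"]

# Nonparametric (Spearman) correlation between staffing level and hospital size
rho, p_value = spearmanr(hospitals["fte_per_100_beds"], hospitals["beds"])
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```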
Results
There were 347 responses. After checking for duplicates, the final set of unique hospital infection control programs was 289, a 66% response rate. The majority of the hospitals were located in the Northeast (n = 135, 47%) and were teaching institutions (n = 180, 62%). The number of hospital beds ranged from 21 to over 1500, with a mean bed size of 363 (SD ± 223); the median number of beds was 310, and 20 hospitals had more than 750 beds. Most respondents (n = 134, 54%) reported that their institution had been a member of NHSN for more than 3 years. Thirty-three (13%) of the hospitals had been members for less than a year.
These hospitals employed 821 professionals in their infection control programs (Table 1). As expected, the majority of HEs were physicians, and the majority of IPs were nurses; however, titles varied across institutions. Those with the HE title had more years of experience than did the IPs (P < .001). Forty-nine percent of the hospitals (n = 141) reported the presence of a physician HE, and data were available for 171 physician HEs. For both the HE and IP groups, professional membership (ρ = .18) and certification (ρ = .37) were significantly correlated with years of experience (both P < .001).
Table 1.
Description of qualifications, experience, and hours worked of professionals with hospital epidemiologist and infection preventionist titles
| Characteristic | HE (n = 204), n | % | IP (n = 617), n | % |
|---|---|---|---|---|
| Highest education | | | | |
| Physician | 171 | 83.8 | 2 | 0.3 |
| Registered nurse with graduate degree | 4 | 2.0 | 133 | 21.6 |
| Registered nurse | 7 | 3.4 | 339 | 54.9 |
| Master's degree, nonnurse | 4 | 2.0 | 23 | 3.7 |
| Other/don't know | 18 | 8.8 | 120 | 19.4 |
| Certified in infection control | | | | |
| No | 163 | 79.9 | 315 | 51.0 |
| Yes | 18 | 8.8 | 288 | 46.7 |
| Missing data | 23 | 11.3 | 14 | 2.3 |
| Member of the APIC or the SHEA | | | | |
| No | 30 | 14.7 | 111 | 18.0 |
| Yes | 117 | 57.4 | 494 | 80.1 |
| Missing data | 57 | 27.9 | 12 | 2.0 |
| Years of experience | | | | |
| <2 | 11 | 5.4 | 150 | 24.3 |
| 2-5 | 17 | 8.3 | 118 | 19.1 |
| 6-10 | 30 | 14.7 | 95 | 15.4 |
| 11-15 | 30 | 14.7 | 73 | 11.8 |
| >15 | 89 | 43.6 | 170 | 27.6 |
| Missing data | 27 | 13.2 | 11 | 1.8 |

NOTE. Average number of hours worked per week in the infection prevention and control program (includes overtime): HE: mean, 12.0 (SD ± 14.7); IP: mean, 37.0 (SD ± 10.3).
APIC, Association for Professionals in Infection Control and Epidemiology, Inc; HE, hospital epidemiologist; IP, infection preventionist; SHEA, Society for Healthcare Epidemiology of America.
Most physician HEs (n = 122, 71%) were part-time; 10% (n = 17) worked full-time, and 19% (n = 32) did not report dedicated hours. As displayed in Fig 1, for the 246 hospitals reporting IP hours, the mean IP FTE staffing was 0.69 (SD ± 0.54) per 100 beds (or 1 IP per 144 beds); the median was 0.60 IP FTE per 100 beds (or 1 IP per 167 beds). Staffing was significantly negatively related to bed size, with higher staffing in smaller hospitals (P < .001). However, the large CIs in the smaller hospitals suggest wide variation. The staffing rate was fairly consistent in the medium to large hospitals, as indicated by the tight CIs. In hospitals with 300 beds or more, the median was 0.54 IP FTE per 100 beds (or 1 IP per 186 beds). The mean proportion of hours provided by a certified IP in each institution was 0.48 (SD ± 0.38) per 100 beds. There was no significant relationship between hospital size and the proportion of hours provided by certified IPs. The presence of a physician HE was significantly associated with hospital size (ρ = .26, P < .0001), with larger hospitals being more likely to have one or more physician HEs.
Fig 1.
Infection preventionist full-time equivalents per 100 beds.
Table 2 describes how the IPs spent their time. The largest percentage of time (mean, 44.5%) was spent collecting, analyzing, and interpreting data on the occurrence of infections (ie, surveillance). Teaching, isolation issues, and policy development (ie, prevention) activities each took approximately 13% to 15% of the IPs' time. Only 6% was spent on outbreak activities (ie, control).
Table 2.
Activities reported by infection preventionists regarding how they spent their time
| Activity | Median, % | Mean, % | SD | Minimum, % | Maximum, % |
|---|---|---|---|---|---|
| Collecting, analyzing, and interpreting data on the occurrence of infections | 49.0 | 44.5 | 14.3 | 7 | 80 |
| Policy development and meetings | 14.0 | 15.0 | 8.8 | 0 | 55 |
| Daily isolation issues | 10.0 | 12.9 | 9.0 | 0 | 50 |
| Teaching infection prevention and control policies and procedures | 10.0 | 13.0 | 6.2 | 1 | 35 |
| Other (eg, product evaluation, employee health, and emergency preparedness) | 5.0 | 8.8 | 8.2 | 0 | 60 |
| Activities related to outbreaks | 5.0 | 6.1 | 4.8 | 0 | 40 |
NOTE. N = 269 responses. Means are the average percent of time reported by all respondents. Twenty respondents did not provide data in this section. Means may not sum to 100% because of rounding.
Sixty-eight percent (n = 197) of the respondents reported having a director position for their program. Thirteen percent (n = 25) of the directors were master's degree-prepared epidemiologists, 12% (n = 24) were physicians, and the majority were registered nurses (66%, n = 130). Twelve (6%) directors were licensed practical nurses, and 6 (3%) did not report their qualifications. On average, the directors had 3.6 years (SD ± 0.76) of experience in infection control. The majority of directors (n = 178, 90%) were members of the Association for Professionals in Infection Control and Epidemiology (APIC) and/or the Society for Healthcare Epidemiology of America (SHEA), and most were certified in infection control (n = 128, 65%). The directors most frequently reported to the department of medicine (n = 54, 27%), quality management (n = 45, 23%), and/or the nursing director (n = 44, 22%); these categories were not mutually exclusive, and a respondent could indicate that the director reported to more than one.
Only 35% (n = 102) of the programs had help with data management, and even fewer (n = 39, 13%) had statistical help. More than half (n = 180, 62%) had access to a secretary. Table 3 describes other organizational supports for the departments. The majority of respondents indicated that they had access to key decision makers always or most of the time, both for planning (n = 223, 77%) and in case of a problem (n = 251, 87%). Additionally, the director or HE was frequently reported to have authority to close beds always or most of the time (n = 225, 78%), and funding for continuing education was available most or all of the time (n = 190, 66%). Only 32% of the hospitals (n = 92) reported using an electronic surveillance system to track infections.
Table 3.
Organization and support for infection prevention and control departments
| | n | % |
|---|---|---|
| Access to key decision makers for planning | | |
| Never | 0 | 0 |
| Rarely | 6 | 2 |
| Sometimes | 44 | 15 |
| Most of the time | 127 | 44 |
| Always | 96 | 33 |
| Missing | 16 | 6 |
| Access to key decision makers if a problem | | |
| Never | 0 | 0 |
| Rarely | 4 | 1 |
| Sometimes | 19 | 7 |
| Most of the time | 86 | 30 |
| Always | 165 | 57 |
| Missing | 15 | 5 |
| Authority to close beds in the event of an outbreak | | |
| Never | 11 | 4 |
| Rarely | 14 | 5 |
| Sometimes | 23 | 8 |
| Most of the time | 56 | 19 |
| Always | 169 | 58 |
| Missing | 16 | 6 |
| Funds for continuing education | | |
| Never | 3 | 1 |
| Rarely | 25 | 9 |
| Sometimes | 56 | 19 |
| Most of the time | 110 | 38 |
| Always | 80 | 28 |
| Missing | 15 | 5 |
NOTE. N = 289 responses.
Table 4 presents a comparison of the sampled hospitals (n = 289) with all hospitals that submit cost report data to the Centers for Medicare and Medicaid Services (n = 5980) and with the NNIS hospitals (n = 229).6 The sampled hospitals were similar in bed size and teaching status to the previously reported NNIS hospitals and were larger and more likely to be teaching hospitals than other US hospitals.
Table 4.
Comparison of bed size among the study sample, NNIS sample, and US hospitals
| Sample | N | Median, beds | Interquartile range, beds | Percent teaching |
|---|---|---|---|---|
| US | 5980 | 74 | 20-172 | 20 |
| NNIS sample | 227 | 360 | 250-500 | 58 |
| Study sample | 289 | 310 | 207-464 | 62 |
US, United States hospitals reporting to the Centers for Medicare and Medicaid Services in 2005; NNIS, National Nosocomial Infections Surveillance System hospitals in 1999.6
Discussion
This study provides a current and comprehensive description of infection prevention and control program staffing, organization, and support in a select group of hospitals participating in the NHSN across the nation. Smaller hospitals used significantly more staffing hours per bed, although there was wide variation among these hospitals. Although the sampled hospitals had demographics similar to the NNIS hospitals, their staffing levels were lower than the 1 IP per 115 beds that Richards et al found in surveying 229 NNIS hospitals in 1999.6 This lower level of staffing may be concerning because the roles and responsibilities of IPs are increasing.9,13 It is important to note that this report is purely descriptive, and no effort was made to determine what constitutes adequate infection prevention and control staffing, although this has been identified as a gap in the literature by the authors of 2 separate systematic reviews.14,15 Our results are similar to those found in 2 recent state surveys.16,17 For example, in Massachusetts hospitals, the average number of beds per IP was 178, with a median of 166.16
The negative correlation between IP staffing and hospital size has been reported for other health care staff18 and suggests potential economies of scale. That is, in larger hospitals, a single IP is able to provide more service. Although this study does not test directly for economies of scale, the variation of staffing across hospital size clearly illustrates the inappropriateness of assuming a single minimum IP staffing ratio would be adequate across settings.
We found that IPs spent the largest proportion of their time collecting and analyzing data related to infections. This is similar to the results of an expert Delphi panel, which estimated that 39% of IPs' time was spent on surveillance and identifying infections,19 and to a recent survey of New York IPs, who reported spending 45% of their time on surveillance.17 Although accurate and consistent case finding is important in reducing infections, actively working to change the organizational culture has also been found to be an important part of the multifaceted approach needed to promote patient safety and reduce infections.20,21 It is possible that this aspect of the role was not captured in our survey. In the most recent practice analysis published by the CBIC, a new activity category entitled “Management and communication” was identified.13 It is not clear whether this category fully captures the new roles and responsibilities; we encourage future researchers to assess IPs' leadership and involvement in teamwork and quality improvement activities aimed at establishing evidence-based clinical practices.
The relatively few physician HE hours devoted to the program could be concerning. As with all adverse patient safety events, the prevention of infections requires a team approach.22 Indeed, the infrastructure needed to promote patient safety includes an interdisciplinary, collaborative environment with high-level leadership.23 On a positive note, however, respondents reported high levels of access to key decision makers, especially in the case of problems.
We found a relatively high number of director positions in this study. However, the survey did not assess the budgetary responsibilities of these directors, and the position may have been interpreted differently by respondents.
The results reported here provide a snapshot of the experience and qualifications of IPs and HEs working in these hospital infection prevention and control programs. Although the majority of the HEs were experienced, almost one quarter of the IPs had less than 2 years of experience. This has important implications for the APIC, which is the primary organization for training and educating these professionals; reaching out to new IPs to provide education and support role transition should be a top priority. It is heartening that the majority of respondents reported funds for continuing education, which is clearly needed given the high number of new IPs. Furthermore, a substantial proportion of IPs were certified. The certification process may also be important for these new IPs because the certification examination is designed to measure minimum competence for practice.24 It is possible that more hospitals are recognizing certification as an indicator of competence for hiring.
This report has a number of strengths and limitations. First, updating a well-developed survey with attention to psychometric properties and pilot testing is a strength of the research design. The use of an electronic survey made it possible to conduct large-scale data collection relatively inexpensively; minimized the time needed for the distribution and response cycles; and included skip pattern logic (similar to “skip to question X” in paper-based surveys) and cross-field validity checking during data entry (eg, warnings were given when summative responses did not equal the prescribed total, and respondents were cued when data were missing), which increases the validity of the data.25,26 Furthermore, the high response rate is a definite strength and increases the validity of the findings. However, this was a select sample, and the hospitals do not represent hospitals across the nation. Although NNIS hospitals have been distributed across most of the United States, large hospitals and hospitals in the mid-Atlantic and south Atlantic regions have been overrepresented.6 The profile is changing now that there is open enrollment through the new Web-based system and some states mandate that hospitals provide data using NHSN; nevertheless, our sample was more similar to the previous NNIS cohort of hospitals than to other hospitals in the country. Even so, the IP staffing we found was similar to that found in other state-level surveys.16,17 Last, measurement bias must always be considered in self-report survey data.27 In pilot work, we found congruency between the IP self-report survey responses and institutional policies. This is consistent with a previous study in which IP self-reported data on infection control processes were validated by site visits that included interviews, observation of staff, review of documentation (including records and minutes), and environmental rounds; high congruency was found.28
The context of health care has changed: emergency preparedness, patient safety, mandatory reporting of infection rates, and lack of reimbursement for HAIs are among the activities and issues with which the staff of infection control programs may be contending. In the future, infection prevention and control programs will likely play an increasingly critical role in hospitals. Whereas this study provides a current snapshot of the staffing and professional roles of those employed in these programs, further research is needed to identify effective staffing levels for various hospital types as well as to examine how the roles are changing over time.
Acknowledgments
The authors thank Sarah Jordan for her help in the data analysis and development of this manuscript and our advisory board members and others who reviewed the survey and provided input regarding the study including Steven Albert, Donald Goldmann, Janet Haas, Robert Haley, Nancy Kupka, Denise Murphy, Eli Perencevich, Lisa Saiman, and Jack Zwanziger.
Supported by the National Institute of Nursing Research R01NR010107, and pilot work conducted for this study was funded by NIH/NCRR P20RR020616.
Footnotes
Conflicts of interest: None to report.
References
1. Haley RW, Quade D, Freeman HE, Bennett JV. Appendix B: design of the Preliminary Screening Questionnaire and specifications for computing indexes of surveillance and control. Am J Epidemiol 1980;111:613-21.
2. Haley RW. The usefulness of a conceptual model in the study of the efficacy of infection surveillance and control programs. Rev Infect Dis 1981;3:775-80.
3. Haley RW, Quade D, Freeman HE, Bennett JV. The SENIC Project. Study on the Efficacy of Nosocomial Infection Control (SENIC Project): summary of study design. Am J Epidemiol 1980;111:472-85.
4. Quade D, Culver DH, Haley RW, Whaley FS, Kalsbeek WD, Hardison CD, et al. The SENIC sampling process: design for choosing hospitals and patients and results of sample selection. Am J Epidemiol 1980;111:486-502.
5. Haley RW, Culver DH, White JW, Morgan WM, Emori TG, Munn VP, et al. The efficacy of infection surveillance and control programs in preventing nosocomial infections in US hospitals. Am J Epidemiol 1985;121:182-205.
6. Richards C, Emori TG, Edwards J, Fridkin S, Tolson J, Gaynes R. Characteristics of hospitals and infection control professionals participating in the National Nosocomial Infections Surveillance System 1999. Am J Infect Control 2001;29:400-3.
7. Murphy DM. From expert data collectors to interventionists: changing the focus for infection control professionals. Am J Infect Control 2002;30:120-32.
8. Dupont N. Real pro. Healthwire 2008 May/June. Available at: http://www.aft.org/pubs-reports/healthwire/issues/mayjune08/real_pro.htm. Accessed July 2, 2008.
9. Goldrick BA, Dingle DA, Gilmore GK, Curchoe RM, Plackner CL, Fabrey LJ. Practice analysis for infection control and epidemiology in the new millennium. Am J Infect Control 2002;30:437-48.
10. Cook RJ. Kappa. In: Armitage P, Colton T, editors. The encyclopedia of biostatistics. New York: John Wiley & Sons; 1998. p. 2160-6.
11. Centers for Disease Control and Prevention. NHSN manual: patient safety component protocol. Available at: http://www.cdc.gov/ncidod/dhqp/pdf/nhsn/NHSN_Manual_%20Patient_Safety_Protocol022307.pdf. Accessed February 23, 2007.
12. Dillman D. Mail and telephone surveys: the total design method. New York: Wiley; 1978.
13. Curchoe R, Fabrey L, LeBlanc M. The changing role of infection prevention practice as documented by the Certification Board of Infection Control and Epidemiology practice analysis survey. Am J Infect Control 2008;36:241-9.
14. Haas JP. Measurement of infection control department performance: state of the science. Am J Infect Control 2006;34:543-9.
15. Stone PW, Pogorzelska M, Kunches L, Hirschhorn L. Nurse staffing and HAI: a systematic review. Clin Infect Dis 2008;47:937-44.
16. Betsy Lehman Center for Patient Safety and Medical Error Reduction, JSI Research and Training Institute Inc, and Massachusetts Department of Public Health. Prevention and control of healthcare-associated infections in Massachusetts. Part 2: findings from complementary research activities. 2008. Available at: http://www.mass.gov/Eeohhs2/docs/dph/patient_safety/haipcp_final_report_pt2.pdf. Accessed August 21, 2008.
17. New York State Department of Health. New York State hospital-acquired infection reporting system, pilot year 2007. Available at: http://www.health.state.ny.us/nysdoh/hospital/reports/hospital_acquired_infections/2007/docs/hospital-acquired_infection-full_report.pdf. Accessed August 21, 2008.
18. Jacobs P, Rapoport J, Edbrooke D. Economies of scale in British intensive care units and combined intensive care/high dependency units. Intensive Care Med 2004;30:660-4.
19. O'Boyle C, Jackson M, Henly SJ. Staffing requirements for infection control programs in US health care facilities: Delphi project. Am J Infect Control 2002;30:321-33.
20. Pronovost PJ, Berenholtz SM, Goeschel C, Thom I, Watson SR, Holzmueller CG, et al. Improving patient safety in intensive care units in Michigan. J Crit Care 2008;23:207-21.
21. Centers for Disease Control and Prevention. Reduction in central line-associated bloodstream infections among patients in intensive care units-Pennsylvania, April 2001-March 2005. MMWR 2005;54:1013-6.
22. Leape LL, Berwick DM, Bates DW. What practices will most improve safety? Evidence-based medicine meets patient safety. JAMA 2002;288:501-7.
23. Wong P, Helsinger D, Petry J. Providing the right infrastructure to lead the culture change for patient safety. Jt Comm J Qual Improv 2002;28:363-72.
24. Goldrick BA. The Certification Board of Infection Control and Epidemiology white paper: the value of certification for infection control professionals. Am J Infect Control 2007;35:150-6.
25. Dillman DA. Why choice of survey mode makes a difference. Public Health Rep 2006;121:11-3.
26. Dillman DA, Smyth JD. Design effects in the transition to web-based surveys. Am J Prev Med 2007;32:S90-6.
27. Donaldson SI, Grant-Vallone EJ. Understanding self-report bias in organizational behavior research. J Bus Psychol 2002;17:245-60.
28. Larson EL, Quiros D, Lin SX. Dissemination of CDC's Hand Hygiene Guideline and impact on infection rates. Am J Infect Control 2007;35:666-75.

