Author manuscript; available in PMC: 2014 Mar 1.
Published in final edited form as: Infect Control Hosp Epidemiol. 2013 Jan 18;34(3):238–244. doi: 10.1086/669527

Central Line–Associated Infections as Defined by the Centers for Medicare and Medicaid Services’ Hospital-Acquired Condition versus Standard Infection Control Surveillance: Why Hospital Compare Seems Conflicted

Rebekah W Moehring 1, Russell Staheli 2, Becky A Miller 3, Luke Francis Chen 4, Daniel John Sexton 4, Deverick John Anderson 4
PMCID: PMC3628677  NIHMSID: NIHMS454742  PMID: 23388357

Abstract

OBJECTIVE

To evaluate the concordance of case-finding methods for central line–associated infection as defined by Centers for Medicare and Medicaid Services (CMS) hospital-acquired condition (HAC) compared with traditional infection control (IC) methods.

SETTING

One tertiary care and 2 community hospitals in North Carolina.

PATIENTS

Adult and pediatric hospitalized patients determined to have central line infection by either case-finding method.

METHODS

We performed a retrospective comparative analysis of infection detected using HAC versus standard IC central line–associated bloodstream infection surveillance from October 1, 2007, through December 31, 2009. One billing and 2 IC databases were queried and matched to determine the number and concordance of cases identified by each method. Manual review of 25 cases from each discordant category was performed. Sensitivity and positive predictive value (PPV) were calculated using IC as criterion standard.

RESULTS

A total of 1,505 cases were identified: 844 by International Classification of Diseases, Ninth Revision (ICD-9), and 798 by IC. A total of 204 cases (24%) identified by ICD-9 were deemed not present at hospital admission by coders. Only 112 cases (13%) were concordant. HAC sensitivity was 14% and PPV was 55% compared with IC. Concordance was low regardless of hospital type. Primary reasons for discordance included differences in surveillance and clinical definitions, clinical uncertainty, and poor documentation.

CONCLUSIONS

The case-finding method used by CMS HAC and the methods used for traditional IC surveillance frequently do not agree. This can lead to conflicting results when these 2 measures are used as hospital quality metrics.


Most healthcare providers agree that many of the 1.7 million healthcare-associated infections (HAIs) that occur each year in the United States are preventable.1 HAI prevention campaigns promoted by individuals, hospitals, professional societies, and government agencies have resulted in substantial successes. For example, the Centers for Disease Control and Prevention (CDC) reports that rates of central line–associated bloodstream infection (CLABSI) in US intensive care units (ICUs) decreased by an estimated 58% from 2001 to 2009, saving approximately 6,000 lives.2

The collective desires to decrease HAI, save human lives, and save money have led to the development of external quality measures to use in exerting financial pressure on hospitals to encourage prevention practices. The hospital-acquired condition (HAC) is a quality measure created by the Centers for Medicare and Medicaid Services (CMS) to reduce payments to hospitals for hospital-acquired complications of care. The intent was to incentivize hospitals to develop successful programs to prevent HACs and decrease overall costs. The CMS final rule went into effect on October 1, 2008, and stipulated that hospitals would no longer receive additional reimbursement for 10 HACs, 3 of which were HAIs.3,4

HAI cases targeted for reduced payment were identified using administrative billing code data derived from a billing coder’s interpretation of the clinician’s documentation of disease and timing of onset. The CMS ruling required billing coders to determine whether infections were present on admission (POA) or acquired after admission. Infections designated as POA were eligible for additional reimbursement, whereas those acquired after admission were not.

The validity of using billing data as the source for HAI surveillance has been questioned by authors of multiple studies. Most authors who have studied this topic emphasize that billing data are inaccurate when compared with traditional surveillance methods.5–9 At present, there is no measure of central line–related bloodstream infection that perfectly reflects clinical truth. Traditional infection control (IC) methods have been previously criticized for elements of subjectivity and are considered a proxy measure for the true, unknown incidence. However, policy-makers and researchers agree that, although absolute perfection is not achievable, there is a lower threshold of validity below which stakeholders will not accept a measure as meaningful; measures that fall below this threshold could stifle efforts to drive improvements in care.10

Few studies have examined the use of billing codes to identify patients with CLABSI since the 2008 CMS rule. This retrospective study was designed to compare the concordance of criteria from the 2008 CMS rule that defined HAC with traditional IC methods to identify cases of central line–related infections. We hypothesized that the 2 case-finding methods would frequently disagree.

METHODS

We performed a comparative analysis of CLABSI detected using the CMS HAC versus standard IC surveillance from October 1, 2007, through December 31, 2009. The study included 3 hospitals within the Duke Health System: a 950-bed academic tertiary care hospital and 2 community hospitals with 200 and 350 beds each. The study was approved by the institutional review boards of all participating hospitals.

Case-Finding Methods and Study Definitions

Three separate electronic databases were used to identify cases. Billing codes were accessed via the Duke University Healthcare System Enterprise Data Repository. This data repository was queried for all hospital encounter-based cases with International Classification of Diseases, Ninth Revision (ICD-9) code 999.31 (“infection due to central venous catheter”) at the 3 hospitals. A separate query of the hospital billing data in the Enterprise Data Repository was completed to collect the POA variable. Results of these 2 queries were then linked to match cases of ICD-9 code 999.31 with their POA variable. A case identified by the 2008 CMS rule criteria was defined by ICD-9 Clinical Modification code 999.31 and a POA variable of “no” and will hereafter be referred to as HAC.3,4
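The linkage step can be illustrated with a short sketch. The following Python snippet is illustrative only; the table and column names (encounter_id, icd9_code, poa_flag) are assumptions, not the actual schema of the Enterprise Data Repository. It shows how results of the two queries could be joined and an HAC flag derived from the 2008 CMS rule criteria.

```python
import pandas as pd

# Hypothetical extracts from the two billing queries (column names are
# illustrative, not the repository's actual schema).
icd9_cases = pd.DataFrame({
    "encounter_id": [101, 102, 103, 104],
    "icd9_code": ["999.31"] * 4,
})
poa_flags = pd.DataFrame({
    "encounter_id": [101, 102, 103, 104],
    "poa_flag": ["N", "Y", "U", "N"],  # N = no, Y = yes, U = unreported/not used
})

# Link each ICD-9 999.31 case to its POA variable by hospital encounter.
cases = icd9_cases.merge(poa_flags, on="encounter_id", how="left")

# 2008 CMS rule criteria (HAC): ICD-9-CM code 999.31 AND POA = "no".
cases["hac"] = (cases["icd9_code"] == "999.31") & (cases["poa_flag"] == "N")
print(cases[cases["hac"]])
```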

Trained coding personnel used CMS Official Coding Guidelines to assign billing codes. Billing codes and POA designations were based on documentation in the medical record of the diagnosis of central line–associated infection and the presence of infection at hospital admission. Additional software (3M Coding and Reimbursement System) was also used to assist coders in assigning the proper billing codes. Each case coded to receive reduced reimbursement was then verified by a Health Information Management specialist per usual practice in our healthcare system.

Cases of CLABSI diagnosed using standard IC surveillance were identified using the Duke Infection Control Outreach Network Surveillance Database and the Duke University Hospital Infection Control and Epidemiology Surveillance Database. These 2 databases contain surveillance data on all HAIs identified by infection preventionists using standardized definitions endorsed by the CDC National Healthcare Safety Network (NHSN).11,12 IC surveillance was targeted to ICU settings during the study period.

A concordant case was defined as a case identified by both HAC and IC within the same hospital encounter period. Discordant cases were defined as cases identified by one case-finding method but not the other. Twenty-five cases from each discordant category (HAC positive and IC negative or IC positive and HAC negative) were randomly selected from Duke University Hospital for manual chart review. A total of 4 reviewers, all infectious disease faculty or fellows with interests in hospital epidemiology, independently reviewed between 10 and 15 randomly assigned cases using a standardized collection form. Reviewers were asked to designate each case using criteria for NHSN-defined CLABSI and to note clinicians’ documentation of a line-related infection. Each reviewer independently came to a decision regarding the presence of CLABSI and was blinded to which method had identified each case.
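As a rough sketch of the matching and sampling logic (using made-up encounter identifiers, not study data), concordance can be treated as the intersection of the encounter IDs flagged by each case-finding method, with 25 cases drawn at random from each discordant set for chart review:

```python
import random

# Illustrative encounter-ID sets for each case-finding method (not study data).
hac_encounters = set(range(1000, 1200))
ic_encounters = set(range(1150, 1500))

concordant = hac_encounters & ic_encounters   # identified by both methods
hac_only = hac_encounters - ic_encounters     # HAC positive, IC negative
ic_only = ic_encounters - hac_encounters      # IC positive, HAC negative

# Randomly select 25 cases from each discordant category for manual chart review.
random.seed(1)
review_sample = {
    "HAC positive / IC negative": random.sample(sorted(hac_only), 25),
    "IC positive / HAC negative": random.sample(sorted(ic_only), 25),
}
print(len(concordant), len(hac_only), len(ic_only))
```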

Analysis

We first calculated the proportion of concordance between the 2 methods. We calculated sensitivity and positive predictive value (PPV) according to standard epidemiologic methods using the HAC method as the test and traditional IC surveillance methods as the criterion standard. Specificity and negative predictive value could not be determined, because the number of cases that failed to be detected by both methods was unknown. Data were analyzed as a whole and then stratified by hospital type. Discordant cases were categorized as “agree” or “disagree” on the basis of the manual chart reviewer’s agreement or disagreement with the initial case-finding method. We then identified the common specific reasons for discordance.
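The calculation itself reduces to simple proportions. The sketch below (not the study's code) takes the concordant and discordant case counts and returns concordance, sensitivity, and PPV with IC surveillance as the criterion standard; as noted above, specificity and negative predictive value cannot be derived because cases missed by both methods are unknown.

```python
def concordance_stats(concordant: int, ic_only: int, hac_only: int) -> dict:
    """Concordance, sensitivity, and PPV of the HAC case-finding method,
    treating IC surveillance as the criterion standard. Specificity and NPV
    are not computable: cases missed by both methods (true negatives) are unknown."""
    total = concordant + ic_only + hac_only
    return {
        "concordance": concordant / total,                   # agreement among all detected cases
        "sensitivity": concordant / (concordant + ic_only),  # HAC-detected share of IC cases
        "ppv": concordant / (concordant + hac_only),         # IC-confirmed share of HAC cases
    }

# Overall study counts (Table 1): 112 concordant, 686 IC only, 92 HAC only.
print(concordance_stats(112, 686, 92))
# -> concordance ~0.13, sensitivity ~0.14, ppv ~0.55
```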

Some cases identified by billing code carried a POA indicator of “unreported/not used,” which was not accepted by CMS for HAC after the CMS Change Request 6086 was issued in June 2008. Therefore, we completed a secondary analysis that grouped the “unreported/not used” cases with the POA “no” cases and recalculated the estimates as if these patients also had an HAC. Finally, the study period included 1 year before implementation of the financial incentive and 14 months after implementation. Because implementation of the rule and the financial incentive may have changed billing and documentation practices after October 1, 2008, we calculated the estimates for the first and second year of the study separately to illustrate the financial incentive effect.

RESULTS

Concordance, Sensitivity, and PPV

A total of 1,505 vascular catheter–related infections were identified in the 3 hospitals during the study period. Of these, 798 cases (53%) were identified by IC, and 844 cases (56%) were identified by ICD-9 code 999.31 (Figure 1).

FIGURE 1.

Central line–associated bloodstream infection (CLABSI) case-finding as defined by 2008 Centers for Medicare and Medicaid Rule compared with infection control (IC) surveillance, Duke Health System, 2007–2009. Twenty-five of the cases that were identified by International Classification of Diseases, Ninth Revision (ICD-9), codes but deemed POA “yes” (asterisk) were also identified by IC as CLABSI. These 25 cases were counted as identified by IC only in the comparative analysis of the hospital-acquired condition (HAC) case-finding method defined by Centers for Medicare and Medicaid Final Rule 2008 (which indicates the presence of ICD-9 code 999.31 and infection not present on admission) versus CLABSI. POA, present on admission.

Of the 844 cases identified by ICD-9 999.31, 204 (24%) were POA “no,” 563 (67%) were POA “yes,” and 77 (9%) were coded as “unreported/not used” in the billing database. Twenty-five cases were identified by ICD-9 codes, deemed POA “yes,” and were also identified by IC as CLABSI. These 25 cases were counted as identified only by IC in the comparative analysis, because they did not meet the definition of HAC. The total number of concordant cases was 112 (13%) of 890 when the cases identified by HAC (n = 204) were matched with the IC-identified cases (n = 798). The total number of discordant cases was 778 (87%); 686 (77%) of the discordant cases were identified by IC but not by HAC, whereas 92 (10%) were identified by HAC but not by IC. The sensitivity of HAC was 14%, and the PPV of HAC was 55% when compared with IC surveillance (Table 1).

TABLE 1.

Central Line–Associated Bloodstream Infection Using Criteria from Centers for Medicare and Medicaid Services (CMS) 2008 Rule Compared with Standard Infection Control (IC) Surveillance, Duke Health System, 2007–2009

Variable | No. (%) of cases | Sensitivity, % | PPV, %
Overall (n = 890) | | 14 | 55
 Concordant | 112 (13) | |
 IC only | 686 (77) | |
 HAC only | 92 (10) | |
Community hospitals (n = 111) | | 10 | 33
 Concordant | 9 (8) | |
 IC only | 84 (76) | |
 HAC only | 18 (16) | |
Tertiary hospital (n = 779) | | 15 | 58
 Concordant | 103 (13) | |
 IC only | 602 (77) | |
 HAC only | 74 (9) | |

NOTE. Sensitivity and PPV were calculated using IC surveillance as criterion standard. HAC, hospital-acquired condition case-finding method defined by CMS Final Rule 2008 (which indicates presence of International Classification of Diseases, Ninth Revision, code 999.31, and infection not present on admission); PPV, positive predictive value.

A total of 779 cases were identified in the tertiary care center. The number of concordant cases was 103 (13%), with a sensitivity of 15% and a PPV of 58%. A total of 111 cases were identified at the 2 community hospitals. Only 9 cases (8%) were concordant, with a sensitivity of 10% and a PPV of 33% (Table 1).

Reasons for Discordance Identified by Chart Review

In-depth review of discordant cases revealed several potential reasons for the lack of concordance between the 2 methods (Figure 2). Chart reviewers generally agreed with the classification of the discordant cases that were identified by IC and not by the HAC method; 22 (88%) of the 25 IC-identified cases were in agreement. Reviewers found that 3 (14%) of these 22 cases had a line-associated infection that was clearly documented by the clinician and overlooked by billing personnel. The majority of IC-identified CLABSI that the HAC method failed to identify had poor or absent provider documentation of the infection (n = 19 of 22 HAC false-negative classifications). Examples of failure to clearly document included failure to mention an infection (n = 3), failure to mention the presence of a central line (n = 13), and failure to state the diagnosis despite plans to remove the line for infectious symptoms (n = 3). Reviewers did not classify the remaining 3 (12%) of the 25 reviewed cases in this discordant group as CLABSI. All 3 of these cases had cultures with classic skin contaminant organisms, indicating disagreement over the application of criterion 2 of the NHSN definition.11,12

FIGURE 2.

Chart review of discordant cases of central line–associated bloodstream infection (CLABSI), Duke Health System, 2007–2009. Note that chart reviewers used National Healthcare Safety Network CLABSI definitions to classify the presence of CLABSI. HAC, hospital-acquired condition case-finding method defined by Centers for Medicare and Medicaid Final Rule 2008 (which indicates the presence of International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] code 999.31 and infection not present on admission); IC, infection control; LCBSI, laboratory-confirmed bloodstream infection; PIV, peripheral intravenous catheter.

The discordant group of patients identified by HAC and not by IC was almost evenly split between agreement and disagreement with the reviewers (Figure 2). Thirteen (52%) of the 25 reviewed cases were designated as CLABSI that were missed by IC surveillance (IC false-negative classifications). However, all 13 cases occurred on non-ICU units that were not targeted by IC surveillance during the study period. The remaining 12 cases (48%) were in disagreement with the HAC method (HAC false-positive classifications). Ten of 12 patients had no laboratory-confirmed bloodstream infection (LCBSI) at all; 6 of 10 had zero positive blood cultures, and 3 had single positive cultures with classic skin contaminants. The final patient without LCBSI had an infection secondary to a urinary tract infection that was not clearly documented by the treating clinician. In 4 of the 6 cases with negative blood cultures, the patient had sepsis of unknown source. The clinician documented a working diagnosis and suspicion of the line as the source, but the patients ultimately had negative blood cultures. The other 2 HAC-defined cases with negative blood cultures were in patients who had evidence of local cellulitis or an exit site infection associated with the line but were not systemically ill or bacteremic. The remaining 2 patients with positive blood cultures did not have a central line in place. One patient had an infection associated with a peripheral intravenous catheter. The other patient had a central line placed after the bloodstream infection, but no line was in place on the day that the positive culture specimen was obtained.

Secondary Analyses

For 77 (9%) of the cases identified by ICD-9 code, POA was reported as “unreported/not used.” This coding is no longer permitted for HACs, and for the purposes of the primary analysis, we assumed that such cases would not have been counted as HACs. None of the POA “unreported/not used” cases were also identified by IC as CLABSI. When these cases were grouped with the POA “no” cases as if they were HACs, the estimates were not substantially different: concordance 12%, sensitivity 14%, and PPV 40%.
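As a quick check of this recalculation (a back-of-the-envelope sketch using the counts reported above, not the study's code), folding the 77 “unreported/not used” cases into the HAC-only group reproduces the reported estimates:

```python
# Fold the 77 POA "unreported/not used" cases into the HAC-only discordant group;
# none overlapped with IC, so the concordant and IC-only counts are unchanged.
concordant, ic_only, hac_only = 112, 686, 92 + 77
total = concordant + ic_only + hac_only                # 967 unique cases
print(round(concordant / total, 2))                    # concordance ~0.12
print(round(concordant / (concordant + ic_only), 2))   # sensitivity ~0.14
print(round(concordant / (concordant + hac_only), 2))  # PPV ~0.40
```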

To further assess the impact of the implementation period on our study results, we calculated estimates for before and after October 1, 2008. In the year before implementation of the rule, concordance was 39 cases (8%), with a sensitivity of 9% and a PPV of 41%. The year after October 1, 2008, demonstrated slight improvement: concordance (60 cases, 15%), sensitivity (16%), and PPV (61%).

DISCUSSION

Valid interhospital comparisons are difficult because of surveillance bias, differences in patient case mix, and poorly designed methods for reporting outcomes.13 Multiple quality metrics are derived from billing code data, mostly because of the convenience of tapping into an existing data source that is universally available. Metrics based on billing codes are routinely used by hospital administrators, government agencies, and consumer groups to compare HAI among hospitals. These measures include the Agency for Healthcare Research and Quality (AHRQ)–defined patient safety indicators (PSIs) and pediatric quality indicators and the CMS-defined HAC. Our study demonstrates that HAC identified a substantially smaller number of cases of CLABSI than did standard IC methods and that the 2 methods frequently conflict when compared at a patient-by-patient level. Central venous catheter HAC and the NHSN CLABSI are presented side by side on the Hospital Compare website (http://medicare.gov/hospitalcompare). For our hospital and others, the 2 methods can lead to conflicting conclusions regarding the quality of infection prevention practices for any given hospital.

The field of hospital epidemiology has been engaged in surveillance of HAI for over 25 years. Surveillance systems in place at individual hospitals rely on an intensive process of manual case review performed by trained infection preventionists using standardized definitions and guidelines endorsed by the CDC.11 Traditional surveillance for HAI is time-consuming and admittedly imperfect, and it is vulnerable to surveillance bias because of both objective and subjective components in the definitions. However, the sensitivity and specificity of traditional infection control methods are estimated to be 68%–86% and 98%–99%, respectively, compared with expert review.14 Efforts to improve surveillance methods have been ongoing for more than 25 years as debate over NHSN surveillance definitions continues.15,16 In contrast, administrative billing codes are determined by nonclinical coders’ interpretation of often incomplete, hurried, and/or illegible clinician documentation of judgments made with incomplete information. With some exceptions, most billing codes are neither reviewed nor validated for clinical accuracy. Their primary purpose is financial reimbursement. As a result, many hospital epidemiologists have largely abandoned the use of administrative billing claims data as a method of identifying HAI, unless combined with other data sources and then rigorously validated for accuracy.17

A number of earlier studies have shown inaccuracies in using billing codes to identify HAI.5–9 Studies completed before the 2008 CMS rule generally concluded that claims data overestimated the incidence of HAI by a ratio of approximately 5 to 1 compared with traditional IC methods.5,6 A study by Stevenson et al5 estimated a 15% PPV for CLABSI identified with a set of multiple ICD-9 codes compared with traditional IC. Stone et al7 tested the AHRQ PSI-7 tool for CLABSI case finding, which used 2 ICD-9 codes, and again reported very low concordance with traditional IC surveillance: only 8 (5%) of 170 cases were concordant. After implementation of the 2008 CMS rule, Meddings et al8 were the first to examine the accuracy of billing codes plus the POA indicator for detecting cases of catheter-associated urinary tract infection (CA-UTI). The study sampled 80 patients with UTI and found that billing codes failed to correctly report even a single case of CA-UTI compared with a physician chart abstractor (0 vs 36 cases).8 The authors concluded that the financial impact of the CMS 2008 rule's penalty for CA-UTI was minimal, because the ability of the billing code method to identify cases was poor.

Our study is novel in its evaluation of CLABSI using the case-finding method specified by the 2008 CMS rule. The results are consistent with earlier studies in indicating an astoundingly low concordance. The PPV of HAC is higher than that found in earlier studies of billing codes for HAI (55% vs ~20%), which we believe is attributable to the addition of the POA criterion. However, the POA criterion also reduced the proportion of agreement with CLABSI and led to a substantially smaller estimate of the burden of disease. This insensitivity and underestimation were largely attributable to poor or absent documentation and clinical uncertainty. We believe that the burden of disease and the number of patients whom an individual hospital should target for prevention efforts are much larger than the numbers identified by HAC. Furthermore, the financial penalty levied by the 2008 rule is small compared with the true cost.

Limitations of this study include its observational, retrospective design; the inability to identify cases that were missed by both methods; and the scope of the chart review, which was limited to a single center and a single reviewer per case. IC surveillance targeted only ICUs during our study period, which contributes to some of the discrepancy between the 2 methods. However, the overall number of cases captured by HAC (n = 204) was much smaller than the number captured by IC (n = 798); the additional cases in non-ICU patients that were missed by IC were far outweighed by the poor sensitivity of the hospital-wide HAC. Hypothetically, if every additional case identified by HAC and not by IC were in a non-ICU patient with an NHSN-defined CLABSI, and if IC could identify zero other CLABSIs, the sensitivity estimate would still be 22% at best. Current data presented on the Hospital Compare website also reflect ICU CLABSI identified by NHSN methods, in contrast to whole-hospital HAC. This distinct difference in sampling populations is not explained to the average user of the Hospital Compare website. Our study did not specifically evaluate the POA indicator for accuracy, although we did find 25 cases identified as CLABSI that were deemed POA “yes” and not counted as HACs. Specific documentation of POA by the treating clinician was largely absent in the manually reviewed cases (data not shown); however, coders may have contacted clinicians verbally in an undocumented process. Finally, this study evaluated case-finding methods only for CLABSI; therefore, the results are not generalizable to other types of HAI.

Every individual using these 2 quality measures must be fully informed about the methods used to derive the data and why the measures may conflict. The information age, consumer scrutiny, and the punitive approach make the process of developing fair outcomes measures more difficult and contentious.10 Measurement of central line infections by HAC is ongoing; reimbursement penalties as well as the proposed value-based purchasing program will be based on claims data.18 Healthcare in the United States has a critical need for research, funding, innovation, rigorous validation, and an informed national debate over outcomes measures. Hospital administrators, policy makers, epidemiologists, and clinicians must come together to make collaborative decisions about which measures to use, how to improve them, how to ensure their validity, and how to encourage meaningful interpretation.

Footnotes

Potential conflicts of interest. All authors report no conflicts of interest relevant to this article. All authors submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and the conflicts that the editors consider relevant to this article are disclosed here.

References

1. Klevens RM, Edwards JR, Richards CL Jr, et al. Estimating health care–associated infections and deaths in U.S. hospitals, 2002. Public Health Rep. 2007;122(2):160–166. doi:10.1177/003335490712200205.
2. Centers for Disease Control and Prevention. Vital signs: central line-associated blood stream infections—United States, 2001, 2008, and 2009. MMWR Morb Mortal Wkly Rep. 2011;60(8):243–248.
3. Medicare program: changes to the hospital inpatient prospective payment systems and fiscal year 2008 rates. Fed Regist. 2007;72(162):47129–48175.
4. Medicare program: changes to the hospital inpatient prospective payment systems and fiscal year 2009 rates; payments for graduate medical education in certain emergency situations; changes to disclosure of physician ownership in hospitals and physician self-referral rules; updates to the long-term care prospective payment system; updates to certain IPPS-excluded hospitals; and collection of information regarding financial relationships between hospitals. Final rules. Fed Regist. 2008;73(161):48433–49084.
5. Stevenson KB, Khan Y, Dickman J, et al. Administrative coding data, compared with CDC/NHSN criteria, are poor indicators of health care-associated infections. Am J Infect Control. 2008;36(3):155–164. doi:10.1016/j.ajic.2008.01.004.
6. Sherman ER, Heydon KH, St John KH, et al. Administrative data fail to accurately identify cases of healthcare-associated infection. Infect Control Hosp Epidemiol. 2006;27(4):332–337. doi:10.1086/502684.
7. Stone PW, Horan TC, Shih HC, Mooney-Kane C, Larson E. Comparisons of health care-associated infections identification using two mechanisms for public reporting. Am J Infect Control. 2007;35(3):145–149. doi:10.1016/j.ajic.2006.11.001.
8. Meddings J, Saint S, McMahon LF Jr. Hospital-acquired catheter-associated urinary tract infection: documentation and coding issues may reduce financial impact of Medicare’s new payment policy. Infect Control Hosp Epidemiol. 2010;31(6):627–633. doi:10.1086/652523.
9. Julian KG, Brumbach AM, Chicora MK, et al. First year of mandatory reporting of healthcare-associated infections, Pennsylvania: an infection control-chart abstractor collaboration. Infect Control Hosp Epidemiol. 2006;27(9):926–930. doi:10.1086/507281.
10. Pronovost PJ, Lilford R. Analysis and commentary: a road map for improving the performance of performance measures. Health Aff (Millwood). 2011;30(4):569–573. doi:10.1377/hlthaff.2011.0049.
11. Horan TC, Andrus M, Dudeck MA. CDC/NHSN surveillance definition of health care-associated infection and criteria for specific types of infections in the acute care setting. Am J Infect Control. 2008;36(5):309–332. doi:10.1016/j.ajic.2008.03.002.
12. Centers for Disease Control and Prevention. Device-associated module: CLABSI. Published 2010. Accessed March 14, 2011. http://www.cdc.gov/nhsn/pdfs/pscmanual/4psc_clabscurrent.pdf.
13. Haut ER, Pronovost PJ. Surveillance bias in outcomes reporting. JAMA. 2011;305(23):2462–2463. doi:10.1001/jama.2011.822.
14. Emori TG, Edwards JR, Culver DH, et al. Accuracy of reporting nosocomial infections in intensive-care-unit patients to the National Nosocomial Infections Surveillance System: a pilot study. Infect Control Hosp Epidemiol. 1998;19(5):308–316. doi:10.1086/647820.
15. Sexton DJ, Chen LF, Anderson DJ. Current definitions of central line–associated bloodstream infection: is the emperor wearing clothes? Infect Control Hosp Epidemiol. 2010;31(12):1286–1289. doi:10.1086/657583.
16. Fraser TG, Gordon SM. CLABSI rates in immunocompromised patients: a valuable patient centered outcome? Clin Infect Dis. 2011;52(12):1446–1450. doi:10.1093/cid/cir200.
17. Jhung MA, Banerjee SN. Administrative coding data and health care–associated infections. Clin Infect Dis. 2009;49(6):949–955. doi:10.1086/605086.
18. Medicare program: hospital inpatient value-based purchasing program. Final rule. Fed Regist. 2011;76(88):26490–26547.
