Author manuscript; available in PMC: 2014 Oct 1.
Published in final edited form as: Infect Control Hosp Epidemiol. 2013 Aug 29;34(10):1055–1061. doi: 10.1086/673144

Impact of Change to Molecular Testing for Clostridium difficile Infection on Healthcare Facility–Associated Incidence Rates

Rebekah W Moehring 1,2, Eric T Lofgren 3, Deverick J Anderson 1
PMCID: PMC3967243  NIHMSID: NIHMS563218  PMID: 24018922

Abstract

BACKGROUND

Change from nonmolecular to molecular testing techniques is thought to contribute to the increasing trend in incidence of Clostridium difficile infection (CDI); however, the degree of effect attributable to this change versus other time-related epidemiologic factors is unclear.

METHODS

We compared the change in incidence rate of healthcare facility–associated (HCFA) CDI, expressed as an incidence rate ratio (IRR), among hospitals in the Duke Infection Control Outreach Network before and after the date of the switch from nonmolecular tests to polymerase chain reaction (PCR) testing, using prospectively collected surveillance data from July 2009 to December 2011. Data from 10 hospitals that switched and 22 control hospitals were included. Individual hospital estimates were determined using Poisson regression. We used an interrupted time series approach to develop a Poisson mixed-effects model. Additional regression adjustments were made for clustering of infections and for the proportion of intensive care unit patient-days. The variable for PCR was treated as a fixed effect; other modeled variables were random effects.

RESULTS

For hospitals that switched to PCR, the mean incidence rate of HCFA CDI was 6.0 CDIs per 10,000 patient-days before the switch compared with 9.6 CDIs per 10,000 patient-days after the switch. Hospital-specific IRR estimates comparing the period after the switch with the period before ranged from 0.89 (95% confidence interval [CI], 0.32–2.44) to 6.91 (95% CI, 1.12–42.54). After adjustment in the mixed-effects model, the overall IRR comparing CDI incidence after versus before the switch was 1.56 (95% CI, 1.28–1.90). Time-trend variables did not reach statistical significance.

CONCLUSION

Hospitals that switched from nonmolecular to molecular tests experienced an approximately 56% increase in the rate of HCFA CDI after the change in testing.


Clostridium difficile is a major healthcare-associated pathogen, and C. difficile infection (CDI) causes significant morbidity and mortality, increased length of stay, and increased hospitalization costs.1-3 Thus, CDI is a logical target for surveillance and prevention efforts at the national, state, and individual hospital levels. Multiple studies that included surveillance data up to 2009 have reported that the incidence of healthcare-associated CDI was increasing.4-8 For example, the incidence of CDI increased in our network of community hospitals; in fact, CDI overtook methicillin-resistant Staphylococcus aureus as the most common cause of hospital-acquired infection.9 Multiple theories and explanations exist for the increasing CDI trend in North America.4,10 The emergence and spread of the highly toxigenic BI/NAP1/027 clone is credited with causing a surge of infections.10-13 In addition, widespread use of broad-spectrum and quinolone antibiotics may be driving the increase in this clearly antibiotic-associated illness.14,15 Finally, the growing number of elderly individuals residing in chronic care facilities represents a population at particularly high risk for CDI.16

Several other factors, however, may be introducing a surveillance bias into interpretations of longitudinal trends. For example, definitions that capture community-onset healthcare facility–associated (CO-HCFA) CDI were introduced in 2008, which may have improved our ability to recognize CDI cases that occur after discharge from acute care hospitals.6,17 Also, increasing awareness and focus on CDI may alert clinicians to perform testing for CDI in a larger number of at-risk patients. Finally, increased sensitivity of molecular diagnostic tests has improved the laboratory detection of CDI cases.18-20 Thus, the combination of several of the above factors may affect accurate interpretation of global and local trends when attempting to assess the success of CDI prevention programs or perform external benchmarking.

Earlier studies have attempted to quantify the expected increase in the incidence of CDI after switching to a molecular diagnostic test. These studies, however, primarily used data from a single center, were unable to adjust for time-dependent factors that may be unrelated to the change in testing, or lacked a control group comparator.21-24 The purpose of this study, therefore, was to estimate the effect of the switch from nonmolecular to molecular laboratory diagnostic tests on the incidence of healthcare facility–associated CDI using time-trended CDI surveillance data from a large sample of community hospitals while adjusting for time-dependent and other hospital-level factors.

METHODS

Study Design and Setting

We performed a quasi-experimental, interrupted time series analysis of prospectively collected CDI surveillance data from the Duke Infection Control Outreach Network (DICON) from July 1, 2009, to December 31, 2011. DICON is a network of 43 nonteaching community hospitals in Virginia, North Carolina, South Carolina, Georgia, and Florida that share infection control resources and services from liaison infection preventionists (IPs) and physician epidemiologists.25 Membership in the network is voluntary and based on contractual agreements for consultative, educational, and information technology services. Decisions to switch CDI testing strategies were based on local hospital policy changes and were not controlled by the researchers. Inclusion criteria required all study hospitals to have complete CDI surveillance data for the duration of the study period. Each hospital performed laboratory testing according to local policy and procedures with respect to test type, brand, and whether specimens were sent out to reference laboratories. Each hospital’s clinical microbiology laboratory performed local validations and quality control for its clinical specimens as part of routine practice.

This study was approved by the Duke University Medical Center Institutional Review Board as exempt research. Surveillance data were obtained from a de-identified central database according to established data use agreements between DICON and each local hospital.

Definitions

Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) surveillance definitions were used to identify incident cases of CO-HCFA and hospital-onset HCFA CDI.1 Local IPs at each hospital conducted prospective routine surveillance and microbiology-based manual chart review for use in infection prevention efforts. The study period was chosen to correspond with the 2008 change in CDC surveillance definitions for CDI, which introduced source-type interpretations.17 This change in definition was fully incorporated into routine surveillance and data entry for the DICON database on July 1, 2009, the start of the study period. Recurrent cases were excluded.

The hospital-level cluster variable was defined using statistical control g-charts that plotted the date of each HCFA CDI event against the number of days between infections for each hospital.26 Cluster-periods were identified as weeks in which 5 consecutive time-between-infection data points fell below the mean for the period or formed a downward trend. For switch hospitals, separate means were calculated for the periods before and after the switch to molecular testing to account for the phase change introduced by the new diagnostic test.
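
To make this rule concrete, the following minimal sketch (our illustration, not the authors' code; the input format and function name are assumed) flags the weeks in which 5 consecutive between-infection intervals fall below the mean or form a downward run:

```python
import pandas as pd

def cluster_weeks(event_dates, run_length=5):
    """Return (ISO year, ISO week) pairs flagged as cluster-periods for one hospital."""
    dates = pd.to_datetime(pd.Series(event_dates)).sort_values().reset_index(drop=True)
    gaps = dates.diff().dt.days.dropna().reset_index(drop=True)  # days between infections
    mean_gap = gaps.mean()  # single-phase mean; a switch hospital would use
                            # separate pre- and post-switch means (see text)
    flagged = set()
    for i in range(run_length - 1, len(gaps)):
        window = gaps.iloc[i - run_length + 1 : i + 1]
        if (window < mean_gap).all() or window.is_monotonic_decreasing:
            iso = dates.iloc[i + 1].isocalendar()   # week of the run's last event
            flagged.add((iso[0], iso[1]))
    return flagged
```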

Statistical Analysis

The effect of the switch to molecular testing on the incidence rate of HCFA CDI was estimated by Poisson regression using an interrupted time series approach to produce an incidence rate ratio (IRR) and 95% confidence interval (CI). Each hospital’s time series was calculated in weeks. Time variables for the periods before and after the switch were centered on the date of the testing change for switch hospitals and on the final time point for control hospitals. Variable amounts of postswitch follow-up time were available among switch hospitals. For each week, each hospital’s data included the number of diagnosed CDI cases, the patient-days denominator, a covariate for intensive care unit (ICU) patient-days as a percentage of total patient-days, a binary variable indicating use of polymerase chain reaction (PCR) testing, and a binary indicator for the presence of a hospital-level cluster of infections during that week.
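
As an illustration of this data structure, the sketch below (ours, with hypothetical input tables and column names; not the study's actual data pipeline) assembles one hospital's weekly series and creates the centered, piecewise time variables and the PCR indicator:

```python
import numpy as np
import pandas as pd

def weekly_series(cases, census, switch_date=None, end_date="2011-12-31"):
    """cases: one hospital's HCFA CDI events with an 'event_date' column;
    census: daily counts with 'date', 'patient_days', and 'icu_days' columns."""
    census = census.assign(week=pd.to_datetime(census["date"]).dt.to_period("W"))
    wk = census.groupby("week").agg(patient_days=("patient_days", "sum"),
                                    icu_days=("icu_days", "sum"))
    counts = (pd.to_datetime(cases["event_date"]).dt.to_period("W")
                .value_counts().rename("cdi_cases"))
    df = wk.join(counts).fillna({"cdi_cases": 0}).reset_index()
    df["icu_pct"] = 100 * df["icu_days"] / df["patient_days"]

    # Center weekly time on the switch date (switch hospitals) or the final
    # time point (control hospitals), then split it into piecewise terms.
    anchor = pd.Period(switch_date if switch_date else end_date, freq="W")
    t = df["week"].apply(lambda p: p.ordinal) - anchor.ordinal
    df["pcr"] = ((t >= 0) & bool(switch_date)).astype(int)
    df["week_before"] = np.where(t < 0, t, 0)   # linear time before the switch
    df["week_after"] = np.where(t >= 0, t, 0)   # linear time after the switch
    return df
```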

First, hospital-specific adjusted IRR estimates were generated for each switch hospital by Poisson regression using only that hospital’s data. This analysis was completed to visualize single-hospital experiences, with the understanding that the small amounts of before and after data would be inadequate for hypothesis testing in most single hospitals.
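
A single-hospital model of this form could be fit as in the following sketch (again with assumed column names; the binary 'cluster' flag would come from the g-chart step above). The exponentiated PCR coefficient is the hospital-specific adjusted IRR:

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def hospital_irr(df):
    """df: one switch hospital's weekly series (columns as in the sketch above,
    plus a binary 'cluster' flag)."""
    fit = smf.glm("cdi_cases ~ pcr + week_before + week_after + cluster + icu_pct",
                  data=df, family=sm.families.Poisson(),
                  exposure=df["patient_days"]).fit()   # patient-days as exposure
    irr = float(np.exp(fit.params["pcr"]))             # adjusted IRR for the switch
    lo, hi = np.exp(fit.conf_int().loc["pcr"])         # 95% CI on the IRR scale
    return irr, (float(lo), float(hi))
```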

Then, the full time series data for both switch and control hospitals were combined and analyzed in a multivariate, piecewise, Poisson mixed-effects model.27 The effect of each random-effect variable was estimated for the entire study period but allowed to vary by hospital, accounting for both between- and within-hospital variation. The IRR measuring the increase in cases attributable to the switch to PCR testing was treated as a fixed effect that did not vary between hospitals. The shape of the time trends was explored by fitting nonlinear terms and comparing the Akaike information criterion across hierarchical models. A secondary analysis estimating the effect of the switch to PCR according to the brand of nonmolecular test used before the switch was performed with a similar mixed-effects model that included binary indicator variables for each nonmolecular test type.
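
The exact software used for the combined model is not stated; as a rough, simplified stand-in, the sketch below fits a Poisson mixed model with a random intercept per hospital using the variational Bayes routine in statsmodels. Log patient-days enters as a covariate rather than a true offset, and the hospital-level random slopes and the AIC comparison of nonlinear time terms are omitted, so this approximates rather than reproduces the published specification:

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

def combined_irr(panel: pd.DataFrame) -> float:
    """panel: stacked weekly series from all hospitals with a 'hospital' column."""
    df = panel.copy()
    df["log_pd"] = np.log(df["patient_days"])   # denominator as a log covariate
    model = PoissonBayesMixedGLM.from_formula(
        "cdi_cases ~ pcr + week_before + week_after + cluster + icu_pct + log_pd",
        vc_formulas={"hospital": "0 + C(hospital)"},   # random intercept per hospital
        data=df)
    result = model.fit_vb()                            # variational Bayes approximation
    fe = dict(zip(model.fep_names, result.fe_mean))    # posterior means of fixed effects
    return float(np.exp(fe["pcr"]))                    # overall adjusted IRR for the switch
```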

RESULTS

The study included 10 switch hospitals that changed from nonmolecular tests to PCR and 22 control hospitals that maintained the same nonmolecular diagnostic testing strategy for the entire study period. Eleven DICON hospitals were excluded because incomplete surveillance data were reported for the study period. As a result of local policy decisions made independently of the researchers, all 10 hospitals that switched to PCR testing used the Cepheid Xpert C. difficile assay (Xpert CD assay; Cepheid). The study involved a total of 1,805 cases of CDI over 4,038,447 patient-days, 345,671 (8.6%) of which were ICU patient-days. The median weekly CDI rate over the span of the study was 0.00 cases per 10,000 patient-days (interquartile range, 0.00–6.15). The mean (± standard deviation) was 4.36 ± 9.28 cases per 10,000 patient-days, with the rate ranging as high as 190.76 cases per 10,000 patient-days during a week in which there were clustered infections.

Hospital, ICU, and laboratory characteristics for the 10 switch hospitals and 22 control hospitals are described in Table 1. The average bed size of switch hospitals was larger than, but not significantly different from, that of control hospitals (P = .2). The type of nonmolecular test varied among switch and control hospitals and included the following: Premier Toxins A&B (Meridian Bioscience), Immunocard A&B (Meridian Bioscience), C. difficile Tox A/B II (TechLab), and C. Diff Quik Check Complete (TechLab). Two hospitals sent specimens out to reference laboratories for C. difficile testing. Switch hospitals had variable amounts of follow-up time after the switch in diagnostic testing, ranging from 4 to 117 weeks (Table 2).

TABLE 1.

Hospital and Clostridium difficile Diagnostic Test Characteristics, Duke Infection Control Outreach Network, 2009–2011

Variable PCR switch hospitals (n = 10) Control hospitals (n = 22)
Bed size, mean no. of beds (±SD) 280 ± 126 230 ± 160
Nonmolecular test type (manufacturer)
 Premier Toxins A&B (Meridian) 4 (40) 2 (9)
 C. Diff Quik Check Complete (TechLab) 4 (40) 6 (27)
 C. difficile Tox A/B II (TechLab) 1 (10) 5 (23)
 Immunocard A&B (Meridian) 1 (10) 9 (41)
Specimens sent out to reference laboratory 1 (10) 2 (9)
Proportion ICU-days per total patient-days, mean % (±SD) 8 ± 3 10 ± 4
Mean no. of weeks with clusters (±SD) 2 ± 3 3 ± 3

NOTE. Nonmolecular test type for switch hospitals was the testing brand used before the switch. All hospitals switched to Cepheid PCR. Data are no. (%) of hospitals, unless otherwise indicated. ICU, intensive care unit; PCR, polymerase chain reaction; SD, standard deviation.

TABLE 2.

Hospital-Specific Estimates of Effect of Switch from Nonmolecular to Polymerase Chain Reaction Diagnostic Testing on Incidence Rate of Healthcare Facility–Associated Clostridium difficile Infection

Hospital no. Time before switch, weeks Time after switch, weeks aIRR (95% CI)
1 118 13 3.35 (1.46–7.72)
2 102 29 1.20 (0.64–2.26)
3 82 49 3.39 (1.08–10.63)
4 98 33 2.79 (0.62–12.68)
5 75 56 1.55 (0.95–2.51)
6 83 48 1.71 (0.47–6.15)
7 14 117 6.91 (1.12–42.54)
8 102 29 0.65 (0.26–1.59)
9 77 54 0.89 (0.32–2.44)
10a 127 4

NOTE. aIRR, adjusted incidence rate ratio. All estimates were adjusted for the following covariates: linear time before switch, linear time after switch, binary cluster indicator, and intensive care unit–days per total patient-days, expressed as a percentage.

a Hospital 10 did not have enough data after the switch to create a hospital-specific estimate with interpretable precision.

Poisson interrupted time series models that used single-hospital data were constructed for the 10 switch hospitals to observe single-hospital experiences, again with the understanding that an individual hospital’s surveillance data were not powered for hypothesis testing. Among switch hospitals, the average incidence rate of HCFA CDI was 6.0 cases of CDI per 10,000 patient-days before the switch to PCR compared with 9.6 cases of CDI per 10,000 patient-days after the switch. Adjusted hospital-specific estimates comparing HCFA CDI rates after versus before the switch to molecular testing ranged from 0.89 (95% CI, 0.32–2.44) to 6.91 (95% CI, 1.12–42.54; Table 2 and Figure 1). Most (7 of 10) individual switch hospitals experienced a numerical increase in HCFA CDI rate; only 2 of 10 experienced a decrease (Table 2 and Figure 1). A meaningful hospital-specific estimate could not be calculated for one switch hospital (hospital 10) because of the short period of follow-up, during which no cases were reported after the switch.

FIGURE 1.


Hospital-specific estimates of the effect of switching from nonmolecular to polymerase chain reaction diagnostic testing on the incidence rate of healthcare facility–associated Clostridium difficile infection. The Y-axis is on the logarithmic scale. Hospital 10 is not included because it lacked sufficient data after the switch to create a hospital-specific estimate with interpretable precision. IRR, incidence rate ratio.

Hospital time series from the 10 switch and 22 control hospitals were combined into a mixed-effects model that included the same covariates: linear time trends before and after the switch, proportion of ICU patient-days, and a binary indicator for a cluster period. Nonlinear terms for time trends did not improve model fit over linear terms. Linear time trends were not statistically significant in the adjusted model, which suggests that the baseline rate of CDI remained stable during the study period. Both the cluster-period indicator and the proportion of ICU-days to total patient-days were strong predictors of HCFA CDI rate (Table 3). The overall adjusted IRR (aIRR) estimate of the effect of the switch to molecular testing was 1.56 (95% CI, 1.28–1.90; Figure 2).

TABLE 3.

Multivariate Mixed Effects Interrupted Time Series Model of Effect of Switch from Nonmolecular to Polymerase Chain Reaction Diagnostic Testing on Incidence Rate of Healthcare Facility–Associated Clostridium difficile Infection

Mixed-effects model variable Fixed or random effect aIRR (95% CI) P
PCR switch Fixed 1.56 (1.28–1.90) <.001
Time before switch Random 1.00 (1.00–1.00) .5
Time after switch Random 1.00 (1.00–1.00) .8
Cluster Random 3.88 (3.00–5.01) <.001
ICU-days per total patient-daysa Random 1.02 (1.01–1.02) <.001

NOTE. aIRR, adjusted incidence rate ratio; ICU, intensive care unit; PCR, polymerase chain reaction.

a Expressed as a percentage.

FIGURE 2.


Multivariate mixed model estimate of effect of switch from nonmolecular to polymerase chain reaction diagnostic testing on incidence rate of healthcare facility–associated Clostridium difficile infection, Duke Infection Control Outreach Network, 2009–2011. Model adjustment variables were held constant assuming no clusters of infection and 9% intensive care unit–days per total patient-days. CI, confidence interval; PCR, polymerase chain reaction.

Secondary analyses stratified by the brand of nonmolecular test used before the switch to PCR revealed minimal differences in the effect of the switch on incidence rate within the precision allowed by the available data. Switch from Premier Toxins A&B (Meridian) produced an estimated aIRR of 1.57 (95% CI, 1.22–2.03), and switch from C. difficile Tox A/B II (TechLab) produced an aIRR of 1.70 (95% CI, 1.27–2.27). Switch from C. Diff Quik Check Complete (TechLab) produced an estimated aIRR of 1.16 (95% CI, 0.55–2.49), with limited precision because only a single switch hospital used this test. An estimate for the switch from Immunocard A&B (Meridian) to PCR could not be generated because of sparse data from hospital 10, which contributed only 4 weeks of data after the switch. Hospital 10 was the only switch hospital to use this brand of nonmolecular test.

DISCUSSION

Improved test sensitivity resulting from the change to molecular diagnostic testing can produce both positive and negative effects. Sensitivity and specificity estimates for Cepheid PCR are 94%–100% and 93%–99%, respectively.18 In contrast, sensitivity and specificity estimates for enzyme immunoassays are 60%–81% and 91%–99%, respectively, when compared with toxigenic anaerobic culture.28 However, a molecular test is more expensive to implement, may cause confusion among ordering providers, and may be overused because of its novelty. Also, the more sensitive test may be “too good” at identifying patients who are colonized but not truly infected with C. difficile.1 In the context of testing for potentially transmissible diseases within the hospital setting, the improved sensitivity of molecular tests allows infected and colonized patients to be rapidly and reliably identified. Infection control measures and active antibiotic therapy can be instituted without the delays or confusion caused by false-negative testing. Thus, in theory, molecular tests may more effectively prevent in-hospital transmission of CDI.
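
As a rough, back-of-envelope illustration (ours, not an analysis from this study): if the true number of toxigenic cases among tested patients were constant and false positives were ignored, detected incidence would scale with assay sensitivity, and the sensitivity ranges cited above imply a detection ratio that brackets the 56% increase discussed in the next paragraph:

```python
# Illustrative only: assumes constant true incidence and ignores specificity,
# colonization, and changes in test-ordering behavior.
sens_pcr = (0.94, 1.00)   # PCR sensitivity range cited above
sens_eia = (0.60, 0.81)   # enzyme immunoassay sensitivity range cited above

low = sens_pcr[0] / sens_eia[1]    # ~1.16
high = sens_pcr[1] / sens_eia[0]   # ~1.67
print(f"expected detection ratio from sensitivity alone: {low:.2f}-{high:.2f}")
```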

In this study, we demonstrate that changing to the more sensitive molecular test produces a significant increase in the HCFA CDI incidence rate during the transition year. The average increase in rate in this study was 56%, which is consistent with earlier estimates. Longtin et al21 performed parallel testing at a single academic hospital in Canada and reported a 52% increase in the HCFA CDI rate. Goldenberg et al24 reported a 57% increase in CDI rates at their hospital in the United Kingdom after the switch to a 2-step PCR strategy. Fong et al22 reported a 2.3-times greater prevalence of positive test results after the switch, using unadjusted data from the Cleveland Clinic. CDC researchers have also preliminarily reported effect sizes of 59%–89% among 5 switch hospitals compared with nonswitch control hospitals, using laboratory-identified CDI events.23

At the local level, our estimate can be used to gauge how much of an increase in rate may be expected with the testing switch during the transition year. Infection prevention teams using this estimate must still consider other concomitant causal factors that may require their attention and effective interventions. Importantly, the variability of the effect in each individual hospital’s experience should be noted. In fact, 2 hospitals in our study saw a numerical decrease in their incidence rates after the switch. Therefore, the degree of uncertainty around our estimate should be considered when interpreting local, unadjusted data in which multiple factors may be at play.

Another important finding of this study was the lack of statistically significant time trends in the incidence of HCFA CDI after adjustment for the switch to molecular testing. Although many sources, including our group, have described increasing CDI incidence in the United States over the last 10 years, more recent surveillance trends have not yet been widely reported.29 Billing data suggest that the CDI uptrend may have slowed or leveled off in 2009–2010,30 which would be consistent with the current study findings. This observed stable trend may be attributable to ongoing awareness of CDI and infection prevention efforts. Another potential explanation is that our 2.5-year study period was not long enough to detect a larger, gradual, longitudinal trend. In an exploratory analysis of our data, linear time-trend variables for the 32 hospitals did not become statistically significant even when regression adjustment for the switch to PCR testing was withheld (data not shown). However, it is conceivable that, in larger data sets or those covering a longer period, simply reporting surveillance data without accounting for differences in testing sensitivity could conversely lead to the conclusion that rates were still increasing.

Important limitations of this study must be noted. First, this is a nonrandomized, retrospective, quasi-experimental study with limited adjustment for potential unmeasured confounding factors, such as antibiotic use, changes in infection prevention practices, and patient case mix. We believe that the statistically significant predictor of ICU-days per total patient-days is a crude but reasonable proxy for case mix. This factor, however, has not previously been considered, to our knowledge, in risk-adjustment methods for multihospital CDI data.31 Another important case-mix covariate to consider would be advanced age; however, this variable was not measured in our prospectively collected CDI surveillance data. Second, because of the availability of before and after data, the amount of data after the change in testing varied among switch hospitals. This affected the precision of our hospital-specific estimates, although every hospital contributed to the estimation of time trends and the precision of the mixed model regardless of switch date. Third, realigning time on the testing switch date may have prevented us from detecting distinct seasonal or week-to-week time trends. However, the intent of our interrupted time series approach was to account for the larger, global increasing trend in CDI that has been previously described. Thus, we feel that our strategy of adjusting for clustering and including nonswitch hospital data in calculating the baseline trend was adequate for this purpose. Finally, our study included a network of smaller community hospitals in the southeastern United States and may not be generalizable to other practice settings, particularly academic, tertiary care hospitals. Despite these limitations, our study uses data from 32 hospitals, includes hospitals that did not switch testing methods as comparators in determining temporal trends, and describes the varied experiences of individual institutions that implemented the molecular testing switch.

In the era of publicly reported data on hospital-acquired infections, changing to a more sensitive diagnostic test may lead to inaccurate assessments of the quality of an infection prevention program. We agree with several authors who have called for recognition of surveillance bias in the interpretation of externally benchmarked data for interhospital comparison when heterogeneous diagnostic testing strategies are used.21,22,24,29,32 Accordingly, we now stratify hospitals by diagnostic testing strategy to provide separate benchmarks for member hospitals in DICON that use molecular tests for C. difficile. The CDC NHSN is also moving toward using molecular testing as a factor in the risk-adjustment strategy for laboratory-identified events.29,31 More importantly, however, epidemiologists must consider that a shift in diagnostic testing strategy introduces a surveillance bias when assessing local, regional, and national trends over time and the public health impact of C. difficile infection. Our study provides a reasonable estimate that can be used when interpreting trends in local CDI data after implementation of a molecular diagnostic test.

Acknowledgments

We thank E. N. Naumova, MD, for her advice and J. Obure, MD, for his contributions in data collection. We also thank the participating DICON hospitals, microbiology laboratory personnel, and infection preventionists for their contributions to the study.

Footnotes

Potential conflicts of interest. All authors report no conflicts of interest relevant to this article. All authors submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and the conflicts that the editors consider relevant to this article are disclosed here.

References

1. Cohen SH, Gerding DN, Johnson S, et al. Clinical practice guidelines for Clostridium difficile infection in adults: 2010 update by the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Diseases Society of America (IDSA). Infect Control Hosp Epidemiol. 2010;31:431–455. doi:10.1086/651706.
2. Scott RD. The direct medical costs of healthcare-associated infections in U.S. hospitals and the benefits of prevention. 2009. http://www.cdc.gov/hai/pdfs/hai/scott_costpaper.pdf. Accessed December 10, 2012.
3. Dubberke ER, Reske KA, Olsen MA, McDonald LC, Fraser VJ. Short- and long-term attributable costs of Clostridium difficile–associated disease in nonsurgical inpatients. Clin Infect Dis. 2008;46:497–504. doi:10.1086/526530.
4. Freeman J, Bauer MP, Baines SD, et al. The changing epidemiology of Clostridium difficile infections. Clin Microbiol Rev. 2010;23:529–549. doi:10.1128/CMR.00082-09.
5. Dubberke ER, Butler AM, Yokoe DS, et al. Multicenter study of surveillance for hospital-onset Clostridium difficile infection by the use of ICD-9-CM diagnosis codes. Infect Control Hosp Epidemiol. 2010;31:262–268. doi:10.1086/650447.
6. Dubberke ER, Butler AM, Hota B, et al. Multicenter study of the impact of community-onset Clostridium difficile infection on surveillance for C. difficile infection. Infect Control Hosp Epidemiol. 2009;30:518–525. doi:10.1086/597380.
7. Ricciardi R, Rothenberger DA, Madoff RD, Baxter NN. Increasing prevalence and severity of Clostridium difficile colitis in hospitalized patients in the United States. Arch Surg. 2007;142:624–631; discussion 631. doi:10.1001/archsurg.142.7.624.
8. McDonald LC, Jernigan DB. Increasing incidence of Clostridium difficile–associated disease in U.S. acute care hospitals, 1993–2001. Paper presented at: 14th Annual Meeting of the Society for Healthcare Epidemiology of America; April 18, 2004; Alexandria, VA.
9. Miller BA, Chen LF, Sexton DJ, Anderson DJ. Comparison of the burdens of hospital-onset, healthcare facility-associated Clostridium difficile infection and of healthcare-associated infection due to methicillin-resistant Staphylococcus aureus in community hospitals. Infect Control Hosp Epidemiol. 2011;32:387–390. doi:10.1086/659156.
10. Bartlett JG. Clostridium difficile: progress and challenges. Ann N Y Acad Sci. 2010;1213:62–69. doi:10.1111/j.1749-6632.2010.05863.x.
11. McDonald LC, Killgore GE, Thompson A, et al. An epidemic, toxin gene-variant strain of Clostridium difficile. N Engl J Med. 2005;353:2433–2441. doi:10.1056/NEJMoa051590.
12. Warny M, Pepin J, Fang A, et al. Toxin production by an emerging strain of Clostridium difficile associated with outbreaks of severe disease in North America and Europe. Lancet. 2005;366:1079–1084. doi:10.1016/S0140-6736(05)67420-X.
13. Loo VG, Poirier L, Miller MA, et al. A predominantly clonal multi-institutional outbreak of Clostridium difficile–associated diarrhea with high morbidity and mortality. N Engl J Med. 2005;353:2442–2449. doi:10.1056/NEJMoa051639.
14. Pepin J, Saheb N, Coulombe MA, et al. Emergence of fluoroquinolones as the predominant risk factor for Clostridium difficile–associated diarrhea: a cohort study during an epidemic in Quebec. Clin Infect Dis. 2005;41:1254–1260. doi:10.1086/496986.
15. Birgand G, Miliani K, Carbonne A, Astagneau P. Is high consumption of antibiotics associated with Clostridium difficile polymerase chain reaction-ribotype 027 infections in France? Infect Control Hosp Epidemiol. 2010;31:302–305. doi:10.1086/650758.
16. Loo VG, Bourgault AM, Poirier L, et al. Host and pathogen factors for Clostridium difficile infection and colonization. N Engl J Med. 2011;365:1693–1703. doi:10.1056/NEJMoa1012413.
17. McDonald LC, Coignard B, Dubberke E, Song X, Horan T, Kutty PK. Recommendations for surveillance of Clostridium difficile–associated disease. Infect Control Hosp Epidemiol. 2007;28:140–145. doi:10.1086/511798.
18. Carroll KC. Tests for the diagnosis of Clostridium difficile infection: the next generation. Anaerobe. 2011;17:170–174. doi:10.1016/j.anaerobe.2011.01.002.
19. Chapin KC, Dickenson RA, Wu F, Andrea SB. Comparison of five assays for detection of Clostridium difficile toxin. J Mol Diagn. 2011;13:395–400. doi:10.1016/j.jmoldx.2011.03.004.
20. Selvaraju SB, Gripka M, Estes K, Nguyen A, Jackson MA, Selvarangan R. Detection of toxigenic Clostridium difficile in pediatric stool samples: an evaluation of Quik Check Complete Antigen assay, BD GeneOhm Cdiff PCR, and ProGastro Cd PCR assays. Diagn Microbiol Infect Dis. 2011;71:224–229. doi:10.1016/j.diagmicrobio.2011.07.015.
21. Longtin Y, Trottier S, Brochu G, et al. Impact of the type of diagnostic assay on Clostridium difficile infection and complication rates in a mandatory reporting program. Clin Infect Dis. 2013;56:67–73. doi:10.1093/cid/cis840.
22. Fong KS, Fatica C, Hall G, et al. Impact of PCR testing for Clostridium difficile on incident rates and potential on public reporting: is the playing field level? Infect Control Hosp Epidemiol. 2011;32:932–933. doi:10.1086/661789.
23. Gould G, Edwards J, Cohen J, et al. Effect of nucleic acid amplification testing on population-based incidence rates of Clostridium difficile infection. Paper presented at: ID Week; October 17–22, 2012; San Diego, CA.
24. Goldenberg SD, Price NM, Tucker D, Wade P, French GL. Mandatory reporting and improvements in diagnosing Clostridium difficile infection: an incompatible dichotomy? J Infect. 2011;62:363–370. doi:10.1016/j.jinf.2011.03.007.
25. Anderson DJ, Miller BA, Chen LF, et al. The network approach for prevention of healthcare-associated infections: long-term effect of participation in the Duke Infection Control Outreach Network. Infect Control Hosp Epidemiol. 2011;32:315–322. doi:10.1086/658940.
26. Benneyan JC. Statistical quality control methods in infection control and hospital epidemiology, part II: chart use, statistical properties, and research issues. Infect Control Hosp Epidemiol. 1998;19:265–283.
27. Naumova EN, Must A, Laird NM. Tutorial in biostatistics: evaluating the impact of “critical periods” in longitudinal studies of growth using piecewise mixed effects models. Int J Epidemiol. 2001;30:1332–1341. doi:10.1093/ije/30.6.1332.
28. Eastwood K, Else P, Charlett A, Wilcox M. Comparison of nine commercially available Clostridium difficile toxin detection assays, a real-time PCR assay for C. difficile tcdB, and a glutamate dehydrogenase detection assay to cytotoxin testing and cytotoxigenic culture methods. J Clin Microbiol. 2009;47:3211–3217. doi:10.1128/JCM.01082-09.
29. Centers for Disease Control and Prevention. Vital signs: preventing Clostridium difficile infections. MMWR Morb Mortal Wkly Rep. 2012;61:157–162.
30. Lucado J, Gould C, Elixhauser A. Clostridium difficile infections (CDI) in hospital stays, 2009. HCUP statistical brief. 2011. http://www.hcup-us.ahrq.gov/reports/statbriefs/sb124.pdf. Accessed December 10, 2012.
31. Dudeck M, Malpiedi P, Edwards J, Fridkin S, McDonald LC, Sievert S. Risk adjustment for healthcare facility-onset C. difficile infection and MRSA bacteremia reporting in NHSN. Presented at: ID Week; October 17–22, 2012; San Diego, CA.
32. Goldenberg SD, French GL. Diagnostic testing for Clostridium difficile: a comprehensive survey of laboratories in England. J Hosp Infect. 2011;79:4–7. doi:10.1016/j.jhin.2011.03.030.
