Antimicrobial Agents and Chemotherapy. 2016 May 23;60(6):3265–3269. doi: 10.1128/AAC.00572-16

Investigating the Extremes of Antibiotic Use with an Epidemiologic Framework

Marc H Scheetz a,b, Page E Crew a,b, Cristina Miglis a,b, Elise M Gilbert b,c, Sarah H Sutton d, J Nick O'Donnell a,b, Michael Postelnick b, Teresa Zembower d, Nathaniel J Rhodes a,b
PMCID: PMC4879355  PMID: 27001807

Abstract

Benchmarks for judicious use of antimicrobials are needed. Metrics such as defined daily doses (DDDs) and days of therapy (DOTs) quantify antimicrobial consumption. However, benchmarking with these metrics is complicated by interhospital variability. Thus, it is important for each hospital to monitor its own temporal consumption trends. Time series analyses allow trends to be detected; however, many of these methods are complex. We present simple regressive methods, and caveats to their use, for defining potential antibiotic over- and underutilizations.

BACKGROUND: CURRENT BENCHMARKS FOR ANTIMICROBIAL STEWARDSHIP

In order to foster antimicrobial stewardship and encourage appropriate utilization of antimicrobials, benchmarks of judicious use should be created. To create benchmarks, metrics of antibiotic use must be standardized. Currently, two major metrics of antimicrobial consumption exist. Defined daily doses (DDDs) and days of therapy (DOTs) both describe antibiotic utilization at the hospital level and are standardized to patient days of hospitalization (e.g., 1,000 patient days). To establish a standard methodology and ultimately develop benchmarks, the CDC has created the “Antibiotic Use and Resistance (AUR) Module” for reporting data to the National Healthcare Safety Network (NHSN) (1).

METRICS: NO METRIC IS PERFECT

The antibiotic use (AU) module utilizes a modified version of DOTs by quantifying bar-coded medication administration (BCMA) data. It is well known that antibiotic consumption estimates vary with the method of calculation (i.e., DDD versus DOT) (2). Thus, a consistent metric (i.e., DDD, DOT, or AU DOT) should be used when benchmarking consumption data over time. The relationship between these methods is complex and varies with each drug studied. For instance, ampicillin-sulbactam consumption is overestimated by DDDs because the World Health Organization (WHO) anatomical therapeutic chemical (ATC) classification system sets the daily expected amount at 2,000 mg, based on the ampicillin component (3), whereas the average amount utilized per day is much closer to 3 doses of the 3-g combination product (or 6 g of ampicillin and 3 g of sulbactam) (2). That is, the average patient receives 6 g of ampicillin daily, but 1 DDD is 2 g of ampicillin, so this patient is classified as having received 3 DDDs for each real day of therapy. Additional complexities exist in that DDDs cannot account for dose adjustments in the setting of organ failure (e.g., dose adjustment of ciprofloxacin for renal failure). DOTs can account for appropriate antimicrobial adjustments for organ failure in some consumption-monitoring systems; however, DOT estimates obtained using only BCMA records also fail to capture dose adjustment for end-organ dysfunction, and DOTs calculated this way underestimate the true DOTs among patients receiving antibiotics with dosing intervals greater than 24 h. DOT compilation based on ordered (rather than administered) drug can circumvent this issue but may inappropriately count medication that the patient never receives. Hence, consistent use of the same measurement metric (i.e., DOTs or DDDs) and compilation method (i.e., BCMA or medication order) are both highly important when tracking antibiotic consumption.
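The ampicillin-sulbactam arithmetic above can be sketched in a few lines. This is a hypothetical illustration only; the function names and patient scenario are ours, and the WHO DDD value is the one cited in the text.

```python
# Hypothetical worked example of DDD-versus-DOT divergence for
# ampicillin-sulbactam; function names are illustrative only.
WHO_DDD_AMPICILLIN_G = 2.0  # WHO ATC/DDD value for the ampicillin component (J01CR01)

def ddds_consumed(total_ampicillin_g: float) -> float:
    """Defined daily doses: total grams administered / WHO DDD."""
    return total_ampicillin_g / WHO_DDD_AMPICILLIN_G

def dots_consumed(days_with_any_dose: int) -> int:
    """Days of therapy: each calendar day with >=1 dose counts once."""
    return days_with_any_dose

# A patient receiving the 3-g combination product (2 g ampicillin +
# 1 g sulbactam) three times daily gets 6 g of ampicillin per day.
days = 7
total_ampicillin_g = days * 6.0
print(ddds_consumed(total_ampicillin_g))  # 21.0 -> 3 DDDs per real day of therapy
print(dots_consumed(days))                # 7
```

The 3:1 ratio of DDDs to DOTs for this drug is exactly the overestimation the text describes.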

The CDC is attempting to develop benchmarks by creating antimicrobial class-specific usage rates and standardized antibiotic administration ratios (SAARs) (4). The SAAR is derived from negative binomial regressions of consumption data across multiple centers and seeks to identify factors associated with changes in patterns of antimicrobial use at the hospital level. The merits of defining accurate antimicrobial benchmarks are significant, and the creation of the SAARs represents a step forward for antibiotic epidemiology in the United States. Yet we opine that the most appropriate application of benchmarks will remain elusive until a large number of acute care facilities are represented in the AUR program and until best practices and appropriate antimicrobial use are more clearly defined. To this end, we implore hospitals to consider submitting data to the CDC AUR. Increased participation by institutions of diverse sizes and missions will improve understanding of “standard of care” consumption and increase the generalizability of the benchmarks established.

POTENTIAL PROBLEMS WITH INTERHOSPITAL BENCHMARKS

A major problem complicating the creation of benchmarks is that each health care entity is unique. A hospital that cares for more acute patients with highly complex infectious diseases would be expected to utilize more antibiotics than a similarly sized hospital caring for patients with fewer and less complex conditions. Because of the large variability between hospitals, even complex modeling strategies have not fared well in accurately predicting which hospitals will use more antibiotics. As an example, Polk and colleagues analyzed antibiotic consumption data from 35 hospitals; multivariable regressions explained only 46% of the interhospital variability in vancomycin utilization when modeled as a function of mean duration of total antimicrobial usage and rates of bone marrow transplantation (5). It is disconcerting that even complex mathematical strategies describe less than 50% of usage variability for a single studied antibiotic. This is not to say that these pursuits should be abandoned; however, it must be recognized that antibiotic use is complex and that consumption metrics serve as one of many potential indicators of overuse or underuse. Hence, robust analysis of appropriate antibiotic use practices at the hospital level may be an easier, more relevant target for stewardship programs in the short term. We anticipate that many future investigations will better define appropriate, reasonable, rational, and warranted consumption on the basis of specific patient population differences. Strong investigative studies will be needed to define inappropriate use in pursuit of the lofty goal of “reduction of inappropriate antibiotic use … by 20% in inpatient settings” (6).

POTENTIAL INTERNAL HOSPITAL STRATEGIES TO INVESTIGATE ANTIBIOTIC USE

Because differences in patient populations help drive consumption at each hospital, Bayes' theorem is instructive in this setting (7): each hospital's consumption is likely to be most similar to its own past consumption. Thus, a method that tracks antibiotic consumption over time is compelling for hospitals whose patient populations remain constant over time or change in ways that are easily tracked (e.g., a liver transplant program initiated in January 2016). When a hospital compares its own antibiotic consumption against previous performance, patient acuity can often be held effectively constant, or at least treated as a constant with random variation, as in our examples below. In doing so, a hospital can define standard or usual antimicrobial use within its own system.

Once a hospital's usual consumption of an antibiotic has been reasonably established, an obvious question arises: are there periods of overuse (e.g., an “outbreak”) or underuse? During an outbreak, incidence is greater than what is expected in a defined system over a given time period. The WHO defines a disease outbreak as “the occurrence of cases of disease in excess of what would normally be expected in a defined community, geographical area or season” (8). This definition translates well to antibiotic consumption: a certain quantity of antibiotic use is expected and appropriate. One can also attempt to define trends in the epidemiological presentation of bacterial disease and resistance (9). To determine whether use is greater or less than expected, one must first define what is expected after accounting for seasonal and periodic changes in antimicrobial resistance.

COMPLEX AND SIMPLE MODELING STRATEGIES

The purpose of defining antibiotic over- and underuse is simple. Each hospital should know when its use exceeds or drops below that which would otherwise be considered normal. However, complexities and randomness in data make this task less than rote. To more clearly demonstrate what an antibiotic outbreak or underutilization might look like, we utilized two time series modeling methods for vancomycin, the single most commonly used antibiotic in the hospital setting (10). Antibiotic use data were compiled from BCMA at Northwestern Memorial Hospital, an 897-bed tertiary care academic medical center in Chicago, IL. All intravenous vancomycin administrations facility-wide from January 2012 to June 2015 were included. Vancomycin consumption was quantified using the NHSN AU method for compilation, and antimicrobial days (ADs) per 1,000 patient-days present were calculated (1). ADs and days present facility-wide were extracted from the electronic medication administration record (eMAR) and tallied (1). Intercooled Stata, version 14.0 (Statacorp, College Station, TX), was utilized for all calculations. The adjudicated AU data were exported from the AU Option Web portal and coded as time-series data using the date function in Stata.
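The denominator math used above, antimicrobial days (ADs) standardized per 1,000 days present, can be sketched as follows. The counts are hypothetical; real values come from eMAR/BCMA extraction per the NHSN AU protocol.

```python
# Sketch of the NHSN AU rate computation: antimicrobial days (ADs)
# per 1,000 patient-days present. The input counts are hypothetical.
def ads_per_1000_days_present(antimicrobial_days: int, days_present: int) -> float:
    """Facility-wide consumption rate, standardized per 1,000 days present."""
    return 1000.0 * antimicrobial_days / days_present

# e.g., 1,450 vancomycin ADs over 21,000 patient-days present in a month
rate = ads_per_1000_days_present(1450, 21000)
print(round(rate, 1))  # 69.0
```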

The first method we used was a standard, though relatively complex, statistical time series method: the Box-Jenkins autoregressive integrated moving average (ARIMA) model (11). This type of model has previously been used successfully for research applications (12). Time series analyses like the Box-Jenkins method allow the univariate assessment of occurrences (e.g., antibiotic incidence in patients per 1,000 days present) as a function of time. Using these methodologies, it is possible to define the following parameters for antimicrobial consumption: (i) trends, (ii) cycles, (iii) random variation, and (iv) seasonality. Cycles and seasonality are similar: both are patterns that rise and fall over defined time periods. For example, secondary bacterial pneumonia is known to follow a seasonal pattern coincident with influenza (13). Therefore, it is reasonable to expect greater numbers of community-acquired bacterial pneumonia cases during influenza season, and the correspondingly higher antimicrobial use is appropriate. However, the patterns are frequently not as clear as in this example (14).

The Box-Jenkins approach controls for the number of autoregressive terms (p), the number of nonseasonal differences (i.e., the difference order) needed to make the model stationary (d), and the number of lagged forecast errors in the prediction equation (q). In less technical terms, p controls for the influence of previous time series values on the next result (e.g., what is the influence of August-to-October data on the value for November?), d controls for the overall trend, and q controls for the impact of previous errors on the error for the next result (e.g., what is the influence of August-to-October error on the error for November?). Fitting the most appropriate model is an iterative and complex process. Multiple fitting strategies exist (11), but candidate models can be compared using the Akaike and Bayesian information criteria (AIC and BIC, respectively) to discern the most parsimonious yet explanatory model. In our case, a model of (1,0,0) for (p, d, q) (i.e., a first-order autoregressive model) yielded the lowest AIC and BIC values, 253.43 and 258.64, respectively. Forecast (i.e., future prediction) models can also be created from the ARIMA (Fig. 1).

FIG 1. Box-Jenkins autoregressive integrated moving average (ARIMA) model: vancomycin consumption from January 2012 to June 2015, with predictions through December 2015.
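Because our selected ARIMA(1,0,0) reduces to a first-order autoregression, its core can be sketched with a conditional least-squares fit. This is a simplified stand-in for full Box-Jenkins fitting in a statistical package (no differencing or moving-average terms), and the monthly series below is hypothetical.

```python
import math

def fit_ar1(y):
    """Conditional least-squares fit of an AR(1) model,
    y[t] = c + phi * y[t-1] + e[t] -- a simplified stand-in for
    full Box-Jenkins ARIMA(1,0,0) estimation."""
    x, z = y[:-1], y[1:]          # lagged and current values
    n = len(z)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxz = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    phi = sxz / sxx               # autoregressive coefficient
    c = mz - phi * mx             # constant term
    sse = sum((zi - (c + phi * xi)) ** 2 for xi, zi in zip(x, z))
    aic = n * math.log(sse / n) + 2 * 3  # 3 params: c, phi, error variance
    return phi, c, aic

# Hypothetical monthly vancomycin series (ADs/1,000 days present):
series = [62.1, 64.3, 63.8, 66.0, 65.2, 67.4, 66.8, 68.1, 67.5, 69.0]
phi, c, aic = fit_ar1(series)
print(round(phi, 2))  # phi between 0 and 1: each month pulls toward the last
```

Comparing the AIC across candidate (p, d, q) orders is how the most parsimonious model is chosen, as described above.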

To create ARIMA models, statistical expertise is generally needed, and model fitting requires considerable time. It is unlikely that the ARIMA method described above can be automated with simple recursive statistical procedures to generate meaningful results. The model applied to our data distilled down to a simple form: pure autocorrelation. This means that future predictions are unlikely to be precise when the trend changes. Often, as with our examples, real-world data may not lend themselves to simplification into consistent trends, cycles, and seasonality. Furthermore, it is unclear how to translate parameter estimates and fitted values into periods of potential over- and underutilization.

The second method we describe, linear regression, is simpler and easy to implement: regress DOTs as the dependent variable on time as the independent variable (Fig. 2). By doing so, one can define a mean regression line, a 95% confidence interval for the mean, and prediction intervals (e.g., 80%). Because stewardship programs are more interested in individual events (e.g., is the amount of vancomycin used in January 2016 beyond the expected amount?), the prediction interval has a natural interpretation and may be of great value to programs seeking to identify antibiotic outbreaks. The prediction interval defines the bounds within which a given percentage (e.g., 80%) of individual observations are expected to fall according to the linear regression forecast. We propose that this value has greater utility than the more commonly reported 95% confidence interval, which describes only the uncertainty around the mean prediction. Prediction intervals are calculated using standard errors and thus account for the baseline variability of antimicrobial use within a given hospital system.

FIG 2. Linear regression of vancomycin consumption from January 2012 to June 2015. Dashed purple circles represent potential overuse; double-lined green circles represent potential underuse.
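The regression-with-prediction-interval strategy can be sketched as follows. This is our illustration of the approach, not the authors' exact Stata procedure: z = 1.282 approximates the two-sided 80% normal critical value, and the consumption series is hypothetical.

```python
import math

def flag_outliers(y, z=1.282):
    """Regress monthly consumption on time (0, 1, 2, ...) and flag
    observations outside an ~80% prediction interval. A sketch under
    a normal approximation (z = 1.282), with hypothetical data."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    stt = sum((ti - tbar) ** 2 for ti in t)
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / stt
    intercept = ybar - slope * tbar
    resid = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual SD
    flags = []
    for ti, yi in zip(t, y):
        # standard error of an individual prediction at time ti
        se = s * math.sqrt(1 + 1 / n + (ti - tbar) ** 2 / stt)
        fit = intercept + slope * ti
        if yi > fit + z * se:
            flags.append((ti, "possible overuse"))
        elif yi < fit - z * se:
            flags.append((ti, "possible underuse"))
    return slope, flags

# 12 months of hypothetical vancomycin DOTs/1,000 days present,
# with a one-month spike at index 5:
use = [60, 61, 62, 63, 64, 90, 66, 67, 68, 69, 70, 71]
slope, flags = flag_outliers(use)
print(flags)  # [(5, 'possible overuse')]
```

Each flagged month is a trigger point for investigation, not a verdict of injudicious use.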

The percentage set for the prediction interval (i.e., to define an outbreak) is arbitrary and will require further vetting. We chose an 80% prediction interval to define potential antibiotic outbreaks because a wider interval (e.g., 95%) is biased toward missing true events (i.e., true over- or underutilizations). Any value above or below these bounds could be investigated for potential antimicrobial overuse or underuse or could be reclassified as appropriate use (e.g., appropriate treatment of a large upswing in methicillin-resistant Staphylococcus aureus infections). In adopting this method, it is important to stress that, by chance alone, 10% of values are expected to fall above and 10% below the prediction interval (i.e., 20% total). An in-depth investigation of any value outside the bounds, similar to an infection control investigation, would be required to determine whether use was inappropriate. We find this methodology compelling, as it provides trigger points for more in-depth investigation. It can be applied facility-wide or used to track specific units; the latter may provide even more action points, as discerning the causes of high and low use may be more achievable at the unit level. Clinicians should consider setting their own bounds for prediction intervals. As with testing for disease, our methods for defining possible antibiotic outbreaks or underutilizations should be thought of as screening tests rather than confirmatory tests. The goal is to maximize sensitivity, even at the expense of potential false-positive results: it is better to investigate potential over- and underutilizations that prove judicious than to miss a time period when inappropriate antibiotic utilization occurred.

It should be well understood that all mathematical models may produce incorrect and biased results. Explaining the past is difficult; predicting the future is much harder, and projections are more often than not incorrect. We believe that our simplified regression model is most useful for explaining the recent past (i.e., identifying whether outbreaks or underutilizations occurred so that future efforts can correct these problems through education and training).

One should understand the important caveats of our simplified linear regression approach. Most centrally, the method assumes that the predicted value (i.e., DOTs) follows a linear pattern. To assess the validity of this assumption, one can examine the residuals (i.e., residual = Y − Ŷ, where Y is the observed point and Ŷ is the mean prediction from the line). The residuals should resemble a cycle or random noise scattered above and below zero (the point at which an observation is perfectly predicted). This is easily assessed visually (Fig. 3, left); roughly equal numbers of residuals should fall above and below zero. When the diagnostic plot shows that the linear trend assumption is violated, segmented regressions can be developed. Simply put, this means creating a separate linear regression for each segment, where the minimum number of segments is two. Such an analysis is appropriate when anti-infective use is highly seasonal, as with antiviral influenza medications; here, one would create a separate regression for each influenza season.

FIG 3. Residual diagnostic plots. (Left) Residuals; (right) standardized residuals. Predictions are color coded as follows: within 80%, blue; within 90%, orange; within 95%, red.
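The segmented-regression fallback described above can be sketched by fitting one ordinary least-squares line per segment. The breakpoint and series below are hypothetical (two "seasons" with different trends).

```python
def fit_line(y):
    """Ordinary least squares of y on its 0-based index; returns (intercept, slope)."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
        sum((ti - tbar) ** 2 for ti in t)
    return ybar - slope * tbar, slope

def segmented(y, breakpoints):
    """Segmented regression: one OLS line per segment, split at the
    supplied breakpoints (e.g., influenza-season boundaries)."""
    cuts = [0] + list(breakpoints) + [len(y)]
    return [fit_line(y[cuts[i]:cuts[i + 1]]) for i in range(len(cuts) - 1)]

# Hypothetical antiviral DOTs: rising in season 1, falling in season 2
y = [10, 12, 14, 16, 40, 36, 32, 28]
for intercept, slope in segmented(y, [4]):
    print(round(intercept, 1), round(slope, 1))  # 10.0 2.0, then 40.0 -4.0
```

A single line fit to this series would violate the residual diagnostic; the per-season fits recover the opposing trends.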

Many other mathematical approaches could certainly be valid. For instance, control charts are widely utilized by the military (15), engineers (16), and epidemiologists (17). A number of control charts exist. A common example is the Xbar chart (18), in which the average observed value is plotted as a center line and the range of expected values, representing statistical noise in the data, is calculated by selecting an arbitrary multiple of a measure of variation (e.g., 3 standard deviations of the mean or 2.66 times the moving range). These charts are useful because they provide immediate visual cues for potential over- and underutilizations; however, the Xbar chart lacks directional trends, whereas the slope from the simple linear regression strategy lets hospitals quickly gauge the rate at which use is increasing or decreasing. In addition to the correctness of the model, the ultimate determination of the best method centers on the complexity of the method (i.e., can it be completed?) and its applicability in the hands of the local steward (i.e., will it be used?). Those wishing to visualize our methods as a control-type chart can plot the standardized residuals (19) and compare them with critical values of interest (i.e., 80% = 1.28, 90% = 1.64, and 95% = 1.96) (Fig. 3, right). In this manner, individual sites can set their own thresholds for investigating potential over- and underutilizations.
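The control-chart-style visualization just described can be sketched by standardizing residuals and comparing them with the cited critical values. Plain z-scores are used here as an approximation (Stata's standardized residuals additionally adjust for leverage), and the residuals are hypothetical.

```python
import math

def classify_standardized(resids):
    """Standardize regression residuals and compare them with the
    critical values named in the text (80% = 1.28, 90% = 1.64,
    95% = 1.96). Plain z-scores only; treat as an approximation."""
    n = len(resids)
    s = math.sqrt(sum(r * r for r in resids) / n)  # residual scale
    out = []
    for r in resids:
        z = r / s
        if abs(z) > 1.96:
            band = "outside 95%"
        elif abs(z) > 1.64:
            band = "outside 90%"
        elif abs(z) > 1.28:
            band = "outside 80%"
        else:
            band = "within 80%"
        out.append((round(z, 2), band))
    return out

# Hypothetical residuals; only the last month stands out:
print(classify_standardized([1.0, -1.0, 0.5, -0.5, 3.0]))
```

Each site can substitute its own critical values to tune how aggressively potential over- and underutilizations are screened.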

CONCLUSIONS

In summary, we have proposed a methodology to recognize potential antimicrobial overuse and underuse at the hospital level. Regressive methods can define mean linear trends and expected future values based on those trends to set boundaries for potential outbreaks and underutilizations. As reviewing internal trends mitigates the complexity of interhospital comparisons, these methods may provide useful internal insights. Linear regression methodologies may not generate the most precise estimates of future antimicrobial use but can be easily employed and provide action points for antimicrobial stewardship teams to investigate potential outbreaks or underuse. Prompt assessment by antimicrobial teams will be critically important.

ACKNOWLEDGMENTS

This article was written on behalf of the Antimicrobial Stewardship Program (ASP) at Northwestern Medicine.

All authors declare no relevant funding related to this work.

All authors declare no conflicts of interest.

The views expressed in this Commentary do not necessarily reflect the views of the journal or of ASM.

REFERENCES

1. Centers for Disease Control and Prevention. 2015. National Healthcare Safety Network (NHSN) antimicrobial use and resistance (AUR) module. Centers for Disease Control and Prevention, Atlanta, GA. http://www.cdc.gov/nhsn/PDFs/pscManual/11pscAURcurrent.pdf.
2. Polk RE, Fox C, Mahoney A, Letcavage J, MacDougall C. 2007. Measurement of adult antibacterial drug use in 130 US hospitals: comparison of defined daily dose and days of therapy. Clin Infect Dis 44:664–670. doi:10.1086/511640.
3. World Health Organization Collaborating Centre for Drug Statistics Methodology. 2013. Anatomical therapeutic chemical index: ampicillin and enzyme inhibitor. Norwegian Institute of Public Health, Oslo, Norway. http://www.whocc.no/atc_ddd_index/?code=J01CR01. Accessed November 2, 2015.
4. Fridkin S. 2015. Challenges and opportunities for rapidly advancing reporting and improving inpatient antibiotic use in the U.S.: overview of benchmarking antibiotic use. National Center for Emerging and Zoonotic Infectious Diseases, Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, GA. http://asp.nm.org/uploads/2/4/4/1/24413384/3_fridkin_benchmarking.pdf. Accessed November 2, 2015.
5. Pakyz AL, MacDougall C, Oinonen M, Polk RE. 2008. Trends in antibacterial use in US academic health centers: 2002 to 2006. Arch Intern Med 168:2254–2260. doi:10.1001/archinte.168.20.2254.
6. The White House. 2014. National action plan for combating antibiotic-resistant bacteria. https://www.whitehouse.gov/sites/default/files/docs/national_action_plan_for_combating_antibotic-resistant_bacteria.pdf. Accessed November 2, 2015.
7. Laplace PS. 1986. Memoir on the probability of the causes of events. Statist Sci 1:364–378. doi:10.1214/ss/1177013621.
8. World Health Organization. 2015. Health topics: disease outbreaks. World Health Organization, Geneva, Switzerland. http://www.who.int/topics/disease_outbreaks/en/. Accessed November 2, 2015.
9. Sun L, Klein EY, Laxminarayan R. 2012. Seasonality and temporal correlation between community antibiotic use and resistance in the United States. Clin Infect Dis 55:687–694. doi:10.1093/cid/cis509.
10. Kelesidis T, Braykov N, Uslan DZ, Morgan DJ, Gandra S, Johannsson B, Schweizer ML, Weisenberg SA, Young H, Cantey J, Perencevich E, Septimus E, Srinivasan A, Laxminarayan R. 2016. Indications and types of antibiotic agents used in 6 acute care hospitals, 2009-2010: a pragmatic retrospective observational study. Infect Control Hosp Epidemiol 37:70–79. doi:10.1017/ice.2015.226.
11. Becketti S. 2013. Introduction to time series using Stata. Stata Press, College Station, TX.
12. Zou YM, Ma Y, Liu JH, Shi J, Fan T, Shan YY, Yao HP, Dong YL. 2015. Trends and correlation of antibacterial usage and bacterial resistance: time series analysis for antibacterial stewardship in a Chinese teaching hospital (2009-2013). Eur J Clin Microbiol Infect Dis 34:795–803. doi:10.1007/s10096-014-2293-6.
13. Gupta RK, George R, Nguyen-Van-Tam JS. 2008. Bacterial pneumonia and pandemic influenza planning. Emerg Infect Dis 14:1187–1192. doi:10.3201/eid1408.070751.
14. Metersky ML, Masterton RG, Lode H, File TM Jr, Babinchak T. 2012. Epidemiology, microbiology, and treatment considerations for bacterial pneumonia complicating influenza. Int J Infect Dis 16:e321–e331. doi:10.1016/j.ijid.2012.01.003.
15. United States Air Force. 1996. Basic tools for process improvement, module 10: control chart. US Navy Pacific Fleet's handbook for basic process improvement (BPI). http://www.au.af.mil/au/awc/awcgate/navy/bpi_manual/mod10-control.pdf.
16. NIST/SEMATECH. 2012. e-Handbook of statistical methods. National Institute of Standards and Technology, U.S. Department of Commerce, Gaithersburg, MD. http://www.itl.nist.gov/div898/handbook/. Accessed February 25, 2016.
17. Sellick JA. 1993. The use of statistical process control charts in hospital epidemiology. Infect Control Hosp Epidemiol 14:649–656. doi:10.2307/30149749.
18. Benneyan JC. 2001. Design, use, and performance of statistical control charts for clinical process improvement. Northeastern University, Boston, MA. http://www1.coe.neu.edu/~benneyan/papers/intro_spc.pdf. Accessed February 25, 2016.
19. StataCorp LP. 2015. Regress postestimation—postestimation tools for regress, p 2134–2147. Stata Base Reference Manual, release 14. Stata Press, College Station, TX.
