Author manuscript; available in PMC: 2009 Sep 18.
Published in final edited form as: Am J Manag Care. 2009 Feb;15(2):137–144.

Reporting hospitals’ antibiotic timing in pneumonia: adverse consequences for patients?

Mark W Friedberg 1, Ateev Mehrotra 1, Jeffrey A Linder 1
PMCID: PMC2746403  NIHMSID: NIHMS132456  PMID: 19284811

Abstract

Objective

In 2004, the Hospital Quality Alliance began nationwide public reporting of hospital quality performance. There has been concern that public reporting of the antibiotic timing measure (percentage of patients with pneumonia receiving antibiotics within 4 hours) has led to unintended adverse consequences for patients. In a nationally representative hospital sample, we sought to determine whether public reporting has been associated with overdiagnosis of pneumonia, excessive antibiotic use, or inappropriate prioritization of patients with respiratory symptoms.

Study Design

Retrospective analyses of 13,042 emergency department (ED) visits by adult patients with respiratory symptoms in the National Hospital Ambulatory Medical Care Survey, 2001–2005.

Methods

Rates of pneumonia diagnosis, antibiotic use, and waiting times to see a physician were compared before and after public reporting. These outcomes were also compared between hospitals with different antibiotic timing scores.

Results

Comparing outcomes before and after antibiotic timing score reporting, there were no differences in rates of pneumonia diagnosis (10% vs. 11% of all ED visits, P = .72) or antibiotic administration (34% vs. 35%, P = .21). Mean waiting times to be seen by a physician increased similarly for patients with and without respiratory symptoms (11 vs. 6 minute increase, respectively; P = .29). After adjustment for confounders, hospitals with higher 2005 antibiotic timing scores had shorter mean waiting times for all patients, but there were no significant score-related trends for rates of pneumonia diagnosis or antibiotic use.

Conclusions

Despite concerns, public reporting of hospital antibiotic timing scores has not led to increased pneumonia diagnosis, antibiotic use, or a change in patient prioritization.

Keywords: Quality Measurement / Benchmarking, Performance Measurements, Quality of Care, Emergency Care, Consumer-Directed Healthcare, Infectious Disease


Take-Away Points

  • Despite concerns about potential unintended adverse consequences, a national sample of hospitals reveals little evidence that public reporting of hospitals’ antibiotic timing in pneumonia has led to widespread overdiagnosis of pneumonia or inappropriate antibiotic administration.

  • Explainable variation in hospitals’ antibiotic timing scores is primarily attributable to differences in patients’ waiting times to see a physician, rather than differences in rates of pneumonia diagnosis or antibiotic administration.

  • Future monitoring of the effects of public reporting programs may provide valuable guidance to policy makers, especially in areas of controversy.

Introduction

To encourage improvement in hospitals’ quality of care, the Hospital Quality Alliance (HQA) began an initiative in 2004 to collect and publicly report hospital-level performance on 10 quality measures.1–3 Over 98% of U.S. acute care hospitals supply performance data to the HQA,4, 5 but concerns have been raised about potential unintended consequences of public reporting.6–8

Hospitals’ responses to the HQA measure “Initial Antibiotic Received within 4 Hours of Hospital Arrival” have been of particular concern. Hospitals feeling pressure to improve antibiotic timing performance could potentially “play for the test” by encouraging the premature (and potentially inaccurate) diagnosis of pneumonia, giving antibiotics indiscriminately to patients with respiratory symptoms, or inappropriately prioritizing patients likely to have pneumonia ahead of others whose medical conditions may be more urgent.9–14

Prior studies from single institutions and self-selected hospitals participating in a pay-for-performance pilot program suggest that incentives tied to pneumonia antibiotic timing scores have led to increased rates of inaccurate pneumonia diagnosis and inappropriate antibiotic administration in emergency departments (EDs).15–17 However, whether public reporting on antibiotic timing has had similar effects on a national scale is unknown.

Because antibiotic timing scores are felt to reflect care delivered in EDs,18 we used a national database of ED visits and compared the care of patients with respiratory symptoms before and after the start of public reporting. We assessed whether these patients were more likely to be diagnosed with pneumonia, were more likely to be prescribed antibiotics, and had shorter waiting times to see a physician relative to patients without respiratory symptoms (reflecting patient prioritization). To test the hypothesis that hospitals with higher scores were “playing for the test,” we also assessed differences on these 3 measures between hospitals scoring higher and lower on the pneumonia antibiotic timing measure.

Methods and Materials

Emergency Department Visit Data

We used patient visit data from the Emergency Department module of the nationally representative National Hospital Ambulatory Medical Care Survey (NHAMCS), which is administered annually by the National Center for Health Statistics (NCHS).19 A rotating panel of nonfederal, general, and short-stay hospitals participates in the NHAMCS, and visits from each hospital in the panel are sampled approximately every 15 months.

Trained hospital staff record patient and clinical data on standardized forms for each visit. Patient data include demographic information, expected source of payment, and nursing home residence. Clinical data include up to 3 reasons for visit, up to 3 physician diagnoses (in the ED), up to 8 medications administered during the ED visit (but not the timing of medication administration), waiting time to see a physician (2003–2005 only), triage vital signs, and orientation to person, place, and time.20

The NHAMCS ED module collected 182,332 patient visit records between 2001 and 2005. Among contacted hospitals over the study period, 90% to 95% participated. Visits are weighted to allow extrapolation of survey results to national estimates.19 For our analysis the NCHS created anonymous hospital identifiers that allowed longitudinal tracking of each participating hospital.
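As a concrete illustration of how these visit weights extrapolate to national estimates, the sketch below sums the NCHS-assigned weight over qualifying records. The column names and data are hypothetical (the public-use NHAMCS files define their own variables); this is not NCHS code.

```python
import pandas as pd

# Hypothetical sampled visit records: each weight is the number of U.S.
# ED visits that the sampled record represents.
visits = pd.DataFrame({
    "age": [67, 34, 52],
    "respiratory_symptoms": [True, True, False],
    "visit_weight": [3120.0, 2875.0, 4010.0],
})

# National estimate = sum of weights over qualifying sampled records.
adult_resp = visits[(visits["age"] >= 18) & visits["respiratory_symptoms"]]
print(f"Estimated national visits: {adult_resp['visit_weight'].sum():,.0f}")
```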

The NCHS institutional review board approved all NHAMCS protocols, and the confidentiality of the data is protected by law.21

Hospital Antibiotic Timing Scores and Hospital Characteristics

We obtained publicly available hospital-level HQA performance data on the timing of initial antibiotics delivered to patients admitted with pneumonia for 2004 and 2005. Hospital scores were calculated as the percentage of adult patients discharged from the hospital with a diagnosis of pneumonia who received their first dose of antibiotics within 4 hours of hospital arrival. Detailed specifications for this measure are available elsewhere.22

We also linked hospitals’ pneumonia antibiotic timing scores to other hospital characteristics obtained from the 2005 database of the American Hospital Association: number of beds, geographic region, urban location, ownership (for-profit, not-for-profit, and government), status of membership in the Council of Teaching Hospitals (COTH), and percentage of patients covered by Medicare and Medicaid. Of the 507 unique hospitals participating in the NHAMCS ED module during 2001–2005, NCHS staff matched 503 (99%) to their HQA pneumonia antibiotic timing scores and other corresponding hospital characteristics.

Consistent with prior literature, we included for analysis only hospitals reporting stable antibiotic timing scores, defined as scores calculated using at least 25 patient discharges during the year 2005.2 Of the 503 NHAMCS sample hospitals, 118 (23%) were excluded based on this criterion. A sketch of both calculations appears below.
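The following sketch shows how the antibiotic timing score (percentage of pneumonia discharges with a first antibiotic within 4 hours of arrival) and the ≥25-discharge stability criterion could be computed from a per-discharge table. The column names and data are hypothetical; the official measure logic is defined in the specifications manual.22

```python
import pandas as pd

# Hypothetical per-discharge records for two hospitals.
discharges = pd.DataFrame({
    "hospital_id": ["A"] * 30 + ["B"] * 10,
    "minutes_to_first_antibiotic": [150] * 24 + [300] * 6 + [100] * 10,
})

def timing_scores(df: pd.DataFrame, min_discharges: int = 25) -> pd.Series:
    """Percentage of pneumonia discharges treated within 4 hours (240 min),
    restricted to hospitals with a stable denominator."""
    grouped = df.groupby("hospital_id")["minutes_to_first_antibiotic"]
    counts = grouped.size()
    scores = grouped.apply(lambda m: 100.0 * (m <= 240).mean())
    return scores[counts >= min_discharges]  # drop unstable scores

print(timing_scores(discharges))  # hospital A: 80.0; B excluded (n = 10)
```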

Study Population

Our study population included ED visits during 2001–2005 by patients aged 18 and older whose primary reason for visit was “symptoms referable to the respiratory system” or “diseases of the respiratory system,” excluding conditions limited to the upper respiratory tract (e.g., nasal congestion). Among included visits, the most common specific reasons for visit were cough (50%), shortness of breath (24%), and “labored or difficult breathing” (11%). In supplementary analyses, inclusion of visits for upper respiratory conditions did not substantively alter our results.

Visits were included regardless of patients’ dispositions at the end of each ED visit (e.g., admitted to hospital, transferred, discharged to home). Supplementary analyses limited to visits resulting in hospital admission did not substantively alter our results.

Outcome Variables: Processes of ED Care

We had 3 major outcome variables: ED diagnosis, antibiotic use, and waiting time to see a physician. We used ICD-9 (International Classification of Diseases, Ninth Revision) codes to classify ED diagnoses as pneumonia (using the same codes used for antibiotic timing score reporting),22 bronchitis, congestive heart failure (CHF), or other. A single visit could carry more than 1 of these diagnoses, and we counted such visits toward the total in each applicable diagnostic category.

Antibiotic use was identified using the NCHS drug classification system.20, 23 As in previous studies, we classified antibiotic use in visits for asthma and CHF as inappropriate when pneumonia was not also an ED diagnosis.16 In supplementary analyses we also included antibiotic use in bronchitis as inappropriate without substantive changes to the results.

Statistical Analysis

We performed 2 main comparisons. First, we analyzed nationwide longitudinal trends and differences in the outcome variables (ED diagnosis, antibiotic use, and waiting time to see a physician) before and after the start of public score reporting among ED visits for respiratory symptoms. We designated January 1, 2004 as the first day of the public reporting period because this was the first day of care that could contribute to publicly available antibiotic timing scores. Because the antibiotic timing measure was first published in October 2003,4 we designated October 1, 2003 as the first day of the reporting period in supplementary analyses with substantively similar results.

Second, we conducted a cross-sectional analysis of relationships between 2005 antibiotic timing scores and the outcome variables, restricting our analysis to ED visits in the public reporting period (2004–2005). We hypothesized that if hospitals were “playing for the test” in order to raise their scores, patients with respiratory symptoms visiting hospitals with the highest 2005 scores would experience the highest rates of pneumonia diagnosis, the highest rates of antibiotic use, and, relative to patients without respiratory symptoms, the shortest waiting times.

We assessed relationships between categorical variables using the chi-squared test. Due to the non-normal distribution of waiting time to see a physician, waiting times were modeled using generalized log-linear regression.
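The sketch below illustrates these two unadjusted comparisons with simulated data: a chi-squared test on a table of outcome by period, and a log-linear model in which the coefficient for the reporting period has a multiplicative interpretation on waiting time. It is an illustration only; it ignores the NHAMCS survey weights and clustering that SUDAAN handles, and all numbers are invented.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Simulated counts: pneumonia diagnosis (yes/no) by reporting period.
table = np.array([[640, 5360],    # pre-reporting: diagnosed, not diagnosed
                  [710, 5290]])   # during reporting
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Log-linear waiting-time model: regress log(wait) on a reporting-period
# indicator, so the coefficient is a multiplicative (percent) effect.
rng = np.random.default_rng(0)
period = rng.integers(0, 2, 500)                   # 0 = pre, 1 = during
wait = np.exp(3.7 + 0.15 * period + rng.normal(0, 0.6, 500))
fit = sm.OLS(np.log(wait), sm.add_constant(period)).fit()
print(fit.params)  # slope near 0.15 => roughly 16% longer waits during reporting
```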

For the longitudinal analysis we constructed multivariable logistic regression models predicting each categorical outcome variable (ED diagnosis and antibiotic use) as a function of time period (pre-reporting vs. during reporting) and the above listed hospital-level characteristics from the American Hospital Association database. To adjust for case severity and patient characteristics known to be associated with ED processes of care,3, 24, 25 the models included patient age, gender, orientation to person/place/time (dichotomous), presence of fever (triage temperature >100.4 degrees Fahrenheit), residence in a nursing home (dichotomous), race and ethnicity (non-Hispanic white, non-Hispanic black, Hispanic, and other), anticipated source of payment (private, Medicare, Medicaid, or other), and season of the visit (winter, spring, summer, or fall). We constructed multivariable log-linear regression models for waiting time, including the same potential confounders. For the cross-sectional analysis we created similar models using hospitals’ 2005 antibiotic timing scores (continuous variable) rather than time period as the major predictor.
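A minimal sketch of the adjusted model structure follows, using simulated data and a subset of illustrative covariates (the full models also included race/ethnicity, payment source, and hospital characteristics). Unlike the SUDAAN models, this unweighted statsmodels version does not account for the survey design; it only shows the model form.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "pneumonia_dx": rng.integers(0, 2, n),
    "reporting_period": rng.integers(0, 2, n),  # 0 = 2001-2003, 1 = 2004-2005
    "age": rng.integers(18, 95, n),
    "female": rng.integers(0, 2, n),
    "fever": rng.integers(0, 2, n),             # triage temp > 100.4 F
    "nursing_home": rng.integers(0, 2, n),
    "season": rng.choice(["winter", "spring", "summer", "fall"], n),
})

# Logistic model: outcome as a function of period plus confounders.
model = smf.logit(
    "pneumonia_dx ~ reporting_period + age + female + fever "
    "+ nursing_home + C(season)", data=df
).fit(disp=0)
print(model.summary().tables[1])  # reporting_period is the coefficient of interest
```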

In order to assess the contribution of ED care processes to variation in antibiotic timing scores, we constructed supplementary linear regression models predicting the 2005 antibiotic timing score of the hospital associated with each ED visit as a function of pneumonia diagnosis, antibiotic use, and waiting time. We assessed the independent, mutually-adjusted contribution of each explanatory variable to overall variation in scores by comparing the partial R-squared values for each predictor. All analyses were performed using SUDAAN version 9.0.0 to account for data clustering and the complex NHAMCS sampling design.20
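The partial R-squared comparison can be made concrete as follows: for each predictor of the score, refit the linear model without that predictor and compute the share of the reduced model's residual variation that the predictor explains when added back. The sketch below uses simulated, illustrative data, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "pneumonia_dx": rng.integers(0, 2, n),
    "antibiotic_given": rng.integers(0, 2, n),
    "wait_minutes": rng.gamma(2.0, 25.0, n),
})
# Simulated scores in which waiting time carries most of the signal.
df["timing_score"] = 80 - 0.1 * df["wait_minutes"] + rng.normal(0, 10, n)

predictors = ["pneumonia_dx", "antibiotic_given", "wait_minutes"]
full = smf.ols("timing_score ~ " + " + ".join(predictors), data=df).fit()

for var in predictors:
    others = [v for v in predictors if v != var]
    reduced = smf.ols("timing_score ~ " + " + ".join(others), data=df).fit()
    # Partial R^2: fraction of the reduced model's unexplained variance
    # that adding `var` back explains.
    partial_r2 = (reduced.ssr - full.ssr) / reduced.ssr
    print(f"{var}: partial R^2 = {partial_r2:.3f}")
```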

Results

There were 385 hospitals participating in the NHAMCS from 2001 to 2005 that reported pneumonia antibiotic timing scores based on at least 25 observations in 2005. There were no significant differences between the NHAMCS hospital sample and the overall population of U.S. hospitals reporting at least 25 antibiotic timing score observations in 2005 [Table 1].

Table 1.

Characteristics of Hospitals with ≥25 Pneumonia Antibiotic Timing Observations in 2005

Characteristic                               NHAMCS hospitals (N = 385)   All hospitals (N = 3431)   P value for
                                             %* (95% CI)                  %                          difference†

Size (number of beds)
  <100 beds                                  30 (24–37)                   31                         .74
  100–400 beds                               56 (49–63)                   56
  >400 beds                                  14 (11–17)                   12

Region
  Northeast                                  18 (15–22)                   14                         .74
  Midwest                                    25 (20–30)                   28
  South                                      39 (34–44)                   39
  West                                       18 (14–23)                   18

Ownership
  For-profit                                 12 (8–18)                    17                         .10
  Not-for-profit                             74 (66–80)                   66
  Government                                 14 (9–21)                    17

Teaching status (member COTH)
  Teaching                                   11 (9–14)                    8                          .05
  Nonteaching                                89 (86–91)                   92

Urban setting
  Urban                                      82 (75–88)                   87                         .14
  Non-urban                                  18 (12–25)                   13

Mean % Medicare patients                     46 (44–48)                   45                         .42
Mean % Medicaid patients                     17 (16–18)                   17                         .95
Mean 2005 pneumonia antibiotic timing score  77 (75–79)                   77                         .52

Abbreviations: NHAMCS, National Hospital Ambulatory Medical Care Survey; CI, confidence interval; COTH, Council of Teaching Hospitals.

* All percentages for NHAMCS hospitals are weighted by hospital-level weights generated by the National Center for Health Statistics.

† P values are based on chi-squared tests for categorical variables and on t tests for differences in the means of continuous variables.

Based on the NHAMCS sample, there were an estimated 40 million (95% confidence interval, 39 to 42 million) ED visits to hospitals by adults with respiratory symptoms between 2001 and 2005. These visits for respiratory symptoms represented 11% of all ED visits, and this percentage was constant across all data years. There were no significant differences between the pre-reporting (2001–2003) and reporting (2004–2005) periods in patient sociodemographic factors (age, gender, race, and ethnicity), nursing home residence, or expected payment source [data not shown]. However, patients in the pre-reporting period were less likely to have a fever (8% pre-reporting vs. 11% during reporting, P = .003) and to be oriented to person, place, and time (77% pre-reporting vs. 85% during reporting, P < .001).

There was no significant difference in the rate of pneumonia diagnosis among ED visits for respiratory symptoms between the pre-reporting and reporting periods [Table 2]. Antibiotic use during ED visits increased among visits resulting in an ED diagnosis of pneumonia (70% pre-reporting vs. 78% during reporting, P = .01), but there were no significant changes in antibiotic use among patients with other ED diagnoses. Mean waiting times to be seen by a physician increased similarly for patients with and without respiratory symptoms (11 vs. 6 minute increase, respectively; P = .29). Adjustment for potential confounders including fever and orientation did not significantly alter these relationships.

Table 2.

Trends Among ED Visits for Respiratory Symptoms: Diagnosis Rates, Antibiotic Administration, and Waiting Times to See a Physician, 2001–2005*

                                         2001  2002  2003  2004  2005  P for    Pre-        During      P for        Adjusted P for
                                                                       trend‡   reporting†  reporting†  difference‡  difference§

ED diagnosis (% of visits)
  Pneumonia                               11     9    12    11    10    .07      10          11          .72          .06
  Bronchitis                              26    25    26    23    26    .47      26          25          .41          .17
  CHF                                      8    10    10     9     7    .06       9           8          .01          .40

Antibiotic usage among visits with… (% of visits)
  …any ED diagnosis                       34    31    36    35    36    .10      34          35          .21          .45
  …an ED diagnosis of pneumonia           66    66    78    78    78    .03      70          78          .01          .86
  …no ED diagnosis of pneumonia           27    25    27    26    28    .68      26          27          .65          .79
  …an “inappropriate” ED diagnosis||      22    20    21    22    26    .45      21          24          .12          .80

Mean waiting times to see a physician (minutes)¶
  Visits for respiratory symptoms         --    --    39    45    56   <.001     39          50         <.001         .06
  Visits not for respiratory symptoms     --    --    47    49    58   <.001     47          53          .02          .002
  Difference, respiratory vs. not         --    --     8     4     2    .03       8           3          .29          .06

Abbreviations: ED, emergency department; CHF, congestive heart failure.

* Among hospitals with ≥25 antibiotic timing observations in 2005.

† The “pre-reporting” period includes ED visits from January 1, 2001 to December 31, 2003. The “during reporting” period includes ED visits from January 1, 2004 to December 31, 2005.

‡ P values from chi-squared test for categorical variables and from regression coefficient Wald test for waiting times.

§ Adjusted P values reflect statistical significance of regression coefficients for the reporting time period in models adjusting for patient- and hospital-level characteristics.

|| “Inappropriate” diagnoses are asthma, pulmonary edema, and CHF without concurrent diagnosis of pneumonia.

¶ Waiting times to see a physician are not available for 2001 and 2002.

In our second analysis we examined cross-sectional relationships between hospitals’ 2005 pneumonia antibiotic timing scores and the outcome variables during the public reporting period. Pneumonia diagnosis rates did not significantly increase with antibiotic timing score [Table 3]. Patients visiting higher-scoring hospitals were more likely to receive an antibiotic (31% in the lowest score quintile vs. 44% in the highest, P = .004 for trend) and had shorter mean waiting times regardless of respiratory symptoms (66 vs. 38 minutes with respiratory symptoms, P < .001 for trend; 69 vs. 38 minutes without respiratory symptoms, P < .001 for trend; P = .45 for trend in the difference between symptom categories). After adjustment for confounders, only the relationship between higher scores and shorter waiting times remained a statistically significant trend.

Table 3.

Diagnosis Rates, Antibiotic Administration, and Waiting Times to See a Physician, Stratified by 2005 Pneumonia Antibiotic Timing Score*

                                         Antibiotic timing score quintile (2005)
                                         1            2            3            4            5             P for    Adjusted P
                                                                                                           trend†   for trend‡

Mean 2005 antibiotic timing
  score (range)§                         60 (25–68)   72 (69–76)   80 (77–82)   84 (83–86)   91 (87–100)     --        --

ED diagnosis: % of visits with each diagnosis (95% CI)
  Pneumonia                              10 (8–12)    10 (8–12)    10 (8–12)    12 (10–14)   13 (9–18)      .28       .76
  Bronchitis                             20 (17–23)   27 (23–31)   24 (20–30)   26 (23–30)   31 (27–36)    <.001      .09
  CHF                                     8 (7–10)     9 (7–11)     7 (6–9)      9 (6–12)     4 (3–7)       .47       .07

Antibiotic usage among visits with…: % of visits receiving an antibiotic (95% CI)
  …any ED diagnosis                      31 (26–35)   37 (33–41)   34 (31–38)   37 (32–42)   44 (41–48)     .004      .17
  …an ED diagnosis of pneumonia          73 (62–82)   78 (68–86)   81 (71–89)   80 (68–88)   81 (69–89)     .26       .23
  …no ED diagnosis of pneumonia          23 (19–28)   29 (25–33)   26 (22–30)   28 (23–33)   34 (29–39)     .04       .39
  …an “inappropriate” ED diagnosis||     24 (19–30)   24 (17–33)   21 (15–28)   26 (20–32)   32 (25–41)     .84       .68

Mean waiting times to see a physician: minutes (95% CI)
  Visits for respiratory symptoms        66 (57–75)   53 (44–62)   40 (34–47)   42 (32–51)   38 (32–45)    <.001     <.001
  Visits not for respiratory symptoms    69 (62–75)   57 (50–64)   44 (40–49)   44 (37–51)   38 (32–44)    <.001     <.001
  Difference, respiratory vs. not        3 (–4 to 9)  4 (–4 to 11) 4 (–1 to 10) 2 (–6 to 11) –1 (–5 to 4)   .45       .35

Abbreviations: ED, emergency department; CI, confidence interval; CHF, congestive heart failure.

* Among reporting period (2004–2005) ED visits for respiratory symptoms to hospitals with ≥25 pneumonia antibiotic timing observations in 2005.

† P values for test of trend reflect statistical significance of regression coefficient for continuous 2005 antibiotic timing score.

‡ Adjusted P values reflect statistical significance of regression coefficients for continuous 2005 antibiotic timing score in models adjusting for patient- and hospital-level characteristics.

§ Score range in the lowest quintile has been rounded to shield hospital identity.

|| “Inappropriate” diagnoses are asthma, pulmonary edema, and CHF without concurrent diagnosis of pneumonia.

Differences between hospitals in rates of pneumonia diagnosis, antibiotic prescribing, and waiting time explained only 4% of overall variation in 2005 antibiotic timing scores, suggesting a weak relationship between the measured ED processes of care and hospital antibiotic timing scores [data not shown]. Of this explainable score variation, waiting times accounted for 79%, and antibiotic usage accounted for 3%.

Discussion

Despite fears that publicly reporting hospitals’ pneumonia antibiotic timing scores would lead to increased pneumonia diagnosis, indiscriminate antibiotic use, and inappropriate prioritization of patients with respiratory symptoms,6, 9–11 we found little evidence of these unintended consequences in a nationally representative cohort of ED visits. Rates of pneumonia diagnosis and overall antibiotic use did not change significantly over time. Waiting times to see a physician increased similarly for patients with and without respiratory symptoms over 2003–2005, arguing against higher prioritization of patients likely to have pneumonia.

Moreover, cross-sectional analyses of ED visits during the public reporting period revealed that after adjustment for confounders, only waiting times differed significantly between hospitals with higher and lower antibiotic timing scores. Successful efforts to shorten waiting times for all patients would be better described as quality improvements than as adverse consequences.

While hospitals in the highest antibiotic timing score quintile had the highest rates of antibiotic administration for inappropriate diagnoses, the absence of a significant adjusted trend between antibiotic use and timing scores suggests that excessive antibiotic administration does not contribute meaningfully to the timing score ranking for most hospitals. Also, analysis of overall score variation revealed that only a very small percentage was attributable to differences in antibiotic administration rates. These findings are consistent with the overall stability of antibiotic administration rates before and after the start of public reporting.

However, if there are persistent concerns that hospitals seeking to achieve the highest antibiotic timing scores will, in the future, have increased rates of inappropriate antibiotic administration, then strategies focused on the top-scoring hospitals may be considered. For example, publicly reporting a score “band” rather than an exact score for hospitals scoring above a certain threshold could attenuate the incentive to achieve scores in the range generating these concerns.7, 26
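As an illustration of the banding idea, the sketch below reports exact scores only below a threshold (chosen arbitrarily here) and a band at or above it, removing the incentive to chase the last few percentage points.

```python
# A minimal sketch of score "banding"; the 90-point floor is hypothetical.
def reported_score(score: float, band_floor: float = 90.0) -> str:
    """Return the exact score below the band floor, a band at or above it."""
    if score >= band_floor:
        return f"{band_floor:.0f}-100"  # reported only as a band
    return f"{score:.0f}"               # reported exactly

for s in (72.0, 88.0, 93.0, 99.0):
    print(s, "->", reported_score(s))
```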

The implications of our longitudinal analysis differ from those suggested by some earlier reports. However, key differences in cohort design between our analysis (which included all patients presenting to EDs with respiratory complaints) and earlier single-institution studies (which included only those patients admitted with a pneumonia diagnosis) may not allow direct comparison of findings.15, 27 A study among self-selected Premier client hospitals in the Hospital Quality Incentive Demonstration (HQID) revealed higher rates of antibiotic use for inappropriate diagnoses (heart failure, asthma, and COPD) at hospitals with higher antibiotic timing scores.16 However, HQID hospitals faced financial incentives directly tied to antibiotic timing performance; all hospitals in our analysis publicly reported their antibiotic timing scores, but no data were available to identify which of them also faced antibiotic timing-based financial incentives. It is possible that, compared with public reporting, financial incentives have different effects on patient treatment patterns. Consistent with our findings, prior studies have demonstrated associations between ED overcrowding (an established contributor to longer waiting times) and lower antibiotic timing scores.28–30

Our study has limitations. First, the NHAMCS does not assess the accuracy of ED diagnoses, so we cannot directly conclude that diagnostic accuracy was unaffected by public reporting. However, because rates of pneumonia diagnosis did not change, any increase in diagnostic inaccuracy would have had to involve offsetting errors of commission and omission. Second, we lacked complete clinical information (e.g., presence or absence of infiltrate on chest X-ray, comorbid illnesses) to perform fuller case-mix adjustment. Third, our measure of patient prioritization (waiting time to see a physician) did not extend to other important processes of ED care, such as immediacy of imaging.18, 31 Fourth, our analysis included ED visits by patients whose primary reason for visit was a symptom referable to the lower respiratory tract. It is possible that public reporting of antibiotic timing scores had a different impact on patients with symptoms less specific for pneumonia (e.g., delirium, fever, chest pain). Finally, absence of proof is not proof of absence: failure to detect significant unintended consequences of public antibiotic timing score reporting could be due to insufficient statistical power. However, observed rates of pneumonia diagnosis and antibiotic use were remarkably stable over time, and waiting times for all patients (regardless of respiratory symptoms) accounted for the majority of explainable between-hospital variation in antibiotic timing scores.

External incentive programs designed to encourage health care quality improvement are becoming increasingly common, and concerns about unintended consequences of these programs have surfaced in a variety of clinical settings.32–35 Many of these concerns focus on providers “playing for the test” or “gaming the system.”6, 10, 36 In response to concerns that the pneumonia antibiotic timing measure had led to adverse unintended consequences, the Joint Commission and the National Quality Forum changed the cutoff for timely initial antibiotics in pneumonia from 4 hours to 6 hours after hospital arrival and excluded cases of “diagnostic uncertainty” from score calculation.37 The Infectious Diseases Society of America eliminated the time cutoff altogether, recommending only that initial antibiotics be received in the ED.38 To the extent that the outcomes we examined reflect possible adverse unintended consequences, our results do not support these changes.

In summary, we found that during the first 2 years of public reporting of hospital pneumonia antibiotic timing scores, concerns about potential widespread unintended consequences were not substantiated by the national experience. Patterns of ED pneumonia diagnosis and antibiotic use among patients with respiratory symptoms have remained stable over time. The EDs of hospitals with higher antibiotic timing scores distinguish themselves by having shorter waiting times to see a physician, suggesting that these scores communicate information of real importance to patients and payers. However, providers’ concerns about the potential adverse consequences of public reporting and pay-for-performance programs deserve attention.7, 8 Monitoring systems that target these concerns and prospectively measure the patient-level effects of quality improvement initiatives may provide valuable guidance (and reassurance) to policy makers.

Acknowledgments

Funding source: The study was supported by the Primary Care Teaching and Education Fund of the corresponding author’s hospital, a National Research Service Award from the Health Resources and Services Administration, and a Career Development Award from the Agency for Healthcare Research and Quality. No funding source had a role in the study design, analysis, or manuscript preparation.

Footnotes

Publisher's Disclaimer: This is the pre-publication version of a manuscript that has been accepted for publication in The American Journal of Managed Care (AJMC). This version does not include post-acceptance editing and formatting. The editors and publisher of AJMC are not responsible for the content or presentation of the prepublication version of the manuscript or any version that a third party derives from it. Readers who wish to access the definitive published version of this manuscript and any ancillary material related to it (eg, correspondence, corrections, editorials, etc) should go to www.ajmc.com or to the print issue in which the article appears. Those who cite this manuscript should cite the published version, as it is the official version of record.

References

  • 1. Williams SC, Schmaltz SP, Morton DJ, et al. Quality of care in U.S. hospitals as reflected by standardized measures, 2002–2004. N Engl J Med. 2005 Jul 21;353(3):255–264. doi:10.1056/NEJMsa043778.
  • 2. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals--the Hospital Quality Alliance program. N Engl J Med. 2005 Jul 21;353(3):265–274. doi:10.1056/NEJMsa051249.
  • 3. Pham JC, Kelen GD, Pronovost PJ. National study on the quality of emergency department care in the treatment of acute myocardial infarction and pneumonia. Acad Emerg Med. 2007 Oct;14(10):856–863. doi:10.1197/j.aem.2007.06.035.
  • 4. Hospital Quality Alliance Website. http://www.aha.org/aha_app/issues/HQA/index.jsp. Accessed July 30, 2008.
  • 5. CMS Hospital Quality Initiatives Website. http://www.cms.hhs.gov/HospitalQualityInits/01_Overview.asp. Accessed July 30, 2008.
  • 6. Wachter RM. Expected and unanticipated consequences of the quality and information technology revolutions. JAMA. 2006 Jun 21;295(23):2780–2783. doi:10.1001/jama.295.23.2780.
  • 7. Wachter RM, Flanders SA, Fee C, et al. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med. 2008 Jul 1;149(1):29–32. doi:10.7326/0003-4819-149-1-200807010-00007.
  • 8. Baum SG, Kaltsas A. Guideline tyranny: primum non nocere. Clin Infect Dis. 2008 Jun 15;46(12):1879–1880. doi:10.1086/588302.
  • 9. Thompson D. The pneumonia controversy: hospitals grapple with 4 hour benchmark. Ann Emerg Med. 2006 Mar;47(3):259–261. doi:10.1016/j.annemergmed.2006.01.027.
  • 10. Pines JM. Profiles in patient safety: antibiotic timing in pneumonia and pay-for-performance. Acad Emerg Med. 2006 Jul;13(7):787–790. doi:10.1197/j.aem.2006.02.015.
  • 11. Pines JM. Measuring antibiotic timing for pneumonia in the emergency department: another nail in the coffin. Ann Emerg Med. 2007 May;49(5):561–563. doi:10.1016/j.annemergmed.2006.12.007.
  • 12. Seymann GB. Community-acquired pneumonia: defining quality care. J Hosp Med. 2006 Nov;1(6):344–353. doi:10.1002/jhm.128.
  • 13. Pronovost PJ, Miller M, Wachter RM. The GAAP in quality measurement and reporting. JAMA. 2007 Oct 17;298(15):1800–1802. doi:10.1001/jama.298.15.1800.
  • 14. Metersky ML. Measuring the performance of performance measurement. Arch Intern Med. 2008 Feb 25;168(4):347–348. doi:10.1001/archinternmed.2007.81.
  • 15. Kanwar M, Brar N, Khatib R, et al. Misdiagnosis of community-acquired pneumonia and inappropriate utilization of antibiotics: side effects of the 4-h antibiotic administration rule. Chest. 2007 Jun;131(6):1865–1869. doi:10.1378/chest.07-0164.
  • 16. Drake DE, Cohen A, Cohn J. National hospital antibiotic timing measures for pneumonia and antibiotic overuse. Qual Manag Health Care. 2007 Apr–Jun;16(2):113–122. doi:10.1097/01.QMH.0000267448.32629.f8.
  • 17. Polgreen PM, Chen YY, Cavanaugh JE, et al. An outbreak of severe Clostridium difficile-associated disease possibly related to inappropriate antimicrobial therapy for community-acquired pneumonia. Infect Control Hosp Epidemiol. 2007 Feb;28(2):212–214. doi:10.1086/512174.
  • 18. Pines JM, Hollander JE, Lee H, et al. Emergency department operational changes in response to pay-for-performance and antibiotic timing in pneumonia. Acad Emerg Med. 2007 Jun;14(6):545–548. doi:10.1197/j.aem.2007.01.022.
  • 19. Nawar EW, Niska RW, Xu J. National Hospital Ambulatory Medical Care Survey: 2005 Emergency Department Summary. Advance Data From Vital and Health Statistics, No. 386. Hyattsville, MD: National Center for Health Statistics; 2007.
  • 20. National Center for Health Statistics. Public Use Microdata File Documentation, National Hospital Ambulatory Medical Care Survey, 2005. Hyattsville, MD: National Technical Information Service; 2007.
  • 21. 42 USC §242m (2005).
  • 22. Centers for Medicare & Medicaid Services, The Joint Commission. Specifications Manual for National Hospital Quality Measures, Version 1.02. 2005.
  • 23. Koch H, Campbell W. The Collection and Processing of Drug Information: National Ambulatory Medical Care Survey, 1980. Vital and Health Statistics, Series 2, No. 90. National Center for Health Statistics; 1982.
  • 24. Fine JM, Fine MJ, Galusha D, et al. Patient and hospital characteristics associated with recommended processes of care for elderly patients hospitalized with pneumonia: results from the Medicare quality indicator system pneumonia module. Arch Intern Med. 2002 Apr 8;162(7):827–833. doi:10.1001/archinte.162.7.827.
  • 25. Mortensen EM, Cornell J, Whittle J. Racial variations in processes of care for patients with community-acquired pneumonia. BMC Health Serv Res. 2004 Aug 10;4(1):20. doi:10.1186/1472-6963-4-20.
  • 26. Houck PM. Antibiotics and pneumonia: is timing everything or just a cause of more problems? Chest. 2006 Jul;130(1):1–3. doi:10.1378/chest.130.1.1.
  • 27. Welker JA, Huston M, McCue JD. Antibiotic timing and errors in diagnosing pneumonia. Arch Intern Med. 2008 Feb 25;168(4):351–356. doi:10.1001/archinternmed.2007.84.
  • 28. Pines JM, Hollander JE, Localio AR, et al. The association between emergency department crowding and hospital performance on antibiotic timing for pneumonia and percutaneous intervention for myocardial infarction. Acad Emerg Med. 2006 Aug;13(8):873–878. doi:10.1197/j.aem.2006.03.568.
  • 29. Derlet RW, Richards JR. Overcrowding in the nation’s emergency departments: complex causes and disturbing effects. Ann Emerg Med. 2000 Jan;35(1):63–68. doi:10.1016/s0196-0644(00)70105-3.
  • 30. Pines JM, Localio AR, Hollander JE, et al. The impact of emergency department crowding measures on time to antibiotics for patients with community-acquired pneumonia. Ann Emerg Med. 2007 Oct 1. doi:10.1016/j.annemergmed.2007.07.021.
  • 31. Pines JM, Morton MJ, Datner EM, et al. Systematic delays in antibiotic administration in the emergency department for adult patients admitted with pneumonia. Acad Emerg Med. 2006 Sep;13(9):939–945. doi:10.1197/j.aem.2006.04.013.
  • 32. Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med. 1999 Oct 7;341(15):1147–1150. doi:10.1056/NEJM199910073411511.
  • 33. Casalino LP, Alexander GC, Jin L, et al. General internists’ views on pay-for-performance and public reporting of quality scores: a national survey. Health Aff (Millwood). 2007 Mar–Apr;26(2):492–499. doi:10.1377/hlthaff.26.2.492.
  • 34. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005 Mar 9;293(10):1239–1244. doi:10.1001/jama.293.10.1239.
  • 35. Fung CH, Lim YW, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008 Jan 15;148(2):111–123. doi:10.7326/0003-4819-148-2-200801150-00006.
  • 36. File TM Jr, Gross PA. Performance measurement in community-acquired pneumonia: consequences intended and unintended. Clin Infect Dis. 2007 Apr 1;44(7):942–944. doi:10.1086/512436.
  • 37. Mitka M. JCAHO tweaks emergency departments’ pneumonia treatment standards. JAMA. 2007 Apr 25;297(16):1758–1759. doi:10.1001/jama.297.16.1758.
  • 38. Mandell LA, Wunderink RG, Anzueto A, et al. Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community-acquired pneumonia in adults. Clin Infect Dis. 2007 Mar 1;44(Suppl 2):S27–S72. doi:10.1086/511159.
