Author manuscript; available in PMC: 2011 Nov 1.
Published in final edited form as: J Am Coll Surg. 2010 Sep 15;211(5):601–608. doi: 10.1016/j.jamcollsurg.2010.07.006

The Relationship between Case-Volume, Care Quality, and Outcomes of Complex Cancer Surgery

Andrew D Auerbach 1, Judith Maselli 2, Jonathan Carter 3, Penelope S Pekow 4,5, Peter K Lindenauer 4
PMCID: PMC2989972  NIHMSID: NIHMS249451  PMID: 20829079

Abstract

Background

How case volume and quality of care relate to each other and to results of complex cancer surgery is not well understood.

Study Design

Observational cohort of 14,170 patients 18 years or older who underwent pneumonectomy, esophagectomy, pancreatectomy, or pelvic surgery for cancer between 10/1/2003 and 9/30/2005 at a United States hospital participating in a large benchmarking database. Case volumes were estimated within our dataset. Quality was measured by determining whether ideal candidates failed to receive appropriate perioperative medications (such as antibiotics to prevent surgical site infections), both as individual ‘missed’ measures and as the overall number of measures missed. We used hierarchical models to estimate effects of volume and quality on 30-day readmission, in-hospital mortality, length of stay, and costs.

Results

After adjustment, we noted no consistent associations between higher hospital or surgeon volume and mortality, readmission, length of stay, or costs. Adherence to individual measures was not consistently associated with improvement in readmission, mortality, or other outcomes. For example, continuing antimicrobials past 24 hours was associated with longer length of stay (21.5% higher, 95% CI 19.5% to 23.6%) and higher costs (17% higher, 95% CI 16% to 19%). In contrast, lower overall adherence, while not associated with differences in mortality or readmission, was consistently associated with longer length of stay (7.4% longer with one missed measure and 16.4% longer with 2 or more) and higher costs (5% higher with one missed measure, and 11% higher with 2 or more).

Conclusions

Although hospital and surgeon volume were not associated with outcomes, lower overall adherence to quality measures was associated with higher costs but not with differences in clinical outcomes. This finding may provide a rationale for improving care systems by maximizing care consistency, even if outcomes are not affected.

Introduction

The volume–outcome relationship (the observation that surgical outcomes improve at sites that perform a procedure more often) has become the focus of payor-driven proposals to regionalize care to high-volume centers1. This relationship has been of particular interest in complex cancer surgery, where evidence suggests that care from a more experienced surgeon and hospital produces better outcomes2-7.

However, little is known about the specific mechanisms that explain variation in outcomes between high and low volume centers or surgeons, and to what extent these can be attributed to differences in quality as measured by adherence to recommended care processes8, 9. If care quality is the primary factor explaining differences in outcomes between high and low volume centers, then patients in need of cancer surgery could expect similar results at high and low volume centers with similar quality measure performance. Conversely, if high volume centers are better regardless of adherence to recommended practices, then travel to a regional referral center would be the wisest course of action10.

When contrasted with the impact of case volume on outcomes, associations between individual quality measures and outcomes have been small4 or absent11-14. Recent data from our group confirm inconsistent associations between individual quality measures and outcomes in coronary artery bypass surgery15. However, increasing overall performance on quality measures may have a powerful impact on mortality15 and is also associated with lower costs16.

We hypothesized that, for patients undergoing complex cancer surgery, advantages seen at high-volume systems would be related to greater adherence to recommended care practices. To explore this hypothesis, we analyzed data collected from adults undergoing cancer surgery (e.g. pelvic exenteration, esophageal resection, pancreatic resection, or pneumonectomy2) in a nationally representative sample of United States hospitals. Using these data, we first examined the relationship between patient outcomes, hospital case volume, physician case volume, and care quality measures. We then examined the degree to which overall quality (an all-or-none measure of system reliability) influenced mortality in relationship to volume measures.

Methods

Sites and subjects

Our data were collected on 14,170 patients cared for by 1,629 physicians at 266 hospitals participating in Perspective (Premier Inc., Charlotte, North Carolina), a database developed for measuring quality and health care utilization that we have used in previous research57.

In addition to standard hospital discharge file data, Perspective contains a date-stamped log of all materials (e.g. serial compression devices used to prevent venous thromboembolism) and medications (e.g. beta-blockers) charged during hospitalization. Perspective charge data are collected electronically and undergo comprehensive auditing as part of Premier's efforts to ensure data validity. Previous research suggests that comorbidity indices derived from Premier data correspond closely to those collected from charts17.

Located in all regions of the United States, Perspective sites are representative of the US hospital population18-20: they are predominantly small to mid-size non-teaching facilities and serve a largely urban patient population. Perspective sites also perform similarly to non-Perspective sites on publicly reported quality measures. The institutional review board at UCSF approved our study, and our funder (the California HealthCare Foundation) had no role in the development or execution of the study or in preparation of the manuscript.

Patients were initially eligible for our analysis if they were admitted between 10/1/2003 and 9/30/2005 and were 18 years of age or older. Patients in this cohort who underwent complex cancer surgery were then identified using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) procedure and diagnosis codes, replicating methods used by Begg and colleagues2. Specifically, patients had to have a principal diagnosis of cancer and to have undergone one of the following surgeries as their principal procedure during hospitalization: esophageal resection (ICD-9-CM 42.40–42.42, 42.51–42.56, 42.58–42.59, 42.61–42.66, 42.68–42.69), pancreatic resection (ICD-9-CM 52.51, 52.33, 52.59, 52.6, 52.7), liver resection (ICD-9-CM 50.22, 50.3, 50.4), pelvic exenteration (ICD-9-CM 57.71, 68.8, 48.4–48.6), or pneumonectomy (ICD-9-CM 32.5, 32.3, 32.4).
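To make these selection rules concrete, the following is a minimal sketch of the cohort logic in Python/pandas. The DataFrame and column names (principal_proc, principal_dx_is_cancer, age, admit_date) are hypothetical rather than part of the Perspective schema, and the code is illustrative only, not the authors' actual programs.

```python
import pandas as pd

# ICD-9-CM principal-procedure codes from the Methods (ranges expanded for illustration).
PROCEDURE_CODES = {
    "esophageal resection": {"42.40", "42.41", "42.42", "42.51", "42.52", "42.53", "42.54",
                             "42.55", "42.56", "42.58", "42.59", "42.61", "42.62", "42.63",
                             "42.64", "42.65", "42.66", "42.68", "42.69"},
    "pancreatic resection": {"52.51", "52.33", "52.59", "52.6", "52.7"},
    "liver resection":      {"50.22", "50.3", "50.4"},
    "pelvic exenteration":  {"57.71", "68.8", "48.4", "48.5", "48.6"},
    "pneumonectomy":        {"32.5", "32.3", "32.4"},
}

def select_cohort(discharges: pd.DataFrame) -> pd.DataFrame:
    """Apply the eligibility rules described above to a discharge-level table."""
    all_codes = set().union(*PROCEDURE_CODES.values())
    eligible = (
        (discharges["age"] >= 18)
        & discharges["admit_date"].between("2003-10-01", "2005-09-30")
        & discharges["principal_proc"].isin(all_codes)   # principal procedure is a target surgery
        & discharges["principal_dx_is_cancer"]           # cancer principal diagnosis (flag assumed precomputed)
    )
    return discharges.loc[eligible]
```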

Data

In addition to patient age, sex, race or ethnicity, insurance information, and principal diagnosis, we classified comorbidities using software provided by the Agency for Healthcare Research and Quality based on methods developed by Elixhauser21. Data regarding in-hospital deaths, discharge status (home vs. other), costs, length of stay, and readmission to the index hospital within 30 days were obtained from the Perspective discharge file. Three-quarters of the hospitals that contribute data to Perspective submit actual costs directly from their hospital cost-accounting systems; for the remaining 25%, costs are estimated by applying the Medicare cost-to-charge ratio to hospital charges. Our data also included All Patient Refined Diagnosis Related Group (APR-DRG) Risk of Mortality scores, an administrative data-derived risk adjustment methodology used to account for patient severity of illness22-24. Finally, the database contained information about hospital size, teaching status, and location.

Definition of volume measures

Because some hospitals in our cohort did not contribute data for the entire study period, we estimated the annual case volume by dividing each hospital's or physician's observed patient count by the total number of months that the hospital or physician contributed patients to the dataset. These "annualized" volumes were then divided into quartiles so that one-quarter of the patient cohort was included in each quartile of volume, as done in previous work4, 25-27.
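A minimal sketch of this volume construction is shown below, assuming a hypothetical patient-level DataFrame with hospital_id (or physician_id) and discharge_month columns. The annualization (cases per contributing month, scaled to a year) and the patient-weighted quartile cut follow the description above, but the code is illustrative only.

```python
import pandas as pd

def annualized_volume(cohort: pd.DataFrame, unit: str) -> pd.Series:
    """Cases per year for each hospital or physician, adjusting for partial participation."""
    cases = cohort.groupby(unit).size()                          # observed patient count
    months = cohort.groupby(unit)["discharge_month"].nunique()   # months contributing data
    return cases / months * 12                                   # scale monthly rate to a year

def volume_quartile(cohort: pd.DataFrame, unit: str = "hospital_id") -> pd.Series:
    """Assign each patient a volume quartile so that ~25% of patients fall in each stratum."""
    per_patient = cohort[unit].map(annualized_volume(cohort, unit))
    # ranking first guarantees unique bin edges even when many providers share a volume
    return pd.qcut(per_patient.rank(method="first"), 4, labels=[1, 2, 3, 4])

# e.g. cohort["hosp_volume_q"] = volume_quartile(cohort, "hospital_id")
#      cohort["md_volume_q"]   = volume_quartile(cohort, "physician_id")
```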

Definition of missed quality measures

Using charge data, we translated recommendations from national guidelines8 into a series of dichotomous quality measures representing whether a perioperative medication was received during hospitalization. These measures captured whether antimicrobials were given to prevent surgical site infection on the operative day, whether an antimicrobial was continued inappropriately past the first day after surgery, whether a beta-blocker was received in the first 2 days after surgery, and whether appropriate strategies were used to prevent venous thromboembolism on the operative day.

Because inpatient diagnosis codes cannot reliably distinguish between complications and preexisting conditions, we measured the proportion of ideal candidates for each care process who failed to receive it (a ‘missed’ quality measure). For example, we considered the opportunity for beta-blocker use ‘missed’ if a patient did not receive the drug and did not have an ICD-9-coded principal or secondary diagnosis of hypotension, heart block, or congestive heart failure recorded in the hospital record. To provide a more sensitive measure of system-level ability to provide reliable care4, 25-27, we also counted the total number of quality measures missed during hospitalization.
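A minimal sketch of how the per-patient ‘missed measure’ flags and their total could be assembled is shown below, assuming hypothetical boolean charge and diagnosis flags (abx_day_of_surgery, abx_after_pod1, beta_blocker_2d, vte_ppx_2d, dx_hypotension, and so on). Only the beta-blocker eligibility rule mirrors the example in the text; the rest are placeholders rather than the study's actual definitions.

```python
import pandas as pd

def missed_measures(pt: pd.Series) -> pd.Series:
    """Return the four missed-measure indicators and their total for one patient."""
    no_contraindication = not (pt["dx_hypotension"] or pt["dx_heart_block"] or pt["dx_chf"])
    missed = {
        "missed_abx_day_of_surgery": not pt["abx_day_of_surgery"],
        "missed_abx_stopped_pod1":   bool(pt["abx_after_pod1"]),   # continuation past POD 1 counts as a miss
        "missed_beta_blocker":       (not pt["beta_blocker_2d"]) and no_contraindication,
        "missed_vte_prophylaxis":    not pt["vte_ppx_2d"],
    }
    missed["n_missed"] = sum(missed.values())
    return pd.Series(missed)

# e.g. quality = cohort.apply(missed_measures, axis=1)
```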

Outcome variables

Cost, length of stay, readmission, discharge status, and mortality outcomes were obtained from Perspective discharge abstract data, as described. Length of stay and costs were log-transformed to account for skew and to stabilize variance of residuals in multivariable models. Beta estimates and 95% confidence intervals for log-transformed outcomes were converted to percent differences using the formula 100*(EXP(estimate)-1).
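As an illustration of this back-transformation (the helper name is ours, not the authors'):

```python
import math

def pct_difference(beta: float) -> float:
    """Convert a coefficient from a log-transformed outcome model into a percent difference."""
    return 100.0 * (math.exp(beta) - 1.0)

# e.g. a coefficient of 0.195 on log length of stay gives pct_difference(0.195) ≈ 21.5%,
# the scale on which Table 3 reports the prolonged-antibiotic effect.
```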

Analysis

We first described study patients and hospitals using univariable methods. For dichotomous outcomes, multivariable alternating logistic regression models28 (SAS PROC GENMOD) were used to account for clustering of patients within physicians and physicians within hospitals and to calculate adjusted odds ratios and adjusted estimates. For continuous outcomes, mixed-effects models (SAS PROC MIXED) were used to account for clustering of patients within physicians and within hospitals. Models were constructed using manual variable selection: volume and quality measures were entered manually, while additional covariates (potential confounders) were included if they were associated with the outcome at p<0.05, if including them changed estimates for the primary predictors by more than 10%, or for face validity. All analyses were carried out using SAS version 9.1 (SAS Institute, Inc., Cary, NC).
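The continuous-outcome models can be approximated outside SAS; a minimal sketch using Python's statsmodels is shown below, with hypothetical column names (log_cost, hosp_volume_q, md_volume_q, n_missed, apr_drg_risk, hospital_id, physician_id) and a deliberately simplified covariate set. It illustrates the nesting of patients within physicians within hospitals, not the authors' exact specification, and it does not reproduce the alternating logistic regression used for the dichotomous outcomes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_log_cost_model(df: pd.DataFrame):
    """Mixed-effects model for log cost with hospital random intercepts and a
    physician variance component nested within hospitals (an analogue of SAS PROC MIXED)."""
    model = smf.mixedlm(
        "log_cost ~ C(hosp_volume_q) + C(md_volume_q) + C(n_missed) + C(apr_drg_risk) + age",
        data=df,
        groups=df["hospital_id"],                         # random intercept per hospital
        vc_formula={"physician": "0 + C(physician_id)"},  # physician component within hospital
    )
    return model.fit()

# result = fit_log_cost_model(cohort.assign(log_cost=np.log(cohort["total_cost"])))
# 100 * (np.exp(result.fe_params) - 1) converts the fixed effects to the adjusted
# percent differences reported in Tables 3 and 4.
```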

Results

Patient characteristics (Table 1)

Table 1.

Characteristics of Patients undergoing Cancer Surgery (n=14,170)

Characteristic Value
Patient age, y, mean (SD) 66.2 (11.0)
Male, n (%) 7946 (56.1%)
Race, n (%)
    White 10879 (76.8%)
    Other 1953 (13.8%)
    African American 1048 (7.4%)
    Hispanic 290 (2.1%)
Type of surgery, n (%)
    Pneumonectomy 9255 (65.3%)
    Pelvic exenteration 2725 (19.2%)
    Pancreatic resection 1022 (7.2%)
    Liver resection 636 (4.5%)
    Esophageal resection 532 (3.8%)
Admit source, n (%)
    Outpatient 13552 (95.6%)
    Emergency room 514 (3.6%)
    Transfer 104 (0.7%)
Discharge status, n (%)
    To home 8405 (59.3%)
    Transfer 65 (0.5%)
    SNF 1214 (8.6%)
    Home health care 3786 (26.7%)
    Dead 398 (2.8%)
    Hospice 26 (0.2%)
    Rehab 212 (1.5%)
    Other 63 (0.4%)
Primary payer, n (%)
    Medicare 8334 (58.8%)
    Managed care 3899 (27.5%)
    Indemnity 1081 (7.6%)
    Medicaid 538 (3.8%)
    Uninsured 203 (1.4%)
    Other 115 (0.8%)
DRG predicted mortality, n (%)*
    1 5461 (38.5%)
    2 5843 (41.2%)
    3 2061 (14.5%)
    4 805 (5.7%)
    Individual comorbidities, n (%)
    Hypertension 7116 (50.2%)
    Chronic pulmonary disease 5696 (40.2%)
    Metastatic cancer 3366 (23.8%)
    Diabetes 2187 (15.4%)
    Fluid and electrolyte disorders 2138 (15.1%)
    Deficiency anemia 1587 (11.2%)
    Hypothyroidism 1100 (7.8%)
    Peripheral vascular disease 837 (5.9%)
    Depression 818 (5.8%)
    Congestive heart failure 713 (5.0%)
    Obesity 703 (5.0%)
    Valvular disease 624 (4.4%)
    Solid tumor 551 (3.9%)
    Weight loss 478 (3.4%)
Characteristics of hospitals
    Location, n (%)
      Rural 1042 (7.3%)
      Urban 13128 (92.7%)
    No. of beds, n (%)
      0–99 104 (0.7%)
      100–199 526 (3.7%)
      200–299 1666 (11.8%)
      300–399 2708 (19.1%)
      400–499 2140 (15.1%)
      ≥ 500 7026 (49.6%)
    Teaching hospital, n (%) 5053 (35.7%)
Process measures, n (%)
    No prophylactic antibiotics on day of surgery 1247 (8.8%)
    Antibiotics continued inappropriately 8797 (62.1%)
    No beta-blockers in first 2 days after surgery 2131 (15.0%)
    No venous thromboembolism prevention in first 2 days 10319 (72.8%)
    No. of process measures missed
      0 1255 (8.9%)
      1 4970 (35.1%)
      2 6416 (45.3%)
      3 or more 1529 (10.8%)
Outcomes, n (%)
    Mortality up to 30 days 427 (3.0%)
    Readmission at 30 days 1611 (11.4%)
Resource use
    Median length of stay, d (IQR) 8 (6, 11)
    Median total costs, $ (IQR) 16,529 (12,335, 23,640)
* All Patient Refined Diagnosis Related Group risk of mortality score, with higher score indicating higher risk for in-hospital death.22-24

A total of 14,170 patients underwent one of our target surgeries at one of our study sites between 10/1/2003 and 9/30/2005. Mean patient age was 66.2 years (standard deviation 11.0 years), and 56% were men. Most were white and had Medicare insurance. The most common Elixhauser-defined comorbidities in our cohort were hypertension (50.2%), chronic pulmonary disease (40.2%), and metastatic cancer (23.8%). Three percent (427 patients) died during the initial hospitalization or a subsequent admission to the same hospital, and 11% were readmitted within 30 days.

The proportion of patients who did not receive our target medications varied. Few did not receive a beta-blocker (15%) or had no antimicrobial charges on the operative day (9%); nearly three-quarters (73%) had no venous thromboembolism preventive measures, and 62% had antimicrobials continued after the first postoperative day. Few patients (9%) had no missed quality measures, 35% missed one, and 56% missed two or more.

Hospital and physician volume and rates of missed quality measures (Table 2)

Table 2.

Number of Hospitals or Providers and Missed Quality Measures per Quartile of Patient Volume

n No. of hospitals or providers Median volume (IQR) Mean no. of missed quality measures (SD)
Hospital annual volume
      1st quartile 3560 174 13 (8, 19) 1.71 (0.75)
      2nd quartile 3550 49 39 (35, 44) 1.63 (0.79)
      3rd quartile 3590 29 63 (58, 71) 1.44 (0.78)
      4th quartile 3470 14 110 (105, 148) 1.57 (0.91)
Physician annual volume
      1st quartile 3532 913 4 (3, 5) 1.68 (0.78)
      2nd quartile 3537 404 8 (7, 9) 1.58 (0.77)
      3rd quartile 3592 212 15 (13, 17) 1.49 (0.81)
      4th quartile 3509 100 29 (24, 41) 1.61 (0.89)

Most hospitals (174 hospitals, 65%) and physicians (913 physicians, 56%) were in the lowest volume quartile. Median hospital volume ranged from 13 cases per year (IQR 8, 19) in the lowest quartile to 110 (IQR 105, 148) in the highest. Median physician volume ranged from 4 cases per year (IQR 3, 5) in the lowest quartile to 29 (IQR 24, 41) in the highest. The mean number of quality measures missed was similar across physician and hospital volume quartiles.

Effects of volume and individual quality of care measures on outcomes (Table 3)

Table 3.

Volume, Individual Quality Measures, and Outcomes of Major Cancer Surgery

Value Mortality, adjusted Odds Ratio (95% CI) Readmission, adjusted Odds Ratio (95% CI) Length of stay, adjusted % difference (95% CI) Costs, adjusted % difference (95% CI)
Hospital annual volume
      1st quartile 1.43 (0.91, 2.32) 1.18 (0.92, 1.52) −3.40 (−9.83, 3.48) −11 (−20, 0.1)
      2nd quartile 1.18 (0.76, 1.82) 1.01 (0.81, 1.27) −0.69 (−7.70, 6.85) −3 (−14, 9)
      3rd quartile 1.09 (0.72, 1.66) 1.15 (0.96, 1.39) −0.86 (−8.24, 7.11) −6 (−17, 7)
      4th quartile Referent Referent Referent Referent
Physician annual volume
      1st quartile 1.32 (0.92, 1.89) 0.99 (0.81, 1.22) 12.19 (7.46, 17.13) −1 (−5, 3)
      2nd quartile 1.25 (0.87, 1.81) 0.97 (0.82, 1.15) 9.92 (5.13, 14.92) 3 (−1, 7)
      3rd quartile 1.06 (0.76, 1.48) 0.93 (0.77, 1.12) 3.52 (−1.25, 8.53) −1 (−5, 3)
      4th quartile Referent Referent Referent Referent
Missed quality measures
      No beta blockers first 2 days 0.56 (0.39, 0.79) 1.04 (0.87, 1.24) −10.53 (−13.03, −7.96) −11 (−13, −9)
      No prophylactic antibiotics on day of surgery 1.18 (0.78, 1.78) 1.05 (0.84, 1.32) 0.12 (−3.19, 3.55) −1 (−3, 2)
      Antibiotics received after first postoperative day 1.05 (0.79, 1.39) 1.08 (0.96, 1.21) 21.54 (19.49, 23.61) 17 (16, 19)
      No VTEP first 2 days 0.81 (0.62, 1.06) 0.88 (0.77, 1.00) −1.20 (−3.20, 0.84) −3 (−4, −1)

Lower hospital volumes were not associated with higher risk of mortality after adjustment, although odds ratios were all greater than 1 at lower volume sites. Similarly, there were no statistically significant associations between volume measures and readmission after adjusting for patient factors. In contrast, lower volume sites and surgeons tended to have lower costs after adjustment. Associations between individual quality measures and mortality, length of stay, costs, or readmission were inconsistent.

Effects of volume and overall quality on outcomes, length of stay, and costs (Table 4)

Table 4.

Volume, Overall Quality Measures, and Outcomes of Major Cancer Surgery

Value Mortality, adjusted Odds Ratio (95% CI) Readmission, adjusted Odds Ratio (95% CI) LOS, adjusted % difference (95% CI) Costs, adjusted % difference (95% CI)
Hospital annual volume
    1st quartile 1.39 (0.91, 2.13) 1.19 (0.93, 1.52) −3.40 (−9.89, 3.56) −11 (−20, 0.3)
    2nd quartile 1.15 (0.76, 1.74) 1.01 (0.81, 1.26) −1.11 (−8.15, 6.47) −4 (−15, 9)
    3rd quartile 1.09 (0.73, 1.62) 1.16 (0.97, 1.39) −0.31 (−7.80, 7.78) −5 (−17, 8)
    4th quartile Referent Referent Referent Referent
Physician annual volume
    1st quartile 1.30 (0.92, 1.84) 0.99 (0.81, 1.21) 12.82 (8.03, 17.82) −0.2 (−4, 4)
    2nd quartile 1.23 (0.86, 1.77) 0.97 (0.82, 1.15) 10.53 (5.65, 15.59) 3 (−1, 7)
    3rd quartile 1.05 (0.76, 1.45) 0.93 (0.77, 1.12) 3.67 (−1.11, 8.74) −1 (−5, 4)
    4th quartile Referent Referent Referent Referent
No. of missed quality measures
    No missed measures Referent Referent Referent Referent
    1 missed measure 0.73 (0.44, 1.20) 0.80 (0.66, 0.97) 7.42 (4.35, 10.57) 5 (2, 7)
    2–4 missed measures 0.69 (0.41, 1.15) 0.85 (0.70, 1.03) 16.37 (12.91, 19.93) 11 (9, 14)

In analyses assessing the association between total number of quality measures missed during hospitalization and patient outcomes, there were no statistically significant associations between the number of measures missed and our key outcomes. However, both costs and length of stay were significantly increased if 1 or more measures were missed. Importantly, inclusion of overall quality in these models did not reveal any underlying associations between volume and any of our study outcomes.

Discussion

In this cohort of patients undergoing complex cancer surgery, we observed no statistically significant associations between higher volume and improved outcomes, or between individual quality measures and improved outcomes. When quality was measured as an overall count, worse overall quality (indicated by the number of measures missed during hospitalization) was not associated with clinical outcomes but was strongly associated with higher costs and longer length of stay. These findings suggest that quality improvement efforts aimed at improving the reliability of the systems that care for cancer surgery patients may have a substantial impact on costs of care.

A large literature describes the relationship between higher volume and better outcomes in cancer surgery2-7. This observation has led to endorsement of case volume as a way to identify preferred sites and improve patient outcomes2-7, an approach aptly termed ‘follow the crowd.’1 However, regionalization of services poses practical problems29, and the evidence that volume benchmarks can accurately identify ‘best’ sites has limitations10, 30-33. We did not see a striking association between higher volume and better outcomes. This may be because we had a relatively small sample size compared with previous work2, 4, or because previous studies were able to include longer-term outcomes at fixed time periods34, 35. Longer periods of follow-up accrue more events, further increasing statistical power to compare events across volume strata, and potentially increasing sensitivity to the effects of high-quality postsurgical care for cancer patients provided at more specialized high volume centers long after hospitalization. Although we selected cases using previously published diagnosis codes, pooling a number of fairly disparate surgical procedures may also have limited our ability to detect a volume effect by attenuating our ability to discern hospitals’ or surgeons’ procedure-specific experience. Notwithstanding these limitations, it is important to note that others have found that the volume-outcomes relationship in cancer surgery may be weaker than previously described8, 36. This may be because secular trends in surgical outcomes are disproportionately affecting high-mortality centers32, or because pressure to contract selectively on the basis of volume has already moved a substantial share of cases away from lower volume centers and towards higher volume ones. Although we used all available diagnosis code data for risk adjustment in our models and attempted to avoid pitfalls described by others37, we may have been unable to fully adjust for shifting of higher-complexity patients to higher-volume centers. This possibility is suggested by the observation that higher volume centers were more costly, although it seems equally possible that larger centers simply tend to provide more costly care.

Few of our individual process measures were associated with improvements in outcomes or resource use. While this may be because we used measures that parallel but do not entirely replicate chart-abstracted process measures, our data are consistent with previous evidence suggesting that performance on publicly reported quality measures explains only a small portion of differences in patient outcomes38. In fact, early experience with the Surgical Care Improvement Project (SCIP) measures, upon which our measures were based, suggests no relationship between better performance on individual quality measures and improved outcomes in colorectal surgery11-14. Like the SCIP measures, our quality measures address a few key processes in perioperative care and do not capture other key elements of operative or perioperative care.

In contrast, overall quality is thought to be a measure of a system's ability to deliver care reliably7, 34, 39-41; reliability and consistency form the rationale for the growing use of checklists in clinical care42-44. In our study, overall quality represents the proportion of patients who did not ‘miss’ an opportunity to receive appropriate care. Our measure was developed (to the greatest extent possible with our data) using information that might represent appropriate withholding of a medication, and as such it represents the cumulative impact of multiple appropriate clinical decisions in addition to the reliability of the system of care. This study did not demonstrate the strong impact of overall care quality on mortality observed in our previous work15 (for reasons described earlier), though it is important to note that maximizing overall quality in this study would save approximately $3400 per patient. When applied to the more than 80% of patients in our study who missed at least 1 quality measure, such cost savings would have enormous economic impact.
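As a rough, back-of-the-envelope illustration of that scale (our own arithmetic, not an analysis from the study, using the approximately $3400 per-patient figure above and the 91.1% of patients in Table 1 who missed at least one measure):

$$ 14{,}170 \times 0.911 \times \$3{,}400 \approx \$44\ \text{million in this cohort alone.} $$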

Our study has a number of limitations. Because we used administrative data, we cannot easily distinguish complications from preexisting disease and cannot replicate chart-based SCIP measures exactly. However, we constructed our quality measures to focus on patients with no documented contraindications, and we did not use comorbidities to define outcomes. Our quality measures focus primarily on inpatient medications and cannot distinguish between continuation of home medications and initiation of medications in the hospital. This factor may influence the associations seen between beta-blocker use and outcomes, but is less likely to affect antimicrobial or serial compression device use. In addition, our quality measures were collected from electronic billing systems rather than chart abstraction and have not been validated in a scientific study. However, because Premier's business model focuses on provision of accurate benchmarking data to its members, all charge and diagnosis data are regularly audited for accuracy17. As an observational study, the results are subject to biases related to nonrandom assignment of patients to receive medications or devices, as well as the documentation biases described above. However, secondary analyses that adjusted for hospital-level likelihood of receipt of quality measures did not suggest this was a substantial threat (data not presented). Although Premier hospitals are similar to other US centers in terms of size, teaching status, and location, they may differ from non-Premier hospitals in other ways. While we constructed our volume measures to be consistent with those employed in previous work, they may not adequately represent expertise accrued if low volume surgeons were performing other complex surgeries. Although we selected our surgery types according to previous studies2, it is possible that our approach missed some procedures performed less frequently. In addition, our study had a somewhat smaller number of high volume hospitals than other studies, which may have further limited our ability to detect strong volume effects. Our volume measures do not account for cases that surgeons performed outside hospitals participating in Premier or for cases of similar complexity within target hospitals, and as such may underestimate surgeon volume and experience. Finally, it is likely that some surgeries in our dataset were at least partially performed by fellows or residents. However, whether the surgery was performed at a teaching hospital was not a significant predictor of outcome in our models.

Our study represents a first look at how case volume, care quality, and outcomes of care are linked. Borderline associations between improved outcomes and higher volume in our data may be offset by higher costs at high volume sites. Quality of care as measured in our data had, at best, little association with patient outcomes, but worse-quality care was far costlier. Efforts to simultaneously encourage patients to ‘follow the crowd’ and increase the quality of health care have strong face validity, but may have heterogeneous impact on the value of health care.

Acknowledgment

The study was supported by Grant #05-1755 from the California Healthcare Foundation. Dr Auerbach was also supported by a K08 Patient Safety Research and Training Grant (K08 HS11416-02) from the Agency for Healthcare Research and Quality during the execution of this project.

We would like to acknowledge Erin Hartman, MS, for her expert editorial assistance, as well as Denise Remus, MD and Kathy Belk for their work in assembling the dataset used for this analysis.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Disclosure Information: Nothing to disclose.

References Cited

1. Birkmeyer JD, Dimick JB. Potential benefits of the new Leapfrog standards: effect of process and outcomes measures. Surgery. 2004;135:569–575. doi: 10.1016/j.surg.2004.03.004.
2. Begg CB, Cramer LD, Hoskins WJ, Brennan MF. Impact of hospital volume on operative mortality for major cancer surgery. JAMA. 1998;280:1747–1751. doi: 10.1001/jama.280.20.1747.
3. Billingsley KG, Morris AM, Dominitz JA, et al. Surgeon and hospital characteristics as predictors of major adverse outcomes following colon cancer surgery: understanding the volume-outcome relationship. Arch Surg. 2007;142:23–31; discussion 32. doi: 10.1001/archsurg.142.1.23.
4. Birkmeyer JD, Sun Y, Goldfaden A, et al. Volume and process of care in high-risk cancer surgery. Cancer. 2006;106:2476–2481. doi: 10.1002/cncr.21888.
5. Finlayson EV, Birkmeyer JD. Effects of hospital volume on life expectancy after selected cancer operations in older adults: a decision analysis. J Am Coll Surg. 2003;196:410–417. doi: 10.1016/S1072-7515(02)01753-2.
6. Finlayson EV, Goodney PP, Birkmeyer JD. Hospital volume and operative mortality in cancer surgery: a national study. Arch Surg. 2003;138:721–725; discussion 726. doi: 10.1001/archsurg.138.7.721.
7. Hodgson DC, Zhang W, Zaslavsky AM, et al. Relation of hospital volume to colostomy rates and survival for patients with rectal cancer. J Natl Cancer Inst. 2003;95:708–716. doi: 10.1093/jnci/95.10.708.
8. Surgical Care Improvement Project. 2007. http://www.medqic.org/dcs/ContentServer?cid=1122904930422&pagename=Medqic%2FContent%2FParentShellTemplate&parentName=Topic&c=MQParents. Accessed June 25, 2010.
9. Khuri SF. Safety, quality, and the National Surgical Quality Improvement Program. Am Surg. 2006;72:994–998; discussion 1021–1030, 1133–1148.
10. Dimick JB, Finlayson SR, Birkmeyer JD. Regional availability of high-volume hospitals for major surgery. Health Aff (Millwood). 2004;Suppl Web Exclusives:VAR45–53. doi: 10.1377/hlthaff.var.45.
11. Nguyen N, Yegiyants S, Kaloostian C, et al. The Surgical Care Improvement Project (SCIP) initiative to reduce infection in elective colorectal surgery: which performance measures affect outcome? Am Surg. 2008;74:1012–1016. doi: 10.1177/000313480807401028.
12. Hawn MT, Itani KM, Gray SH, et al. Association of timely administration of prophylactic antibiotics for major surgical procedures and surgical site infection. J Am Coll Surg. 2008;206:814–819; discussion 819–821. doi: 10.1016/j.jamcollsurg.2007.12.013.
13. Dull D, Baird SK, Dulac J, Fox L. Improving prophylactic perioperative antibiotic utilization in a hospital system. J Healthc Qual. 2008;30:48–56. doi: 10.1111/j.1945-1474.2008.tb01170.x.
14. Pastor C, Artinyan A, Varma MG, et al. An increase in compliance with the Surgical Care Improvement Project measures does not prevent surgical site infection in colorectal surgery. Dis Colon Rectum. 2010;53:24–30. doi: 10.1007/DCR.0b013e3181ba782a.
15. Auerbach AD, Hilton JF, Maselli J, et al. Shop for quality or volume? Volume, quality, and outcomes of coronary artery bypass surgery. Ann Intern Med. 2009;150:696–704. doi: 10.7326/0003-4819-150-10-200905190-00007.
16. Auerbach AD, Hilton JF, Maselli J, et al. Volume, quality, and resource use following coronary artery bypass surgery. Arch Intern Med. 2010; in press. doi: 10.1001/archinternmed.2010.237.
17. Jennings LA, Auerbach AD, Maselli J, et al. Missed opportunities for osteoporosis treatment in patients hospitalized for hip fracture. J Am Geriatr Soc. 2010;58:650–657. doi: 10.1111/j.1532-5415.2010.02769.x.
18. Lindenauer PK, Behal R, Murray CK, et al. Volume, quality of care, and outcome in pneumonia. Ann Intern Med. 2006;144:262–269. doi: 10.7326/0003-4819-144-4-200602210-00008.
19. Lindenauer PK, Pekow P, Wang K, et al. Perioperative beta-blocker therapy and mortality after major noncardiac surgery. N Engl J Med. 2005;353:349–361. doi: 10.1056/NEJMoa041895.
20. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486–496. doi: 10.1056/NEJMsa064964.
21. Elixhauser A, Steiner C, Fraser I. Volume thresholds and hospital characteristics in the United States. Health Aff (Millwood). 2003;22(2):167–177. doi: 10.1377/hlthaff.22.2.167.
22. Iezzoni LI, Ash AS, Shwartz M, et al. Predicting who dies depends on how severity is measured: implications for evaluating patient outcomes. Ann Intern Med. 1995;123:763–770. doi: 10.7326/0003-4819-123-10-199511150-00004.
23. Iezzoni LI, Shwartz M, Ash AS, Mackiernan YD. Using severity measures to predict the likelihood of death for pneumonia inpatients. J Gen Intern Med. 1996;11:23–31. doi: 10.1007/BF02603481.
24. Healthcare Cost and Utilization Project. APRDRG - All Patient Refined DRG. 2010. http://www.hcup-us.ahrq.gov/db/vars/aprdrg/nisnote.jsp. Accessed June 25, 2010.
25. Birkmeyer JD, Dimick JB, Staiger DO. Operative mortality and procedure volume as predictors of subsequent hospital performance. Ann Surg. 2006;243:411–417. doi: 10.1097/01.sla.0000201800.45264.51.
26. Birkmeyer JD, Siewers AE, Finlayson EV, et al. Hospital volume and surgical mortality in the United States. N Engl J Med. 2002;346:1128–1137. doi: 10.1056/NEJMsa012337.
27. Birkmeyer JD, Stukel TA, Siewers AE, et al. Surgeon volume and operative mortality in the United States. N Engl J Med. 2003;349:2117–2127. doi: 10.1056/NEJMsa035205.
28. Nolan T, Berwick DM. All-or-none measurement raises the bar on performance. JAMA. 2006;295:1168–1170. doi: 10.1001/jama.295.10.1168.
29. Birkmeyer JD. High-risk surgery--follow the crowd. JAMA. 2000;283:1191–1193. doi: 10.1001/jama.283.9.1191.
30. Khuri SF, Henderson WG. The case against volume as a measure of quality of surgical care. World J Surg. 2005;29:1222–1229. doi: 10.1007/s00268-005-7987-6.
31. Ward MM, Jaana M, Wakefield DS, et al. What would be the effect of referral to high-volume hospitals in a largely rural state? J Rural Health. 2004;20:344–354. doi: 10.1111/j.1748-0361.2004.tb00048.x.
32. Peterson ED, Coombs LP, DeLong ER, et al. Procedural volume as a marker of quality for CABG surgery. JAMA. 2004;291:195–201. doi: 10.1001/jama.291.2.195.
33. Christian CK, Gustafson ML, Betensky RA, et al. The Leapfrog volume criteria may fall short in identifying high-quality surgical centers. Ann Surg. 2003;238:447–455; discussion 455–457. doi: 10.1097/01.sla.0000089850.27592.eb.
34. Birkmeyer JD, Sun Y, Wong SL, Stukel TA. Hospital volume and late survival after cancer surgery. Ann Surg. 2007;245:777–783. doi: 10.1097/01.sla.0000252402.33814.dd.
35. Goodney PP, Siewers AE, Stukel TA, et al. Is surgery getting safer? National trends in operative mortality. J Am Coll Surg. 2002;195:219–227. doi: 10.1016/s1072-7515(02)01228-0.
36. Centers for Medicare and Medicaid Services. Hospital Compare. 2007. http://www.hospitalcompare.hhs.gov/Hospital/Search/SearchCriteria.asp?version=default&browser=IE%7C7%7CWinXP&language=English&defaultstatus=0&pagelist=Home. Accessed November 8, 2007.
37. Tsai AC, Votruba M, Bridges JF, Cebul RD. Overcoming bias in estimating the volume-outcome relationship. Health Serv Res. 2006;41:252–264. doi: 10.1111/j.1475-6773.2005.00461.x.
38. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296:72–78. doi: 10.1001/jama.296.1.72.
39. Schrag D, Panageas KS, Riedel E, et al. Surgeon volume compared to hospital volume as a predictor of outcome following primary colon cancer resection. J Surg Oncol. 2003;83:68–78; discussion 78–79. doi: 10.1002/jso.10244.
40. Schrag D, Panageas KS, Riedel E, et al. Hospital and surgeon procedure volume as predictors of outcome following rectal cancer resection. Ann Surg. 2002;236:583–592. doi: 10.1097/00000658-200211000-00008.
41. Begg CB, Riedel ER, Bach PB, et al. Variations in morbidity after radical prostatectomy. N Engl J Med. 2002;346:1138–1144. doi: 10.1056/NEJMsa011788.
42. Haynes AB, Weiser TG, Berry WR, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009;360:491–499. doi: 10.1056/NEJMsa0810119.
43. Abbett SK, Yokoe DS, Lipsitz SR, et al. Proposed checklist of hospital interventions to decrease the incidence of healthcare-associated Clostridium difficile infection. Infect Control Hosp Epidemiol. 2009;30:1062–1069. doi: 10.1086/644757.
44. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725–2732. doi: 10.1056/NEJMoa061115.
