Author manuscript; available in PMC 2010 May 1.
Published in final edited form as: Surgery. 2010 May;147(5):602–609. doi: 10.1016/j.surg.2009.03.014

Hospital Characteristics, Clinical Severity, and Outcomes for Surgical Oncology Patients

Christopher R Friese 1, Craig C Earle 1, Jeffrey H Silber 1, Linda H Aiken 1
PMCID: PMC2858347  NIHMSID: NIHMS115575  PMID: 20403513

Abstract

Background

Patients and payers wish to identify hospitals with good surgical oncology outcomes. Our objective was to determine whether differences in outcomes explained by hospital structural characteristics are mitigated by differences in patient severity.

Methods

Using hospital administrative and cancer registry records in Pennsylvania, we identified 24,618 adults hospitalized for cancer-related operations. Colorectal, prostate, endometrial, ovarian, head and neck, lung, esophageal, and pancreatic cancers were studied. Outcome measures were 30-day mortality and failure to rescue (FTR), defined as 30-day mortality preceded by a complication. We estimated logistic regression models, adjusted for severity of illness, to predict the likelihood of both outcomes. Hospital characteristics were drawn from American Hospital Association survey data, and National Cancer Institute (NCI) cancer center and Commission on Cancer (COC) cancer program status were verified externally.

Results

Patients in hospitals with NCI cancer centers were significantly younger and less acutely ill on admission (p < .001). Patients in high-volume hospitals were younger and had lower admission acuity, yet had more advanced cancer (p < .001). Unadjusted 30-day mortality rates were lower in hospitals with NCI cancer centers than in other hospitals (2.17% vs. 3.76%, p = .01). Risk-adjusted FTR rates were also significantly lower in NCI-designated hospitals (3.51% vs. 4.86%, p = .03). NCI center designation remained a significant predictor of 30-day mortality after accounting for patient and hospital characteristics (OR 0.68, 95% CI 0.47–0.97, p = .04). We did not find significant outcome effects based on COC cancer program approval.

Conclusions

Patient severity of illness varies significantly across hospitals, which may explain the outcome differences observed. Severity adjustment is crucial to understanding outcome differences. Outcomes were better than predicted for NCI-designated hospitals.


Oncology patients comprise a large proportion of hospital caseloads, and, based on projections of cancer incidence, that proportion is expected to increase. In addition, tumor-directed surgical procedures are being performed with increasing frequency on older patients with related comorbidities. Variations in outcomes from surgical oncology procedures are widely reported; the majority of these studies have focused on outcome differences by procedure volume,1–5 or on receipt of care in a hospital recognized by the National Cancer Institute (NCI) cancer center program.6 The quality gap observed in surgical oncology outcomes might worsen given the increased attention to providing anti-cancer therapies to older adults, many of whom have comorbidities (Trimble & Christian, 2006).

Based on research findings, stakeholder groups in the United States have suggested that rare or complex cancer operations should be performed by physicians or hospitals achieving certain annual case volume targets.7 In 1992, Canadian provinces began the process of regionalizing cardiac procedures in response to documented variations in outcome.8 Similar proposals might be considered for receipt of surgical oncology care in facilities achieving certain benchmarks, such as NCI cancer center or Commission on Cancer (COC) cancer program status.9 At the time of this study, NCI clinical cancer center designation required robust clinical and basic science research programs that underwent peer and site review. In addition, comprehensive cancer center designation required shared research resources as well as a cancer control and population science research program. The COC program credential required state-of-the-art clinical services spanning the phases from diagnosis through completion of treatment; a cancer committee leadership program; care conferences where patient cases are discussed and continuing education is provided; and an established cancer registry.9 These credentials were confirmed by a formal site visit conducted by COC members. Before options to redirect patients with cancer to credentialed facilities are considered, additional research is needed to ascertain whether and why differences in quality exist, and to rigorously examine the outcome differences in multiple datasets.

As part of our team’s research program elucidating the relationship between nursing care and surgical patient outcomes,11,12 we studied outcomes in a sample of surgical oncology patients admitted to Pennsylvania hospitals in 1998–1999. Mortality outcomes were superior when patients received care in hospitals with better nurse staffing, more favorable nurse perceptions of their workplace, and nurses with higher educational preparation.13 One intriguing finding, which we follow up on in this study, is that the only hospital characteristic other than nursing factors significantly associated with more favorable outcomes was NCI cancer center designation. This paper extends our previous research to examine more closely the array of hospital and patient characteristics and their relationship to patient outcomes using enhanced patient severity adjustment. Do certain types of hospitals, including those with cancer specialty designation, have better outcomes for surgical oncology patients? To what extent are differences in patient outcomes, if found, explained by patient characteristics? The findings are pertinent to clinical and payer practices that encourage referrals to hospitals with specific organizational characteristics.

PATIENTS and METHODS

After human subjects exempt review, we performed a secondary analysis of linked data created by merging inpatient claims from the Pennsylvania Health Care Cost Containment Council (PHC4), the Pennsylvania Cancer Registry, and American Hospital Association annual survey data. The list of the National Cancer Institute’s clinical and comprehensive cancer centers available from the NCI’s website10 and a list of approved cancer programs provided by the American College of Surgeons were used to identify hospitals in the sample with those designations in 1998–1999. Details of the linkage procedure have been reported elsewhere.13

Our analytic sample included 24,618 adults treated in 164 acute care hospitals in 1998–1999 with a diagnosis of and surgical procedure for one of the following cancers: head and neck, esophagus, colon-rectum, pancreas, lung, ovary, prostate, and endometrium. Breast cancer patients were excluded from this analysis because of their significantly shorter lengths of hospital stay.

Definition of Variables

Hospital Characteristics

Whenever possible, we used existing definitions of hospital characteristics from the outcomes research literature. Hospital beds set up and staffed were categorized as 100 beds or fewer, 101–250 beds, or 251 beds or more.11 Hospitals that performed solid organ or open heart transplants in 1999 were coded as providers of “advanced procedures.”14 Prior studies have suggested the provision of advanced technological resources may have spillover effects for other conditions.15 We used the ratio of medical residents or fellows per beds set up and staffed to categorize teaching status: non-teaching hospitals had no residents or fellows; minor teaching hospitals had a resident/fellow-to-bed ratio below 1:4; and major teaching hospitals had at least one resident/fellow per 4 beds.16,17 We constructed quartiles of hospital procedure volume for the total number of procedures performed at each hospital on our set of ICD-9 diagnosis codes for the years 1998 and 1999.18 For example, hospitals received credit for all right hemicolectomies performed, regardless of whether the underlying diagnosis was a malignancy. Dichotomous variables were created to reflect whether a hospital had received cancer center or cancer program status from the NCI or COC, respectively.
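For concreteness, the categorization scheme above can be expressed roughly as follows. This is an illustrative sketch, not the authors’ code, and the column names (beds, residents_fellows, procedure_count) are hypothetical.

```python
# Illustrative sketch of the hospital categorizations described above.
# Assumes a pandas DataFrame with hypothetical columns: beds (set up and staffed),
# residents_fellows (head count), and procedure_count (1998-1999 total for study codes).
import numpy as np
import pandas as pd

def categorize_hospitals(hospitals: pd.DataFrame) -> pd.DataFrame:
    df = hospitals.copy()

    # Bed size: 100 or fewer, 101-250, 251 or more
    df["bedsize"] = pd.cut(df["beds"], bins=[0, 100, 250, np.inf],
                           labels=["<=100", "101-250", ">=251"])

    # Teaching intensity from the resident/fellow-to-bed ratio:
    # 0 -> non-teaching; below 1:4 -> minor; at least 1 per 4 beds -> major
    ratio = df["residents_fellows"] / df["beds"]
    df["teaching"] = np.select([ratio == 0, ratio < 0.25], ["non", "minor"],
                               default="major")

    # Quartiles of total procedure volume across 1998-1999
    df["volume_quartile"] = pd.qcut(df["procedure_count"], q=4,
                                    labels=["lowest", "2nd", "3rd", "highest"])
    return df
```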

Clinical Severity

Tumor registry data were combined with hospital claims to measure patients’ risk for poor outcomes. We then estimated logistic regression models to predict 30-day mortality and failure to rescue using a split-sample methodology. In a random fifty percent sample of the patients, 83 logistic regression models, each with a single covariate reflecting a patient characteristic, were estimated to predict 30-day mortality.19 Patient variables with significant coefficients at p ≤ .10 were retained in the severity model (a list of the final variables and coefficients is available from the author). The model was replicated in the remaining 50 percent of the sample, with no meaningful differences in coefficients or significance observed. The 25 retained variables reflected demographics, comorbidity, and cancer information. Model discrimination for the full sample, reflected by the C statistic, was 0.83 for mortality and 0.76 for failure to rescue.20 Age was measured as both a linear and a quadratic term. Non-white ethnicity was not a statistically significant variable in the severity model. While this may be partially explained by the low numbers of non-white patients in Pennsylvania, we chose to retain the variable in our models to account for unmeasured socioeconomic differences by race and ethnicity. Results did not change when this variable was excluded from the model.

By state regulation, each hospital admission in Pennsylvania was abstracted routinely by trained medical records coders for key clinical findings to construct the Atlas (formerly known as MEDISGRPS) severity of illness score.21–23 In contrast to usual methods of measuring severity from diagnosis and procedure codes, the Atlas score uses data from the medical record to capture physiologic findings, such as unstable vital signs and abnormal laboratory, radiology, or diagnostic test results. For each hospitalization, the resulting score is reported as a categorical variable (0 = no probability of inpatient mortality to 4 = greater than 0.5 probability of inpatient mortality). Based on an existing severity adjustment approach,19 we constructed an algorithm to detect comorbidities from claims data up to 90 days preceding the studied admission, and each comorbidity was treated as a dichotomous variable. Tumor type was treated as a categorical variable, length of cancer diagnosis (in months) was a continuous measure, and a dichotomous measure was used to reflect distant or systemic cancer stage.
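The split-sample screening step could be implemented along the following lines. This is a minimal sketch under assumed column names (the outcome died_30d and the candidate predictor list are not from the original), not the authors’ code.

```python
# Minimal sketch of the single-covariate screening described above (retain p <= .10),
# fit in a random 50% derivation sample. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def screen_predictors(patients: pd.DataFrame, candidates: list,
                      outcome: str = "died_30d", alpha: float = 0.10,
                      seed: int = 0) -> list:
    rng = np.random.default_rng(seed)
    derivation = patients[rng.random(len(patients)) < 0.5]  # random 50% sample

    retained = []
    for var in candidates:
        X = sm.add_constant(derivation[[var]])          # one covariate at a time
        fit = sm.Logit(derivation[outcome], X).fit(disp=0)
        if fit.pvalues[var] <= alpha:
            retained.append(var)
    return retained  # refit these in the remaining 50% to check stability
```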

Outcomes

The two dichotomous outcomes were obtained by linking death records to the cancer registry and inpatient claims records. Thirty-day mortality is the occurrence of death within 30 days of hospital admission. Failure to rescue (FTR) is a death within 30 days of hospital admission in a patient who has also experienced a postoperative complication.24,25 A set of diagnosis and procedure codes (that were not coded in the 90 days prior to admission) forms the basis for the 40 complications considered. The empirical advantage of failure to rescue is that the measure does not “punish” a hospital when a patient experiences a complication, since complications are associated with case-mix severity; it instead identifies whether the hospital successfully rescued the patient from the complication. Following established procedures,11,12 patients who died postoperatively were assumed to have experienced a complication, even if no complication was coded explicitly in the discharge abstract. Thus, FTR includes all patients who died within 30 days of hospital admission. The denominators for 30-day mortality and FTR differ: for the former, the denominator is all patients in the sample, while for the latter, the denominator is only patients who experienced a complication or died within 30 days of admission.
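A small sketch of the two outcome definitions and their different denominators (hypothetical field names; not the authors’ code):

```python
# Sketch of the outcome definitions: 30-day mortality uses all patients as the
# denominator; failure to rescue uses only patients with a complication or a
# 30-day death (deaths count as complications even if none was coded).
from dataclasses import dataclass

@dataclass
class Patient:
    died_within_30d: bool      # death within 30 days of admission
    had_complication: bool     # any of the 40 complications, absent in the prior 90 days

def mortality_30d(patients):
    return sum(p.died_within_30d for p in patients) / len(patients)

def failure_to_rescue(patients):
    at_risk = [p for p in patients if p.had_complication or p.died_within_30d]
    return sum(p.died_within_30d for p in at_risk) / len(at_risk)
```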

Statistical Analysis

We tested bivariate relationships between clinical severity and hospital characteristics using the appropriate t, F, or chi-square test. We also calculated bivariate associations of hospital characteristics with unadjusted and adjusted outcome rates for hospitals. The risk-adjusted rates were calculated as the ratio of observed events (deaths or failures) to the expected number of events predicted by the risk adjustment model, multiplied by the sample’s respective event rate. We ruled out multicollinearity among hospital and nursing characteristics by examining correlation matrices for high correlations and by confirming acceptable variance inflation factor and tolerance values. We then performed a patient-level analysis and estimated a series of logistic regression models to predict death and failure to rescue. First, models estimated the effect of each hospital characteristic without additional variables in the model. Next, models included the 25 variables identified in the risk adjustment model. Our final models considered all patient and hospital characteristics simultaneously. Robust cluster methods were specified in Stata version 10.0 (StataCorp, College Station, Texas) to adjust standard errors and account for the clustering of patients within hospitals.26,27 Coefficients were transformed to odds ratios, and 95 percent confidence intervals were calculated for all parameter estimates.
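The adjusted-rate calculation and the clustered patient-level models might look roughly like the following. This is a sketch under assumed column names (hospital_id, died_30d, p_expected, and the illustrative predictors), not the authors’ Stata code.

```python
# Sketch: hospital-level risk-adjusted rates as (observed / expected) x overall rate,
# and a patient-level logistic model with hospital-clustered standard errors
# (analogous to Stata's robust cluster option). Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_rates(patients: pd.DataFrame, outcome: str = "died_30d",
                   expected: str = "p_expected") -> pd.Series:
    overall = patients[outcome].mean()
    by_hosp = patients.groupby("hospital_id").agg(observed=(outcome, "sum"),
                                                  expected=(expected, "sum"))
    return by_hosp["observed"] / by_hosp["expected"] * overall

def fit_clustered(patients: pd.DataFrame):
    # Illustrative predictors only; the actual models included 25 patient
    # variables plus the hospital characteristics.
    model = smf.logit("died_30d ~ nci_center + age + atlas_score", data=patients)
    return model.fit(cov_type="cluster",
                     cov_kwds={"groups": patients["hospital_id"]}, disp=0)
```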

Sensitivity Analyses

The analyses reported here used a dichotomous measure of cancer program status; however, the COC reported separate program categories based on volume and teaching status. A sensitivity analysis using the four categories revealed no differences in our results. Because our sample is quite heterogeneous in tumor type, we also performed an analysis stratified by volume-sensitive tumors (pancreas, esophagus, and lung) versus all others. We also replicated our findings for 30-day mortality using a measure of 60-day mortality. Our results and conclusions did not change appreciably.

RESULTS

Clinical Severity by Hospital Characteristics

Table 1 presents differences in clinical severity and cancer severity by hospital characteristics (the clinical variables for the entire sample are presented in the first column). The mean age of the sample was 68.3 years, and approximately one third of study patients were below the age of 65. The majority of patients underwent colorectal or prostate resections.

Table 1.

Clinical Severity by Hospital Characteristics

Columns, in order: Full Sample; NCI Center (No, Yes); COC Program (No, Yes); Procedure Volume Quartile (Lowest, 2nd, 3rd, Highest); Bedsize (<100, 100–250, >250); Teaching Intensity (None, Minor, Major); Advanced Procedures (No, Yes). Each row of p values lists one value per hospital characteristic (NCI, COC, volume, bedsize, teaching, advanced procedures), in that order.
n 164 160 4 85 79 41 42 40 41 21 97 46 84 68 12 105 59
N 24,618 22,778 1,840 7,453 17,165 1,601 3,595 6,179 13,243 1,033 10,888 12,697 8,386 11,932 4,300 9,785 14,833
Age 68.3 68.7 63.2 69.6 67.8 71.6 70.1 69.4 66.9 68.9 69.3 67.4 70.1 68.2 65.1 69.6 67.4
Mean (SD) (12.2) (12.2) (12.1) (10.6) (12.3) (11.9) (11.9) (11.8) (11.8) (13.3) (12.0) (12.2) (11.9) (12.2) (12.2) (12.0) (12.3)
p <.0001 <.0001 <.0001 <.0001 <.0001 <.0001
Comorbidity
None 86.7 87.0 82.4 86.4 86.8 84.0 87.0 87.9 86.3 83.7 87.8 86.0 87.7 87.0 84.2 87.4 86.2
1 4.2 3.9 7.0 4.4 4.1 4.8 3.9 3.5 4.5 6.6 3.6 4.4 3.9 4.0 5.1 3.9 4.3
2 3.9 3.8 4.7 4.0 3.8 3.9 3.8 3.9 3.8 3.0 3.6 4.1 3.4 3.9 4.7 3.7 3.9
3 or more 5.3 5.3 5.9 5.2 5.4 7.3 5.2 4.7 5.4 6.7 5.0 5.5 5.0 5.3 6.1 5.0 5.5
p <.0001 .52 <.01 <.0001 <.0001 .08
Atlas Severity Score
0 13.5 13.5 13.6 13.2 13.6 10.4 12.3 12.3 14.7 11.7 13.4 13.7 12.3 13.8 15.1 12.8 14.0
1 33.9 33.5 39.0 30.5 35.3 24.9 29.0 32.6 36.9 27.0 31.6 36.4 31.2 34.4 37.5 31.2 35.6
2 36.0 36.1 34.7 39.6 34.5 44.2 40.5 36.9 33.5 40.7 37.3 34.6 38.2 35.0 34.7 38.7 34.3
3 or 4 16.6 16.9 12.7 16.7 16.6 20.5 18.3 18.3 14.9 20.6 17.7 15.3 18.3 16.8 12.7 17.4 16.1
p <.0001 <.0001 <.0001 <.0001 <.0001 <.0001
Distant Cancer 13.2 13.2 13.2 12.3 13.6 12.6 12.4 11.9 14.1 14.0 12.7 13.6 12.2 13.1 15.5 12.7 13.5
p .97 <.01 .0001 .07 <.0001 .07

NCI: National Cancer Institute Clinical or Comprehensive Cancer Center; COC: Commission on Cancer Approved Cancer Program. p values reflect t or F tests for age, and chi-square tests for categorical variables, by hospital characteristics.

Admission severity and cancer severity differed significantly by hospital characteristics. Patients in hospitals with NCI cancer centers were younger and had lower Atlas admission severity than patients in other hospitals. NCI hospitals cared for a larger proportion of ovarian, prostate, and pancreatic cancer patients than non-NCI hospitals (results not shown). The proportion of patients with distant metastases did not differ significantly between NCI and non-NCI hospitals. Similarly, the average length of cancer diagnosis was 19.0 months and did not differ significantly by hospital characteristics (results not shown). Hospitals with COC cancer program status had younger patients, yet slightly more patients with metastatic cancer. Compared with lower-volume hospitals, patients in hospitals in the highest quartile of procedure volume were younger and had fewer comorbidities and lower Atlas severity scores. Similar trends for age and Atlas severity were observed for hospitals with larger bed size, greater teaching intensity, and the capability to perform advanced procedures.

Outcomes by Hospital Characteristics

Table 2 shows the unadjusted and risk-adjusted outcome rates by hospital characteristics. These are hospital-level outcome rates, with the adjusted rates calculated as the ratio of observed to expected events multiplied by the sample’s overall mortality or failure to rescue rate. The overall hospital-level unadjusted rates of 30-day mortality and failure to rescue were 3.72% and 10.5%, respectively. t and F tests were used to compare outcome rates across hospital characteristics with two strata or with three or more strata, respectively. While outcomes were uniformly better in hospitals with NCI cancer center designation, the only significant differences were found when comparing unadjusted 30-day mortality rates (p = .01) and adjusted failure to rescue rates (p = .03). Hospitals performing advanced procedures, such as organ transplantation or coronary artery bypass graft operations, had significantly lower unadjusted death and FTR rates (both p = .03). These differences were no longer significant when outcome rates were adjusted for severity of illness. Significant differences in outcome rates based on COC cancer program approval, teaching status, or hospital procedure volume were not observed.

Table 2.

Unadjusted and Risk-Adjusted, Hospital-Level Outcome Rates by Hospital Characteristics

For each hospital characteristic, columns are: 30-day Mortality Unadjusted % (SD); 30-day Mortality Adjusted % (SD); Failure to Rescue Unadjusted % (SD); Failure to Rescue Adjusted % (SD). The F or t statistic and p value for each comparison appear after the corresponding % (SD) on the final row of each characteristic grouping.
Teaching Intensity
 Non 3.75 (2.9) 3.41 (2.5) 10.82 (9.4) 4.65 (3.4)
 Low 3.82 (2.4) 3.78 (2.4) 10.51 (6.7) 5.14 (2.9)
 High 2.96 (1.3) 0.55, .79 3.19 (1.2) 0.60, .55 8.16 (3.5) 0.57, .57 4.37 (1.8) 0.62, .54
Bedsize
 <100 3.74 (3.7) 3.28 (3.5) 12.48 (14.8) 4.46 (4.6)
 100–250 4.04 (2.7) 3.86 (2.5) 11.04 (7.4) 5.21 (3.1)
 >250 3.02 (1.5) 2.40, .09 3.02 (1.4) 2.10, .12 8.45 (4.1) 2.40, .09 4.20 (1.9) 1.87, .16
NCI Center
no 3.76 (2.7) 3.57 (2.4) 10.58 (8.1) 4.86 (3.1)
yes 2.17 (0.7) 3.85, .01 2.58 (0.6) 0.82, .41 7.47 (4.6) 0.76, .45 3.51 (0.8) 2.81, .03
COC Program
 no 4.03 (3.2) 3.68 (2.9) 11.19 (9.8) 4.99 (3.7)
 yes 3.39 (1.8) 1.58, .12 3.41 (1.7) 0.73, .46 9.75 (5.5) 1.16, .25 4.66 (2.2) 0.68, .50
Performed Advanced Procedures
 no 4.00 (3.0) 3.65 (2.5) 11.36 (9.4) 4.98 (3.4)
 yes 3.22 (1.7) 2.15, .03 3.37 (2.2) 0.70, .48 8.97 (4.8) 2.16, .03 4.57 (2.5) 0.82, .41
Quartile of Procedure Volume
 Lowest 4.30 (3.9) 3.67 (3.5) 12.49 (12.7) 5.01 (4.5)
 Second 4.10 (2.7) 4.00 (2.4) 10.72 (6.1) 5.34 (3.0)
 Third 3.36 (1.8) 3.30 (1.7) 10.10 (6.5) 4.52 (2.3)
 Highest 3.11 (1.5) 1.95, .12 3.19 (1.3) 0.99, .40 8.70 (4.3) 1.58, .20 4.43 (1.8) 0.79, .50
Overall 3.72 (2.6) 3.55 (2.4) 10.50 (8.1) 4.83 (3.1)

NCI: National Cancer Institute Clinical or Comprehensive Cancer Center; COC: Commission on Cancer Approved Cancer Program.

Note: t tests used when comparing outcome rates by NCI, COC and advanced procedures. Teaching status, bedsize, and procedure volume used F tests.

Predictors of Patient Outcomes

Table 3 shows the results of logistic regression models to predict 30-day mortality and failure to rescue from the patient-level data. Three series of models are presented for both outcomes: the first column reports each hospital characteristic’s unadjusted odds ratio for the outcome; the next column reports a model that includes all patient characteristics with each hospital characteristic separately; and the final column reflects all patient and hospital characteristics specified simultaneously in the model.

Table 3.

Hospital Characteristics as Predictors of 30-day Mortality and Failure to Rescue

Each cell shows OR (95% CI), p. Columns: 30-day Mortality, Models I–III, then Failure to Rescue, Models I–III (model definitions below the table).

Teaching Intensity
 Non (n=84): reference
 Low (n=68): 0.89 (0.74–1.07), .20 | 0.93 (0.77–1.12), .45 | 1.00 (0.83–1.21), .95 || 0.83 (0.68–1.03), .20 | 0.88 (0.72–1.08), .23 | 0.97 (0.80–1.19), .79
 High (n=12): 0.71 (0.54–0.93), .01 | 0.81 (0.59–1.12), .21 | 1.03 (0.71–1.49), .87 || 0.67 (0.51–0.88), <.01 | 0.74 (0.53–1.02), .07 | 0.94 (0.64–1.37), .73
Bedsize
 <100 (n=21): reference
 100–250 (n=97): 1.05 (0.72–1.54), .81 | 1.07 (0.73–1.59), .71 | 1.00 (0.65–1.55), .98 || 0.92 (0.59–1.44), .72 | 0.91 (0.59–1.40), .66 | 0.88 (0.55–1.42), .61
 >250 (n=46): 0.85 (0.58–1.24), .40 | 0.90 (0.61–1.32), .58 | 0.88 (0.55–1.41), .61 || 0.75 (0.49–1.17), .20 | 0.74 (0.48–1.14), .17 | 0.79 (0.47–1.35), .40
NCI Center (n=4): 0.60 (0.50–0.72), <.01 | 0.64 (0.50–0.83), <.01 | 0.68 (0.47–0.97), .04 || 0.64 (0.47–0.86), <.01 | 0.67 (0.47–0.96), .03 | 0.76 (0.49–1.18), .23
COC Program (n=79): 0.93 (0.76–1.14), .47 | 0.99 (0.81–1.21), .94 | 1.11 (0.88–1.40), .36 || 0.95 (0.77–1.18), .66 | 0.99 (0.80–1.22), .92 | 1.17 (0.91–1.49), .22
Advanced Procedures (n=59): 0.80 (0.68–0.96), .02 | 0.87 (0.72–1.04), .11 | 1.01 (0.79–1.29), .94 || 0.83 (0.68–1.02), .08 | 0.84 (0.69–1.01), .07 | 1.02 (0.78–1.32), .91
Quartile of Procedure Volume
 Lowest (n=41): reference
 Second (n=42): 0.88 (0.64–1.20), .41 | 1.19 (0.86–1.64), .31 | 1.14 (0.81–1.62), .45 || 0.94 (0.68–1.31), .72 | 1.13 (0.81–1.60), .48 | 1.10 (0.77–1.57), .61
 Third (n=40): 0.75 (0.55–1.02), .07 | 0.94 (0.67–1.30), .69 | 0.91 (0.62–1.34), .65 || 0.85 (0.60–1.20), .35 | 0.94 (0.65–1.34), .72 | 0.91 (0.60–1.38), .65
 Highest (n=41): 0.64 (0.48–0.88), <.01 | 0.88 (0.65–1.20), .42 | 0.90 (0.59–1.39), .65 || 0.69 (0.49–0.96), .03 | 0.82 (0.59–1.14), .23 | 0.82 (0.52–1.32), .43

Model I: Unadjusted. Each hospital characteristic modeled separately. Model II: All patient characteristics modeled with each hospital characteristic separately. Model III: All patient and hospital characteristics modeled simultaneously. NCI: National Cancer Institute. COC: Commission on Cancer. All models adjusted standard errors to account for patient clustering in hospitals.

In the first series of models, significant predictors of 30-day mortality included high teaching intensity (OR 0.71, 95% CI 0.54–0.93), NCI cancer center (OR 0.60, 95% CI 0.50–0.72), advanced procedure hospitals (OR 0.80, 95% CI 0.68–0.96), and the highest quartile of procedure volume (OR 0.64, 95% CI 0.48–0.88). Models estimating failure to rescue found similar effects for high teaching intensity, NCI cancer centers, and highest procedure volume. In the results for Model II, where patient characteristics were modeled with each hospital characteristic, the only hospital characteristic that significantly predicted outcomes when patient severity was considered was NCI cancer center (30-day mortality OR 0.64, 95% CI 0.50–0.83; FTR OR 0.67, 95% CI 0.47–0.96). In Model III, the only variable to predict 30-day mortality when all patient and hospital characteristics were considered simultaneously was NCI cancer center (OR 0.68, 95% CI 0.47–0.97). No hospital characteristic significantly predicted the odds of failure to rescue when all characteristics were considered.

DISCUSSION

We report significant differences in clinical severity, cancer severity, and outcomes for surgical oncology patients by hospital characteristics. Contrary to what might be expected, severity of illness does not appear uniformly higher in NCI cancer centers. However, NCI cancer centers in our study achieved lower mortality rates than would be expected on the basis of case mix. In the other types of hospitals studied, more favorable mortality rates were largely a product of less severely ill patients. The absence of outcome differences by COC status, either adjusted or unadjusted, suggests that Commission on Cancer standards in place at the time of the study did not convey a direct outcome benefit for patients in this study. It would be worthwhile to re-examine this question in additional datasets, as COC standards have changed over time. It is also possible that many hospitals could meet COC standards but elected not to obtain formal program approval, which would result in few actual differences between the COC and non-COC hospitals in our sample.

Patient and provider selection are two other explanations for these observations. Younger patients may feel compelled to travel outside their immediate area and seek facilities or providers based on reputation. In a study of chemotherapy outcomes, patients who traveled more than 15 miles for treatment had superior survival to patients treated locally.28 Alternately, physicians in hospitals with higher teaching intensity, advanced resources, and higher volumes may deem patients too frail to undergo operations and instead recommend less invasive management. Our data are from 1998 to 1999 because the unique linkage of datasets we used is not routinely available to investigators. While the procedures studied are common operations for cancer, confirmation of our results in more contemporary samples, coupled with the measurement of process-of-care variables, would be a useful addition to this area of research.

Our inability to detect significant outcome differences by hospital characteristics may be due to the coarseness of some measurements. For example, knowledge of individual physician characteristics such as provider volume, training, and board certification could refine our approach.5 Because our initial study was not designed to examine the volume-outcome relationship a priori, we have small numbers of the tumor types for which volume-outcome relationships have been previously documented. Thus, these findings should be interpreted with caution, although the risk adjustment methods used in this study could be applied in the future to larger samples of these patients. Other important outcomes, such as recurrence, late survival, costs, and subsequent health care utilization, were not examined in this study because of data availability. While we had a large number of hospitals in our analysis, not all acute care hospitals in Pennsylvania were included because of missing claims or administrative data. We were unable to adjust our analysis for prior receipt of chemo- or radiotherapy, or to consider any care provided outside the Commonwealth of Pennsylvania. While only four hospitals with NCI status were in our sample, they accounted for seven percent of the patient sample. Confirmation of our findings in more hospitals with and without NCI status is suggested. However, our study contributes to the cancer outcomes research literature by extending the analysis outside the Medicare-eligible population. In contrast to other cancer outcomes studies focused on hospital differences, we included both admission severity and cancer severity in our models. While most studies report adjustment for age, sex, and comorbidities, we have described our analytic approach and model discrimination statistics in greater detail. Cancer severity variables and Atlas severity scores were among the strongest predictors of outcomes in our severity adjustment models; these measures are often not available in traditional claims-based analyses. Datasets that combine claims, tumor registry, and physiologic variables, such as the National Surgical Quality Improvement Program,29 are optimal targets for replication of our analyses. However, a challenge remains to study structure, process, and outcomes in hospitals that do not participate in voluntary data collection efforts.

CONCLUSIONS

Hospitals with high teaching intensity, capabilities to perform advanced procedures, and national credentials were not always caring for the sickest patients. After risk adjustment, few hospital characteristics were significantly associated with 30-day mortality or failure to rescue. Our report underscores the necessity for robust risk adjustment in cancer outcomes research and for explicit reporting of risk adjustment procedures in publications. From the management and policy perspectives, recommendations to reorganize surgical oncology care based on these factors should await further confirmation. Our confirmation of favorable outcomes for patients who receive care in National Cancer Institute-designated cancer centers should prompt additional research into the underlying differences in care processes at these institutions.

Acknowledgments

Funding: National Institute of Nursing Research R01-NR04513, American Cancer Society, DSCN-03-202-01-SCN, the Oncology Nursing Society via the Pennsylvania Tobacco Settlement Funds, and a predoctoral training grant from the National Institute of Nursing Research, T32-NR07104.

References

1. Meyerhardt JA, Catalano PJ, Schrag D, et al. Association of hospital procedure volume and outcomes in patients with colon cancer at high risk for recurrence. Ann Intern Med. 2003;139(8):649–657. doi: 10.7326/0003-4819-139-8-200310210-00008.
2. Finlayson EV, Goodney PP, Birkmeyer JD. Hospital volume and operative mortality in cancer surgery: a national study. Arch Surg. 2003;138(7):721–725. doi: 10.1001/archsurg.138.7.721.
3. Schrag D, Panageas KS, Riedel E, et al. Surgeon volume compared to hospital volume as a predictor of outcome following primary colon cancer resection. J Surg Oncol. 2003;83(2):68–78. doi: 10.1002/jso.10244.
4. Hewitt M, Petitti D, editors. Interpreting the Volume-Outcome Relationship in the Context of Cancer Care. Washington, DC: National Academy Press; 2001.
5. Hillner BE, Smith TJ, Desch CE. Hospital and physician volume or specialization and outcomes in cancer treatment: importance in quality of cancer care. J Clin Oncol. 2000;18:2327–2340. doi: 10.1200/JCO.2000.18.11.2327.
6. Birkmeyer NJO, Goodney PP, Stukel TA, et al. Do cancer centers designated by the National Cancer Institute have better surgical outcomes? Cancer. 2005;103(3):435–441. doi: 10.1002/cncr.20785.
7. Birkmeyer JD, Wennberg DE, Young M, Birkmeyer CB. Leapfrog Safety Standards: Potential Benefits of Universal Adoption. Washington, DC: The Business Roundtable; 2000.
8. Church J, Barker B. Regionalization of health care services in Canada: a critical perspective. Int J Health Serv. 2003;28:467–486. doi: 10.2190/UFPT-7XPW-794C-VJ52.
9. American College of Surgeons. Cancer Programs: Approvals Program. http://www.facs.org/cancer/coc/whatis.html. Accessed February 24, 2009.
10. National Cancer Institute. The National Cancer Institute Cancer Centers Program. http://cancercenters.cancer.gov/. Accessed February 24, 2009.
11. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288(16):1987–1993. doi: 10.1001/jama.288.16.1987.
12. Aiken LH, Clarke SP, Cheung RB, Sloane DM, Silber JH. Education levels of hospital nurses and patient mortality. JAMA. 2003;290(12):1617–1623. doi: 10.1001/jama.290.12.1617.
13. Friese CR, Lake ET, Aiken LH, Silber JH, Sochalski J. Hospital nurse practice environments and outcomes for surgical oncology patients. Health Serv Res. 2008;43(4):1145–1163. doi: 10.1111/j.1475-6773.2007.00825.x.
14. Hartz AJ, Krakauer H, Kuhn EM, et al. Hospital characteristics and mortality rates. N Engl J Med. 1989;321(25):1720–1725. doi: 10.1056/NEJM198912213212506.
15. Daley J, Forbes MG, Young GJ, et al. Validating risk-adjusted surgical outcomes: site visit assessment of process and structure. J Am Coll Surg. 1997;185:341–351.
16. Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: a review of the literature. Milbank Q. 2002;80(3):569–593. doi: 10.1111/1468-0009.00023.
17. Ayanian JZ, Weissman JS, Chasan-Taber S, Epstein AM. Quality of care for two common illnesses in teaching and nonteaching hospitals. Health Aff (Millwood). 1998;17(6):194–205. doi: 10.1377/hlthaff.17.6.194.
18. Hodgson DC, Zhang W, Zaslavsky AM, Fuchs CS, Wright WE, Ayanian JZ. Relation of hospital volume to colostomy rates and survival for patients with rectal cancer. J Natl Cancer Inst. 2003;95(10):708–716. doi: 10.1093/jnci/95.10.708.
19. Silber JH, Rosenbaum PR, Ross RN. Comparing the contributions of groups of predictors: which outcomes vary with hospital rather than patient characteristics? J Am Stat Assoc. 1995;90(429):7–18.
20. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143(1):29–36. doi: 10.1148/radiology.143.1.7063747.
21. Brewster AC, Karlin BG, Hyde LA, Jacobs CM, Bradbury RC, Chae YM. MEDISGRPS: a clinically based approach to classifying hospital patients at admission. Inquiry. 1985;22:377–387.
22. Bradbury RC, Stearns FE Jr, Steen PM. Interhospital variations in admission severity-adjusted hospital mortality and morbidity. Health Serv Res. 1991;26(4):407–424.
23. Iezzoni LI, Moskowitz MA. A clinical assessment of MedisGroups. JAMA. 1988;260(21):3159–3163. doi: 10.1001/jama.260.21.3159.
24. Silber JH, Williams SV, Krakauer H, Schwartz JS. Hospital and patient characteristics associated with death after surgery: a study of adverse occurrence and failure to rescue. Med Care. 1992;30(7):615–629. doi: 10.1097/00005650-199207000-00004.
25. Silber JH, Romano PS, Rosen AP, Wang Y, Ross RN, Even-Shoshan O, Volpp K. Failure-to-rescue: comparing definitions to measure quality of care. Med Care. 2007; in press. doi: 10.1097/MLR.0b013e31812e01cc.
26. Rogers WH. Regression standard errors in clustered samples. Stata Technical Bulletin. 1993;13:19–23.
27. White H. Maximum likelihood estimation of misspecified models. Econometrica. 1982;50:1–25.
28. Lamont EB, Hayreh D, Pickett KE, et al. Is patient travel distance associated with survival on phase II clinical trials in oncology? J Natl Cancer Inst. 2003;95(18):1370–1375. doi: 10.1093/jnci/djg035.
29. Khuri SF. The NSQIP: a new frontier in surgery. Surgery. 2005;138(5):837–843. doi: 10.1016/j.surg.2005.08.016.
