Abstract
Objective
We sought to evaluate differences in guideline concordance between National Cancer Institute (NCI)–designated and other centers and determine whether the level of available evidence influences the degree of variation in concordance.
Background
The National Cancer Institute recognizes centers of excellence in the advancement of cancer care. These NCI-designated cancer centers have been shown to have better outcomes for cancer surgery; however, little work has compared surgical process measures.
Methods
A retrospective cohort study was conducted using the Surveillance, Epidemiology, and End Results registry linked to Medicare claims data. Fee-for-service Medicare patients with a definitive surgical resection for breast, colon, gastric, rectal, or thyroid cancers diagnosed between 2000 and 2005 were identified. Claims data from 1999 to 2006 were used. Our main outcome measure was guideline concordance at NCI-designated centers compared with other institutions, stratified by level of evidence as graded by National Comprehensive Cancer Network guideline panels.
Results
All centers achieved at least 90%, and often 95%, concordance with guidelines based on level 1 evidence. Concordance rates for guidelines with lower-level evidence ranged from 30% to 97% and were higher at NCI-designated centers. The adjusted concordance ratios for category 1 guidelines were between 1.02 and 1.08, whereas concordance ratios for guidelines with lower-level evidence ranged from 0.97 to 2.19, primarily favoring NCI-designated centers.
Conclusions
When strong evidence supports a guideline, there is little variation in practice between NCI-designated centers and other hospitals, suggesting that all are providing appropriate care. Variation in care may exist, however, for guidelines that are based on expert consensus rather than strong evidence. This suggests that future efforts to generate needed evidence on the optimal approach to care may also reduce institutional variation.
A great deal of effort is currently being devoted to helping health care purchasers, both patients and payers, identify hospitals providing better quality care. The term “center of excellence” is often used to promote a given institution as having expertise in a particular area. Within oncology, the National Cancer Institute (NCI) Cancer Centers Program identifies centers of excellence primarily on the basis of their research programs, but it also emphasizes the association between strong research and excellence in clinical care, education, community outreach, and other critical components of cancer care.1
The NCI designates institutions either as comprehensive cancer centers, which demonstrate significant research activity in each of 3 major areas—laboratory-based research, population-based research, and clinical research—and have substantial multidisciplinary research efforts, or as cancer centers, which are primarily focused in one or more of these scientific areas. As of 2010, the NCI recognizes 44 comprehensive cancer centers and 20 cancer centers. Institutions with NCI-designated cancer center status are expected to be a major source of discovery in cancer, including the development of more effective approaches to care and the creation of effective research dissemination strategies.2
Previous studies have shown improved surgical outcomes at NCI-designated cancer centers, particularly for 30-day mortality rates.3–5 These studies also reported improved long-term outcomes for colon and rectal cancer at NCI-designated cancer centers, but long-term survival was similar for other cancer sites. Little work has investigated differences between NCI-designated cancer centers and other institutions on process measures, or whether the appropriate care is provided.6,7
Concordance with practice guidelines is currently one approach to determining whether appropriate care was provided. The goal of practice guidelines is to close the gap that exists between best evidence and current practice.8–10 However, due to increasing pressure to define and measure appropriate care, guideline creation has often outpaced evidence generation. As a result, many guidelines are based on less than level 1 evidence11 and often rely on expert consensus.12 Prior research has shown that lower-level evidence can lead to surgical practice that is less adherent to guidelines and results in greater variability in care among institutions.13,14 Little is currently known about the role that center of excellence status may play in this observed increase in institutional variation when strong evidence is lacking.
Eleven practice guidelines for 5 cancer sites (thyroid, gastric, rectal, colon, or breast cancer) that are commonly treated by general surgeons and general surgery subspecialists, including surgical oncologists and breast, endocrine, and colorectal surgeons, were identified previously by our group.15 These guidelines were further examined to determine the role of NCI-designation in institutional variation for guidelines based on level of evidence.
We hypothesize that when there is strong evidence to support a guideline, there will be general consensus, minimizing practice variation between NCI-designated centers and other institutions. However, when a guideline is based on less definitive evidence, greater variation based on NCI designation will be observed. A better understanding of the relationship between level of evidence and guideline concordance will shed important light on institutional variation in the surgical care of cancer patients. If our hypothesis is true, institutions that are not NCI-designated cancer centers are providing appropriate care when there is high-quality evidence to support it, reassuring patients and providers that care is equivalent in these cases. When the supporting evidence is less clear, it is a greater leap to equate guideline concordance with higher quality care, because the right approach to care has not been definitively determined.
METHODS
Study Cohort
The Surveillance, Epidemiology and End Results (SEER) database was linked to Medicare claims for the period from January 1, 1999, to December 31, 2006. All patients diagnosed with thyroid, gastric, rectal, colon, or breast cancer between 2000 and 2005 were identified. The analysis was limited to patients for whom the primary cancer was their first or only cancer, who were at least 65 years old at time of diagnosis, who were enrolled in Medicare for age and not disability or end-stage renal disease, who had a pathologically confirmed diagnosis that was not through autopsy or death certificate, for whom the difference in reported death in SEER and Medicare did not exceed 3 months, and who were continuously enrolled in Medicare and not enrolled in an HMO from 30 days prediagnosis through 1 year postdiagnosis. The cohort was further limited to patients with stage I through III cancer (except for the inclusion of DCIS of the breast and stage IV thyroid cancer) and appropriate histology. The analysis was limited to patients who underwent surgical resection at a hospital with a nonmissing hospital identifier and that performed 5 or more of a given procedure. Inclusion and exclusion criteria specific to individual guidelines were applied to obtain the final cohorts.
Level of Evidence
As described previously, we identified 11 nationally endorsed guidelines relevant to surgical oncology practice.15 Because all of these guidelines map to specific guidelines within the decision algorithms promulgated by the National Comprehensive Cancer Network (NCCN), the NCCN guidelines provided a source of standardized evidence grading for our measures. Since 1995, the NCCN has produced guidelines in the form of decision algorithms that encompass 97% of tumors encountered in oncology practices.16 The NCCN guidelines are created through a consensus-driven explicit review of the scientific evidence by multidisciplinary panels of expert physicians. Each guideline is annotated to indicate the level of the supporting evidence and the degree of consensus among the guideline panel members. Category 1 recommendations are based on high-level evidence and uniform consensus that the recommendation is appropriate; category 2A recommendations are based on lower-level evidence but with uniform consensus that the recommendation is appropriate; category 2B recommendations are based on lower-level evidence and nonuniform consensus; and category 3 recommendations reflect major disagreement (Table 1). Of our 11 measures, 6 were classified by the NCCN expert panels as category 1 and 5 as category 2A. That is to say, 6 measures reflected high-level evidence, whereas 5 measures reflected expert consensus based on lower-level evidence. None of our measures were graded as category 2B or 3.
TABLE 1.

| Category | Definition |
|---|---|
| Category 1 | The recommendation is based on high-level evidence (eg, randomized controlled trials) and there is uniform NCCN consensus |
| Category 2A | The recommendation is based on lower-level evidence and there is uniform NCCN consensus |
| Category 2B | The recommendation is based on lower-level evidence and there is nonuniform NCCN consensus (but no major disagreement) |
| Category 3 | The recommendation is based on any level of evidence but reflects major disagreement |

NCCN Practice Guidelines in Oncology—v.2.2010.
Data Analysis
Variables reflecting patient characteristics were identified from the SEER registry data, whereas hospital characteristics were identified for the year of surgery using the associated hospital file created by NCI using a combination of sources. AJCC staging was used for all cancers except thyroid and gastric, which were classified as localized, regional, or distant. Patient comorbidity information was captured with the Deyo implementation of the Charlson comorbidity score17,18 using hospital and physician claims data in the 12 months before diagnosis as described by Klabunde et al.19 Socioeconomic status was measured at the county level as median income and percent college education. Hospitals were considered to be part of an oncology group if they participated in any of the cooperative groups captured in the hospital file. As our primary variable of interest, NCI-designated cancer centers and comprehensive cancer centers were examined in aggregate as NCI-designated centers. NCI center designation was obtained from the hospital file, and an institution was considered an NCI-designated center if the designation was present at the time of surgery claim submission.
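The Deyo implementation of the Charlson score referenced above assigns a fixed weight to each comorbid condition identified in the prediagnosis claims and sums the weights per patient. The sketch below illustrates this construction only; the condition names and the subset of weights shown are an abbreviated, illustrative selection, not the full Deyo mapping from ICD-9 codes to conditions.

```python
# Illustrative (abbreviated) Charlson comorbidity weights. The real Deyo
# implementation maps ICD-9 diagnosis codes from claims to ~17 condition
# categories; only a hypothetical subset is shown here.
CHARLSON_WEIGHTS = {
    'myocardial_infarction': 1,
    'congestive_heart_failure': 1,
    'peripheral_vascular_disease': 1,
    'diabetes_without_complications': 1,
    'moderate_severe_renal_disease': 2,
    'metastatic_solid_tumor': 6,
}

def charlson_score(conditions):
    """Sum the weights of the conditions flagged in a patient's claims.

    conditions: set of condition labels derived from the 12 months of
    hospital and physician claims before diagnosis.
    """
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)

# A patient with heart failure (1) and renal disease (2) scores 3.
print(charlson_score({'congestive_heart_failure',
                      'moderate_severe_renal_disease'}))  # 3
```

Scores are then typically collapsed into the categories used in Table 3 (0, 1, 2, 3+).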
Patients treated at NCI-designated centers were compared with patients treated at other centers using χ2 tests based on generalized estimating equations, accounting for clustering of patients by institution. The comparison of concordance rates between NCI-designated and other centers was quantified as a “concordance ratio,” defined as the ratio of the rate observed at NCI-designated centers to the rate at other centers. To adjust for potential confounding in the comparison of NCI-designated to non–NCI-designated centers, 2 distinct weighted propensity scores20,21 were generated: one adjusting only for patient characteristics and one adjusting for both patient characteristics and structural characteristics of the institution. Weighted generalized estimating equation models21 were then used to estimate adjusted concordance ratios and to calculate confidence intervals (CIs) and statistical significance while accounting for clustering by institution. All P values were 2-sided and considered statistically significant at P < 0.05. All analyses were performed with SAS version 9.2 (SAS Institute Inc, Cary, NC).
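The concordance-ratio construction can be illustrated with a small, fully invented example. The sketch below computes the unadjusted ratio and an inverse-probability-weighted version from precomputed propensity scores; it deliberately omits the GEE machinery (clustering by institution, CIs) used in the actual SAS analysis, and every name and number in it is hypothetical.

```python
def concordance_ratio(records, weighted=False):
    """Rate of concordant care at NCI-designated centers divided by the
    rate at other centers.

    records: list of dicts with keys
      'nci'        -- True if treated at an NCI-designated center
      'concordant' -- True if care was guideline concordant
      'ps'         -- propensity of treatment at an NCI center
                      (assumed precomputed from patient/hospital covariates)
    """
    num = {True: 0.0, False: 0.0}  # (weighted) concordant counts per group
    den = {True: 0.0, False: 0.0}  # (weighted) totals per group
    for r in records:
        # ATT-style inverse-probability weight: 1 for NCI patients,
        # ps/(1-ps) for patients at other centers.
        w = 1.0 if (not weighted or r['nci']) else r['ps'] / (1.0 - r['ps'])
        den[r['nci']] += w
        if r['concordant']:
            num[r['nci']] += w
    return (num[True] / den[True]) / (num[False] / den[False])

# Invented toy cohort: 100 NCI patients (55% concordant) and 100 others,
# where concordant non-NCI patients look more "NCI-like" (higher ps).
records = (
    [{'nci': True,  'concordant': True,  'ps': 0.6}] * 55 +
    [{'nci': True,  'concordant': False, 'ps': 0.6}] * 45 +
    [{'nci': False, 'concordant': True,  'ps': 0.4}] * 30 +
    [{'nci': False, 'concordant': False, 'ps': 0.1}] * 70
)

print(round(concordance_ratio(records), 2))                 # 1.83 unadjusted
print(round(concordance_ratio(records, weighted=True), 2))  # 0.76 weighted
```

In this toy example the adjustment reverses the direction of the ratio, showing why the study reports both unadjusted and propensity-weighted estimates.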
RESULTS
The site-specific cohorts included 32,960 patients with colon cancer, 8082 patients with rectal cancer, 56,761 patients with breast cancer, 3531 patients with gastric cancer, and 2421 patients with thyroid cancer. The number of NCI-designated centers varied across guidelines and ranged from 15 for central neck dissection for papillary thyroid cancer to 26 for nodal count of 12 nodes or more for colon cancer. The proportion of all eligible cases treated at NCI-designated centers ranged from 2.5% to 12.8% across measures (Table 2).
TABLE 2.

| Cancer | Measure | Total Patients (n) | No. NCI-Designated Centers | No. Other Institutions | Percentage of Patients Treated in NCI-Designated Centers |
|---|---|---|---|---|---|
| Breast | Lymph node evaluation | 34,827 | 25 | 1263 | 3.4% |
| | Axillary dissection | 11,144 | 24 | 987 | 3.5% |
| | Chemotherapy | 1395 | 20 | 520 | 5.2% |
| | Radiation therapy | 6590 | 21 | 801 | 5.1% |
| | PMRT | 4014 | 24 | 790 | 3.8% |
| Colon | Lymph node evaluation | 31,504 | 26 | 1148 | 2.5% |
| | Chemotherapy | 6112 | 24 | 866 | 2.8% |
| Rectal | Radiation therapy | 1983 | 25 | 568 | 5.9% |
| Gastric | Lymph node evaluation | 3041 | 24 | 588 | 8.3% |
| Thyroid | Total thyroidectomy | 1357 | 23 | 453 | 8.8% |
| | Central neck dissection | 366 | 15 | 203 | 12.8% |

PMRT indicates postmastectomy radiation therapy.
The characteristics of patients treated at NCI-designated centers and other institutions, and of the hospitals themselves, are shown in Table 3. Cancer patients treated at NCI-designated centers were younger and more often from middle-quartile education census tracts. NCI-designated centers had more hospital beds and were more likely to participate in an oncology group, provide radiation oncology services, be voluntary or governmental (rather than proprietary), have a medical school affiliation, and have an AMA-/AOA-approved residency program.
TABLE 3.

| Patient Characteristics | NCI-Designated Centers (n = 3632) | Other Institutions (n = 98,699) | P |
|---|---|---|---|
| Age, yrs | | | <0.001 |
| <70 | 36.3% | 27.7% | |
| 70–75 | 24.3% | 21.7% | |
| 75–80 | 20.9% | 22.8% | |
| ≥80 | 18.5% | 27.8% | |
| Female | 81.4% | 81.3% | 0.96 |
| Race | | | 0.82 |
| White | 83.2% | 88.1% | |
| Black | 12.0% | 7.3% | |
| API | 4.4% | 4.2% | |
| Other | 0.5% | 0.5% | |
| Married | 51.4% | 47.6% | 0.17 |
| Charlson score | | | 0.05 |
| 0 | 68.0% | 64.7% | |
| 1 | 19.9% | 22.5% | |
| 2 | 5.8% | 6.8% | |
| 3+ | 6.4% | 6.0% | |
| Income quartile | | | 0.24 |
| 1 | 21.6% | 25.7% | |
| 2 | 23.4% | 25.0% | |
| 3 | 23.9% | 24.8% | |
| 4 | 31.1% | 24.5% | |
| Education quartile | | | 0.001 |
| 1 | 22.2% | 25.3% | |
| 2 | 30.8% | 24.4% | |
| 3 | 25.0% | 24.8% | |
| 4 | 22.1% | 25.5% | |
| Hospital Characteristics | (n = 31) | (n = 1429) | P |
| Number of hospital beds | | | <0.001 |
| Median (IQR) | 565 (433–879) | 173 (68–345) | |
| Geographic location | | | 0.07 |
| Northeast | 32.3% | 17.8% | |
| South | 9.7% | 24.8% | |
| Midwest | 22.6% | 17.4% | |
| West | 35.5% | 40.0% | |
| Urban | 74.2% | 68.4% | 0.49 |
| Oncology group | 96.8% | 39.6% | <0.001 |
| Therapeutic radiology | | | <0.001 |
| Not provided | 45.6% | 3.2% | |
| Provided under arrangement | 8.8% | 12.9% | |
| Provided by hospital | 45.6% | 83.9% | |
| Type of control | | | 0.05 |
| Voluntary | 64.5% | 58.6% | |
| Proprietary | 3.2% | 20.0% | |
| Governmental | 32.3% | 21.4% | |
| Medical school affiliation | 93.6% | 29.6% | <0.001 |
| Approved residency program | 80.7% | 18.9% | <0.001 |
The unadjusted concordance rates for NCI-designated and other centers, stratified by the level of evidence supporting the measure, are shown in Table 4. Measures graded as category 1 include chemotherapy or medical oncology evaluation for stage III colon cancer (age < 80 years), radiation therapy or radiation oncology evaluation for T4N0 or stage III rectal cancer (age < 80 years), nodal evaluation for invasive breast cancer 1 cm or larger, radiation therapy or radiation oncology evaluation after breast-conserving surgery (age < 70 years), chemotherapy or medical oncology evaluation for ER-negative breast cancer (age < 70 years), and postmastectomy radiation therapy or radiation oncology evaluation for high-risk breast cancer (4 or more positive nodes, node-positive disease with T > 5 cm, or stage III disease). Both NCI-designated centers and other institutions achieved at least 90%, and in most cases greater than 95%, concordance with guidelines based on level 1 evidence.
TABLE 4.

| Measure | NCI-Designated Centers | Other Institutions |
|---|---|---|
| Category 1 | | |
| Chemo or med onc eval for stage III colon cancer (age < 80 yrs) | 98% | 97% |
| XRT or rad onc eval for T4N0 or stage III rectal cancer (age < 80 yrs) | 97% | 91% |
| Nodal evaluation for invasive breast cancer ≥ 1 cm | 93% | 92% |
| XRT or rad onc eval for BCS (age < 70 yrs) | 100% | 99% |
| Chemo or med onc eval for ER-negative breast cancer (age < 70 yrs) | 99% | 98% |
| Postmastectomy radiation therapy or rad onc eval for high-risk* breast cancer | 99% | 95% |
| Category 2A | | |
| Gastric cancer lymph node examination ≥ 15 nodes | 55% | 30% |
| Colon cancer lymph node examination ≥ 12 nodes | 68% | 48% |
| Axillary dissection for node positive breast cancer | 95% | 97% |
| Total thyroidectomy for papillary thyroid cancer ≥ 1.5 cm | 94% | 89% |
| Central neck dissection for node positive papillary thyroid cancer | 85% | 71% |

*High risk: ≥4 positive nodes; tumor size >5.0 cm and node positive; or stage III disease.
Chemo indicates chemotherapy; med onc eval, medical oncology evaluation; rad onc eval, radiation oncology evaluation; XRT, radiation therapy.
Category 2A measures include gastric cancer lymph node examination (≥15 nodes), colon cancer lymph node examination (≥12 nodes), axillary dissection for node-positive breast cancer, total thyroidectomy for papillary thyroid cancer 1.5 cm or larger, and central neck dissection for node-positive papillary thyroid cancer. For category 2A measures, concordance rates ranged from 30% to 97% and were higher at NCI-designated centers for most guidelines examined. This difference was especially apparent for guidelines regarding nodal counts, where concordance rates differed by approximately 20 percentage points between NCI-designated and other centers. The only measure for which rates were lower at NCI-designated cancer centers was axillary dissection for node-positive breast cancer (95% vs 97%), but the difference was not statistically significant.
Table 5 shows the unadjusted and adjusted concordance ratios for each measure. After controlling for patient characteristics, there was minimal change in the concordance ratios for all guidelines examined. Controlling for both patient and hospital characteristics did not alter most measures but did accentuate the observed difference favoring NCI-designated cancer centers for nodal examination for gastric cancer and for central neck dissection. Fully adjusted concordance ratios were between 1.02 (95% CI: 1.01–1.03) and 1.08 (95% CI: 1.05–1.11) for level 1 guidelines, indicating little difference in concordance rates between NCI-designated centers and other institutions. The concordance ratios for the category 2A measures were greater than 1 for nearly all measures, including gastric cancer lymph node examination (≥15 nodes) (concordance ratio = 2.19, 95% CI: 1.12–4.27), colon cancer lymph node examination (≥12 nodes) (concordance ratio = 1.54, 95% CI: 1.33–1.78), and central neck dissection for node-positive papillary thyroid cancer (concordance ratio = 1.38, 95% CI: 1.25–1.52), indicating that NCI centers had higher concordance rates even after adjusting for differences in patient and hospital characteristics. The one exception was axillary dissection for node-positive breast cancer, where the concordance ratio was less than 1 but not statistically significant (concordance ratio = 0.98, 95% CI: 0.95–1.02). The concordance rate for total thyroidectomy for papillary thyroid cancer 1.5 cm or larger was only slightly different between the 2 center types (concordance ratio = 1.07, 95% CI: 1.01–1.13).
TABLE 5.

| Measure | Unadjusted (CI) | Adjusted for Patient Characteristics (CI) | Adjusted for Patient and Hospital Characteristics (CI) |
|---|---|---|---|
| Category 1 | | | |
| Chemo or med onc eval for stage III colon cancer (age < 80 yrs) | 1.02 (1.00–1.03) | 1.01 (1.00–1.04) | 1.02 (1.01–1.04) |
| XRT or rad onc eval for T4N0 or stage III rectal cancer (age < 80 yrs) | 1.06 (1.02–1.10) | 1.08 (1.05–1.11) | 1.08 (1.05–1.11) |
| Nodal evaluation for invasive breast cancer ≥ 1 cm | 1.02 (0.99–1.04) | 1.00 (0.96–1.03) | 1.04 (1.01–1.07) |
| XRT or rad onc eval for BCS (age < 70 yrs) | —* | —* | —* |
| Chemo or med onc eval for ER-negative breast cancer (age < 70 yrs) | 1.01 (0.98–1.03) | 1.02 (1.01–1.03) | 1.02 (1.01–1.03) |
| Postmastectomy radiation therapy or rad onc eval for high risk breast cancer | 1.04 (1.02–1.06) | 1.03 (0.99–1.07) | 1.05 (1.03–1.07) |
| Category 2A | | | |
| Gastric cancer lymph node examination ≥ 15 nodes | 1.82 (1.43–2.31) | 1.87 (1.47–2.39) | 2.19 (1.12–4.27) |
| Colon cancer lymph node examination ≥ 12 nodes | 1.43 (1.29–1.58) | 1.42 (1.28–1.58) | 1.54 (1.33–1.78) |
| Axillary dissection for node positive breast cancer | 0.98 (0.95–1.01) | 0.98 (0.95–1.01) | 0.97 (0.93–1.02) |
| Total thyroidectomy for papillary thyroid cancer ≥ 1.5 cm | 1.06 (1.01–1.11) | 1.07 (1.02–1.12) | 1.07 (1.01–1.13) |
| Central neck dissection for node positive papillary thyroid cancer | 1.20 (1.06–1.36) | 1.25 (1.09–1.43) | 1.41 (1.28–1.54) |

*Concordance ratio cannot be calculated because of zero patients at NCI-designated centers receiving nonconcordant care.
DISCUSSION
For 11 nationally endorsed guidelines relevant to surgical oncology practice, we found that those supported by high-quality evidence were almost uniformly followed in both NCI-designated cancer centers and other institutions, with little if any difference in practice between center types. However, for guidelines supported by less definitive evidence but uniform consensus among clinical experts, our results suggest that care may be more varied. In general, we found that care was more likely to be concordant in NCI-designated centers. These findings persisted or were amplified after adjusting for patient characteristics and measurable structural institutional characteristics. Therefore, neither differences in the patient populations nor differences in measurable characteristics of the institutions explained the observed variation in care.
One potential explanation for the suggested gap in concordance for guidelines based on lower-level evidence relates to the mission of NCI-designated cancer centers. These centers are charged with generating new knowledge through research, and our results suggest that they tend to practice ahead of the evidence, in keeping with this strong research component of their mission. As the NCI suggests in its description of the Cancer Centers Program, it is through research that an institution can, and should, lead in clinical innovation. In addition, many of the experts who sit on consensus guideline panels practice at NCI centers, so practice at these centers is more likely to follow opinion-based guidelines.
Although the results for axillary dissection and total thyroidectomy may at first glance seem contradictory to our conclusion, they may in fact further support the importance of level of evidence, or at least of the grading of published guidelines. Axillary dissection has been a mainstay of the surgical treatment of breast cancer since the radical mastectomy was first described by Halsted.22,23 Although the introduction of sentinel lymph node biopsy allowed this procedure to be omitted in patients with a negative sentinel node, a full dissection remained the standard of care for all patients with a positive sentinel node biopsy, a recommendation graded as category 2A by the NCCN expert panel. However, prompted by emerging lower-level evidence supporting the safety of omitting dissection in carefully selected patients,24–28 the American College of Surgeons Oncology Group conducted the randomized Z0011 trial, whose recently published results now provide level 1 evidence supporting this practice.29,30 Awareness of this emerging evidence over the past several years may explain why the guideline recommending axillary dissection in all node-positive patients was the only category 2A guideline in our study for which the concordance rate was lower at NCI-designated centers. Furthermore, given their higher participation in cooperative groups, these institutions were more likely to be participating in the clinical trials generating the evidence. Of the 156 sites that participated in the Z0011 trial, 27 were NCI-designated cancer centers.
The concordance rate for total thyroidectomy for papillary thyroid cancer 1.5 cm or larger was only marginally different between NCI-designated cancer centers and other institutions despite the guideline being categorized as 2A by the NCCN. Our observation of 94% versus 89% is similar to the report by Bilimoria et al6 of 92% versus 88% in their examination of total thyroidectomy rates for thyroid cancer at NCI-designated cancer centers compared with other institutions. A potential explanation for the similarity of this guideline to those based on level 1 evidence may relate to our use of the NCCN categorization of level of evidence. Although the evidence supporting this guideline does not meet the definition of level 1 evidence,31,32 the American Thyroid Association gives this recommendation a rating of A (its highest rating, reserved for recommendations supported by the strongest evidence).33 The observed concordance rate and ratio may reflect the higher rating of the guideline by this specialty organization.
Our study has several limitations. The use of administrative claims data can lead to misclassification bias.34 However, the use of SEER-Medicare data for the examination of practice patterns is well established.35–37 In addition, the SEER-Medicare data precluded us from extending our analysis to patients younger than 65 years, which may limit the generalizability of this analysis to younger age groups. Practice patterns for younger patients are conceivably different from those for older cancer patients, and the results observed here may not hold for younger populations. However, more than half of all cancer patients are diagnosed after the age of 65 years,38 especially for colon, gastric, and rectal cancers.
Our definition of concordance for measures involving adjuvant therapy differed from that used in some prior studies.39–41 Our goal was to assess the performance of the surgical oncologist rather than the appropriateness of the treatment ultimately received by the patient after being seen by all provider types. Therefore, for measures that assessed adjuvant chemotherapy or radiation therapy, we considered surgical care concordant if the patient was successfully referred to the relevant specialist, even if the treatment was not delivered. This definition of concordant care has the potential to be confounded by differences in referral practices.42 Surgeons who refer all patients, regardless of compelling contraindications to chemotherapy, would have higher concordance rates than those who are more selective with their referrals.
A final consideration is that our choice of guidelines may account for some of the observed variation. All guidelines examining adequate referral were supported by level 1 evidence, whereas most of the guidelines based on lower-level evidence examined nodal retrieval rates. Given these patterns, other explanations, such as physicians’ attitudes about the types of guidelines they accept into practice, ease of compliance with a guideline, and monetary or measurement incentives, may play a role. A study using a broader range of guidelines spanning the full spectrum of levels of evidence would allow these effects to be explored.
NCI designation is one example of “center of excellence” (COE) status, used to identify institutions with particular expertise in a given disease. The term “center of excellence” is currently broadly used and poorly defined, complicating interpretation of its value and meaning.43–45 Even as used here to describe regional centers offering particular expertise, the criteria for granting this designation are neither centralized nor standardized. COE status can be conferred by a number of different entities depending on the disease site but is often defined by criteria set forth by professional organizations or government agencies.46–50 Data on differences in care at COEs compared with other institutions are limited and somewhat inconsistent. Although superior surgical outcomes for certain conditions have been demonstrated for cancer centers,3–5 several studies have failed to show a correlation between COE designation and improved outcomes for bariatric surgery.51,52
Our results suggest that COEs may provide “different care” but not necessarily “better care.” Because COEs are often academic medical centers whose mission includes an emphasis on the generation of new evidence, it may be safer and more appropriate for them to be early adopters of emerging treatments, ideally through careful and thoughtful adherence to guidelines based on less than definitive evidence and accompanied by close monitoring of the safety and effectiveness of these treatments. The knowledge gained can then inform decisions about when and how to disseminate the practices more broadly. Our results suggest that the NCI-designated cancer centers may be playing just this role.
Observed differences in practice based on the level of available evidence have important policy implications. In the current era of quality measurement, it is generally believed that variation in care represents a quality problem. However, given that medical practice does and should change over time as new evidence emerges, variation such as is suggested by our study between NCI-designated cancer centers and other institutions may be reasonable, properly aligned with the mission and activities of these institutions, and most importantly, beneficial to society.
ACKNOWLEDGMENTS
This study used the linked SEER-Medicare database. The interpretation and reporting of these data are the sole responsibility of the authors. The authors acknowledge the efforts of the Applied Research Program, NCI; the Office of Research, Development and Information, CMS; Information Management Services (IMS), Inc; and the Surveillance, Epidemiology, and End Results (SEER) Program tumor registries in the creation of the SEER-Medicare database.
This work was supported in part by a grant from the American Surgical Association Foundation (Greenberg) and by the “Program in Cancer Outcomes Research Training” grant (NIH R25 CA092203, Gazelle) (H.I.). The sponsor did not have any role in design or conduct of the study or manuscript preparation or review.
Footnotes
Authors’ contributions: conception and study design (C.C.G., H.I., S.R.L., J.C.W.), data acquisition (C.C.G., B.A.N.), data analysis (H.I., K.A.C., C.C.G., S.R.L., B.A.N.), data interpretation (all authors), drafting of manuscript (H.I., C.C.G.), critical review of manuscript, and final approval of published version (all authors). Dr In had full access to all of the data in the study and takes personal responsibility for the integrity of the data and the accuracy of the data analysis. The authors thank Drs Ashish Jha, Atul Gawande, and Deborah Schrag for their contributions in the earlier work that informed the analysis presented in this manuscript.
Disclosure: The authors report no financial conflicts of interest.
REFERENCES
1. Medina-Franco H, Garcia-Alvarez MN, Rojas-Garcia P, et al. Body image perception and quality of life in patients who underwent breast surgery. Am Surg. 2010;76:1000–1005.
2. National Cancer Institute Cancer Centers Program. Available at: http://cancercenters.cancer.gov/. Accessed November 30, 2010.
3. Birkmeyer NJ, Goodney PP, Stukel TA, et al. Do cancer centers designated by the National Cancer Institute have better surgical outcomes? Cancer. 2005;103:435–441.
4. Paulson EC, Mitra N, Sonnad S, et al. National Cancer Institute designation predicts improved outcomes in colorectal cancer surgery. Ann Surg. 2008;248:675–686.
5. Friese CR, Earle CC, Silber JH, et al. Hospital characteristics, clinical severity, and outcomes for surgical oncology patients. Surgery. 2010;147:602–609.
6. Bilimoria KY, Bentrem DJ, Linn JG, et al. Utilization of total thyroidectomy for papillary thyroid cancer in the United States. Surgery. 2007;142:906–913; discussion 913.e1–2.
7. Bilimoria KY, Talamonti MS, Wayne JD, et al. Effect of hospital type and volume on lymph node evaluation for gastric and pancreatic cancer. Arch Surg. 2008;143:671–678; discussion 678.
8. Wadman M. New plan proposed to help resolve conflicting medical advice. Nat Med. 2008;14:226.
9. Moulton G. IOM report on quality of cancer care highlights need for research, data expansion. Institute of Medicine. J Natl Cancer Inst. 1999;91:761–762.
10. Davis D, Evans M, Jadad A, et al. The case for knowledge translation: shortening the journey from evidence to effect. BMJ. 2003;327:33–35.
11. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8:iii–iv, 1–72.
12. Tricoci P, Allen JM, Kramer JM, et al. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009;301:831–841.
13. Schroen AT, Brenin DR. Breast cancer treatment beliefs and influences among surgeons in areas of scientific uncertainty. Am J Surg. 2010;199:491–499.
- 14.Simunovic M, Baxter NN. Knowledge translation research: a review and new concepts from a surgical case study. Surgery. 2009;145:639–644. doi: 10.1016/j.surg.2008.11.011. [DOI] [PubMed] [Google Scholar]
- 15.Greenberg CC, Lipsitz SR, Neville B, et al. Receipt of appropriate surgical care for Medicare beneficiaries with cancer. Arch Surg. 2011;146:1128–1134. doi: 10.1001/archsurg.2011.141. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines™) [Accessed June 15, 2010]; Available at: http://www.nccn.org/clinical.asp.
- 17.Charlson ME, Pompei P, Ales KL, et al. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373–383. doi: 10.1016/0021-9681(87)90171-8. [DOI] [PubMed] [Google Scholar]
- 18.Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45:613–619. doi: 10.1016/0895-4356(92)90133-8. [DOI] [PubMed] [Google Scholar]
- 19.Klabunde CN, Potosky AL, Legler JM, et al. Development of a comorbidity index using physician claims data. J Clin Epidemiol. 2000;53:1258–1267. doi: 10.1016/s0895-4356(00)00256-0. [DOI] [PubMed] [Google Scholar]
- 20.Rubin DB. Estimating causal effects from large data sets using propensity scores. Ann Intern Med. 1997;127:757–763. doi: 10.7326/0003-4819-127-8_part_2-199710151-00064. [DOI] [PubMed] [Google Scholar]
- 21.Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000;11:550–560. doi: 10.1097/00001648-200009000-00011. [DOI] [PubMed] [Google Scholar]
- 22.Halsted W. The results of radical operations for the cure of carcinoma of the breast. Ann Surg. 1907;46:1–19. doi: 10.1097/00000658-190707000-00001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Brunicardi F, Anderson D, Billiar T, et al. Schwartz’s Manual Of Surgery. 8th ed. McGraw-Hill; New York, NY: [Accessed February 17, 2011]. 2006. Available at: http://www.R2Library.com/marc_frame.aspx?ResourceID=238. [Google Scholar]
- 24.Bilimoria KY, Bentrem DJ, Hansen NM, et al. Comparison of sentinel lymph node biopsy alone and completion axillary lymph node dissection for node-positive breast cancer. J Clin Oncol. 2009;27:2946–2953. doi: 10.1200/JCO.2008.19.5750. [DOI] [PubMed] [Google Scholar]
- 25.Fant J, Grant M, Knox S, et al. Preliminary outcome analysis in patients with breast cancer and a positive sentinel lymph node who declined axillary dissection. Ann Surg Oncol. 2003;10:126–130. doi: 10.1245/aso.2003.04.022. [DOI] [PubMed] [Google Scholar]
- 26.Fisher B, Wolmark N, Bauer M, et al. The accuracy of clinical nodal staging and of limited axillary dissection as a determinant of histologic nodal status in carcinoma of the breast. Surg Gynecol Obstet. 1981;152:765–772. [PubMed] [Google Scholar]
- 27.Hwang RF, Gonzalez-Angulo AM, et al. Low locoregional failure rates in selected breast cancer patients with tumor-positive sentinel lymph nodes who do not undergo completion axillary dissection. Cancer. 2007;110:723–730. doi: 10.1002/cncr.22847. [DOI] [PubMed] [Google Scholar]
- 28.Jeruss JS, Winchester D, Sener S, et al. Axillary recurrence after sentinel node biopsy. Ann Surg Oncol. 2005;12:34–40. doi: 10.1007/s10434-004-1164-2. [DOI] [PubMed] [Google Scholar]
- 29.Giuliano AE, McCall L, Beitsch P, et al. Locoregional recurrence after sentinel lymph node dissection with or without axillary dissection in patients with sentinel lymph node metastases: the American College of Surgeons Oncology Group Z0011 randomized trial. Ann Surg. 2010;252:426–432. doi: 10.1097/SLA.0b013e3181f08f32. discussion 432–423. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Giuliano AE, Hunt KK, Ballman KV, et al. Axillary dissection vs no axillary dissection in women with invasive breast cancer and sentinel node metastasis. JAMA. 2011;305:569–575. doi: 10.1001/jama.2011.90. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Sackett DL. Rules of evidence and clinical recommendations on the use of antithrombotic agents. Chest. 1989;95(suppl):2S–4S. [PubMed] [Google Scholar]
- 32.US Preventive Services Task Force (USPSTF) [Accessed December 1, 2010]; Hierarchy of Research Design. Available at: http://www.ahrq.gov/clinic/uspstfix.htm.
- 33.Cooper DS, Doherty GM, Haugen BR, et al. Revised American Thyroid Association management guidelines for patients with thyroid nodules and differentiated thyroid cancer. Thyroid. 2009;19:1167–1214. doi: 10.1089/thy.2009.0110. [DOI] [PubMed] [Google Scholar]
- 34.Butler Nattinger A, Schapira MM, Warren JL, et al. Methodological issues in the use of administrative claims data to study surveillance after cancer treatment. Med Care. 2002;40(suppl):IV-69–74. doi: 10.1097/00005650-200208001-00010. [DOI] [PubMed] [Google Scholar]
- 35.Potosky AL, Riley GF, Lubitz JD, et al. Potential for cancer related health services research using a linked Medicare-tumor registry database. Med Care. 1993;31:732–748. [PubMed] [Google Scholar]
- 36.Warren JL, Klabunde CN, Schrag D, et al. Overview of the SEER-Medicare data: content, research applications, and generalizability to the United States elderly population. Med Care. 2002;40(suppl):IV-3–18. doi: 10.1097/01.MLR.0000020942.47004.03. [DOI] [PubMed] [Google Scholar]
- 37.Cooper GS, Virnig B, Klabunde CN, et al. Use of SEER-Medicare data for measuring cancer surgery. Med Care. 2002;40(suppl):IV-43–48. doi: 10.1097/00005650-200208001-00006. [DOI] [PubMed] [Google Scholar]
- 38.American Cancer Society [Accessed November 15, 2010]; Cancer Facts & Figures. Available at: http://www.cancer.org/Research/CancerFactsFigures/index.
- 39.Schrag D, Cramer LD, Bach PB, et al. Age and adjuvant chemotherapy use after surgery for stage III colon cancer. J Natl Cancer Inst. 2001;93:850–857. doi: 10.1093/jnci/93.11.850. [DOI] [PubMed] [Google Scholar]
- 40.Ayanian JZ, Zaslavsky AM, Fuchs CS, et al. Use of adjuvant chemotherapy and radiation therapy for colorectal cancer in a population-based cohort. J Clin Oncol. 2003;21:1293–1300. doi: 10.1200/JCO.2003.06.178. [DOI] [PubMed] [Google Scholar]
- 41.Dobie SA, Baldwin LM, Dominitz JA, et al. Completion of therapy by Medicare patients with stage III colon cancer. J Natl Cancer Inst. 2006;98:610–619. doi: 10.1093/jnci/djj159. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Luo R, Giordano SH, Zhang DD, et al. The role of the surgeon in whether patients with lymph node-positive colon cancer see a medical oncologist. Cancer. 2007;109:975–982. doi: 10.1002/cncr.22462. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Brant-Zawadzki M, Wortham J, Cox J. Search for meaning: centers of excellence, service lines and institutes. Physician Exec. 2009;35:32–34. 36–37. [PubMed] [Google Scholar]
- 44.Cress D, Pelton J, Thayer SC, et al. Development of a center of excellence for joint replacement. Orthop Nurs. 2010;29:150–168. doi: 10.1097/NOR.0b013e3181db5416. [DOI] [PubMed] [Google Scholar]
- 45.Meyer LC. Centers of excellence: a medical measurement or marketing myth? Med Group Manage J. 1996;43:66–68. 70 passim. [PubMed] [Google Scholar]
- 46.HSR&D Centers of Excellence [Accessed February 16, 2011]; United States Department of Veterans Affairs. Available at: http://www.hsrd.research.va.gov/centers/centers_of_excellence.cfm.
- 47.National Center on Minority Health and Health Disparities [Accessed February 16, 2011]; Department of Health and Human Services. Available at: http://www.nimhd.nih.gov/our_programs/centerOfExcellence.asp.
- 48.Defense Centers of Excellence for Psychological Health & Traumatic Brain Injury [Accessed February 16, 2011]; United States of America Department of Defense. Available at: http://www.dcoe.health.mil/
- 49.ASMBS Centers of Excellence Bariatric Surgery [Accessed February 16, 2011]; American Society for Metabolic & Bariatric Surgery. Available at: http://www.asmbs.org/Newsite07/resources/asmbs_coe.htm.
- 50.Womenshealth.gov. National Centers of Excellence in Women’s Health. U.S. Department of Health & Human Services; [Accessed February 16, 2011]. Available at: http://www.womenshealth.gov/archive/owh/multidisciplinary/ [Google Scholar]
- 51.Birkmeyer NJ, Dimick JB, Share D, et al. Hospital complication rates with bariatric surgery in Michigan. JAMA. 2010;304:435–442. doi: 10.1001/jama.2010.1034. [DOI] [PubMed] [Google Scholar]
- 52.Livingston EH. Bariatric surgery outcomes at designated centers of excellence vs nondesignated programs. Arch Surg. 2009;144:319–325. doi: 10.1001/archsurg.2009.23. discussion 325. [DOI] [PubMed] [Google Scholar]