Abstract
Background
Pancreatic cancer outcomes vary considerably among hospitals. Assessing pancreatic cancer care by using quality indicators could help reduce this variability. However, valid quality indicators are not currently available for pancreatic cancer management, and a composite assessment of the quality of pancreatic cancer care in the United States has not been done.
Methods
Potential quality indicators were identified from the literature, consensus guidelines, and interviews with experts. A panel of 20 pancreatic cancer experts ranked potential quality indicators for validity based on the RAND/UCLA Appropriateness Methodology; summary statistics were calculated for each candidate indicator to assess the median ranking and distribution, and each indicator was rated as valid (high or moderate validity) or not valid. Adherence with valid indicators at both the patient and the hospital levels and a composite measure of adherence at the hospital level were assessed using data from the National Cancer Data Base (2004–2005) for 49 065 patients treated at 1134 hospitals.
Results
Of the 50 potential quality indicators identified, 43 were rated as valid (29 as high and 14 as moderate validity). Of the 43 valid indicators, 11 (25.6%) assessed structural factors, 19 (44.2%) assessed clinical processes of care, four (9.3%) assessed treatment appropriateness, four (9.3%) assessed efficiency, and five (11.6%) assessed outcomes. Patient-level adherence with individual indicators ranged from 49.6% to 97.2%, whereas hospital-level adherence with individual indicators ranged from 6.8% to 99.9%. Most hospitals were adherent with fewer than half of the 10 component indicators (contributing 1 point each) that were used to develop the composite score (median score = 4; interquartile range = 3–5).
Conclusions
Based on the quality indicators developed in this study, there is considerable variability in the quality of pancreatic cancer care in the United States. Hospitals can use these indicators to evaluate the pancreatic cancer care they provide and to identify potential quality improvement opportunities.
CONTEXT AND CAVEATS
Prior knowledge
Pancreatic cancer outcomes vary considerably among hospitals, but the factors responsible for this variability have been difficult to identify because valid indicators of high-quality care for pancreatic cancer patients are not available.
Study design
A panel of pancreatic cancer experts identified valid quality indicators for pancreatic cancer care; adherence with these indicators at the hospital level and a composite measure of hospital-level adherence were then assessed using data from the National Cancer Data Base (2004–2005) in the United States.
Contribution
Of 50 potential quality indicators identified, 43 were rated as valid and assessed structural factors, clinical processes of care, treatment appropriateness, efficiency, and outcomes. Most hospitals were adherent with fewer than half of the 10 component indicators that were used to develop the composite measure of adherence.
Implications
These quality indicators can be used by hospitals to monitor, standardize, and improve the care they provide to pancreatic cancer patients.
Limitations
Important indicators may have been missed. Some indicators may have received slightly lower rankings because of how they were worded. The reliability of hospital performance comparisons was limited by the small sample size and an inability to adjust completely for differences in case mix among hospitals. The findings may not be generalizable to all hospitals.
From the Editors
There is considerable variability in outcomes among hospitals in the United States for many procedures and medical conditions, particularly for complex surgeries such as pancreatectomy for malignancy (1,2). Short-term and long-term outcomes of patients at some hospitals are considerably worse than at other hospitals (3–9); however, it has been difficult to identify the factors responsible for this variability (10,11). Hospitals with poor outcomes are left with little guidance on where to focus quality improvement efforts. Thus, efforts have focused on identifying quality indicators or measures that can be used to standardize care and ensure that patients are managed in accordance with established recommendations (7,10).
A number of organizations have developed quality measures for surgical and oncology care, including the Agency for Healthcare Research and Quality (AHRQ), the Centers for Medicare and Medicaid Services (eg, the Surgical Care Improvement Project and the Physician Quality Reporting Initiative) (12,13), the Joint Commission (14), and the American Hospital Association (15). Of the hundreds of measures put forth thus far, to our knowledge, the only ones involving pancreatic cancer examine pancreatectomy case volume and postoperative mortality (16,17). Recently, the American College of Surgeons, the National Comprehensive Cancer Network, and the American Society of Clinical Oncology collaboratively developed five quality measures for cancer care. These measures were subsequently endorsed by the National Quality Forum as part of the Quality of Cancer Care Performance Measures project (18); however, none of these quality indicators specifically addressed pancreatic cancer care.
Individual quality measures assess only a single aspect of care. However, health care is multidimensional and complex, leading the Institute of Medicine to note that composite quality measures consisting of multiple individual component measures can provide a better sense of the reliability of the health-care system (19). Importantly, the National Quality Forum has recently introduced an initiative to establish a framework for composite quality measures to ensure that they are scientifically acceptable (ie, reliable and valid), usable (ie, meaningful and understandable), and feasible (ie, based on data that are readily available and retrievable without undue collection burden) (20).
Thus, there is a need for both individual and composite quality indicators that are developed by using a formal methodology and that encompass the various domains of pancreatic cancer care, including those related to pancreatic surgery, for which outcomes are highly variable and potentially modifiable. Moreover, there is a need for hospitals to assess adherence with individual aspects of care by using specific indicators as well as to examine the overall quality of pancreatic cancer care by using a composite measure to identify potential quality improvement opportunities within their institutions. The objectives of this study were 1) to develop indicators of high-quality care for pancreatic cancer patients; 2) to assess hospital-level compliance with these indicators in the United States; and 3) to develop a composite, evidence-based measure of the quality of hospital-level pancreatic cancer care. The ultimate goal of this study was to identify indicators that hospitals can use to assess their performance and to develop specific initiatives to improve the quality of patient care and outcomes.
Methods
Quality Indicator Development
We used a modification of the RAND/UCLA Appropriateness Methodology to assess the validity of potential quality indicators (21,22). The RAND/UCLA Appropriateness Methodology is an iterative Delphi method that has been used to develop quality-of-care indicators across a broad range of disease processes (21,23–27). This method is particularly useful when high-level evidence is lacking because it incorporates recommendations made by an expert panel that are based on their evaluation of the evidence and their clinical experience. Briefly, in two rounds of rankings, the expert panel members independently rank potential quality indicators for validity. Between the two rounds, there is an expert panel discussion (Figure 1). Indicators are evaluated for appropriateness (based on the median ranking for each) and agreement (based on the distribution of rankings). This process identifies indicators that are ranked as valid by the expert panel and has been shown to provide quality indicators that have face, construct, and predictive validity (28–30). This study was approved by the Northwestern University institutional review board.
Figure 1.
Overview of the modified RAND/UCLA Appropriateness Methodology used to develop pancreatic cancer care quality indicators.
Potential quality indicators were identified through extensive systematic literature reviews, assessment of existing guidelines from numerous organizations (eg, National Comprehensive Cancer Network guidelines) and quality measures (eg, from the AHRQ), and semistructured interviews with pancreatic cancer experts in various subspecialties of medicine. Although high-level evidence (eg, from randomized trials) supporting clinical practice was frequently unavailable, we required that there was some evidence suggesting that the potential indicators would affect outcome (eg, institutional case series). The indicators were categorized into five domains—structure, process, appropriateness, efficiency, and outcomes (7,10,31)—and encompassed the diagnostic, preoperative, intraoperative, postoperative, and follow-up phases of pancreatic cancer care. To evaluate potential quality indicators, we assembled an expert panel of 20 physicians that included clinicians and researchers in the fields of surgery (12 members), medical oncology (three members), radiation oncology (two members), pathology (one member), radiology (one member), and gastroenterology (one member) (see Notes). Most of the panel members were from academic institutions, but some physicians from community hospitals were also included.
In the first round of rankings, panel members were sent via electronic mail a list of potential indicators and detailed instructions regarding the methodology and the process of ranking indicators for validity. The instructions given to panelists regarding the rankings were as follows. First, an indicator should be considered “valid” if adherence with this indicator is critical to provide quality care to patients with pancreatic cancer exclusive of costs or feasibility of implementation. Not providing the level of care addressed in the indicator would be a breach in clinical practice and an indication of unacceptable care. Second, validity rankings should be based on the panelist’s own judgment, not on what they think other experts or the panel believes. Third, the indicators should be considered for an “average” patient who presents to an “average” physician at an “average” hospital. Finally, the indicators need not necessarily apply to any one specific patient, but rather could pertain to the overall care of pancreatic cancer patients (eg, antibiotic discontinuation within 24 hours of surgery).
Each indicator was ranked on a 9-point scale for which 1 = definitely not valid, 5 = uncertain or equivocal validity, and 9 = definitely valid. Panelists were also given the opportunity to suggest wording modifications to improve the clarity or increase the potential validity of the quality indicator. The panel was also allowed to suggest entirely new indicators. Summary statistics were calculated for each individual candidate quality indicator to assess the median and distribution of rankings. For round 1, a potential quality indicator that had four or more rankings in the 1–3 range and four or more rankings in the 7–9 range was considered to have scores that were in disagreement. If all but four rankings were in any single 3-point range (eg, 1–3, 4–6, or 7–9), then the scores for that indicator were said to be in agreement. All other score distributions were deemed indeterminate. The round 1 rankings were used to guide discussion at the expert panel meeting.
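For illustration, the round 1 classification rules can be expressed as a short sketch in Python (a hypothetical helper written for exposition; the study itself used standard statistical software):

```python
def classify_round1(rankings):
    """Classify one indicator's round 1 rankings as 'disagreement',
    'agreement', or 'indeterminate' per the rules described above.

    rankings: the 20 panelists' scores on the 9-point scale.
    """
    low = sum(1 for r in rankings if 1 <= r <= 3)
    high = sum(1 for r in rankings if 7 <= r <= 9)

    # Disagreement: four or more rankings in 1-3 AND four or more in 7-9.
    if low >= 4 and high >= 4:
        return "disagreement"

    # Agreement: all but (at most) four rankings fall within a single
    # 3-point range (1-3, 4-6, or 7-9).
    for lo in (1, 4, 7):
        outside = sum(1 for r in rankings if not lo <= r <= lo + 2)
        if outside <= 4:
            return "agreement"

    return "indeterminate"
```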
Before the expert panel meeting, the panelists were provided with the relevant literature regarding indicators for which there was disagreement in the round 1 rankings. The panelists were also given a summary sheet of the round 1 rankings that showed the aggregated summary statistics for each indicator and a copy of their own round 1 rankings. Each potential quality indicator was discussed by the panel to identify opportunities to improve the wording of the indicators or to highlight evidence that may have been missed by the literature review. In addition, indicators could be reworded and new indicators could be proposed during the discussion. It was stressed that there was no need to establish a consensus among the panelists because each member would independently rank the indicators for validity after the panel discussion.
Immediately after the expert panel discussion, the panelists were sent an updated ranking form via electronic mail on which they were asked to re-rank all of the indicators for validity. These round 2 rankings were used for the final assessment of validity. The rankings were compiled, and the median ranking from the expert panel was calculated for each individual indicator. We used definitions from previous quality indicator development studies (23–26) to establish two levels of validity that were based on the stringency of the criteria used: relaxed and strict. According to the strict criteria, an indicator was deemed to have high validity if the median score and at least 90% of the individual rankings from the 20 panelists were within the 7–9 range. According to the relaxed criteria, an indicator was deemed to have moderate validity if the median score and at least 95% (all but one) of the individual rankings from the expert panel were within the 4–9 range.
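Expressed as code, the two validity levels might be applied to the round 2 rankings as follows (again a sketch for exposition, with a hypothetical function name; the thresholds are those defined above):

```python
import statistics

def classify_validity(rankings):
    """Apply the strict and relaxed round 2 criteria described above.

    rankings: the 20 panelists' round 2 scores for one indicator.
    """
    n = len(rankings)
    median = statistics.median(rankings)

    # Strict criteria (high validity): median and >=90% of the
    # individual rankings within the 7-9 range.
    if median >= 7 and sum(7 <= r <= 9 for r in rankings) >= 0.90 * n:
        return "high validity"

    # Relaxed criteria (moderate validity): median and >=95% of the
    # individual rankings (all but one of 20) within the 4-9 range.
    if median >= 4 and sum(4 <= r <= 9 for r in rankings) >= 0.95 * n:
        return "moderate validity"

    return "not valid"
```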
Assessment of Hospital Performance
The National Cancer Data Base (NCDB) is a national cancer registry supported by the American College of Surgeons, the Commission on Cancer, and the American Cancer Society (21,32,33). All of the approximately 1450 Commission on Cancer–approved hospitals are required to report all of their cancer cases to the NCDB annually. The NCDB and state and national cancer registries share common mechanisms for data coding, collection, and accuracy assessment (21,34). According to incidence estimates from the American Cancer Society, the NCDB captures approximately 75% of newly diagnosed pancreatic cancers in the United States each year (21). The NCDB collects information regarding patient demographics, tumor characteristics and pathology, staging, diagnosis, treatment, and survival (34).
Patients who were diagnosed with pancreatic adenocarcinoma from January 1, 2004, to December 31, 2005, were identified from the NCDB based on International Classification of Diseases for Oncology, third edition, site and histology codes (35). At the time of this study, patients diagnosed through the end of 2005 were the most recent ones available for analysis. Patients who underwent pancreatectomy were identified based on the Commission on Cancer's Facility Oncology Registry Data Standards site-specific procedure coding (34). Patients were staged according to the American Joint Committee on Cancer sixth edition Cancer Staging Manual (36). We assessed adherence with valid quality indicators for which the relevant data are reported to the NCDB. Patients who underwent palliative procedures or exploratory surgery without a cancer-directed resection were not included in the cohort that was categorized as undergoing cancer-directed resection (ie, pancreatectomy).
We first assessed adherence with the individual quality indicators at the patient level to determine the proportion of patients at Commission on Cancer–approved hospitals who received care that was concordant with the quality indicators. We then assessed adherence with the individual indicators at the hospital level; a hospital was defined a priori as adherent with an indicator if at least 90% of its patients received care in compliance with that indicator. A composite measure of hospital pancreatic cancer care was calculated by summing the points for the valid indicators, with adherence with each indicator (ie, ≥90% of patients received the recommended care) assigned 1 point. The quality indicators relating to documentation were aggregated into a single indicator, and the maximum composite score was 10 points. Valid indicators examining all domains of care (structure, process, appropriateness, efficiency, and outcome) were included in the composite measure.
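A minimal sketch of this scoring scheme is shown below; the data structure and indicator names are hypothetical, and the actual analysis was run on NCDB records rather than this toy input:

```python
def hospital_adherent(patient_flags, threshold=0.90):
    """A hospital is adherent with an indicator if at least 90% of its
    patients received the recommended care."""
    return sum(patient_flags) / len(patient_flags) >= threshold

def composite_score(hospital_indicators):
    """Sum 1 point for each indicator the hospital is adherent with
    (maximum 10 points for the 10 component indicators)."""
    return sum(hospital_adherent(flags)
               for flags in hospital_indicators.values())

# Example: a hospital adherent with two of three illustrative indicators.
example = {
    "no surgery for stage IV disease": [True] * 97 + [False] * 3,  # 97%
    "treatment within 2 months": [True] * 92 + [False] * 8,        # 92%
    ">=10 lymph nodes evaluated": [True] * 49 + [False] * 51,      # 49%
}
print(composite_score(example))  # prints 2
```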
Statistical Analysis
For the quality indicator addressing a hospital's risk-adjusted mortality rate within 30 days of surgery, a logistic regression model was used to adjust for differences in clinicopathologic characteristics among hospitals. The model included sex, age at diagnosis, race (white, black, Asian, Hispanic, other), stage, type of pancreatectomy (pancreaticoduodenectomy, distal pancreatectomy, total pancreatectomy, other), and Charlson comorbidity score. The NCDB requires reporting of six preexisting comorbidities based on the International Classification of Diseases, Ninth Revision classification (34,35). The primary cancer diagnosis and postoperative complications are not included when these six codes are reported. A modified Charlson comorbidity score was calculated to assess the severity of preexisting comorbidities (37–39). Analyses were performed using SPSS, version 15 (SPSS, Inc., Chicago, IL).
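For illustration, a comparable risk-adjustment model could be fit in Python with statsmodels (the study used SPSS; the data file and column names below are hypothetical, not NCDB field names):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data frame with one row per resection.
df = pd.read_csv("patients.csv")

# 30-day mortality modeled on the covariates listed above; categorical
# covariates are dummy-coded via C().
model = smf.logit(
    "death_30day ~ C(sex) + age + C(race) + C(stage) "
    "+ C(pancreatectomy_type) + charlson_score",
    data=df,
).fit()

# Expected (risk-adjusted) deaths per hospital can then be compared
# with observed deaths to estimate each hospital's adjusted rate.
df["predicted_risk"] = model.predict(df)
print(df.groupby("hospital_id")["predicted_risk"].sum())
```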
Results
Quality Indicator Development
On the basis of literature reviews, consensus guidelines, and interviews with experts, we identified 50 potential quality indicators for pancreatic cancer care (Table 1). These indicators were categorized into five domains of care: structure (12 indicators), processes (21 indicators), appropriateness (seven indicators), efficiency (five indicators), and outcomes (five indicators). Of the 50 indicators, 20 were hospital-level indicators and 30 were patient-level indicators.
Table 1.
Summary of pancreatic cancer quality indicators
Indicator | All indicators | High validity* | Moderate validity† | Not valid
--- | --- | --- | --- | ---
Number of indicators | 50 | 29 | 14 | 7
Domain | | | |
Structure | 12 | 6 | 5 | 1
Process | 21 | 15 | 4 | 2
Appropriateness | 7 | 4 | 0 | 3
Efficiency | 5 | 1 | 3 | 1
Outcome | 5 | 3 | 2 | 0
Level of measurement | | | |
Hospital or provider | 20 | 9 | 9 | 2
Patient | 30 | 20 | 5 | 5
Potential data source for assessment‡ | | | |
Cancer registries | 24 | 18 | 4 | 2
Administrative datasets | 19 | 9 | 5 | 5
Patient chart only | 8 | 6 | 2 | 0
* Based on strict validity criteria (≥90% of expert panel rankings in the 7–9 range).
† Based on relaxed validity criteria (≥95% of expert panel rankings in the 4–9 range).
‡ Totals in some columns may not equal the total number of indicators because some data can be found in more than one source and some data are not readily available in the specified sources.
Based on the round 2 expert panel rankings of the 50 potential quality indicators, 43 indicators (86%) were rated as valid (29 as having high validity and 14 as having moderate validity) and seven (14%) were rated as not valid (Table 1). Of the 43 valid indicators, 11 (25.6%) assessed structural factors, 19 (44.2%) assessed clinical processes of care, four (9.3%) assessed treatment appropriateness, four (9.3%) assessed efficiency, and five (11.6%) assessed outcomes (Tables 2 and 3). The assessment for the indicators would be at the hospital level for 18 indicators (41.9%) and at the patient level for 25 indicators (58.1%). Of the 43 indicators rated as valid, 22 are reported to cancer registries or can be derived from data submitted to cancer registries, another 14 are found in widely available multi-institutional administrative datasets, and eight are generally found only in patient charts (Table 1). Seven indicators were ranked as not valid (Table 4).
Table 2.
High-validity pancreatic cancer quality indicators*
No. | Quality indicator | Median ranking | Level of measurement | Domain
--- | --- | --- | --- | ---
1 | IF an institution performs pancreatic cancer surgery, THEN the institution should monitor their average annual case volume | 8.5 | Hospital | Structure |
2 | IF an institution performs pancreatic cancer surgery, THEN the institution should monitor their surgeons' annual case volume | 8.5 | Hospital | Structure |
3 | IF a patient undergoes resection for pancreatic cancer, THEN the patient should be treated in a multidisciplinary effort with a surgeon, medical oncologist, and a radiation oncologist | 8 | Hospital | Structure |
4 | IF a patient undergoes resection, THEN the hospital must ensure that the surgeon is certified by the American Board of Surgery or equivalent international organization | 9 | Hospital | Structure |
5 | IF an institution performs pancreatic cancer surgery, THEN the hospital should have interventional radiology services available on site | 9 | Hospital | Structure |
6 | IF an institution performs pancreatic cancer surgery, THEN the hospital should have an intensive care unit staffed by critical care specialists | 8 | Hospital | Structure |
7 | IF a patient undergoes resection, THEN a history and physical with thorough preoperative risk assessment should be performed | 9 | Patient | Process |
8 | IF a patient is diagnosed with pancreatic cancer, THEN a stage-specific treatment plan should be documented | 9 | Patient | Process |
9 | IF a patient is being considered for resection, THEN a triple-phase, multi-slice CT or MRI scan should be obtained | 9 | Patient | Process |
10 | IF a patient undergoes cancer-directed resection, THEN clinical and pathologic stage should be recorded | 9 | Patient | Process |
11 | IF a patient undergoes cancer-directed resection, THEN the tumor histology should be recorded | 9 | Patient | Process |
12 | IF a patient undergoes cancer-directed resection, THEN the tumor size should be recorded | 9 | Patient | Process |
13 | IF a patient undergoes cancer-directed resection, THEN the tumor grade should be recorded | 9 | Patient | Process |
14 | IF a patient undergoes cancer-directed resection, THEN the margin status should be recorded | 9 | Patient | Process |
15 | IF a patient undergoes cancer-directed resection, THEN the number of lymph nodes examined should be recorded | 9 | Patient | Process |
16 | IF a patient undergoes cancer-directed resection, THEN the number of lymph nodes positive should be recorded | 9 | Patient | Process |
17 | IF a patient undergoes resection of a pancreatic head lesion, THEN in the operative note, the surgeon should document complete removal of all pancreatic tissue, lymph nodes, and connective tissue between the edge of the uncinate process and the right lateral wall of the superior mesenteric artery | 8 | Patient | Process |
18 | IF a patient undergoes resection, THEN suspicious adenopathy outside the scope of planned resection should be evaluated by frozen section | 8 | Patient | Process |
19 | IF a patient undergoes adjuvant therapy, THEN the timing relative to resection (before, after, both) should be recorded | 8 | Patient | Process |
20 | IF a patient undergoes resection, THEN the College of American Pathologists checklist or equivalent reporting system should be followed and fully documented | 8.5 | Patient | Process |
21 | IF a patient does not undergo resection, THEN a TNM clinical stage should be recorded | 8 | Patient | Process |
22 | IF a patient has clinical stage I or II disease, THEN the patient should undergo resection or have a valid reason documented for not undergoing resection | 9 | Patient | Appropriateness |
23 | IF a patient undergoes cancer-directed resection, THEN adjuvant chemotherapy with or without radiation should be considered or administered, or a valid reason should be documented for not receiving adjuvant therapy | 9 | Patient | Appropriateness |
24 | IF a patient has clinical stage IV disease, THEN cancer-directed surgery should not be done | 9 | Patient | Appropriateness |
25 | IF a patient does not undergo resection, THEN chemotherapy or chemoradiation should be considered or administered or a valid reason should be documented for not receiving non-surgical therapy | 8 | Patient | Appropriateness |
26 | IF a patient is to receive treatment, THEN the time from diagnosis to surgery or first treatment should be less than 2 months | 8 | Patient | Efficiency |
27 | IF an institution performs pancreatic cancer surgery, THEN the institution should monitor their margin-negative resection rate | 8 | Hospital | Outcome |
28 | IF an institution performs pancreatic cancer surgery, THEN the hospital should monitor their pancreatic cancer resection risk-adjusted perioperative mortality | 8 | Hospital | Outcome |
29 | IF an institution performs pancreatic cancer surgery, THEN the hospital risk-adjusted perioperative mortality should be less than 5% | 8 | Hospital | Outcome |
* Based on strict validity criteria (≥90% of expert panel rankings in the 7–9 range). CT = computed tomography; MRI = magnetic resonance imaging.
Table 3.
Moderate-validity pancreatic cancer quality indicators*
No. | Quality indicator | Median ranking | Level of measurement | Domain
--- | --- | --- | --- | ---
30 | IF an institution treats pancreatic cancer, THEN the institution should participate in clinical trials | 7.5 | Hospital | Structure |
31 | IF an institution performs pancreatic cancer surgery, THEN the institution should perform ≥12 cases per year | 8 | Hospital | Structure |
32 | IF an institution performs pancreatic cancer surgery, THEN the hospital should have endoscopic ultrasonography services available on site | 7 | Hospital | Structure |
33 | IF an institution treats pancreatic cancer, THEN the institution should have radiation therapy and chemotherapy services available within their institution | 8 | Hospital | Structure |
34 | IF an institution performs pancreatic cancer surgery, THEN the hospital should have ERCP services available on site | 8 | Hospital | Structure |
35 | IF a patient is to undergo resection, THEN on the basis of the CT or MRI scan, the surgeon should preoperatively document 1) no metastatic disease, 2) patent superior mesenteric vein and portal vein, and 3) a definable tissue plane between the tumor and regional arterial structures | 9 | Patient | Process |
36 | IF a patient undergoes cancer-directed resection, THEN the margins should be macroscopically clear | 8 | Patient | Process |
37 | IF a patient undergoes resection, THEN in the operative note, the surgeon should document intraoperative findings including the absence of 1) regional arterial involvement, 2) metastatic disease (liver, peritoneal, omental), and 3) distant adenopathy | 8.5 | Patient | Process |
38 | IF a patient undergoes cancer-directed resection, THEN ≥10 regional lymph nodes should be resected and pathologically evaluated† | 8 | Patient | Process |
39 | IF an institution performs pancreatic cancer surgery, THEN the institution should monitor their median estimated blood loss | 8 | Hospital | Process |
40 | IF an institution performs pancreatic cancer surgery, THEN the institution should monitor the median operative time for resections | 8 | Hospital | Efficiency |
41 | IF an institution performs pancreatic cancer surgery, THEN the hospital should monitor their readmission-within-30-days rate | 8 | Hospital | Efficiency |
42 | IF a patient undergoes resection, THEN the operative time should be less than 10 hours‡ | 8 | Patient | Efficiency |
43 | IF an institution performs pancreatic cancer surgery, THEN the hospital should monitor the stage-specific 2-year and 5-year survival rates for their patients who underwent pancreatectomy | 8 | Hospital | Outcome |
* Based on relaxed validity criteria (≥95% of expert panel rankings in the 4–9 range). ERCP = endoscopic retrograde cholangiopancreatography; CT = computed tomography; MRI = magnetic resonance imaging.
† The expert panel extensively discussed indicators with multiple nodal count thresholds. The indicator for ≥12 nodes was also moderately valid but had a lower median score than the indicator for ≥10 nodes; thus, only the ≥10-node indicator was retained. The indicator for ≥15 nodes was ranked as not valid.
‡ The expert panel discussed a variety of time thresholds and initially considered 8 hours a reasonable maximum but settled on 10 hours because many panel members believed that operative times somewhat greater than 8 hours would not be excessive.
Table 4.
Pancreatic cancer quality indicators that were not valid
No. | Quality indicator | Median ranking | Level of measurement | Domain
--- | --- | --- | --- | ---
44 | IF a surgeon performs pancreatic cancer surgery, THEN the surgeon should perform ≥6 pancreatic resections per year | 8 | Hospital | Structure |
45 | IF a patient is to undergo resection, THEN diagnostic laparoscopy should be performed before resection (irrespective of whether laparoscopy was done during or before the intended resection) | 4.5 | Patient | Process |
46 | IF a patient undergoes resection, THEN a feeding jejunostomy should be performed | 5 | Patient | Appropriateness |
47 | IF a patient undergoes resection, THEN epidural anesthesia should be used | 5 | Patient | Appropriateness |
48 | IF a patient undergoes surgery with curative intent but is found to be unresectable, THEN the case must be discussed at the hospital's cancer conference, tumor board, or Morbidity and Mortality conference | 6.5 | Patient | Appropriateness |
49 | IF a patient undergoes resection, THEN the estimated blood loss should be less than 1 liter | 7 | Patient | Process |
50 | IF a hospital performs pancreatic cancer surgery, THEN the hospital should monitor the mean time from diagnosis to surgery or first treatment | 6 | Hospital | Efficiency |
The indicators that were rated as having high validity were diverse and included the diagnostic, preoperative, intraoperative, postoperative, and follow-up phases of care (Table 2). Structural indicators included factors that address case volume requirements, surgeon certification, and the availability of consulting physicians and services. Process indicators addressed the preoperative evaluation, assessment of resectability, treatment planning, and operative and pathology report documentation. The appropriateness indicators rated as having high validity focused on the use of surgical and nonsurgical treatment. Efficiency indicators addressed the time from diagnosis to treatment. Finally, outcome indicators that were rated as having high validity included monitoring the margin-negative resection rate and the perioperative mortality rate. The indicators rated as having moderate validity involved clinical trials participation, case volume thresholds, the availability of endoscopic ultrasound and endoscopic retrograde cholangiopancreatography, the availability of adjuvant therapy services, resection margin status, documentation of the assessment of resectability, estimated blood loss, operative time, the adequacy of nodal evaluation, readmission rates, and long-term survival rates (Table 3).
Seven indicators were rated as not valid by the expert panel. These indicators concerned specific case volume thresholds; the use of diagnostic laparoscopy, feeding jejunostomies, and epidural anesthesia; discussion of unresectable disease at a multidisciplinary conference; estimated blood loss thresholds; and the absolute time from diagnosis to treatment (Table 4).
Adherence With Pancreatic Cancer Quality Indicators
Of the 43 indicators rated as valid, 18 could be assessed by using data in the NCDB (Table 5). The indicators related to medical documentation were combined into a single indicator for which a patient was deemed to have had concordant care if all of those indicators were met. This approach resulted in 10 quality indicators for which we assessed adherence (nine individual indicators and the combined medical documentation measure). We first assessed adherence with indicators at the patient level. Adherence with the valid quality indicators of pancreatic cancer care ranged from 49.6% to 97.2% among the 49 065 patients treated at Commission on Cancer–approved hospitals. Next, hospital-level performance for adherence with each of the quality indicators was examined among 1134 Commission on Cancer–approved hospitals (1134 of the 1450 Commission on Cancer–approved hospitals reported a pancreatic cancer operation to the NCDB). A hospital was classified as being adherent with the quality indicator if the care it provided was concordant with the quality indicator in at least 90% of the patients at that hospital. The proportion of adherent hospitals ranged from 6.8% to 99.9%. Two indicators could only be assessed at the hospital level: number of pancreatectomies performed per year (Figure 2, A) and hospital mortality rate (Figure 2, B). Of the 1134 Commission on Cancer–approved hospitals that reported a pancreatic cancer operation to the NCDB, 748 (66.0%) had a perioperative mortality rate less than 5%, and only 77 (6.8%) performed 12 or more pancreatectomies for cancer per year.
Table 5.
Assessment of adherence with the pancreatic cancer quality indicators at the patient and hospital levels
Quality indicator | Patient-level assessment (%) | Hospital-level assessment* (%)
--- | --- | ---
Number of patients | 49 065 |
Number of hospitals | | 1134
Patient-level measures | |
IF a patient undergoes cancer-directed resection, THEN clinical stage, pathological stage, histology, size, grade, margin status, number of lymph nodes examined and positive, and timing of adjuvant therapy should be documented† | 65.6 | 25.3
IF a patient has clinical stage I or II disease, THEN the patient should undergo resection or have a valid reason documented for not undergoing resection‡ | 52.9 | 6.9
IF a patient undergoes cancer-directed resection, THEN adjuvant chemotherapy with or without radiation should be considered or administered, or a valid reason should be documented for not receiving adjuvant therapy‡ | 67.1 | 37.3
IF a patient has clinical stage IV disease, THEN cancer-directed surgery should not be done | 97.2 | 99.9
IF a patient does not undergo resection, THEN chemotherapy or chemoradiation should be considered or administered or a valid reason should be documented for not receiving non-surgical therapy‡ | 69.7 | 9.5
IF a patient is to receive treatment, THEN the time from diagnosis to surgery or first treatment should be less than 2 months | 94.8 | 80.9
IF a patient undergoes cancer-directed resection, THEN the margins should be macroscopically clear | 91.3 | 50.4
IF a patient undergoes cancer-directed resection, THEN ≥10 regional lymph nodes should be resected and pathologically evaluated | 49.6 | 11.8
Hospital-level measures | |
IF an institution performs pancreatic cancer surgery, THEN the hospital risk-adjusted perioperative mortality should be less than 5%§ | ‖ | 66.0
IF an institution performs pancreatic cancer surgery, THEN the institution should perform ≥12 cases per year | ‖ | 6.8
* Proportion of hospitals that were adherent with the measure in ≥90% of their patients in 2004–2005.
† The measure requiring documentation of tumor size was excluded from the composite measure because it is not currently required to be reported by the Commission on Cancer and the American Joint Committee on Cancer.
‡ Valid reasons for not undergoing treatment include documentation of severe comorbidities, advanced age, or patient refusal.
§ Risk adjustment models included sex, age, race, stage, type of pancreatectomy, and Charlson comorbidity score.
‖ These indicators cannot be assessed for individual patients.
Figure 2.
Examples of hospital performance with respect to hospital-level quality measures. A) Pancreatectomy volume. B) Perioperative mortality rate. Each circle represents one of the 1134 Commission on Cancer–approved hospitals included in this study. The horizontal line represents the threshold for adherence set by the expert panel.
To establish a composite score for hospital performance on these quality indicators, we assigned each hospital 1 point for each of the 10 quality indicators with which it was adherent (≥90% of patients received the recommended care) and then summed the points for each hospital. The summed scores ranged from 1 to 9 (median score = 4, interquartile range = 3–5; maximum possible score = 10; Figure 3).
Figure 3.
Composite measure of hospital-level performance. The composite score comprises the 10 valid component measures.
Discussion
By using a formal, well-described methodology, an expert panel assessed potential quality indicators and identified 43 valid indicators of quality care for pancreatic cancer management. We then assessed performance on these measures at 1134 hospitals using data from a large national cancer registry and found that most hospitals were adherent with fewer than half of the indicators. The intent was to develop indicators of quality of care that hospitals could use for self-assessment to identify quality initiatives for improving pancreatic cancer care.
The RAND/UCLA Appropriateness Methodology has been used to develop quality indicators for many disease processes (21). In previous studies to develop quality indicators in surgery and oncology, 59%–81% of the potential indicators were ranked as valid (23–26). Each of these studies used only one criterion for the assessment of validity (ie, the number of panelists who ranked an indicator within the 7–9 range); however, the definition of validity differed somewhat among the studies. Therefore, we applied two commonly used definitions of validity to establish two validity levels based on the relative stringency of the criteria: high validity and moderate validity. We found that 58% of indicators met the strict validity definition and 86% met the relaxed criteria. We expected that a large proportion of the indicators would be ranked as valid because all were derived from the literature, established guidelines, and interviews with experts in the field.
Previously, the only quality indicators involving the care of patients with pancreatic malignancies were two proposed by the AHRQ (30). These indicators require hospitals to track their pancreatectomy case volume and postoperative mortality rate and are currently under consideration by the National Quality Forum (40). However, neither of these two measures sets an absolute numerical threshold for mortality or case volume. The indicators we used for monitoring surgeon- and hospital-level operative volumes, as well as those for monitoring perioperative mortality, were ranked as having high validity and are similar to the AHRQ pancreas measures. In the preliminary semistructured interviews, the experts uniformly suggested that pancreatectomy case volume is a critical component of ensuring quality pancreatic cancer care. However, definitions of "high volume" vary widely in the literature, ranging from two to 200 cases per year (1). The expert panel debated numerous thresholds ranging from six to 24 cases per year, as well as how case volume should be defined (ie, whether it should include benign and/or malignant lesions), and ultimately proposed specific thresholds of 12 cases per year for hospitals and six cases per year for surgeons, although only the hospital-level threshold was ranked as valid (Tables 3 and 4). Similarly, the 5% postoperative mortality threshold was discussed and decided on by the expert panel.
There is a paucity of high-level evidence (ie, from clinical trials) in pancreas surgery to guide clinical decision making. However, this circumstance is well suited to the RAND/UCLA Appropriateness Methodology, in which the best available literature is combined with expert opinion. Although the National Comprehensive Cancer Network and other organizations publish detailed recommendations for pancreatic cancer diagnosis, treatment, and follow-up, these guidelines serve a very different function from that intended for quality measures (21,34). Guidelines make recommendations based on the best available evidence and suggest that certain disease management issues be discussed with the patient; quality indicators (or quality measures) are held to a much higher standard in that noncompliance with a quality indicator generally constitutes unacceptable or poor care (21). Moreover, quality indicators must be suitable and practical if they are to be used to assess hospitals and providers.
Once a set of quality indicators has been developed, the measures can be used by hospitals to assess the quality of care at their institutions. McGlynn et al. (41) developed 429 indicators of quality of care for 30 acute and chronic conditions as well as preventive care and found that recommended care was delivered to only approximately 55% of patients. However, assessing adherence with quality indicators can require individual hospitals to abstract a considerable amount of data from patient charts. Thus, readily available data, such as those collected by cancer registries including the NCDB, are likely to be used to assess hospital performance because no additional data collection would be needed. For this reason, we used cancer registry data to evaluate adherence with the valid pancreatic cancer quality indicators at the patient and hospital levels. Patient-level adherence with individual indicators ranged from 49.6% to 97.2%, and the proportion of adherent hospitals ranged from 6.8% to 99.9%. Of note, only 77 hospitals met the volume threshold established by the panel. Thus, regionalization of surgical care to high-volume centers is likely an impractical policy initiative, and we suggest that these indicators should be used by all hospitals to attempt to raise the level of care provided to pancreatic cancer patients. In addition, we found that most hospitals were adherent with fewer than half of the 10 component indicators that we used to develop the composite score, and no hospital was adherent with all of the indicators. Thus, there is an opportunity for all hospitals to improve.
Hospital adherence with guidelines and consensus recommendations for pancreatic cancer management may vary for a number of reasons. First, the experience and training of the clinical teams are likely to vary. Experienced teams may be more familiar with the literature and guideline recommendations and, thus, may be more likely to follow those recommendations. High-volume hospitals and cancer centers have been shown to provide care concordant with guidelines more frequently than low-volume centers, including the appropriate use of curative resection (42), the completeness of resection (43,44), adequacy of nodal examination (45,46), the use of adjuvant treatments (47), clinical trials participation (48), and aggressiveness of cancer surveillance activities. Second, patient preferences may affect hospital adherence with quality indicators (49). Finally, the dismal prognosis for patients diagnosed with pancreatic cancer may lead to pessimism on the part of physicians and patients, which may result in nonadherence with guidelines (42).
Mechanisms are also needed to inform individual hospitals about their rates of adherence with quality indicators. Numerous studies have demonstrated the benefits of quality assessment and feedback for a wide range of medical conditions (50–52). For many years, reporting of outcomes has been routine in New York and California for coronary artery bypass graft operations, as well as in the Veterans' Health Administration system for a wide variety of surgeries (53–55). These efforts have been shown to prompt hospitals to initiate specific quality improvement efforts that have produced improvements in outcomes (53–55). However, it is unknown whether adherence with quality indicators will improve outcomes at individual hospitals, and some have suggested that these types of quality measurement and feedback initiatives may be detrimental to patient care and the health-care system (19,20,52,56–58).
For oncological care, a feedback mechanism through the NCDB is currently available for breast and colorectal cancer quality measures (21,59). The NCDB receives data from more than 1450 Commission on Cancer–approved hospitals, and these data can be used to calculate performance rates for individual hospitals for specific quality measures, as demonstrated in this study. The NCDB can confidentially provide individual hospitals with their performance on quality indicators compared with that of all other Commission on Cancer–approved hospitals, as shown in Figure 2; only the individual hospital can identify its own outcomes. However, public reporting initiatives for hospital quality measure compliance and outcomes are becoming a reality in the United States (55). Thus, identification of measures and evaluation of performance by individual hospitals can be good preparation for a future that will likely include a great deal of public reporting of process and outcome measure performance. Importantly, readily available data sources such as cancer registries will likely be used for quality measurement initiatives by government oversight agencies and payers because these existing data sources provide a convenient assessment mechanism for which no additional data need to be collected. Thus, it is important for hospitals to ensure that the data they report to cancer registries are accurate and of high quality.
There are some important caveats regarding the application of the indicators identified in this study. First, 100% compliance is generally not required for all of the quality indicators. No matter how well defined the inclusion and exclusion criteria are for quality indicators, there will be some instances in which the indicators are inappropriate (eg, the requirement to assess 12 or more lymph nodes for colon cancer in an intraoperatively unstable patient in whom resection of more nodes may not be safe). Moreover, patient preferences may also affect quality indicator compliance (eg, patient refusal to undergo chemotherapy for a stage III colon cancer). Second, the development of quality indicators is an iterative process. Even measures that are based on high-level evidence will become outdated or may need to be modified over time as the science advances, and measure development will need to be revisited periodically as new evidence accumulates and practice patterns change. When the ultimate goal of complete compliance with a quality measure is achieved, assessment can be discontinued and new measures can be added (40). Prompt feedback regarding quality measure performance could help decrease the time from publication of seminal studies and subsequent guideline development to the incorporation of measures into clinical practice. Finally, quality measures can be applied to different extents. The National Quality Forum has endorsed measures at two levels: accountability and quality improvement. Accountability measures meet the strictest criteria and generally have a clear impact on outcomes; thus, providers may be judged and incur financial consequences depending on their performance on these indicators of care. The criteria for endorsing quality improvement measures are somewhat less rigorous, and these measures are simply intended to provide feedback to hospitals. Although the two levels of validity used in this study do not directly correspond to the National Quality Forum guidelines for accountability and quality improvement, a similar paradigm could be considered so that the "accountability" and "quality improvement" designations are based on more objective criteria.
This study has some potential limitations. First, although we attempted to include all measures of quality in the indicator development process, it is likely that important indicators were missed. Moreover, another expert panel, or a panel with a different composition of specialties and backgrounds, may have ranked the quality indicators differently or developed a different set of indicators. Second, although the wording of the indicators was discussed at length by the panel, there was not always agreement on the wording, so some indicators may have received slightly lower rankings because of wording disagreements. These differences in wording do not appear to have qualitatively changed the validity category of the indicators. Third, for the assessment of hospital performance, small sample sizes and inadequate risk adjustment (ie, the inability to adjust completely for differences in case mix among hospitals) may decrease the reliability of the comparisons; however, process measure performance is, in principle, insulated from these issues because we assumed that the indicator should be adhered to in nearly all cases. Thus, adherence with the indicator is either met or not met. Furthermore, because there is little evidence regarding a definitive method for threshold selection, we chose, a priori, a 90% threshold for adherence to allow for some variability at hospitals while still requiring all hospitals to achieve a high level of adherence. Fourth, the poor quality indicator adherence rates demonstrated in this study may be partly related to poor documentation in the medical chart. For example, adjuvant therapy may be underreported to cancer registries by individual hospitals because it is frequently administered in the outpatient setting, often many weeks after surgery (60); however, it will be the hospital's responsibility to ensure that accurate and complete data regarding all aspects of care are transmitted to cancer registries because these data will be used by federal agencies and providers for quality assessment (18,48). In addition, some indicators examine issues that are difficult to assess accurately, such as margin status and readmissions, because of variability in practice patterns. For example, low margin-positive resection rates may be indicative of less thorough pathological evaluation of the margins; thus, centers that focus on pancreatic cancer and perform detailed margin assessments may paradoxically have higher margin-positive resection rates despite providing high-quality care. Finally, our assessment of hospital performance was limited to Commission on Cancer–approved hospitals. Thus, the findings may not be generalizable to all hospitals. However, the NCDB receives data from a large number of hospitals that care for more than three-fourths of all pancreatic cancer patients in the United States.
In conclusion, we used a standardized methodology to identify indicators of pancreatic cancer care for which noncompliance is indicative of poor-quality care. Hospitals can assess their performance on these quality indicators and compare it with that of other hospitals, thus identifying potential areas for internal quality improvement initiatives. Because hospitals' resources for quality improvement efforts are limited, a mechanism to efficiently direct quality initiatives would be beneficial, and because the future of health care will certainly involve more measurement of the quality of care, there is a need for rigorously developed quality indicators put forth by clinicians. Moreover, individual quality measures can be used to develop a data-driven composite measure of hospital pancreatic cancer care that assesses care across multiple domains. These quality indicators offer an opportunity to monitor, standardize, and improve the care of patients with pancreatic cancer.
Funding
American College of Surgeons, Clinical Scholars in Residence program (to K.Y.B.); American Cancer Society Illinois Division (to D.J.B.); National Cancer Institute (NCI-60058-NE to C.Y.K.).
Footnotes
The funding sources had no role in the design of the study; the collection, analysis, and interpretation of the data; the decision to submit the manuscript for publication; and the writing of the manuscript.
The American College of Surgeons' Pancreatic Cancer Quality Indicator Development Expert Panel included surgeons (Peter J. Allen, MD, Memorial Sloan-Kettering Cancer Center; Gerard V. Aranha, MD, Stritch School of Medicine, Loyola University Chicago; David J. Bentrem, MD, Feinberg School of Medicine, Northwestern University; Douglas B. Evans, MD, Medical College of Wisconsin; Keith D. Lillemoe, MD, Indiana University School of Medicine; Peter W. T. Pisters, MD, M.D. Anderson Cancer Center; Richard D. Schulick, MD, Johns Hopkins University School of Medicine; Stephen F. Sener, MD, NorthShore University HealthSystem; Mark S. Talamonti, MD, NorthShore University HealthSystem; Selwyn M. Vickers, MD, University of Minnesota; Andrew L. Warshaw, MD, Massachusetts General Hospital, Harvard Medical School; Charles J. Yeo, MD, Jefferson Medical College, Thomas Jefferson University), medical oncologists (David P. Kelsen, MD, Memorial Sloan-Kettering Cancer Center; Vincent J. Picozzi, MD, Virginia Mason Medical Center; Margaret A. Tempero, MD, University of California at San Francisco Medical Center), radiation oncologists (Ross A. Abrams, MD, Rush University Medical Center; Christopher G. Willett, MD, Duke University School of Medicine), a pathologist (N. Volkan Adsay, MD, Emory University School of Medicine), a radiologist (Alec J. Megibow, MD, MPH, New York University Medical Center), and a gastroenterologist (Stuart Sherman, MD, Indiana University School of Medicine). The National Cancer Data Base is supported by the American College of Surgeons, the Commission on Cancer, and the American Cancer Society.
References
1. Bentrem DJ, Brennan MF. Outcomes in oncologic surgery: does volume make a difference? World J Surg. 2005;29(10):1210–1216. doi:10.1007/s00268-005-7991-x.
2. Halm EA, Lee C, Chassin MR. Is volume related to outcome in health care? A systematic review and methodologic critique of the literature. Ann Intern Med. 2002;137(6):511–520. doi:10.7326/0003-4819-137-6-200209170-00012.
3. Lieberman MD, Kilburn H, Lindsey M, Brennan MF. Relation of perioperative deaths to hospital volume among patients undergoing pancreatic resection for malignancy. Ann Surg. 1995;222(5):638–645. doi:10.1097/00000658-199511000-00006.
4. Begg CB, Cramer LD, Hoskins WJ, Brennan MF. Impact of hospital volume on operative mortality for major cancer surgery. JAMA. 1998;280(20):1747–1751. doi:10.1001/jama.280.20.1747.
5. Gordon TA, Bowman HM, Tielsch JM, Bass EB, Burleyson GP, Cameron JL. Statewide regionalization of pancreaticoduodenectomy and its effect on in-hospital mortality. Ann Surg. 1998;228(1):71–78. doi:10.1097/00000658-199807000-00011.
6. Birkmeyer JD, Siewers AE, Finlayson EV, et al. Hospital volume and surgical mortality in the United States. N Engl J Med. 2002;346(15):1128–1137. doi:10.1056/NEJMsa012337.
7. Birkmeyer JD, Dimick JB, Birkmeyer NJ. Measuring the quality of surgical care: structure, process, or outcomes? J Am Coll Surg. 2004;198(4):626–632. doi:10.1016/j.jamcollsurg.2003.11.017.
8. Fong Y, Gonen M, Rubin D, Radzyner M, Brennan MF. Long-term survival is superior after resection for cancer in high-volume centers. Ann Surg. 2005;242(4):540–544; discussion 544–547. doi:10.1097/01.sla.0000184190.20289.4b.
9. Birkmeyer JD, Sun Y, Wong SL, Stukel TA. Hospital volume and late survival after cancer surgery. Ann Surg. 2007;245(5):777–783. doi:10.1097/01.sla.0000252402.33814.dd.
10. Ko CY, Maggard M, Agustin M. Quality in surgery: current issues for the future. World J Surg. 2005;29(10):1204–1209. doi:10.1007/s00268-005-7990-y.
11. Birkmeyer JD, Sun Y, Goldfaden A, Birkmeyer NJ, Stukel TA. Volume and process of care in high-risk cancer surgery. Cancer. 2006;106(11):2476–2481. doi:10.1002/cncr.21888.
12. Surgical Care Improvement Project. Available at: http://www.qualitynet.org/dcs/ContentServer?c=MQParents&pagename=Medqic%2FContent%2FParentShellTemplate&cid=1137346750659&parentName=TopicCat. Accessed March 8, 2008.
13. Centers for Medicare & Medicaid Services. Physician Quality Reporting Program. Available at: http://www.cms.hhs.gov/pqri/. Accessed March 8, 2008.
14. The Joint Commission. Performance Measures. Available at: http://www.jointcommission.org/PerformanceMeasurement/. Accessed March 8, 2008.
15. American Hospital Association. Quality and Patient Safety. Available at: http://www.aha.org/aha_app/issues/Quality-and-Patient-Safety/index.jsp/. Accessed March 8, 2008.
16. Agency for Healthcare Research and Quality. Quality Indicators. Available at: http://www.qualityindicators.ahrq.gov/. Accessed March 8, 2008.
17. National Quality Measures Clearinghouse. Available at: http://www.qualitymeasures.ahrq.gov/. Accessed March 8, 2008.
18. National Quality Forum Endorses Consensus Standards for Diagnosis and Treatment of Breast & Colorectal Cancer. 2007. Available at: http://www.qualityforum.org/pdf/news/prbreast-colon03-12-07.pdf. Accessed December 27, 2007.
19. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694–2702. doi:10.1001/jama.296.22.2694.
20. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293(10):1239–1244. doi:10.1001/jama.293.10.1239.
21. Bilimoria KY, Stewart AK, Winchester DP, Ko CY. The National Cancer Data Base: a powerful initiative to improve cancer care in the United States. Ann Surg Oncol. 2008;15(3):683–690. doi:10.1245/s10434-007-9747-3.
22. Washington DL, Bernstein SJ, Kahan JP, Leape LL, Kamberg CJ, Shekelle PG. Reliability of clinical guideline development using mail-only versus in-person expert panels. Med Care. 2003;41(12):1374–1381. doi:10.1097/01.MLR.0000100583.76137.3E.
23. McGory ML, Shekelle PG, Ko CY. Development of quality indicators for patients undergoing colorectal cancer surgery. J Natl Cancer Inst. 2006;98(22):1623–1633. doi:10.1093/jnci/djj438.
24. Maggard MA, McGory ML, Shekelle PG, Ko CY. Quality indicators in bariatric surgery: improving quality of care. Surg Obes Relat Dis. 2006;2(4):423–429; discussion 429–430. doi:10.1016/j.soard.2006.05.005.
25. McGory ML, Shekelle PG, Rubenstein LZ, Fink A, Ko CY. Developing quality indicators for elderly patients undergoing abdominal operations. J Am Coll Surg. 2005;201(6):870–883. doi:10.1016/j.jamcollsurg.2005.07.009.
26. Spencer BA, Steinberg M, Malin J, Adams J, Litwin MS. Quality-of-care indicators for early-stage prostate cancer. J Clin Oncol. 2003;21(10):1928–1936. doi:10.1200/JCO.2003.05.157.
27. Shekelle PG, Park RE, Kahan JP, Leape LL, Kamberg CJ, Bernstein SJ. Sensitivity and specificity of the RAND/UCLA Appropriateness Method to identify the overuse and underuse of coronary revascularization and hysterectomy. J Clin Epidemiol. 2001;54(10):1004–1010. doi:10.1016/s0895-4356(01)00365-1.
28. Brook RH, McGlynn EA, Shekelle PG. Defining and measuring quality of care: a perspective from US researchers. Int J Qual Health Care. 2000;12(4):281–295. doi:10.1093/intqhc/12.4.281.
29. Shekelle P. The appropriateness method. Med Decis Making. 2004;24(2):228–231. doi:10.1177/0272989X04264212.
30. Shekelle PG. Are appropriateness criteria ready for use in clinical practice? N Engl J Med. 2001;344(9):677–678. doi:10.1056/NEJM200103013440912.
31. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743–1748. doi:10.1001/jama.260.12.1743.
32. National Quality Forum. Composite Measure Evaluation Framework. Available at: www.qualityforum.org/projects/ongoing/CEF/. Accessed August 23, 2008.
- 33.Winchester DP, Stewart AK, Bura C, Jones RS. The National Cancer Data Base: a clinical surveillance and quality improvement tool. J Surg Oncol. 2004;85(1):1–3. doi: 10.1002/jso.10320. [DOI] [PubMed] [Google Scholar]
- 34.Facility Oncology Registry Data Standards. Chicago, IL: Commission on Cancer; 2004. [Google Scholar]
- 35.International Classification of Disease for Oncology. 3rd ed. Geneva, Switzerland: World Health Organization; 2000. [Google Scholar]
- 36.AJCC Cancer Staging Manual. 6th ed. Chicago, IL: Springer; 2002. [Google Scholar]
- 37.Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–383. doi: 10.1016/0021-9681(87)90171-8. [DOI] [PubMed] [Google Scholar]
- 38.Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45(6):613–619. doi: 10.1016/0895-4356(92)90133-8. [DOI] [PubMed] [Google Scholar]
- 39.Iezzoni L. Risk Adjustment for Measuring Healthcare Outcomes. Chicago, IL: Health Administration Press; 2003. [Google Scholar]
- 40.Lee TH. Eulogy for a quality measure. N Engl J Med. 2007;357(12):1175–1177. doi: 10.1056/NEJMp078102. [DOI] [PubMed] [Google Scholar]
- 41.McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635–2645. doi: 10.1056/NEJMsa022615. [DOI] [PubMed] [Google Scholar]
- 42.Bilimoria KY, Bentrem DJ, Ko CY, Stewart AK, Winchester DP, Talamonti MS. National failure to operate on early stage pancreatic cancer. Ann Surg. 2007;246(2):173–180. doi: 10.1097/SLA.0b013e3180691579. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Birbeck KF, Macklin CP, Tiffin NJ, et al. Rates of circumferential resection margin involvement vary between surgeons and predict outcomes in rectal cancer surgery. Ann Surg. 2002;235(4):449–457. doi: 10.1097/00000658-200204000-00001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Bilimoria KY, Talamonti MS, Sener SF, et al. Effect of hospital volume on margin status after pancreaticoduodenectomy for cancer. J Am Coll Surg. 2008;207(4):510–519. doi: 10.1016/j.jamcollsurg.2008.04.033. [DOI] [PubMed] [Google Scholar]
- 45.Miller EA, Woosley J, Martin CF, Sandler RS. Hospital-to-hospital variation in lymph node detection after colorectal resection. Cancer. 2004;101(5):1065–1071. doi: 10.1002/cncr.20478. [DOI] [PubMed] [Google Scholar]
- 46.Bilimoria KY, Talamonti MS, Wayne JD, et al. Effect of hospital type and volume on lymph node evaluation for gastric and pancreatic cancer. Arch Surg. 2008;143(7):671–678. doi: 10.1001/archsurg.143.7.671. discussion 678. [DOI] [PubMed] [Google Scholar]
- 47.Bilimoria KY, Bentrem DJ, Ko CY, et al. Multimodality therapy for pancreatic cancer in the U.S.: utilization, outcomes, and the effect of hospital volume. Cancer. 2007;110(6):1227–1234. doi: 10.1002/cncr.22916. [DOI] [PubMed] [Google Scholar]
- 48.Wennberg DE, Lucas FL, Birkmeyer JD, Bredenberg CE, Fisher ES. Variation in carotid endarterectomy mortality in the Medicare population: trial hospitals, volume, and patient characteristics. JAMA. 1998;279(16):1278–1281. doi: 10.1001/jama.279.16.1278. [DOI] [PubMed] [Google Scholar]
- 49.Walter LC, Davidowitz NP, Heineken PA, Covinsky KE. Pitfalls of converting practice guidelines into quality measures: lessons learned from a VA performance measure. JAMA. 2004;291(20):2466–2470. doi: 10.1001/jama.291.20.2466. [DOI] [PubMed] [Google Scholar]
- 50.Ferrer R, Artigas A, Levy MM, et al. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294–2303. doi: 10.1001/jama.299.19.2294. [DOI] [PubMed] [Google Scholar]
- 51.Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111–123. doi: 10.7326/0003-4819-148-2-200801150-00006. [DOI] [PubMed] [Google Scholar]
- 52.Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353(3):255–264. doi: 10.1056/NEJMsa043778. [DOI] [PubMed] [Google Scholar]
- 53.Khuri SF, Daley J, Henderson W, et al. The department of veterans affairs’ NSQIP: the first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care. National VA Surgical Quality Improvement Program. Ann Surg. 1998;228(4):491–507. doi: 10.1097/00000658-199810000-00006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Chassin MR. Achieving and sustaining improved quality: lessons from New York State and cardiac surgery. Health Aff (Millwood) 2002;21(4):40–51. doi: 10.1377/hlthaff.21.4.40. [DOI] [PubMed] [Google Scholar]
- 55.Carey JS, Danielsen B, Junod FL, Rossiter SJ, Stabile BE. The California Cardiac Surgery and Intervention Project: evolution of a public reporting program. Am Surg. 2006;72(10):978–983. [PubMed] [Google Scholar]
- 56.Apolito RA, Greenberg MA, Menegus MA, et al. Impact of the New York State Cardiac Surgery and Percutaneous Coronary Intervention Reporting System on the management of patients with acute myocardial infarction complicated by cardiogenic shock. Am Heart J. 2008;155(2):267–273. doi: 10.1016/j.ahj.2007.10.013. [DOI] [PubMed] [Google Scholar]
- 57.Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med. 1999;341(15):1147–1150. doi: 10.1056/NEJM199910073411511. [DOI] [PubMed] [Google Scholar]
- 58.Burack JH, Impellizzeri P, Homel P, Cunningham JN., Jr Public reporting of surgical mortality: a survey of New York State cardiothoracic surgeons. Ann Thorac Surg. 1999;68(4):1195–1200. doi: 10.1016/s0003-4975(99)00907-8. discussion 1201–1202. [DOI] [PubMed] [Google Scholar]
- 59.Stewart A, Gay E, Patel-Parekh L, Winchester D, Edge S, Ko C. American Society of Clinical Oncology Annual Meeting. Chicago, IL: 2007. Provider feedback improves reporting on quality measures: national profile reports for adjuvant chemotherapy for stage III colon cancer. J Clin Oncol, ASCO Annual Meeting Proceedings Part I. Vol 25, No. 18S (June 20 Supplement):6572. [Google Scholar]
- 60.Cress RD, Zaslavsky AM, West DW, Wolf RE, Felter MC, Ayanian JZ. Completeness of information on adjuvant therapies for colorectal cancer in population-based cancer registries. Med Care. 2003;41(9):1006–1012. doi: 10.1097/01.MLR.0000083740.12949.88. [DOI] [PubMed] [Google Scholar]