Journal of General Internal Medicine. 2009 May 5;24(7):833–840. doi: 10.1007/s11606-009-0997-6

Relationship Between Organizational Factors and Performance Among Pay-for-Performance Hospitals

Ernest R Vina 1,2, David C Rhew 1,2,3, Scott R Weingarten 1,2,3, Jason B Weingarten 1, John T Chang 1,2,3
PMCID: PMC2695536  PMID: 19415390

ABSTRACT

BACKGROUND

The Centers for Medicare & Medicaid Services (CMS)/Premier Hospital Quality Incentive Demonstration (HQID) project aims to improve clinical performance through a pay-for-performance program. We conducted this study to identify the key organizational factors associated with higher performance.

METHODS

An investigator-blinded, structured telephone survey of eligible hospitals’ (N = 92) quality improvement (QI) leaders was conducted among HQID hospitals in the top 2 or bottom 2 deciles submitting performance measure data from October 2004 to September 2005. The survey covered topics such as QI interventions, data feedback, physician leadership, support for QI efforts, and organizational culture.

RESULTS

More top performing hospitals used clinical pathways for the treatment of AMI (49% vs. 15%, p < 0.01), HF (44% vs. 18%, p < 0.01), PN (38% vs. 13%, p < 0.01) and THR/TKR (56% vs. 23%, p < 0.01); organized into multidisciplinary teams to manage patients with AMI (93% vs. 77%, p < 0.05) and HF (93% vs. 69%, p < 0.01); used order sets for the treatment of THR/TKR (91% vs. 64%, p < 0.01); and implemented computerized physician order entry in the hospital (24.4% vs. 7.9%, p < 0.05). Finally, more top performers reported having adequate human resources for QI projects (p < 0.01); support of the nursing staff to increase adherence to quality indicators (p < 0.01); and an organizational culture that supported coordination of care (p < 0.01), pace of change (p < 0.01), willingness to try new projects (p < 0.01), and a focus on identifying system errors rather than blaming individuals (p < 0.05).

CONCLUSIONS

Organizational structure, support, and culture are associated with high performance among hospitals participating in a pay-for-performance demonstration project. Multiple organizational factors remain important in optimizing clinical care.

Electronic supplementary material

The online version of this article (doi:10.1007/s11606-009-0997-6) contains supplementary material, which is available to authorized users.

KEY WORDS: organizational factors, organizational culture, hospital performance measurement

INTRODUCTION

In recent years, the nationwide debate on how hospitals can improve quality of care has grown in intensity. Subsequently, the emerging concept of rewarding healthcare providers for meeting performance measures for quality, rather than for the volume of services provided, also known as pay-for-performance (P4P), has been widely discussed. In 2003, the Centers for Medicare & Medicaid Services (CMS) and Premier Inc. launched the Hospital Quality Incentive Demonstration (HQID) project to determine whether financial incentives improved performance. Hospitals were identified as high-performing or low-performing through the use of a composite quality score, a mixture of process and outcome measures, and decile placements associated with five clinical conditions: acute myocardial infarction (AMI), coronary artery bypass graft (CABG) surgery, heart failure (HF), community-acquired pneumonia (PN), and hip and knee replacement (THR/TKR).1 Participating hospitals in the top decile for a given clinical focus area were awarded an additional 2% bonus, and those in the second decile a 1% bonus, on their Medicare payments for patients in that clinical area during each of the first three years of the project.
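As a rough illustration of this payment rule, the sketch below computes the bonus for one clinical focus area. It is only a sketch of the rule as stated above; the function name and dollar figures are hypothetical, and the actual HQID payment calculations are specified in the CMS methodology documents.1

```python
# Hypothetical sketch of the HQID bonus rule described above: a 2% bonus on
# Medicare payments for a clinical area for top-decile hospitals, 1% for
# second-decile hospitals, and no bonus otherwise.

def hqid_bonus(decile: int, medicare_payments: float) -> float:
    """Return the bonus payment for one clinical focus area.

    decile: 1 = top decile, 2 = second decile, 3-10 = no bonus.
    """
    if decile == 1:
        return 0.02 * medicare_payments
    if decile == 2:
        return 0.01 * medicare_payments
    return 0.0

# Example: a second-decile hospital with $5,000,000 in Medicare payments
# for heart failure patients would receive a $50,000 bonus.
print(hqid_bonus(2, 5_000_000))  # 50000.0
```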

Recent literature on the benefits of pay-for-performance has been mixed. Internal studies by Premier have demonstrated that the composite quality scores increased for all clinical conditions studied during the first two years of the program.2,3 A recent study examined the benefits of pay-for-performance in improving hospital adherence to ten quality measures shared by the Hospital Quality Alliance and the HQID, for three clinical conditions (acute myocardial infarction, heart failure, and pneumonia) and found modest improvement.4 Comparing pay-for-performance hospitals with public reporting-only hospitals, the pay-for-performance hospitals demonstrated improvement (2.6%–4.1%) in most individual measures of quality and in all three composite measures.4 A subsequent study found that pay-for-performance was not associated with a significant improvement in quality of care or outcomes for AMI alone among hospitals participating in a voluntary quality improvement (QI) initiative.5 However, these studies did not focus on the factors associated with improved performance.

Earlier literature has demonstrated that several factors are associated with hospital performance and adherence to quality indicators, including hospital characteristics, such as being an academic center6; QI interventions, including the use of order sets7–11 and computerized physician order entry (CPOE)12–15; data feedback10,16–19; and physician leadership7,19–23. Organizational support24 and culture25–27 have also led to overall quality improvements.

There has been an increasing focus on the use of P4P programs by both public and private sectors to stimulate improvements in medical quality and cost efficiency in the United States.28

We conducted this study to identify the key factors associated with higher performance among hospitals participating in the HQID pay-for-performance program.

METHODS

Study Design and Participants

We conducted an investigator-blinded cohort study of hospitals that participated in the CMS/Premier HQID project. Premier identified hospitals as high-performing or low-performing through the use of a composite quality score and decile placements. The overall composite scores were based on aggregation of process and outcome quality measures across five disease conditions or procedures: AMI, HF, CABG surgery, PN, and THR/TKR. Of note, decile placements used for payments to hospitals participating in the HQID project were condition-specific, not aggregate-based.

Premier computed the Overall Composite Quality Score (O-CQS) for year 2 of the demonstration project (October 1, 2004 through September 30, 2005; data released January 2007) across all five clinical conditions utilizing the same methodology used in calculating the individual Composite Quality Scores for the HQID project.1 Briefly, the O-CQS is calculated by combining a composite process score (CPS) and composite outcome score (COS) for each of the five clinical areas in which a hospital has sufficient volume (at least 30 cases/year). The CPS is the sum of the numerator values divided by the sum of the denominator values for each of the evidence-based process measures. To compute the COS, each hospital’s actual and risk-adjusted rates are calculated for each outcome measure. The observed and risk-adjusted mortality rates are subtracted from 100% and then transposed to create an index. The CPS and COS are weighted to account for their relative contribution, so that each measure is weighted equally. If a hospital does not have any patients eligible for an outcome measure, the hospital’s weights are modified appropriately. The composite quality scoring, reporting, and risk adjustment methodologies are described in detail by CMS and Premier elsewhere.1 For illustrative purposes, a sample O-CQS computation is presented in Appendix 1 (available online).
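To make the scoring arithmetic concrete, here is a simplified sketch under the equal-measure-weighting reading of the description above. It omits the risk adjustment and the special-case reweighting handled by the full CMS/Premier methodology,1 and all names and numbers are illustrative.

```python
# Simplified, illustrative sketch of a single clinical area's composite
# quality score (CQS). Not the official CMS/Premier algorithm.

def composite_process_score(numerators, denominators):
    """CPS: summed numerator values divided by summed denominator values
    across the evidence-based process measures for one clinical area."""
    return sum(numerators) / sum(denominators)

def survival_index(mortality_rate):
    """Outcome measures are indexed as survival (100% minus mortality)."""
    return 1.0 - mortality_rate

# Three process measures and one outcome measure for one clinical area.
cps = composite_process_score(numerators=[45, 50, 38], denominators=[50, 52, 40])
cos = survival_index(mortality_rate=0.08)

# Weight the CPS and COS so that each individual measure counts equally:
# with 3 process measures and 1 outcome measure, the CPS carries 3/4 of
# the weight and the COS carries 1/4.
n_process, n_outcome = 3, 1
cqs = (n_process * cps + n_outcome * cos) / (n_process + n_outcome)
print(round(cqs, 3))  # ~0.932
```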

Next, Premier identified those hospitals reporting data for at least three conditions or procedures and placed these hospitals into deciles based on the O-CQS. Hospitals in the top 2 deciles were categorized as “top performing” and those in the bottom 2 deciles were categorized as “bottom performing.” Only these hospitals were eligible for the study. Premier provided a simple list of names and contact information of hospitals eligible for the study, without any indication of the hospitals’ performance status, to the investigators at Zynx Health. Although the Zynx investigators did not initially have access to, nor did they search for, the performance scores of these hospitals, the performance of hospitals in the top half in each clinical focus area was publicly available on the CMS and Premier web sites.
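A minimal sketch of this categorization step, assuming hospital-level O-CQS values are available, is shown below; the data are simulated and pandas’ qcut performs the decile split.

```python
# Illustrative decile categorization: rank hospitals by O-CQS, split into
# deciles, and flag the top 2 and bottom 2 deciles. Data are simulated.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
hospitals = pd.DataFrame({
    "hospital_id": range(260),
    "o_cqs": rng.uniform(0.6, 1.0, size=260),
})

# qcut assigns decile labels 1 (lowest O-CQS) through 10 (highest).
hospitals["decile"] = pd.qcut(hospitals["o_cqs"], 10, labels=range(1, 11)).astype(int)

hospitals["category"] = np.select(
    [hospitals["decile"] >= 9, hospitals["decile"] <= 2],
    ["top performing", "bottom performing"],
    default="not eligible",
)
print(hospitals["category"].value_counts())
```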

Data Collection

Structured Telephone Interview

The structured telephone interview was developed using the domains identified by Bradley et al.7 for eliciting data on quality improvement efforts. Some of the survey questions were modified with the goal of determining the roles of specific factors within these domains for improving adherence to quality measures for various disease conditions and procedures. Respondents were asked to focus on QI activities during the past year. The interview questions can be found in Appendix 2 (available online).

The survey addressed each of the domains identified by Bradley et al.7, including QI interventions, data feedback, physician leadership, organizational support for QI, and organizational culture. Quality improvement interventions included use of order sets, clinical pathways, educational programs for physicians and nurses, multidisciplinary teams, reminder forms or stickers used by QI teams, and computer support systems. Data feedback techniques encompassed systems used for collecting and reporting data on hospital-specific and physician-specific compliance to quality measures. They addressed the frequency that reports were generated, the currency of the data in the reports, and what was done with the information in the reports. Physician leadership focused on identifying “physician champions,” defined as those with a specific goal and task of improving the quality of care, and determining the roles of the chief medical officer (CMO). Organizational support for QI was assessed by determining agreement to statements indicating support from administration, nursing, and physicians; participation level of physicians; and availability of resources. Organizational culture was also assessed and reflected a broad measure of a hospital organization’s environment. Agreement was measured using a five-point Likert scale, where 1 represented “strongly agree”, 2 “agree”, 3 “neutral”, 4 “disagree”, and 5 “strongly disagree.”
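For illustration only, the sketch below codes responses on this scale and computes a mean agreement score of the kind reported in Table 5; the mapping mirrors the scale just described, while the example responses are invented.

```python
# Numeric coding of the 5-point Likert agreement scale described above
# (lower score = stronger agreement). Example responses are hypothetical.

LIKERT = {
    "strongly agree": 1,
    "agree": 2,
    "neutral": 3,
    "disagree": 4,
    "strongly disagree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree"]
scores = [LIKERT[r] for r in responses]
print(sum(scores) / len(scores))  # 2.0, i.e., "agree" on average
```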

Telephone interviews were conducted by Zynx Health investigators using a pre-defined script and protocol. The goal was to interview each hospital’s director of quality or performance improvement. If an interview with the director of quality or performance improvement was not possible, then we tried to interview any of the following hospital representatives (in preferential order, following a protocol): associate director of quality or performance improvement, member of either the quality or performance improvement department, or member of quality resource management department. Interviews were conducted between July and October 2007.

Performance and Administrative Data

The survey data were sent to Premier for linkage to administrative data. Premier identified the hospital’s performance category (top performing versus bottom performing) and also provided hospital characteristics data, including: mean number of staffed beds, geographic region, urban (population ≥1 million) or rural location (population ≤100,000), teaching status (defined by membership in the Association of American Medical Colleges Council of Teaching Hospitals), profit status, and payer mix. Premier then returned a de-identified, linked data set to the Zynx investigators for analyses.

Premier also provided summary data regarding the mean composite quality scores between top and bottom performing hospitals for the O-CQS and each specific condition of interest.

Statistical Analysis

The percentage of hospitals utilizing each quality improvement intervention was calculated as the number of hospitals reporting use of that intervention divided by the total number of hospitals interviewed. Differences in hospital characteristics between participating and nonparticipating hospitals were examined. Top and bottom performing hospitals were compared, examining the relationships between performance and the multiple quality improvement efforts of interest. Differences in means were examined using a t-test. Categorical variables were examined using a χ2 test or a Fisher’s exact test, when appropriate. Statistical analyses were performed using Stata statistical software, version 10.1 (StataCorp, College Station, Texas) and Microsoft Office Excel 2003 (Microsoft Corp., Redmond, Washington).
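The sketch below illustrates these comparisons using scipy.stats. The 2×2 table mirrors the CABG-program counts reported in Table 1; the Likert scores are invented for illustration, and this is not the authors’ analysis code.

```python
# Illustrative versions of the paper's statistical comparisons.

import numpy as np
from scipy import stats

# Chi-square test for a categorical hospital characteristic: rows are
# top/bottom performing, columns are CABG program yes/no (counts from Table 1).
table = np.array([[31, 14],
                  [16, 23]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test is preferred when expected cell counts are small.
odds_ratio, p_fisher = stats.fisher_exact(table)

# t-test for a continuous measure, e.g., mean Likert agreement scores
# (values here are made up).
top = np.array([2, 2, 1, 3, 2, 2, 1, 2])
bottom = np.array([3, 2, 3, 3, 2, 4, 3, 2])
t_stat, p_ttest = stats.ttest_ind(top, bottom)

print(f"chi2 p={p_chi2:.3f}, Fisher p={p_fisher:.3f}, t-test p={p_ttest:.3f}")
```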

RESULTS

Hospital Characteristics

A total of 92 hospitals were eligible for the study and 84 (91%) completed the interview. Survey respondents included 66 directors of quality or performance improvement, one associate director of quality improvement, 14 members of the quality improvement team, and three quality resource managers. Of the nonparticipating hospitals, five refused participation or missed multiple appointments and three were non-responsive to multiple phone calls or e-mails.

Of 84 hospitals surveyed, 45 were top performing and 39 were bottom performing. Of the eight non-participants, one was top performing and seven were bottom performing (Fisher’s exact, p = 0.06).

There were no significant differences in hospital characteristics between top and bottom performing hospitals participating in the study except that a greater percentage of top performing hospitals had a CABG surgery program and that bottom performing hospitals had a slightly higher percentage of Medicaid patients (Table 1).

Table 1.

Hospital Characteristics

Characteristics Top Performing (n = 45) Bottom Performing (n = 39)
Mean number of beds 334 285
Geographic region
Pacific 2 14
Mountain 1 0
Midwest 24 6
East 18 19
Demographics
Urban (≥1 million) 40 31
Rural (≤100,000) 5 8
Teaching status
Academic 8 7
Non-academic 37 32
Profit status
Not-for-profit 45 39
Payer mix
Private 32% 30%
Medicare 40% 38%
Medicaid* 17% 22%
Self-pay 4% 6%
Charity 0.5% 0.2%
Other/Unknown 6% 3%
Perform coronary artery bypass graft surgery 31 16

t-test or χ2 (or Fisher exact) testing between bottom and top decile hospitals for each hospital characteristic category was not statistically significant (p > 0.05), except for *Medicaid payer percentage (p = 0.02) and perform coronary artery bypass graft surgery (p = 0.01)

Mean Performance Scores

Top performing hospitals’ mean overall (O-CQS) and condition-specific performance scores were significantly higher than bottom performing hospitals in all categories (Table 2).

Table 2.

Mean Performance Scores (Overall and Condition-specific) between Top and Bottom Performing Hospitals Participating in the Study

  O-CQS mean score (SD)* AMI CQS mean score (SD)* HF CQS mean score (SD)* PN CQS mean score (SD)* CABG CQS mean score (SD)†║ THR/TKR CQS mean score (SD)‡
Top performing (n = 45) 0.94 (0.01) 0.97 (0.02) 0.92 (0.05) 0.89 (0.04) 0.62 (0.47) 0.88 (0.28)
Bottom performing (n = 39) 0.79 (0.04) 0.83 (0.20) 0.65 (0.11) 0.77 (0.05) 0.33 (0.42) 0.71 (0.34)

t-test between top and bottom performing: *p < 0.0001; †p < 0.01; ‡p < 0.05

║Only respondents of hospitals that performed CABG: top performers (n = 31), bottom performers (n = 16).

Overall Composite Quality Score (O-CQS), Composite Quality Score (CQS), acute myocardial infarction (AMI), heart failure (HF), and pneumonia (PN), coronary artery bypass graft (CABG), and total hip replacement/knee replacement (THR/TKR)

Quality Improvement Interventions

The percentage of hospitals reporting utilization of specific QI interventions stratified by top performing vs. bottom performing for each condition of interest is presented in Table 3. More top performers (91.1%) than bottom performers (64.1%) used order sets for the treatment of THR/TKR (p < 0.01). More top performers than bottom performers used clinical pathways for the treatment of AMI (48.9% vs. 15.4%, p < 0.01), HF (44.4% vs. 17.9%, p < 0.01), PN (37.8% vs. 12.8%, p < 0.01), and THR/TKR (55.6% vs. 23.1%, p < 0.01). In addition, more top performers had a multidisciplinary team with the goal of improving care for AMI (93.3% vs. 76.9%, p < 0.05) and HF (93.3% vs. 69.2%, p < 0.01). Finally, more top performing hospitals used computerized physician order entry systems (24.4% vs. 7.9%, p < 0.05) (Appendix 3, available online).

Table 3.

Percentage Reporting Utilization of Quality Improvement Interventions by Condition

  AMI HF PN THR/TKR CABG║
Top‡ Bottom§ Top‡ Bottom§ Top‡ Bottom§ Top‡ Bottom§ Top‡ Bottom§
Order sets 93.3 89.7 88.9 76.9 93.3 84.6 91.1† 64.1† 93.5 87.5
Clinical pathways 48.9† 15.4† 44.4† 17.9† 37.8† 12.8† 55.6† 23.1† 45.2 31.3
Educational sessions, physicians 77.8 71.8 75.6 71.8 71.1 69.2 62.2 53.8 67.7 68.8
Educational sessions, nurses 86.7 76.9 86.7 74.4 82.2 76.9 71.1 79.3 74.2 68.8
Multidisciplinary team 93.3* 76.9* 93.3† 69.2† 86.7 74.4 84.4 66.7 96.8 81.3

χ2 (or Fisher exact) testing between top and bottom performing hospitals for each quality improvement intervention in each clinical condition was not statistically significant (p > 0.05), unless marked otherwise: *p < 0.05; †p < 0.01

‡Top performing: n = 45 for AMI, HF, PN, THR/TKR & n = 31 for CABG

§Bottom performing: n = 39 for AMI, HF, PN, THR/TKR & n = 16 for CABG

║Asked only of respondents of hospitals that performed CABG

Offering condition-specific educational sessions for physicians and nurses did not significantly differ between top and bottom performing hospitals. The reported use of order sets for AMI, HF, PN, and CABG was relatively high in both top and bottom performing hospitals and did not significantly differ (Table 3).

Data Feedback

Top and bottom performers generated hospital-specific quality performance reports at similar frequencies (Table 4). Neither discussion in general forums nor public display of these hospital data was significantly associated with performance.

Table 4.

Collection and Reporting of Hospital- and Physician-Specific Compliance Reports

  Top Performing (n = 45), mean months Bottom Performing (n = 39), mean months p value*
Collection of hospital-specific reports 1.47 1.44 0.89
Reporting physician-specific reports to Department Chair or Vice President for Medical Affairs 3.40 6.13 0.11
Reporting physician-specific reports to individual physicians on their own performance 5.08 5.00 0.95
Reporting physician-specific reports to individual physicians regarding de-identified peer performance 4.83 5.24 0.76

*t-test results between top and bottom performing hospitals on the mean frequency of collection or reporting of each report and the mean time period that each report covers, respectively

Physician Leadership

The percentage of hospital chief medical officers who had the general role of improving quality was not different between top and bottom performers (86.7% vs. 92.3%, p = 0.25). However, among the hospital CMOs who had this role, a greater percentage in the top performing hospitals (82.1% vs. 69.4%, p < 0.05) recruited physician champions who might focus on improving adherence to quality indicators. Nevertheless, there were no statistically significant differences between the percentages of top and bottom performers who were able to identify one or more physician champions for each clinical condition (AMI 84.4% vs. 82.1%, p = 0.71; HF 80.0% vs. 69.2%, p = 0.08; PN 82.2% vs. 82.1%, p = 1.00; THR/TKR 73.3% vs. 61.5%, p = 0.10; CABG 80.6% vs. 81.3%, p = 0.86; respectively).

Organizational Support

The mean levels of agreement (5-point Likert scale) to statements on factors related to organizational support for QI were generally similar between top and bottom performing hospitals (Table 5). However, agreement on statements regarding nursing staff’s support for quality indicators (mean = 1.78 vs. 2.28, p < 0.01) and having adequate human resources for projects to increase quality indicator adherence (mean = 2.18 vs. 2.82, p < 0.01) was higher among top performing hospitals.

Table 5.

Mean Levels of Agreement to Statements on Factors Related to Organizational Support and Culture for Quality Improvement (1 = Strongly agree; 2 = Agree; 3 = Neutral; 4 = Disagree; 5 = Strongly Disagree)

  Top Performing (n = 45) Bottom Performing (n = 39)
Organizational Support
Medical staff physicians strongly support adherence to quality indicators 2.24 2.36
Hospital administration strongly supports adherence to quality indicators 1.16 1.26
Nursing staff strongly supports adherence to quality indicators 1.78† 2.28†
Medical staff physicians widely participate in quality improvement projects 2.44 2.64
Physicians are able to gain consensus on building order sets quickly 2.43 2.97
There are adequate human resources for projects to increase adherence to quality indicators 2.18† 2.82†
Organizational Culture
It is difficult to coordinate quality care across different departments 3.53† 2.87†
Decision-making at the hospital is participatory rather than “top-down” 1.87 1.95
Change takes place very slowly at the organization 3.49† 2.23†
Hospital has tried new activities or policies but not until others have found them to be successful 4.13 3.18
Hospital is likely to be the first to try new activities or policies related to quality improvement 1.84† 3.10†
Senior administrators see eye-to-eye with the medical staff on most matters of hospital policy 2.43 2.54
Hospital tends to assign blame to individuals rather than looking for system errors when something goes wrong 4.51* 4.05*

t-test between top and bottom performing hospitals for each statement was not statistically significant (p > 0.05), unless marked otherwise: *p < 0.05; †p < 0.01

Organizational Culture

In contrast, factors related to organizational culture differed significantly between top and bottom performers (Table 5). Top performing hospitals leaned towards disagreeing with the statement, “Coordinating quality care across different departments is difficult to do at this hospital” (mean = 3.53 vs. 2.87, 5-point Likert scale, p < 0.01). Top performers tended to be neutral (mean = 3.49) and bottom performers tended to agree (mean = 2.23) that change takes place very slowly at their organizations (p < 0.01). Top performers were likely to agree (mean = 1.84), whereas bottom performers tended to be neutral (mean = 3.10), concerning their respective hospitals’ propensity to try new activities or policies (p < 0.01). Disagreement with the statement that their institution tended to assign blame to individuals when something goes wrong was stronger among the top performing (mean = 4.51) than the bottom performing (mean = 4.05) group (p < 0.05).

DISCUSSION

Our study is the first to identify significant organizational factors for improved performance across multiple clinical areas (including AMI, HF, CABG, PN, and THR/TKR) among hospitals participating in a pay-for-performance program. Many aspects of organizational culture and organizational support, quality improvement interventions, including clinical pathways, and physician leadership, such as taking an active role in recruiting condition-specific physician champions, were some of the significant factors distinguishing top performing hospitals in this select group of participants in a pay-for-performance program. Educational sessions and data feedback reports did not distinguish top from bottom performers.

Organizational support and culture were associated with being a higher performer. A culture and environment that foster and cultivate quality among all members of the healthcare team are important features for success. Keys to this include nursing staff support for adherence to quality indicators; a willingness to try new QI projects and improve coordination of care; and a quality improvement environment that fosters discussion rather than blame.

However, our study finds that it takes more than culture to improve performance. It also takes resources, akin to what Donabedian29 describes as the domain of structure in improving healthcare quality. Quality improvement projects, computerized physician order entry, clinical pathways, order sets and physician champions are associated with improved performance, and all require resources to create and maintain.

The significance of organizational support in achieving high quality is consistent with the findings of Weiner et al.24 in which hospitals with a higher percentage of hospital staff and senior managers participating in QI teams exhibited higher scores on quality indicators. Other research has also found that organizational culture is strongly related to performance.2527 Similarly, Bradley and colleagues’ qualitative30 and quantitative7 studies highlight the importance of organizational environment, including administrative support and physician leadership, in improving beta-blocker use in AMI patients.

Quality improvement interventions were also related to improved performance. The QI interventions that were significantly associated with higher performance in at least 1 clinical condition included the use of clinical pathways, multidisciplinary teams, order sets, and CPOE systems. A greater percentage of top performing hospitals used clinical pathways for the treatment of four out of five clinical conditions of interest. Other studies found similar associations for specific conditions. Critical pathways for AMI reduced door-to-drug time with thrombolytic therapy,31,32 decreased door-to-balloon times for angioplasty,33,34 and increased beta-blocker usage.8,35 Pathway use was associated with increased use of angiotensin-converting enzyme inhibitors in HF patients,36 and improved oxygen assessment and timely antibiotic administration in pneumonia patients.37 Among patients who underwent hip or knee arthroplasty, pathways were known to decrease the use of inappropriate perioperative antibiotics38 and to lower readmission rate.39

A greater percentage of top performing hospitals also had a multidisciplinary team with the goal of improving care for AMI and HF. This finding is consistent with findings from other studies, which showed that a multidisciplinary program improves adherence to evidence-based measures.4042

More top performing hospitals used order sets for the treatment of THR/TKR. A difference in order set use was not observed for patients with AMI, HF, PN, or CABG. However, this may be explained by the fact that the baseline rates of order set use for the latter four areas were already high (83.3% to 91.7%), thereby establishing a “ceiling effect.” Indeed, the use of order sets was previously shown to improve rates of aspirin administration8 and beta-blocker prescription7 after AMI diagnosis, and angiotensin-converting enzyme inhibitor use9 and other Joint Commission HF measures10 among HF patients. Another study demonstrated that the use of order sets with “intensive clinical case management” increased the proportion of pneumonia patients who received influenza and pneumococcal vaccinations and smoking cessation counseling.11

A greater percentage of top performing hospitals used computerized physician order entry, although the adoption rates in both groups are still low. CPOE systems in general can be important QI tools.43,44 Evidence-based clinical decision support, including evidence-based reminders or standing orders, in CPOE systems have improved ordering rates for aspirin at discharge for patients with coronary artery disease and for pneumococcal and influenza vaccination in eligible patients.1214 Also, implementation of a CPOE discharge tool improved smoking cessation counseling and discharge instruction rates for AMI and HF patients.15

Finally, physician leadership was associated with higher performance. A greater percentage of chief medical officers in top performing hospitals actively recruited physician champions to improve adherence to quality indicators. Having adequate resources for QI and having the willingness to try new QI activities also typically require support from hospital leadership. This is consistent with other studies emphasizing the importance of physician leadership in implementing strategies for improving quality of care for AMI,7,20,21 HF,21 pneumonia,22 knee arthroplasty,23 and CABG.19

Data feedback through generation of quality performance reports did not differ between top and bottom performing hospitals. Bradley et al.7 and Beck et al.45 previously reported that data feedback efforts alone were not sufficient for improving the quality of AMI care. Others have suggested that a feedback system could be used as a stimulus to initiate QI interventions, which could lead to improved quality of care in AMI.16 This sentiment was also reflected among experts who evaluated the role of data feedback in improving quality of care for patients with pneumonia17,18 and heart failure10 as well as those who underwent CABG.19 These findings highlight a potential bottleneck in the continuous quality improvement cycle: the cycle may stall at the data-reporting step, or hospitals may differ in how the data are used to formulate quality improvement actions.

LIMITATIONS

There are several limitations of the study that should be considered. First, the participants are a select group of hospitals that voluntarily chose to participate in the pay-for-performance program. This select group of hospitals might have implemented more QI strategies than hospitals not participating in a pay-for-performance program. A future study may extend this comparison to hospitals not participating in a pay-for-performance program.

Next, the survey of QI efforts was based on information provided by hospital representatives who were not blinded to their hospitals’ performance rankings. Recall bias might have led top performers to elaborate more on their QI efforts. Also, a Hawthorne effect might have led some participants to respond according to what they believed was correct rather than what actually happened at their institution. Moreover, the survey relied on a single informant per institution; because of recall or information bias, actual institutional strategies or processes may differ from those reported. However, by attempting to speak with the quality or performance improvement leader of each hospital, we attempted to survey the person likely to be most familiar with the quality initiatives of his or her institution. In addition, rigid definitions of electronic medical record (EMR) and CPOE systems were not explicitly stated during the interview; they were dependent on the hospital respondents’ self-report.

Another limitation of our study is the sample size, which is not large enough to control for potential confounders. Although many of the hospital characteristics did not differ significantly, caution must be exercised in drawing causal inferences between QI interventions and performance. Some of the statistical associations may be spurious, given the multiple comparisons performed. The relationships should be further examined in a much larger cohort of hospitals. In addition to a sufficient sample size, future studies should collect detailed hospital information, such as financial status, and patient information, including sociodemographic characteristics, to account for other areas of potential confounding.

Finally, the temporal relationship between performance measurement and the survey of hospitals does not allow us to make causal inferences or to study how these characteristics or strategies predict performance; our results are hypothesis generating. Surveys were conducted during the summer and fall of 2007 and were meant to reflect efforts “implemented in the past year.” Some of the quality improvement efforts reported may have been implemented more recently and thus would not be reflected in the year 2 performance scores. Performance data from subsequent years would help future work clarify whether differences in quality improvement strategies lead to differences in performance; such data would also clarify the stability of the performance scores over time. Since the completion of our analysis, year 3 data of the demonstration project (October 1, 2005 through September 30, 2006; data released June 2008) have been released. There is a high correlation of the composite quality deciles, overall and condition-specific, between year 2 and year 3 in our study sample (Appendix 4, available online). However, the time lag limitation remains.

CONCLUSIONS

Multiple organizational factors remain important in optimizing clinical care. Many aspects of organizational culture and organizational support; quality improvement interventions, including clinical pathways, multidisciplinary teams, order sets, and CPOE; and physician leadership, such as taking an active role in recruiting condition-specific physician champions, distinguish top performing hospitals in a pay-for-performance program. Educational sessions and data feedback reports did not distinguish top from bottom performers. Future research should focus on whether these organizational characteristics and strategies lead to improved performance and whether strategies differ in settings with and without pay-for-performance incentives. These steps will help guide two important goals for the future health care system: improving the quality of medical care and improving its value.

Electronic supplementary material

Below is the link to the electronic supplementary material.

ESM 1 (DOC 100 KB)

Acknowledgments

We gratefully acknowledge the assistance of Premier, Inc. in providing the administrative data for this research. This study was self-funded by Zynx Health, Inc. The investigators retained control over all aspects of the survey data, analyses, and presentation of results. The views expressed in this article are those of the authors and do not necessarily represent the views of Zynx Health Inc. or Premier Inc. Portions of this study have been presented at the AcademyHealth Annual Research Meeting, June 2008.

Conflict of Interest Dr. Vina was a health services research fellow at Cedars-Sinai Medical Center and Zynx Health Incorporated. Mr. Weingarten was an employee of Zynx Health Incorporated. Drs. Rhew, Weingarten, and Chang are employees of Zynx Health Incorporated.

References

  • 1.Centers for Medicare and Medicaid Services. CMS HQI Demonstration Project: Composite Quality Score Methodology Overview. Available at: http://www.cms.hhs.gov/HospitalQualityInits/downloads/HospitalCompositeQualityScoreMethodologyOverview.pdf#search=%22CMS%20HQI%20Demonstration%20Project%22. Accessed April 7, 2009.
  • 2.Centers for Medicare and Medicaid Services (CMS)/Premier Hospital Quality Incentive Demonstration Project. Project Overview and Findings from Year One. April 13, 2006. Available at: http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/hqi-whitepaper041306.pdf. Accessed April 7, 2009.
  • 3.Centers for Medicare and Medicaid Services (CMS)/Premier Hospital Quality Incentive Demonstration Project. Project Findings from Year Two. May 2007. Available at: http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/resources/hqi-whitepaper-year2.pdf. Accessed April 7, 2009.
  • 4.Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486–96. doi: 10.1056/NEJMsa064964.
  • 5.Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA. 2007;297(21):2373–80. doi: 10.1001/jama.297.21.2373.
  • 6.Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals-the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265–74. doi: 10.1056/NEJMsa051249.
  • 7.Bradley EH, Herrin J, Mattera JA, et al. Quality improvement efforts and hospital performance: rates of beta-blocker prescription after acute myocardial infarction. Med Care. 2005;43(3):282–92. doi: 10.1097/00005650-200503000-00011.
  • 8.Ellerbeck EF, Bhimaraj A, Hall S. Impact of organizational infrastructure on beta-blocker and aspirin therapy for acute myocardial infarction. Am Heart J. 2006;152(3):579–84. doi: 10.1016/j.ahj.2006.02.011.
  • 9.Reingold S, Kulstad E. Impact of human factor design on the use of order sets in the treatment of congestive heart failure. Acad Emerg Med. 2007;14(11):1097–105. doi: 10.1197/j.aem.2007.05.006.
  • 10.Fonarow GC, Abraham WT, Albert NM, et al. Influence of a performance-improvement initiative on quality of care for patients hospitalized with heart failure: results of the Organized Program to Initiate Lifesaving Treatment in Hospitalized Patients With Heart Failure (OPTIMIZE-HF). Arch Intern Med. 2007;167(14):1493–502. doi: 10.1001/archinte.167.14.1493.
  • 11.Fishbane S, Niederman MS, Daly C, et al. The impact of standardized order sets and intensive clinical case management on outcomes in community-acquired pneumonia. Arch Intern Med. 2007;167(15):1664–9. doi: 10.1001/archinte.167.15.1664.
  • 12.Dexter PR, Perkins S, Overhage JM, Maharry K, Kohler RB, McDonald CJ. A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med. 2001;345(13):965–70. doi: 10.1056/NEJMsa010181.
  • 13.Dexter PR, Perkins SM, Maharry KS, Jones K, McDonald CJ. Inpatient computer-based standing orders vs. physician reminders to increase influenza and pneumococcal vaccination rates: a randomized trial. JAMA. 2004;292(19):2366–71. doi: 10.1001/jama.292.19.2366.
  • 14.Ozdas A, Speroff T, Waitman LR, Ozbolt J, Butler J, Miller RA. Integrating “best of care” protocols into clinicians’ workflow via care provider order entry: impact on quality-of-care indicators for acute myocardial infarction. J Am Med Inform Assoc. 2006;13(2):188–96. doi: 10.1197/jamia.M1656.
  • 15.Butler J, Speroff T, Arbogast PG, et al. Improved compliance with quality measures at hospital discharge with a computerized physician order entry system. Am Heart J. 2006;151(3):643–53. doi: 10.1016/j.ahj.2005.05.007.
  • 16.Marciniak TA, Ellerbeck EF, Radford MJ, et al. Improving the quality of care for Medicare patients with acute myocardial infarction: results from the Cooperative Cardiovascular Project. JAMA. 1998;279(17):1351–7. doi: 10.1001/jama.279.17.1351.
  • 17.Metersky ML, Galusha DH, Meehan TP. Improving the care of patients with community-acquired pneumonia: a multihospital collaborative QI project. Jt Comm J Qual Improv. 1999;25(4):182–90. doi: 10.1016/s1070-3241(16)30437-0.
  • 18.Chu LA, Bratzler DW, Lewis RJ, et al. Improving the quality of care for patients with pneumonia in very small hospitals. Arch Intern Med. 2003;163(3):326–32. doi: 10.1001/archinte.163.3.326.
  • 19.Ferguson TB, Jr., Peterson ED, Coombs LP, et al. Use of continuous quality improvement to increase use of process measures in patients undergoing coronary artery bypass graft surgery: a randomized controlled trial. JAMA. 2003;290(1):49–56. doi: 10.1001/jama.290.1.49.
  • 20.Soumerai SB, McLaughlin TJ, Gurwitz JH, et al. Effect of local medical opinion leaders on quality of care for acute myocardial infarction: a randomized controlled trial. JAMA. 1998;279(17):1358–63. doi: 10.1001/jama.279.17.1358.
  • 21.Nolan E, VanRiper S, Talsma A, et al. Rapid-cycle improvement in quality of care for patients hospitalized with acute myocardial infarction or heart failure: moving from a culture of missed opportunity to a system of accountability. J Cardiovasc Manag. 2005;16(1):14–9.
  • 22.Pines JM, Hollander JE, Lee H, Everett WW, Uscher-Pines L, Metlay JP. Emergency department operational changes in response to pay-for-performance and antibiotic timing in pneumonia. Acad Emerg Med. 2007;14(6):545–8. doi: 10.1197/j.aem.2007.01.022.
  • 23.Horne M. Involving physicians in clinical pathways: an example for perioperative knee arthroplasty. Jt Comm J Qual Improv. 1996;22(2):115–24. doi: 10.1016/s1070-3241(16)30213-9.
  • 24.Weiner BJ, Alexander JA, Shortell SM, Baker LC, Becker M, Geppert JJ. Quality improvement implementation and hospital performance on quality indicators. Health Serv Res. 2006;41(2):307–34. doi: 10.1111/j.1475-6773.2005.00483.x.
  • 25.Shortell SM, Jones RH, Rademaker AW, et al. Assessing the impact of total quality management and organizational culture on multiple outcomes of care for coronary artery bypass graft surgery patients. Med Care. 2000;38(2):207–17. doi: 10.1097/00005650-200002000-00010.
  • 26.Shortell SM, Zazzali JL, Burns LR, et al. Implementing evidence-based medicine: the role of market pressures, compensation incentives, and culture in physician organizations. Med Care. 2001;39(7 Suppl 1):I62–78.
  • 27.Nelson EC, Batalden PB, Huber TP, et al. Microsystems in health care: Part 1. Learning from high-performing front-line clinical units. Jt Comm J Qual Improv. 2002;28(9):472–93. doi: 10.1016/s1070-3241(02)28051-7.
  • 28.Damberg C, Sorbero M, Mehrotra A, Teleki S, Lovejoy S, Bradley L. An Environmental Scan of Pay for Performance in the Hospital Setting: Final Report. November 2007. Available at: http://aspe.hhs.gov/health/reports/08/payperform/index.htm. Accessed April 7, 2009.
  • 29.Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q. 2005;83(4):691–729. doi: 10.1111/j.1468-0009.2005.00397.x.
  • 30.Bradley EH, Holmboe ES, Mattera JA, Roumanis SA, Radford MJ, Krumholz HM. A qualitative study of increasing beta-blocker use after myocardial infarction: Why do some hospitals succeed? JAMA. 2001;285(20):2604–11. doi: 10.1001/jama.285.20.2604.
  • 31.Cannon CP, Johnson EB, Cermignani M, Scirica BM, Sagarin MJ, Walls RM. Emergency department thrombolysis critical pathway reduces door-to-drug times in acute myocardial infarction. Clin Cardiol. 1999;22(1):17–20. doi: 10.1002/clc.4960220108.
  • 32.Pell AC, Miller HC, Robertson CE, Fox KA. Effect of “fast track” admission for acute myocardial infarction on delay to thrombolysis. BMJ. 1992;304(6819):83–7. doi: 10.1136/bmj.304.6819.83.
  • 33.Caputo RP, Ho KK, Stoler RC, et al. Effect of continuous quality improvement analysis on the delivery of primary percutaneous transluminal coronary angioplasty for acute myocardial infarction. Am J Cardiol. 1997;79(9):1159–64. doi: 10.1016/S0002-9149(97)00074-X.
  • 34.Ward MR, Lo ST, Herity NA, Lee DP, Yeung AC. Effect of audit on door-to-inflation times in primary angioplasty/stenting for acute myocardial infarction. Am J Cardiol. 2001;87(3):336–8. doi: 10.1016/S0002-9149(00)01370-9.
  • 35.Cannon CP, Hand MH, Bahr R, et al. Critical pathways for management of patients with acute coronary syndromes: an assessment by the National Heart Attack Alert Program. Am Heart J. 2002;143(5):777–89. doi: 10.1067/mhj.2002.120260.
  • 36.Ranjan A, Tarigopula L, Srivastava RK, Obasanjo OO, Obah E. Effectiveness of the clinical pathway in the management of congestive heart failure. South Med J. 2003;96(7):661–3. doi: 10.1097/01.SMJ.0000060581.77206.ED.
  • 37.Meehan TP, Weingarten SR, Holmboe ES, et al. A statewide initiative to improve the care of hospitalized pneumonia patients: The Connecticut Pneumonia Pathway Project. Am J Med. 2001;111(3):203–10. doi: 10.1016/S0002-9343(01)00803-8.
  • 38.Gregor C, Pope S, Werry D, Dodek P. Reduced length of stay and improved appropriateness of care with a clinical path for total knee or hip arthroplasty. Jt Comm J Qual Improv. 1996;22(9):617–28. doi: 10.1016/s1070-3241(16)30269-3.
  • 39.Dowsey MM, Kilgour ML, Santamaria NM, Choong PF. Clinical pathways in hip and knee arthroplasty: a prospective randomised controlled study. Med J Aust. 1999;170(2):59–62. doi: 10.5694/j.1326-5377.1999.tb126882.x.
  • 40.Feldman AM, Weitz H, Merli G, et al. The physician-hospital team: a successful approach to improving care in a large academic medical center. Acad Med. 2006;81(1):35–41. doi: 10.1097/00001888-200601000-00009.
  • 41.O’Mahony S, Mazur E, Charney P, Wang Y, Fine J. Use of multidisciplinary rounds to simultaneously improve quality outcomes, enhance resident education, and shorten length of stay. J Gen Intern Med. 2007;22(8):1073–9. doi: 10.1007/s11606-007-0225-1.
  • 42.Ellrodt G, Glasener R, Cadorette B, et al. Multidisciplinary rounds (MDR): an implementation system for sustained improvement in the American Heart Association’s Get With The Guidelines program. Crit Pathw Cardiol. 2007;6(3):106–16. doi: 10.1097/HPC.0b013e318073bd3c.
  • 43.Kuperman GJ, Gibson RF. Computer physician order entry: benefits, costs, and issues. Ann Intern Med. 2003;139(1):31–9. doi: 10.7326/0003-4819-139-1-200307010-00010.
  • 44.Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742–52. doi: 10.7326/0003-4819-144-10-200605160-00125.
  • 45.Beck CA, Richard H, Tu JV, Pilote L. Administrative data feedback for effective cardiac treatment: AFFECT, a cluster randomized trial. JAMA. 2005;294(3):309–17. doi: 10.1001/jama.294.3.309.
