Author manuscript; available in PMC: 2016 Jun 1.
Published in final edited form as: Med Care. 2015 Jun;53(6):524–529. doi: 10.1097/MLR.0000000000000363

Differences in the Rates of Patient Safety Events by Payer: Implications for Providers and Policymakers

Christine S Spencer 1, Eric Roberts 2, Darrell J Gaskin 3
PMCID: PMC4431906  NIHMSID: NIHMS674240  PMID: 25906014

Abstract

Background

The reduction of adverse patient safety events and the equitable treatment of patients in hospitals are clinical and policy priorities. Health services researchers have identified disparities in the quality of care provided to patients, both by demographic characteristics and insurance status. However, less is known about the extent to which disparities reflect differences in the places where patients obtain care, versus disparities in the quality of care provided to different groups of patients in the same hospital.

Objective

In this study, we examine whether the rate of adverse patient safety events differs by the insurance status of patients within the same hospital.

Methods

Using discharge data from hospitals in eleven states, we compared risk-adjusted rates for 13 AHRQ Patient Safety Indicators across Medicare, Medicaid, and private insurance within the same hospitals. We used multivariate regression to assess the relationship between insurance status and rates of adverse patient safety events within hospitals.

Results

Medicare and Medicaid patients experienced significantly more adverse safety events than privately insured patients for 12 and 7 Patient Safety Indicators, respectively (at p<0.05 or better). However, Medicaid patients had significantly lower event rates than privately insured patients on two Patient Safety Indicators.

Conclusions

Risk-adjusted Patient Safety Indicator rates varied with patients’ insurance within the same hospital. More research is needed to determine the cause of differences in care quality received by patients at the same hospital, especially if quality measures are to be used for payment.

INTRODUCTION

Two major trends have converged in the health policy arena: a focus on improving access to healthcare by expanding health insurance coverage, and a focus on ensuring that all patients receive high-quality care. A central aim of quality improvement policies is the reduction of preventable adverse medical and surgical patient-safety events. Fifteen years after the Institute of Medicine's groundbreaking report “To Err is Human” was published, policymakers, providers, and patients have a continuing interest in improving the quality of care received in hospitals.1 In 2010, the Office of the Inspector General estimated that, for Medicare alone, one in four patients experienced some sort of medical harm during a hospitalization; an estimated 44% of these events were likely preventable, at a cost estimated at $4.4 billion.2 The public interest in reducing patient harm has intensified, resulting in provisions of the Affordable Care Act (ACA) that required the development of models for value-based purchasing and payment reform, along with enhanced evaluation of care quality.3,4 Many private payers are following suit by initiating their own pay-for-performance initiatives.5 Although early results were mixed as to whether these payment reforms succeeded in changing provider behavior,6,7,8 more recent data suggest that patient safety in hospitals is improving.9

The ACA will expand insurance coverage to an estimated 30 million Americans, with one-half gaining coverage through the expansion of Medicaid. Numerous studies have shown that acquiring health insurance coverage improves health and wellbeing;10 however, the benefits of health insurance are not equal across all types of insurance.11,12 Disparities in hospital quality by payer status are well documented: Medicaid patients are less likely to be treated in adherence with process-of-care guidelines,13,14 face a higher risk of inpatient mortality for common medical conditions and surgical procedures,15,16 and are more likely to experience adverse safety events.17 However, these findings reflect a mix of differences in the hospitals where patients obtain care and differences in the care administered to patients within the same hospital. Several recent studies have attempted to distinguish these factors by controlling for the site of care, thereby isolating the “within-hospital” disparity between patient groups defined by insurance18 or by race and ethnicity.19 Recently, for example, we showed that mortality rates for certain conditions differed by patients’ insurance status within the same hospitals.20

In this paper, we examine disparities in adverse safety events within hospitals using the Patient Safety Indicators (PSI), which were developed by the Agency for Healthcare Research and Quality (AHRQ) and represent state-of-the-art measures of patient safety based on administrative data.21 The PSI capture events that result in patient harm caused by the medical system and that could be avoided through changes in the process of health care delivery.22,23 The PSI rates are risk-adjusted to distinguish adverse outcomes that may be due to patients’ underlying disease or condition from processes of care that could be modified by providers. The policy relevance of the PSI is underscored by the fact that Medicare's hospital Value-Based Purchasing program now uses them to adjust payments to hospitals.4 Using PSI rates calculated at the hospital-payer level, we explore the following questions: Does the occurrence of adverse patient-safety events vary by insurance status within the same hospital? If so, what are some possible explanations for these differences?

METHODS

Data

We pooled 2006-08 discharge records at the hospital level from eleven states: Arizona, California, Florida, Iowa, Maryland, Massachusetts, New Jersey, New York, North Carolina, Washington, and Wisconsin. These data were obtained from AHRQ's State Inpatient Databases.24 We selected these states because they are geographically diverse, report patients’ primary payers, and collect all data elements required to compute the Patient Safety Indicators (PSI).25 These states contain 41 percent of the nation's population and account for 38.4 percent of the nation's acute care discharges.

We also used data on the average severity of inpatient admissions, measured at the hospital level, using a transfer-adjusted case-mix index provided by the Centers for Medicare and Medicaid Services (CMS). This project was reviewed and approved by the Institutional Review Board of the Johns Hopkins Bloomberg School of Public Health.

Measuring Hospital Quality

We use AHRQ's Patient Safety Indicators, which capture adverse events related to medical and surgical discharges, as our measure of hospital quality.26 We used AHRQ's PSI software (version 4.3, provided for the statistical software package SAS) to compute ratios of observed (unadjusted) to expected event rates for each payer group within hospitals. Expected rates are computed by applying regression coefficients for risk-adjustment variables, estimated from a national sample of inpatient discharges, to the patient population of each hospital in our sample to which a particular PSI applies. This approach adjusts for a hospital's own distribution of ages, sexes, selected comorbidities, major diagnostic categories, and DRGs, as well as the proportion of patients transferred to the hospital from another facility. The risk-adjusted rates were calculated using CMS's Present on Admission indicators, which identify diagnoses that existed before the hospitalization. The PSI software imputes these indicators in cases where they are missing from the discharge records. Consistent with the terminology used by AHRQ, we refer to the ratio of observed to expected rates as risk-adjusted rates. Following AHRQ's guidance, we computed risk-adjusted rates only if a hospital-payer group category had at least thirty discharges in the denominator during the period 2006-08.

For each hospital in our sample, we computed separate risk-adjusted rates for three payer groups: private insurance, Medicare, and Medicaid. We identified payers using the primary expected payer variable in the State Inpatient Databases. The risk-adjusted rates are aggregated to the hospital-payer level. Our analyses are therefore based on comparisons of up to three risk-adjusted rates per PSI and per hospital.
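The rates themselves were produced by AHRQ's PSI software; purely to illustrate the aggregation described above, the following Python sketch computes observed-to-expected ratios at the hospital-payer level from a hypothetical discharge-level file. All file and column names here (psi_discharges.csv, hospital_id, payer, psi, event, expected_p) are assumptions for illustration, not the software's actual output format.

```python
import pandas as pd

# Hypothetical discharge-level file: one row per discharge at risk for a given
# PSI. 'event' is 1 if the adverse event occurred and 0 otherwise; 'expected_p'
# is the predicted event probability from the national risk-adjustment model.
# (In the study itself, these quantities come from AHRQ's PSI software.)
discharges = pd.read_csv("psi_discharges.csv")
# assumed columns: hospital_id, payer, psi, event, expected_p

grouped = (
    discharges
    .groupby(["hospital_id", "payer", "psi"])
    .agg(n=("event", "size"),
         observed=("event", "mean"),
         expected=("expected_p", "mean"))
    .reset_index()
)

# Keep hospital-payer-PSI cells with at least 30 at-risk discharges, per the
# AHRQ guidance noted above, then form the observed-to-expected ratio that the
# paper refers to as the risk-adjusted rate.
rates = grouped[grouped["n"] >= 30].copy()
rates["risk_adjusted_rate"] = rates["observed"] / rates["expected"]
```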

Study Sample

There are 1,601 non-federally owned hospitals in our eleven states. We excluded the 167 specialty hospitals in these states because of their limited focus of treatment. This yielded a final sample of 1,434 acute-care general hospitals that treat a broad range of conditions (Table 1). Of the hospitals in the sample, 71.5% are located in a Metropolitan Statistical Area, 23.3% are classified as urban safety-net hospitals, 7.7% are classified as rural safety-net hospitals, 22.6% are primarily minority serving (with at least 50 percent of discharges for non-white patients), and 29.1% have urgent care centers.

Table 1.

Hospital Characteristics

All Hospitals
n=1434
Hospital is in an MSA (%) 71.5%
Hospital is Minority Serving (%) 22.6%
Hospital is an Urban Safety-Net Facility (%) 24.7%
Hospital is a Rural Safety-Net Facility (%) 7.7%
Hospital has an Urgent Care Center:
    Yes (%) 24.0%
    Unknown (%) 17.6%
Joint Commission on Accreditation of Health Care Organizations (JCAHO)
    Yes (%) 79.5%
    Unknown (%) 3.4%
Beds (Mean [SD]) 211 [212]
Council of Teaching Hospital Member
    Yes (%) 8.0%
    Unknown (%) 3.4%
Ownership
    Government, non-federal 18.1%
    Non-government, not-for-profit 64.1%
    For-profit 14.4%

Source: Authors’ analysis of data from the Agency for Healthcare Research and Quality's State Inpatient Databases and American Hospital Association Annual Survey.

Because not all hospitals provided every service or had at least thirty discharges in the service for each payer category, we were not able to compute risk-adjusted rates for every possible hospital-payer combination. We used all available PSIs that applied with sufficient frequency to each of the payer groups in our analysis. For example, we did not analyze PSIs related to obstetric trauma during vaginal delivery, since virtually no Medicare discharges applied to these indicators. Across the 13 PSIs we considered, the number of feasible within-hospital payer comparisons ranged from 192 (Medicaid compared to private risk-adjusted rates for death among elective-surgical inpatients) to 1,076 (Medicare compared to private risk-adjusted rates for accidental puncture or laceration).

Statistical Analysis

We used regression analysis to compare within-hospital risk-adjusted rates for Medicare or Medicaid to those for private payers. The unit of analysis was a hospital-payer group dyad. The dependent variable was the risk-adjusted rate for a hospital-payer pair on a particular PSI. The independent variable was an indicator for the public payer being compared to private insurance, which served as the reference category in each regression. Following recommendations from AHRQ, we excluded a hospital's pair of payers from our analysis of a PSI if either payer had fewer than 30 cases applicable to that PSI.
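As a rough illustration of this setup, the sketch below assembles hospital-payer dyads for one PSI and one public payer from the hypothetical rates table built in the earlier sketch (which already enforces the 30-case minimum); the column names remain assumptions.

```python
# Assemble hospital-payer dyads for one PSI and one public payer, using the
# 'rates' table from the previous sketch. A hospital contributes a pair only
# if it has valid rates for both the public payer and private insurance.
def build_dyads(rates, psi, public_payer, private_label="private"):
    sub = rates[(rates["psi"] == psi)
                & (rates["payer"].isin([public_payer, private_label]))]
    has_both = sub.groupby("hospital_id")["payer"].nunique()
    keep = has_both[has_both == 2].index
    dyads = sub[sub["hospital_id"].isin(keep)].copy()
    # Indicator for the public payer; private insurance is the reference group.
    dyads["public"] = (dyads["payer"] == public_payer).astype(int)
    return dyads
```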

We used ordinary least squares regression with robust standard errors, clustered at the hospital level, to estimate the association between payer groups and their risk-adjusted rates. The regressions estimated whether patients in other payer groups differed significantly on a PSI from their privately insured counterparts at a given hospital. For all models, we excluded outlying observations of the dependent variable, defined as rates more than three standard deviations from the mean risk-adjusted rate within a PSI (across all payers).

We included the following hospital-level measures as control variables. First, we defined a hospital as minority serving if at least 50 percent of its discharges were for nonwhite patients. Second, we identified safety-net hospitals as those with a disproportionate share of Medicaid, self-pay, and uninsured patients compared to other hospitals in the same market (defined as a Metropolitan Statistical Area or state).27,28 Third, we used the hospital's case mix index to control for average patient severity. Lastly, we added state fixed effects to control for state-level policies that may affect variation in hospital quality.
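Putting these pieces together, here is a minimal sketch of the estimation step, assuming the dyad-level data frame from the previous sketch has been merged with hospital-level controls. The column names (minority_serving, safety_net, case_mix_index, state) are placeholders; the published estimates come from the authors' own models, not this code.

```python
import statsmodels.formula.api as smf

def estimate_payer_gap(dyads):
    # Drop outlying rates, mirroring the exclusion rule described above
    # (rates far from the PSI-specific mean across all hospitals and payers).
    m = dyads["risk_adjusted_rate"].mean()
    s = dyads["risk_adjusted_rate"].std()
    trimmed = dyads[(dyads["risk_adjusted_rate"] - m).abs() <= 3 * s]

    # OLS of the risk-adjusted rate on the public-payer indicator plus
    # hospital-level controls and state fixed effects, with standard errors
    # clustered at the hospital level.
    model = smf.ols(
        "risk_adjusted_rate ~ public + minority_serving + safety_net"
        " + case_mix_index + C(state)",
        data=trimmed,
    )
    return model.fit(cov_type="cluster",
                     cov_kwds={"groups": trimmed["hospital_id"]})
```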

RESULTS

The characteristics of the approximately 35 million patient discharges used in the analysis are described in Table 2. Medicare and private insurance are the predominant sources of payment. Although there are observable differences in patient characteristics across the three payer groups, including age, Major Diagnostic Categories, comorbidities, and point of origin, the risk-adjustment algorithm adjusts for these characteristics.29

Table 2.

Characteristics of Hospital Discharges, By Primary Payer, 2006-2008

Characteristic Medicare Medicaid Private Total
Thousands of discharges (a) 15,140 5,794 11,160 35,190
Gender
Female 55.7% 71.2% 63.5% 59.8%
Age (Years)
0-17 0.0% 2.2% 0.4% 0.5%
18-39 2.5% 55.2% 37.7% 25.7%
40-64 14.4% 37.4% 52.1% 33.4%
65-74 28.5% 2.7% 5.5% 14.7%
75+ 54.7% 2.5% 4.4% 25.6%
Race
White 69.3% 31.8% 62.7% 59.1%
Black 9.7% 20.6% 9.4% 12.0%
Hispanic 7.5% 28.4% 9.9% 12.6%
Asian/Pacific Islander 2.3% 3.5% 3.7% 2.9%
Native American 0.4% 0.9% 0.4% 0.5%
Other 10.9% 14.9% 14.0% 13.0%

Source: Authors’ analysis of data from the Agency for Healthcare Research and Quality's State Inpatient Databases. Notes: Total column includes discharges from other payers not included in the analysis. Percentages may not sum to 100 because of rounding. (a) Discharges used in the analysis of Inpatient Safety Indicators (a subset of all discharges across the hospitals in our sample).

Table 3 shows the risk-adjusted rates by payer for each of the 13 PSIs. An observed-to-expected event ratio greater (less) than one indicates that a hospital performed worse (better) than the average hospital with an equivalent case mix. For example, in our sample of hospitals we see a higher-than-expected rate for death in low-mortality DRGs (1.501) and a lower-than-expected rate for postoperative hip fracture among adults not susceptible to falling (0.3921). Overall, Medicare and Medicaid patients experience worse hospital performance on most PSIs than privately insured patients.

Table 3.

Risk- Adjusted Patient Safety Indicators by Payer, 2006-08

Columns: Medicare Mean (Std. Dev.), Medicaid Mean (Std. Dev.), Private Mean (Std. Dev.)
Death in Low-Mortality DRGs 1.31 (2.55) 1.56 (8.69) 1.09 (4.57)
Pressure Ulcers (Stage III or IV) 0.95 (0.77) 1.18 (1.48) 0.65 (0.74)
Death among Elective-Surgical Inpatients 1.07 (0.36) 1.21 (0.51) 0.92 (0.43)
Iatrogenic Pneumothorax 0.78 (0.95) 1.01 (4.05) 0.73 (1.77)
Central Venous Catheter-Related Blood Stream Infection 1.36 (1.49) 1.77 (2.85) 1.13 (1.93)
Postoperative Hip Fracture for adults not susceptible to falling 0.88 (4.91) 0.35 (7.41) 0.09 (1.17)
Postoperative Hemorrhage or Hematoma with surgical drainage or evacuation 0.96 (1.30) 0.99 (1.65) 0.78 (0.92)
Postoperative Physiologic and Metabolic Derangement 0.90 (2.17) 1.49 (7.71) 0.56 (1.68)
Postoperative Respiratory Failure 1.02 (0.70) 1.26 (1.73) 0.73 (0.74)
Postoperative Pulmonary Embolism or Deep Vein Thrombosis 1.25 (0.94) 1.53 (1.80) 1.03 (0.88)
Postoperative Sepsis 1.02 (0.84) 1.10 (1.39) 0.85 (1.03)
Re-closure of Postoperative Abdominal Wound Dehiscence 1.13 (1.59) 1.43 (3.63) 0.86 (2.08)
Accidental Puncture or Laceration during Procedures 0.79 (2.13) 0.78 (1.14) 0.73 (0.65)

Note: The mean risk-adjusted rates presented here are not weighted by the volume of discharges that comprised the hospital-payer-level rates.

Table 4 shows the estimated differences in risk-adjusted patient safety events between payers. Results in the table are regression-adjusted differences in risk-adjusted rates, multiplied by the average patient-safety-event rate for all hospitals and payers in our sample for that PSI. The results can be interpreted as the average increase or decrease in the number of patient safety events, per 1,000 discharges, experienced by patients in a particular payer group compared to patients with private insurance. Statistical significance is assessed at the 0.05 level; we also note where the difference in risk-adjusted rates between payers was significant at p<0.01. Medicare patients experience a significantly higher rate of adverse safety events than privately insured patients for 12 of the 13 patient safety indicators. Medicaid patients have significantly higher rates for 7 of the 13 indicators, but lower rates for 2 of the 13 indicators.
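As a purely illustrative reading of that scaling (the numbers below are hypothetical and not drawn from the paper), a payer gap in the observed-to-expected ratio can be expressed as extra events per 1,000 discharges by multiplying it by the sample-wide event rate for that PSI:

```python
# Hypothetical illustration of the scaling described above; these values are
# not taken from the study.
gap_in_oe_ratio = 0.25        # estimated public-vs-private gap in the O/E ratio
mean_events_per_1000 = 1.2    # sample-wide event rate for the PSI, per 1,000 discharges
extra_events_per_1000 = gap_in_oe_ratio * mean_events_per_1000
print(f"{extra_events_per_1000:.2f} additional events per 1,000 discharges")
```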

Table 4.

Regression-Adjusted Estimates of the Standardized Adverse Patient Safety Event Rates by Payer, compared to Private Insurance, per 1,000 patients, 2006-08

Columns: Medicare (Difference in Patient Safety Events Compared to Private Insurance; Number of Hospitals), Medicaid (Difference in Patient Safety Events Compared to Private Insurance; Number of Hospitals)
Death in Low-Mortality DRGs 334.7* 1065 (217) 57 1019 (195)
Pressure Ulcers (Stage III or IV)a 248.8** 1054 (213) 294.7** 1001 (132)
Death among Elective-Surgical Inpatientsa 142.8** 572 (12) 177.4** 192 (5)
Iatrogenic Pneumothoraxa 95.1** 1074 (331) 106.3** 1056 (291)
Central Venous Catheter-Related Blood Stream Infectiona 402.2** 1072 (323) 893.4** 1053 (284)
Postoperative Hip Fracture for adults not susceptible to fallinga 169** 1037 (199) −23.9* 980 (87)
Postoperative Hemorrhage or Hematoma with surgical drainage or evacuationa 143.5** 1059 (201) 53.8 994 (119)
Postoperative Physiologic and Metabolic Derangementa 311** 719 (117) −159.1* 577 (54)
Postoperative Respiratory Failurea 299.6** 715 (117) 231.1** 542 (59)
Postoperative Pulmonary Embolism or Deep Vein Thrombosisa 258.8** 1060 (200) 308.7** 987 (126)
Postoperative Sepsisa 211.7** 636 (52) 146.2* 263 (16)
Re-closure of Postoperative Abdominal Wound Dehiscencea 336.5** 1005 (124) −69.8 729 (68)
Accidental Puncture or Laceration during Proceduresa 6.9 1076 (329) 14.5 1065 (282)
* Significant at the 5% level.

** Significant at the 1% level.

Parentheses indicate the number of hospitals that were excluded from the payer comparison, because either hospital-payer rate in the pair was considered to be an outlier. An outlier is a risk-adjusted rate that is more than two standard deviations above the overall mean of risk-adjusted rates for the PSI, for all hospitals and payers that contributed a valid rate (i.e., comprised of at least 30 cases) in our sample.

a Patient Safety Indicators that are part of the weighted composite measure PSI 90.

Lastly, we consider whether aspects of the health and frailty of Medicare patients that are not captured in the risk-adjustment algorithm could explain our finding that Medicare patients experience poorer care on most PSIs. To minimize the influence of potentially unobserved differences between privately insured and Medicare patients, we reran our analysis comparing private patients aged 55 to 75 with Medicare patients aged 65 to 75. In this subgroup we find very similar results, with 9 of the 13 indicators continuing to show significantly worse outcomes for Medicare patients (Table 5).
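To illustrate this subgroup restriction, the sketch below (assuming the same hypothetical discharge-level file and column names as the earlier sketches, plus an assumed age column) keeps privately insured discharges aged 55 to 75 and Medicare discharges aged 65 to 75 before the rates and regressions are recomputed.

```python
import pandas as pd

# Hypothetical discharge-level file used in the earlier sketches; an 'age'
# column is assumed here in addition to hospital_id, payer, psi, event, expected_p.
discharges = pd.read_csv("psi_discharges.csv")

# Keep privately insured patients aged 55-75 and Medicare patients aged 65-75;
# risk-adjusted rates and the payer regressions are then recomputed on this subset.
restricted = discharges[
    ((discharges["payer"] == "private") & discharges["age"].between(55, 75))
    | ((discharges["payer"] == "medicare") & discharges["age"].between(65, 75))
]
```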

Table 5.

Regression-Adjusted Estimates of Standardized Adverse Patient Safety Event Rates, Medicare Compared to Private Insurance, Per 1,000 Patients, 2006-08: All Discharges Versus Restricted Age Category

Columns: All Discharges (Difference in Adverse Safety Events; Number of Hospitals), Discharges Restricted to Patients Aged 55 to 75 (Difference in Adverse Safety Events; Number of Hospitals; Percent of Discharges in the 55 to 75 Analysis)
Death in Low-Mortality DRGs 334.7* 1065 (217) 343.7 980 (90) 12.1%
Pressure Ulcers (Stage III or IV) 248.8** 1054 (213) 231.5** 1029 (134) 30.3%
Death among Elective-Surgical Inpatients 142.8** 572 (12) 106** 339 (3) 33.4%
Iatrogenic Pneumothorax 95.1** 1074 (331) −51.1 1072 (308) 31.4%
Central Venous Catheter-Related Blood Stream Infection 402.2** 1072 (323) 241.6** 1065 (279) 25.5%
Postoperative Hip Fracture for adults not susceptible to falling 169** 1037 (199) 70.6 1004 (96) 37.6%
Postoperative Hemorrhage or Hematoma with surgical drainage or evacuation 143.5** 1059 (201) 138.3** 1019 (131) 38.6%
Postoperative Physiologic and Metabolic Derangement 311** 719 (117) 169.1** 665 (96) 42.8%
Postoperative Respiratory Failure 299.6** 715 (117) 206.7** 672 (79) 41.4%
Postoperative Pulmonary Embolism or Deep Vein Thrombosis 258.8** 1060 (200) 109.2** 1022 (128) 38.6%
Postoperative Sepsis 211.7** 636 (52) 168** 534 (37) 44.4%
Re-closure of Postoperative Abdominal Wound Dehiscence 336.5** 1005 (124) 187.5** 861 (60) 34.6%
Accidental Puncture or Laceration during Procedures 6.9 1076 (329) 16 1070 (311) 31.6%
* Significant at the 5% level.

** Significant at the 1% level.

Parentheses indicate the number of hospitals that were excluded from the payer comparison, because either hospital-payer rate in the pair was considered to be an outlier. An outlier is a risk-adjusted rate that is more than two standard deviations above the overall mean of risk-adjusted rates for the PSI, for all hospitals and payers that contributed a valid rate (i.e., comprised of at least 30 cases) in our sample.

The percentage of discharges in the age-restricted analysis is calculated among Medicare and private-payer discharges only.

CONCLUSIONS

In our analysis of hospitals from eleven states, we found that, within the same hospital, the quality of care varied by patients’ insurance status. This result suggests that differences in payment between public and private payers may contribute to inferior care for publicly insured patients. That is, privately insured patients may experience fewer patient safety events because private insurance pays hospitals, attending physicians, and surgeons more than public payers do. The Medicare Payment Advisory Commission reports that private insurers pay physicians 20% higher rates, on average, than the Medicare program, and pay hospitals 30% higher rates.30 On the other hand, it is possible that unobserved factors associated with insurance status, which are not controlled for by risk-adjustment, increase publicly insured patients’ risk of adverse events.

Several factors may account for our findings. First, differences in care processes that lie outside a hospital's control can affect performance on quality measures. For example, privately insured patients may be able to obtain better care coordination from attending physicians and hospitalists employed by their insurers. Second, although our sensitivity analysis indicated that unobserved differences between patients did not materially bias our findings, it remains possible that the risk-adjustment algorithm omits important information about high-risk patients. In that case, policies that adjust payments to hospitals based on performance on quality measures may unintentionally penalize facilities that treat high-risk patients. Third, the coding of comorbidities present at the time of admission could vary by payer, reflecting the different payment incentives that hospitals face. For example, we found that Medicare patients had, on average, the highest number of recorded comorbid conditions per discharge record, while privately insured patients had the fewest. This may indicate differences in patients’ baseline health status, but other system-level factors can affect how diagnoses are reported; to the extent that the coding of comorbidities differs by payer, differences in risk-adjustment become difficult to distinguish from differences in quality.31

While the PSIs are important indicators of quality, they may not provide a complete picture of hospital performance. This calls attention to the Institute of Medicine's recommendation that paying hospitals for performance on specific quality measures should not come at the expense of efforts to improve quality on other, non-measured processes of care.32 Caution should be used in adjusting payments to providers or insurers based on performance on specific quality measures.

In conclusion, we find that, within the same hospitals, Medicare and Medicaid patients have significantly higher risk-adjusted rates of adverse safety events on most of AHRQ's Patient Safety Indicators. Recent reductions in the number of adverse safety events indicate that these events are amenable to intervention. In addition to policies designed to address overall rates of adverse events, policies to redress quality disparities across insurers should also be examined. Such policies could include payment reforms, provided that the payment model gives hospitals sufficient resources to improve care for publicly insured patients and properly adjusts payments for the clinical and social needs of the patients a hospital serves.33 In some cases, the level of payment alone may not be a sufficient instrument for quality improvement. Researchers and policymakers should investigate whether patterns of care, during and preceding a hospitalization, put patients at different levels of risk for adverse safety events. Doing so can help to identify whether global payments, care coordination payments, or other financial incentives hold potential for improving the management of hospital care for certain payers. Such reforms will require collaboration among insurers, providers, and government to define and reach objectives for improving inpatient care quality for all patients. Providers should also explore differences in care processes that may lead to differences in hospital outcomes for patients with different insurance. Research is needed on the processes of care that may explain the quality disparities we observe, and on the effects of payment reforms on these disparities.

Contributor Information

Christine S. Spencer, School of Health and Human Services College of Public Affairs University of Baltimore 1420 N. Charles Street, LAP #410 Baltimore, MD 21201.

Eric Roberts, Johns Hopkins Bloomberg School of Public Health.

Darrell J. Gaskin, Center for Health Disparities Solutions Department of Health Policy and Management Johns Hopkins Bloomberg School of Public Health.

References

1. Kohn LT, Corrigan JM, Donaldson MS, editors. To Err is Human: Building a Safer Health System. Washington, DC: Institute of Medicine. Available at: http://www.nap.edu/books/0309068371/html (accessed June 30, 2014).
2. Department of Health and Human Services, Office of Inspector General. Adverse Events in Hospitals: National Incidence among Medicare Beneficiaries. Washington, DC. Available at: http://oig.hhs.gov/oei/reports/oei-06-09-00090.pdf (accessed July 1, 2014).
3. The Patient Protection and Affordable Care Act. January 2010. Available at: http://www.gpo.gov/fdsys/pkg/BILLS-111hr3590enr/pdf/BILLS-111hr3590enr.pdf (accessed July 1, 2014).
4. Medicare program: hospital inpatient value-based purchasing program: final rule. Federal Register. 2011 May 6;76:26490–547. Available at: http://www.gpo.gov/fdsys/pkg/FR-2011-05-06/pdf/2011-10568.pdf (accessed July 1, 2014).
5. Health Policy Brief: Pay-for-Performance. Health Affairs. October 11, 2012. Available at: http://www.healthaffairs.org/healthpolicybriefs/brief.php?brief_id=78 (accessed June 30, 2014).
6. Jha AK, Joynt KE, Orav EJ, Epstein AM. The Long-Term Effect of Premier Pay for Performance on Patient Outcomes. N Engl J Med. 2012;366:1606–1615. doi: 10.1056/NEJMsa1112351.
7. Pronovost PJ, Miller MR, Wachter RM. Tracking Progress in Patient Safety: An Elusive Target. JAMA. 2006;296:696–699. doi: 10.1001/jama.296.6.696.
8. Lindenauer PK, Remus D, Roman S, et al. Public Reporting and Pay for Performance in Hospital Quality Improvement. N Engl J Med. 2007;356:486–496. doi: 10.1056/NEJMsa064964.
9. Agency for Healthcare Research and Quality. Partnership for Patients. December 2014. Available at: http://www.ahrq.gov/professionals/quality-patient-safety/pfp/index.html (accessed January 18, 2015).
10. Institute of Medicine. America's Uninsured Crisis: Consequences for Health and Health Care. Washington, DC: The National Academies Press; 2009. Available at: http://books.nap.edu/openbook.php?record_id=12511&page=R2 (accessed July 2, 2014).
11. Hasan O, Orav EJ, Hicks LS. Insurance status and hospital care for myocardial infarction, stroke, and pneumonia. J Hosp Med. 2010;5(8):452–459. doi: 10.1002/jhm.687.
12. Franks P, Clancy CM, Gold MR. Health insurance and mortality. Evidence from a national cohort. JAMA. 1993;270(6):737–741.
13. Weissman JS, Vogeli C, Levy DE. The Quality of Hospital Care for Medicaid and Private Pay Patients. Med Care. 2013;51(5):389–395. doi: 10.1097/MLR.0b013e31827fef95.
14. Goldman LE, Vittinghoff E, Dudley RA. Quality of Care in Hospitals with a High Percent of Medicaid Patients. Med Care. 2007;45(6):579–583. doi: 10.1097/MLR.0b013e318041f723.
15. Hasan O, Orav EJ, Hicks LS. Insurance Status and Hospital Care for Myocardial Infarction and Pneumonia. J Hosp Med. 2010;5(8):452–459. doi: 10.1002/jhm.687.
16. LaPar DJ, Bhamidipati CM, Mery CM, Stukenborg GJ, Jones DR, Schirmer BD, Kron IL, Ailawadi G. Primary Payer Status Affects Mortality for Major Surgical Operations. Ann Surg. 2010;252(3):544–551. doi: 10.1097/SLA.0b013e3181e8fd75.
17. Clement JP, Lindrooth RC, Chukmaitov AS, Chen H-F. Does the Patient's Payer Matter in Hospital Patient Safety? A Study of Urban Hospitals. Med Care. 2007;45(2):131–138. doi: 10.1097/01.mlr.0000244636.54588.2b.
18. Barnato AE, Lucas FL, Staiger D, Wennberg D, Chandra A. Hospital-level Racial Disparities in Acute Myocardial Infarction Treatment and Outcomes. Med Care. 2005;43(4):308–319. doi: 10.1097/01.mlr.0000156848.62086.06.
19. He D, Mellor JM, Jankowitz E. Racial and Ethnic Disparities in the Surgical Treatment of Acute Myocardial Infarction: The Role of Hospital and Physician Effects. Med Care Res Rev. 2012;70(3):287–309. doi: 10.1177/1077558712468490.
20. Spencer CS, Gaskin DJ, Roberts ET. The Quality of Care Delivered to Patients within the Same Hospital Varies by Insurance Type. Health Aff (Millwood). 2013;32(10):1731–1738. doi: 10.1377/hlthaff.2012.1400.
21. Agency for Healthcare Research and Quality. Quality Indicators: Patient Safety Indicators. March 12, 2007. Available at: http://www.qualityindicators.ahrq.gov/Downloads/Modules/PSI/V31/psi_guide_v31.pdf (accessed January 18, 2015).
22. Agency for Healthcare Research and Quality. Medical errors & patient safety. Rockville, MD: AHRQ. Available at: http://www.ahrq.gov/qual/errorsix.htm (accessed July 2, 2014).
23. Agency for Healthcare Research and Quality. Patient Safety Indicators overview. Rockville, MD: AHRQ. Available at: http://www.qualityindicators.ahrq.gov/modules/psi_resources.aspx (accessed June 30, 2014).
24. Agency for Healthcare Research and Quality. Healthcare Cost and Utilization Project (HCUP). Rockville, MD: AHRQ. Available at: http://www.ahrq.gov/research/data/hcup/index.html (accessed June 30, 2014).
25. Agency for Healthcare Research and Quality. Patient Safety Indicators overview. Rockville, MD: AHRQ. Available at: http://www.qualityindicators.ahrq.gov/modules/psi_overview.aspx (accessed June 30, 2014).
26. Coffey R, Barrett M, Houchens R, et al. Methods Applying AHRQ Quality Indicators to Healthcare Cost and Utilization Project (HCUP) Data for the Eleventh National Healthcare Quality Report (NHQR) and National Healthcare Disparities Report (NHDR). HCUP Methods Series Report #2012-03. Rockville, MD: U.S. Agency for Healthcare Research and Quality; 2012. Available at: http://www.hcup-us.ahrq.gov/methods/methods.jsp (accessed June 30, 2014).
27. Gaskin DJ, Hadley J. Population characteristics of markets of safety-net and non-safety-net hospitals. J Urban Health. 1999;76(3):351–370. doi: 10.1007/BF02345673.
28. Zuckerman S, Bazzoli G, Davidoff A, LoSasso A. How did safety-net hospitals cope in the 1990s? Health Aff (Millwood). 2001;20(4):159–168. doi: 10.1377/hlthaff.20.4.159.
29. Geppert J. Quality Indicator Empirical Methods (prepared by Battelle under Contract No. 290-04-0020). Rockville, MD: Agency for Healthcare Research and Quality; May 2011.
30. Medicare Payment Advisory Commission. Report to Congress: Medicare Payment Policy. Washington, DC: MedPAC; 2012.
31. Iezzoni LI. Assessing Quality Using Administrative Data. Ann Intern Med. 1997;127(8 Pt 2):666–674. doi: 10.7326/0003-4819-127-8_part_2-199710151-00048.
32. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: The National Academies Press; 2001. Available at: http://books.nap.edu/openbook.php?record_id=10027 (accessed July 2, 2014).
33. Karve AM, Ou F, Lytle BL, et al. Potential unintended financial consequences of pay-for-performance on the quality of care for minority patients. Am Heart J. 2008;155(3):571–576. doi: 10.1016/j.ahj.2007.10.043.
