Health Services Research. 2007 Apr;42(2):682–705. doi: 10.1111/j.1475-6773.2006.00631.x

Hospital Competition, Managed Care, and Mortality after Hospitalization for Medical Conditions in California

Jeannette Rogowski, Arvind K Jain, José J Escarce
PMCID: PMC1955358  PMID: 17362213

Abstract

Objective

To assess the effect of hospital competition and health maintenance organization (HMO) penetration on mortality after hospitalization for six medical conditions in California.

Data Source

Linked hospital discharge and vital statistics data for short-term general hospitals in California in the period 1994–1999. The study sample included adult patients hospitalized for one of the following conditions: acute myocardial infarction (N = 227,446), hip fracture (N = 129,944), stroke (N = 237,248), gastrointestinal hemorrhage (GIH, N = 216,443), congestive heart failure (CHF, N = 355,613), and diabetes (N = 154,837).

Study Design

The outcome variable was 30-day mortality. We estimated multivariate logistic regression models for each study condition with hospital competition, HMO penetration, hospital characteristics, and patient severity measures as explanatory variables.

Principal Findings

Higher hospital competition was associated with lower 30-day mortality for three to five of the six study conditions, depending on the choice of competition measure, and this finding was robust to a variety of sensitivity analyses. Higher HMO penetration was associated with lower mortality for GIH and CHF.

Conclusions

Hospitals that faced more competition and hospitals in market areas with higher HMO penetration provided higher quality of care for adult patients with medical conditions in California. Studies using linked hospital discharge and vital statistics data from other states should be conducted to determine whether these findings are generalizable.

Keywords: Competition, managed care, mortality, quality of care


Over the past two decades the structure of the U.S. health care industry has changed dramatically. In particular, the growth in managed care—especially health maintenance organizations (HMOs)—has led to the introduction of price competition among health care providers. Numerous studies have assessed the effects of changes in health care market structure on health care system performance. Studies of the hospital sector have confirmed that managed care has led to the intensification of price competition among hospitals (e.g., Feldman et al. 1990), and that price competition has resulted in lower rates of cost growth, lower prices and price-cost margins, and changes in the adoption and use of technology (e.g., Zwanziger and Melnick 1988; Robinson 1991; Melnick et al. 1992; Gaskin and Hadley 1997; Keeler, Melnick, and Zwanziger 1999; Baker and Phibbs 2002; Heidenreich et al. 2002; Bundorf et al. 2004). However, the effects of changes in health care market structure on the quality of care provided by hospitals are less well understood. Only a handful of studies have addressed this issue, and their findings have not been consistent.

The literature provides mixed evidence on the effects of hospital competition and managed care penetration on the quality of hospital care. In the earliest study, based on data from the early 1980s, Shortell and Hughes (1988) found that in-hospital mortality rates for 16 clinical conditions were higher in market areas with higher HMO penetration, but the number of competing hospitals in a market area was unassociated with mortality. More recently, Kessler and McClellan (2000) studied 1-year mortality for Medicare patients with acute myocardial infarction (AMI) in the 1980s and 1990s. They found that higher hospital competition was associated with decreased mortality, especially after 1990. HMO penetration was not associated with mortality, but the beneficial effects of hospital competition were stronger in high penetration market areas. By contrast, Mukamel, Zwanziger, and Tomaszewski (2001), using data from the 1990s, found no effects of hospital competition on 30-day mortality for Medicare patients with a variety of conditions, but higher HMO penetration was associated with lower mortality.

In a study based on hospital discharge data from 16 states in the 1990s, Sari (2002) examined the effects of hospital competition and HMO penetration on the rates of in-hospital mortality following common elective procedures, selected in-hospital complications, and inappropriate surgery. Sari found that higher hospital competition and higher HMO penetration were associated with lower rates of wound infections, iatrogenic complications, and inappropriate surgery. Shen (2003) found that faster growth in non-Medicare HMO penetration increased 7-, 30-, and 90-day mortality for Medicare patients with AMI, but did not affect longer-term mortality.

While the five studies reviewed in the preceding paragraphs considered competition globally, for all patients in a market, a recent study by Gowrisankaran and Town (2003) took a different approach by assuming that hospitals compete for HMO and Medicare patients separately. The investigators used data for Southern California in the early 1990s to study in-hospital mortality for pneumonia and 30-day mortality for AMI. They found that competition for HMO patients was associated with lower mortality, whereas competition for Medicare patients was associated with higher mortality. The findings of the existing studies of hospital market structure and hospital quality are summarized in Table 1.

Table 1.

Summary of Existing Studies on the Effects of Hospital Competition and Managed Care Penetration on Hospital Quality of Care

| Author | Area | Patients | Quality Measure | Effect of Hospital Competition | Effect of HMO Penetration |
|---|---|---|---|---|---|
| Shortell and Hughes (1988) | Many states | All payers, 16 conditions | In-hospital mortality | None | Worse |
| Kessler and McClellan (2000) | United States | Medicare, AMI | 1-year mortality | Better | None |
| Mukamel, Zwanziger, and Tomaszewski (2001) | United States | Medicare, many conditions | 30-day mortality | None | Better |
| Sari (2002) | 16 states | All payers, many conditions | In-hospital mortality, complications, and inappropriate surgery | Better | Better |
| Shen (2003) | United States | Medicare, AMI | 7-, 30-, 90-day, 9-month, 1-year, and 15-month mortality | | Worse |
| Gowrisankaran and Town (2003) | Southern California | All payers, pneumonia and AMI | In-hospital mortality (pneumonia) and 30-day mortality (AMI) | Mixed | |

AMI, acute myocardial infarction.

There are several potential reasons for the differences in findings across existing studies. Some of the differences may be attributable to differences in the time periods studied or geographic areas. The effects of hospital competition and managed care on hospital quality may have evolved over time as hospitals adapted to the new regime of price competition. Additionally, these effects may vary across areas depending on the maturity of managed care markets or other factors. Methodological differences across studies, including differences in measures of hospital quality or hospital competition, may contribute to differences in findings as well. For example, since hospitals in more competitive market areas and in areas with higher HMO penetration have shorter lengths of stay, and since shorter length of stay may shift deaths from the hospital to the period following discharge (Baker et al. 2002), use of in-hospital mortality (or complication rates) as the measure of quality may lead to bias toward finding that higher hospital competition and HMO penetration are associated with higher quality. The only studies that have been able to use mortality within a fixed time interval as the quality measure are those that focus on Medicare patients.

The goal of this paper is to assess the effect of hospital competition and HMO penetration on hospital quality of care for six medical conditions. Our study is based on 1994–1999 data from California, a state with mature managed care markets, and extends the existing literature in two important ways. First, by linking hospital discharge and vital statistics data, we are able to use 30-day mortality as the quality measure while including all patients in the analysis, not just Medicare patients. Second, we test the robustness of our findings to a variety of measures of hospital competition. Our study also uses more recent data than prior studies.

CONCEPTUAL FRAMEWORK

Hospital markets have evolved rapidly over the past two decades. In the 1970s, health insurance consisted of fee-for-service plans that allowed their insured members to use any hospital and paid for hospital care on a cost-reimbursement basis. Cost-based payment led hospitals to compete through the services, amenities, and convenience they offered rather than price. Under this form of nonprice competition, often called the “medical arms race,” hospitals in more competitive markets had higher costs (Joskow 1980; Robinson and Luft 1985).

Changes in the types of health insurance plans and in hospital payment beginning in the early 1980s led to dramatic changes in the nature of competition among hospitals. Rapid escalation in hospital costs led to the introduction of a prospective payment system for hospitals by Medicare, reducing incentives for hospitals to increase costs. More important, the emergence and rapid growth of managed care plans, especially HMOs, was enormously consequential. Managed care plans selectively contracted with hospitals to create networks of hospitals that would provide care to their insured members. Selective contracting by managed care plans introduced price competition into hospital markets and began to erode the "medical arms race" model of competition (e.g., Feldman et al. 1990).

Documented effects of price competition among hospitals include lower rates of hospital cost growth and lower rates of use of costly services and technologies (e.g.,Zwanziger and Melnick 1988; Robinson 1991; Gaskin and Hadley 1997; Baker and Phibbs 2002; Heidenreich et al. 2002; Bundorf et al. 2004). Therefore, price competition, left unchecked, might be expected to adversely affect hospital quality of care by inducing hospitals to skimp on the resources used to care for patients. However, there is considerable evidence that HMOs have introduced quality competition into hospital markets as well, at least in mature managed care markets like those in California.

The first line of evidence comes from interviews and surveys of HMO executives in California, who report that they consider information on quality when choosing hospitals for contracts, even if the information is based on surrogate quality measures (Schulman et al. 1997; Rainwater and Romano 2003). Notably, Schulman et al. (1997) found that the use of quality information was infrequent in less mature managed care markets in Florida and Pennsylvania. The second line of evidence comes from studies that have assessed quality of care in hospitals used by large numbers of HMO patients compared with hospitals used by large numbers of patients with fee-for-service insurance. Escarce et al. (1999) found that HMO patients who received coronary artery bypass surgery were more likely than their peers with fee-for-service coverage to use low-mortality hospitals in California, but not in Florida.1 The final line of evidence comes from econometric studies of the determinants of contracts between HMOs and hospitals. Gaskin et al. (2002) found that HMOs are more likely to have contracts with hospitals that have low mortality rates, other things equal, while Young, Burgess, and Valley (2002) found that a variety of nonprice attributes of hospitals affect the likelihood of a contract.

This discussion suggests that the effects of hospital competition and HMO penetration on the quality of hospital care are theoretically ambiguous, especially in mature managed care markets. Although price competition might be expected to affect hospital quality adversely, quality competition would tend to offset this effect. Thus the net impact of hospital competition and HMO penetration on hospital quality depends on the relative strength of these counterbalancing influences.

DATA AND METHODS

Data and Study Sample

The main source of data for this study consisted of linked hospital discharge and vital statistics data for the state of California for 1994–1999. The hospital discharge data contain detailed information on all discharges from short-term general hospitals in California during the period in question, including admitting hospital; source of admission; patient age, sex, race/ethnicity, and zip code of residence; principal diagnosis and up to 24 secondary diagnoses; principal procedure and up to 24 secondary procedures; and type of health insurance (if any). The vital statistics data contain information on all deaths in California during the period. We obtained the linked data from the California Office of Statewide Health Planning and Development (OSHPD).

We identified adult residents of California who were admitted to hospitals in metropolitan areas from the community or from nursing homes for AMI, hip fracture (HIP), stroke (CVA), gastrointestinal hemorrhage (GIH), congestive heart failure (CHF), or diabetes mellitus (DM). We chose these six conditions because at least one study of geographic variation in hospital admission rates found that each was a low variation condition (e.g., Wennberg, McPherson, and Caper 1984; Chassin et al. 1986; McMahon, Wolf, and Tedeschi 1989; Gittelsohn and Powe 1995). In the framework developed by Wennberg (1987), low variation conditions tend to be those in which the criteria for hospitalizing patients are narrowly defined and for which there is a relatively high degree of professional consensus regarding the need for hospital admission. We wished to assess quality of care for conditions with these characteristics because hospital competition and HMO penetration may influence admission decisions for conditions in which hospital admission is discretionary. Thus, for discretionary conditions, hospital competition and HMO penetration are more likely to be correlated with unmeasured severity of illness, which may lead to biased estimates of their effects on health outcomes. Nonetheless, there are differences in the degree of consensus regarding the need for hospital admission even among the six study conditions. Whereas most clinicians agree that all patients with AMI, HIP, or CVA and most patients with GIH require hospitalization, admission decisions for CHF and DM are more discretionary. We excluded patients who lived in other states because vital statistics data are more likely to record deaths for California residents. We excluded admissions to hospitals in nonmetropolitan areas because we did not have data on HMO penetration.

We refined the study sample further in four ways. First, we excluded admissions for certain clinical variants of the study conditions to ensure more clinically homogenous samples or a high likelihood of needing hospital admission. For example, we excluded HIP admissions where the fracture was due to primary or metastatic bone cancer or to major multiple trauma because these cases are very different from typical fractures, and we excluded GIH admissions where the hemorrhage was due to esophagitis or a Mallory–Weiss tear because these cases generally do not require admission.

Second, we included only first admissions in an “episode of care,” defined as admissions where another admission for the same condition had not occurred within the preceding 90 days. For CHF, in particular, two or more admissions often occurred in close succession. Third, although we had data for the entire 1994–1999 period, we included only admissions that began between April 1, 1994 and November 30, 1999. We excluded admissions in the first quarter of 1994 so we could identify first admissions in an episode of care. We excluded admissions in December 1999 so we could assess 30-day mortality without censoring. Fourth, for each study condition, we excluded admissions to hospitals that had fewer than 25 admissions for that condition during the period of the study.
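The 90-day episode-of-care rule described above can be sketched in a few lines. This is our illustrative reconstruction, not the authors' code, and the record layout (patient ID, condition, admission date) is a hypothetical simplification:

```python
# Minimal sketch of the episode-of-care rule: keep an admission only if no
# admission for the same condition occurred in the preceding 90 days.
from datetime import date, timedelta

def first_admissions_in_episode(admissions, window_days=90):
    """admissions: iterable of (patient_id, condition, admit_date) tuples."""
    kept = []
    last_admit = {}  # (patient_id, condition) -> most recent admission date
    for pid, cond, day in sorted(admissions, key=lambda a: (a[0], a[1], a[2])):
        prev = last_admit.get((pid, cond))
        if prev is None or (day - prev) > timedelta(days=window_days):
            kept.append((pid, cond, day))
        # every admission resets the clock, even if it was not itself kept
        last_admit[(pid, cond)] = day
    return kept

admissions = [
    (1, "CHF", date(1994, 4, 10)),
    (1, "CHF", date(1994, 5, 1)),   # 21 days later: same episode, dropped
    (1, "CHF", date(1994, 9, 15)),  # 137 days later: new episode, kept
    (2, "AMI", date(1995, 1, 3)),
]
print(first_admissions_in_episode(admissions))
```

Note that an excluded admission still resets the 90-day window, which is consistent with the definition that no admission for the same condition occurred in the preceding 90 days.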

The final study sample consisted of 227,446 AMI admissions; 129,944 HIP admissions; 237,248 CVA admissions; 216,443 GIH admissions; 355,613 CHF admissions; and 154,837 DM admissions. These admissions were to 363 different hospitals located in 25 different metropolitan areas.

Other data sources used in the study were the American Hospital Association Annual Surveys of Hospitals, Medicare Cost Reports, and Medicare PPS Impact Files for 1994–1999.

Empirical Analyses

The measure of hospital quality in the study was 30-day mortality. We assessed the effect of hospital competition and HMO penetration on quality by estimating admission-level logistic regression models for each study condition with death within 30 days of admission as the dependent variable and hospital competition, HMO penetration, hospital characteristics,2 and patient severity measures as explanatory variables.

The key explanatory variables in the models were hospital competition and HMO penetration. We assessed the degree of competition facing each hospital using the predicted 75 percent and 90 percent radii for the hospital, obtained from Gresenz, Rogowski, and Escarce (2004), to define the hospital's local market area.3 After identifying all the other hospitals in each hospital's local market area, we derived the following competition measures: (1) a competition index calculated as one minus the Herfindahl index based on bed shares; (2) a competition index calculated as one minus the share of beds held by the largest three hospitals; and (3) the number of hospitals. We used one minus the Herfindahl index based on the 90 percent radius in our main analyses, and conducted sensitivity analyses using the alternate measures. HMO penetration in each metropolitan area was measured as the fraction of the population enrolled in an HMO, obtained from the InterStudy Regional Market Analysis database for 1994–1999.
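The three competition measures can be computed directly from the bed counts of the hospitals in a market area. The sketch below is our illustration of the definitions above (the bed counts are made up, and this is not the study's code):

```python
# Illustrative computation of the three competition measures for one
# hospital's local market area, given bed counts for all hospitals there.

def competition_measures(beds):
    """beds: list of bed counts for the hospitals in the market area."""
    total = sum(beds)
    shares = [b / total for b in beds]
    herfindahl = sum(s ** 2 for s in shares)            # sum of squared bed shares
    top3_share = sum(sorted(shares, reverse=True)[:3])  # share of the 3 largest hospitals
    return {
        "one_minus_herfindahl": 1 - herfindahl,
        "one_minus_top3_share": 1 - top3_share,
        "n_hospitals": len(beds),
    }

# A monopoly scores 0 on both indexes; many similar-sized rivals push
# one minus the Herfindahl index toward 1.
print(competition_measures([300]))                      # monopoly market
print(competition_measures([200, 200, 150, 100, 100]))  # five-hospital market
```

Both indexes run from 0 (monopoly) toward 1 as the market fragments, which is why a one-unit difference in the index corresponds to the gap between a monopoly and a market with many roughly equal competitors.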

The hospital characteristics included in the models were teaching status, categorized as none, minor, or major based on the intern- and resident-to-bed ratio4; ownership, categorized as public (i.e., city or county-owned), private nonprofit, or private for-profit5; bed size, categorized as less than 100 beds, 100–199 beds, 200–399 beds, or 400 or more beds; and private high disproportionate share (DSH) status, defined as private hospitals in the upper quartile of the distribution of Medicare DSH percentage.6 We used the DSH percentage to identify private hospitals that played a strong safety net role.

To control for differences in patient severity, we also included a variety of severity measures as covariates in the regression models, including patient age, sex, whether the patient was admitted from a nursing home, chronic comorbidities, and a set of condition-specific measures for each study condition. The chronic comorbidities used in the analyses were the conditions identified by Iezzoni et al. (1994) as conditions that are nearly always present before hospital admission; hence they are extremely unlikely to represent complications due to poor care. They included primary cancer with a poor prognosis, metastatic cancer, chronic pulmonary disease, coronary artery disease, CHF, peripheral vascular disease, severe chronic liver disease, diabetes mellitus with end-organ damage, chronic renal failure, nutritional deficiencies, dementia, and functional impairment.7 Examples of the condition-specific measures for AMI include indicators for the location of the infarction and for the presence of complete heart block; for CVA, indicators for hemorrhagic stroke and for different types of ischemic stroke; for GIH, indicators for the source of the bleeding, such as esophageal varices, different types of peptic ulcer with or without perforation or obstruction, arteriovenous malformations, and diverticulosis; and for DM, indicators for the type of diabetes (type 1 or type 2), for the presence of ketoacidosis and nonketotic coma, and for different end-organ complications (see Appendix A for a full list).8 Finally, the models included indicator variables for year of admission.

Standard errors were corrected for clustering of admissions within hospitals using a Huber–White sandwich estimator.
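The estimation strategy, an admission-level logit with hospital-clustered standard errors, can be sketched as follows. This is a hedged illustration on simulated data, not the authors' code; all variable names and the data-generating process are our assumptions:

```python
# Sketch: logit of 30-day death on competition, HMO penetration, and
# covariates, with hospital-clustered (Huber-White sandwich) standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20000
df = pd.DataFrame({
    "hospital_id": rng.integers(0, 100, n),
    "competition": rng.uniform(0.0, 1.0, n),  # 1 - Herfindahl, 90% radius
    "hmo_pen": rng.uniform(0.2, 0.6, n),      # metro-area HMO penetration
    "age": rng.integers(20, 95, n),
    "female": rng.integers(0, 2, n),
})
# simulate a protective competition effect (log-odds coefficient of -0.5)
true_logit = -3.0 + 0.03 * (df["age"] - 60) - 0.5 * df["competition"]
df["death30"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

res = smf.logit(
    "death30 ~ competition + hmo_pen + age + female", data=df
).fit(disp=0, cov_type="cluster", cov_kwds={"groups": df["hospital_id"]})
print(np.exp(res.params))  # coefficients exponentiated into odds ratios
```

Clustering changes the standard errors, not the point estimates; it accounts for the fact that outcomes of admissions to the same hospital are not independent.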

RESULTS

Descriptive Data

Table 2 reports the characteristics of the study sample for each study condition. HIP patients were older, more likely to be female, and more likely to have been admitted from a nursing home than patients with the other conditions. Patients with diabetes were the youngest. Thirty-day mortality rates ranged from a high of 16.0 percent for CVA to a low of 3.3 percent for DM.

Table 2.

Descriptive Data: Patient Characteristics

| Characteristic | AMI (%) | HIP (%) | CVA (%) | GIH (%) | CHF (%) | DM (%) |
|---|---|---|---|---|---|---|
| Age 20–44 | 4.9 | 2.2 | 3.3 | 12.6 | 3.8 | 25.7 |
| Age 45–54 | 12.9 | 2.3 | 6.6 | 11.0 | 6.4 | 17.3 |
| Age 55–59 | 8.6 | 1.8 | 5.2 | 5.8 | 5.2 | 9.2 |
| Age 60–64 | 9.8 | 2.8 | 7.1 | 6.7 | 7.3 | 9.5 |
| Age 65–69 | 12.3 | 5.3 | 10.6 | 9.3 | 10.6 | 10.1 |
| Age 70–74 | 14.4 | 10.3 | 14.9 | 12.4 | 14.5 | 10.0 |
| Age 75–79 | 14.0 | 16.6 | 17.5 | 14.0 | 16.4 | 8.6 |
| Age 80–84 | 11.5 | 21.7 | 16.4 | 13.0 | 15.8 | 5.7 |
| Age 85+ | 11.5 | 36.9 | 18.5 | 15.3 | 20.1 | 4.1 |
| Female | 38.8 | 73.1 | 54.6 | 46.5 | 53.8 | 48.3 |
| Comorbidity: coronary artery disease | | 17.3 | 20.7 | 16.8 | 45.3 | 16.5 |
| Comorbidity: congestive heart failure | | 13.7 | 14.9 | 11.9 | | 12.8 |
| Comorbidity: metastatic cancer | 0.7 | 0.7 | 1.2 | 2.0 | 1.1 | 0.8 |
| Comorbidity: chronic pulmonary disease | 15.4 | 17.1 | 11.0 | 12.8 | 26.1 | 7.4 |
| Comorbidity: cancer with a poor prognosis | 0.7 | 1.1 | 1.0 | 1.6 | 1.0 | 0.7 |
| Comorbidity: chronic renal failure | 1.7 | 1.0 | 1.2 | 2.3 | 4.6 | 6.4 |
| Comorbidity: dementia | 3.4 | 18.4 | 8.6 | 5.4 | 4.7 | 3.3 |
| Comorbidity: diabetes with end-organ damage | 3.8 | 1.8 | 4.1 | 2.8 | 7.9 | |
| Comorbidity: functional impairment | 3.8 | 6.4 | 41.3 | 4.8 | 4.4 | 4.3 |
| Comorbidity: nutritional deficiencies | 1.0 | 2.5 | 1.8 | 2.9 | 2.0 | 2.2 |
| Comorbidity: peripheral vascular disease | 5.9 | 3.5 | 4.6 | 3.2 | 6.6 | 11.0 |
| Comorbidity: severe chronic liver disease | 0.3 | 0.7 | 0.5 | 5.0 | 0.9 | 1.1 |
| Admission from nursing home | 2.2 | 10.9 | 4.7 | 5.4 | 4.1 | 3.2 |
| 30-day mortality | 13.2 | 6.3 | 16.0 | 5.7 | 8.5 | 3.3 |
| N | 227,446 | 129,944 | 237,248 | 216,443 | 355,613 | 154,837 |

Note: The lowest age included in the analyses for AMI was 25 years; the lowest age for DM was 15 years.

AMI, acute myocardial infarction; HIP, hip fracture; CVA, stroke; GIH, gastrointestinal hemorrhage; CHF, congestive heart failure; DM, diabetes mellitus.

Table 3 reports the characteristics of the 363 hospitals that contributed admissions to the study sample. The competition index calculated as one minus the Herfindahl index averaged 0.79 for the 90 percent radius and 0.61 for the 75 percent radius, whereas the competition index calculated as one minus the share of beds held by the largest three hospitals averaged 0.47 for the 90 percent radius and 0.26 for the 75 percent radius. The hospitals had, on average, 19.8 other hospitals in their 90 percent radius and 7.3 in their 75 percent radius. The average HMO penetration in the metropolitan areas where the hospitals were located was 0.43.

Table 3.

Descriptive Data: Hospital Competition, HMO Penetration, and Other Hospital Characteristics

| | Mean (SD) or % |
|---|---|
| Hospital competition, 90% radius: 1 − Herfindahl index | 0.79 (0.25) |
| Hospital competition, 90% radius: 1 − market share of top 3 hospitals | 0.47 (0.30) |
| Hospital competition, 90% radius: number of hospitals | 19.8 (25.6) |
| Hospital competition, 75% radius: 1 − Herfindahl index | 0.61 (0.33) |
| Hospital competition, 75% radius: 1 − market share of top 3 hospitals | 0.26 (0.27) |
| Hospital competition, 75% radius: number of hospitals | 7.3 (11.1) |
| HMO penetration | 0.43 (0.13) |
| Ownership: nonprofit (%) | 67.1 |
| Ownership: public (%) | 5.9 |
| Ownership: for-profit (%) | 27.0 |
| Teaching: none (%) | 70.3 |
| Teaching: minor (%) | 22.1 |
| Teaching: major (%) | 7.6 |
| Bed size: <100 beds (%) | 25.5 |
| Bed size: 100–199 beds (%) | 36.0 |
| Bed size: 200–399 beds (%) | 32.1 |
| Bed size: 400+ beds (%) | 6.4 |
| Private high DSH (%) | 19.9 |

Note: HMO, Health Maintenance Organization; DSH, Disproportionate Share Hospital.

Two-thirds of the hospitals were nonprofit, 27 percent were for-profit, and 6 percent were public. Two-thirds of the hospitals had between 100 and 400 beds, while one-fourth were small facilities with fewer than 100 beds and only 6 percent were large facilities with 400 or more beds. Nearly one-third of the hospitals had teaching programs, but three-fourths of the hospitals with teaching programs were categorized as minor teaching hospitals. One-fifth of the hospitals were private hospitals with a high DSH percentage.

Regression Results

Table 4 reports our main findings regarding the effects of hospital competition, HMO penetration, and hospital characteristics on 30-day mortality for the study conditions. We found that higher hospital competition significantly reduced mortality for three conditions: HIP (odds ratio [OR] = 0.74, p < 0.01), CVA (OR = 0.67, p < 0.001), and GIH (OR = 0.82, p < 0.05). The point estimates for the other three conditions suggested protective effects of competition as well, but these estimates did not achieve statistical significance.9 Higher HMO penetration reduced mortality for GIH (OR = 0.75, p < 0.05) and CHF (OR = 0.77, p < 0.01).

Table 4.

Regression Results: Effects of Hospital Competition, HMO Penetration, and Other Hospital Characteristics on 30-Day Mortality for Six Medical Conditions

Odds Ratio

| Explanatory Variable | AMI | HIP | CVA | GIH | CHF | DM |
|---|---|---|---|---|---|---|
| Hospital competition (1 − Herfindahl, 90% radius) | 0.92 | 0.74** | 0.67*** | 0.82* | 0.88 | 0.81 |
| HMO penetration | 0.90 | 0.86 | 0.98 | 0.75* | 0.77** | 0.93 |
| Ownership: nonprofit (excluded) | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Ownership: public | 1.45*** | 0.96 | 1.03 | 1.08 | 0.82** | 0.66*** |
| Ownership: for-profit | 1.08* | 1.03 | 0.92* | 0.98 | 0.93* | 0.87* |
| Teaching status: no (excluded) | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Teaching status: minor | 0.98 | 1.07 | 1.05 | 1.00 | 1.02 | 1.11* |
| Teaching status: major | 0.91 | 1.06 | 1.17* | 1.21** | 0.99 | 1.19* |
| Bed size: <100 beds (excluded) | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Bed size: 100–199 beds | 0.97 | 1.01 | 0.99 | 1.05 | 1.03 | 1.00 |
| Bed size: 200–399 beds | 0.93 | 1.02 | 0.91 | 1.05 | 0.99 | 0.90 |
| Bed size: 400+ beds | 0.89* | 0.95 | 0.87 | 0.94 | 0.84** | 0.71*** |
| Private high DSH | 1.09** | 0.94 | 0.88** | 0.95 | 0.80*** | 0.83** |

Notes: Statistical significance is indicated as follows: *p < 0.05; **p < 0.01; ***p < 0.001.

The regression models included patient severity measures and year indicators.

The odds ratios for hospital competition are for a one-unit difference in the competition index, which corresponds to the difference between a monopoly market and a market with a very large number of competitors having more or less equal market shares.

The odds ratios for HMO penetration are for a one-unit difference in penetration, which corresponds to the difference between a market with zero HMO enrollment and a market where everyone is enrolled in an HMO.

AMI, acute myocardial infarction; HIP, hip fracture; CVA, stroke; GIH, gastrointestinal hemorrhage; CHF, congestive heart failure; DM, diabetes mellitus; DSH, Disproportionate Share Hospital; HMO, Health Maintenance Organization.

The ORs summarized in the preceding paragraph are consistent with clinically significant effects of hospital competition on mortality. For example, other things equal, a California hospital at the 10th percentile of the competition index (one minus the Herfindahl index based on the 90 percent radius) had a 30-day mortality rate of 7.1 percent for HIP, compared with 6.2 percent in a hospital at the 90th percentile of the index. The corresponding figures for other conditions were 18.4 and 15.6 percent for CVA and 6.2 and 5.6 percent for GIH.
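The arithmetic behind figures of this kind can be sketched as follows: the odds ratio, raised to the difference in the competition index, is applied to the baseline odds. The OR of 0.74 and the 7.1 percent baseline are the HIP figures above; the 0.48 spread between the 10th and 90th percentiles of the index is our illustrative assumption:

```python
# Translate an odds ratio into a mortality rate at a different level of the
# competition index: multiply the baseline odds by OR ** index_difference.

def rate_at(baseline_rate, odds_ratio, index_diff):
    odds = baseline_rate / (1.0 - baseline_rate) * odds_ratio ** index_diff
    return odds / (1.0 + odds)

# HIP: 7.1% at low competition, OR = 0.74, assumed index spread of 0.48
p_high_competition = rate_at(0.071, 0.74, 0.48)
print(round(100 * p_high_competition, 1))  # percent; close to the 6.2 reported for HIP
```

Working on the odds scale rather than the probability scale is what makes the OR applicable at any baseline rate.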

Because the estimated effects of hospital competition and HMO penetration were qualitatively similar for the six study conditions (Table 4), we estimated a model where we pooled all the conditions and included interactions between condition and all the explanatory variables except competition and penetration. These analyses found protective effects of both hospital competition (OR = 0.81, p < 0.001) and HMO penetration (OR = 0.86, p < 0.05). Additionally, to assess whether the protective effect of hospital competition was greater in metropolitan areas with high HMO penetration, we estimated models that included an interaction between hospital competition and an indicator for metropolitan areas in the top half of the distribution of HMO penetration. These analyses found a significantly greater protective effect of hospital competition in high penetration areas for HIP, CVA, and CHF.

Other hospital characteristics were also associated with mortality for medical conditions. Public hospitals had higher mortality than private nonprofit hospitals for AMI, but lower mortality for CHF and DM. Major teaching hospitals had higher mortality than nonteaching hospitals for CVA, GIH, and DM. The largest hospitals (over 400 beds) had lower mortality than smaller facilities for AMI, CHF, and DM. Last, private high DSH hospitals had higher mortality than other private hospitals for AMI, but lower mortality for CVA, CHF, and DM.

Alternate Measures of Hospital Competition

We explored the robustness of our findings for hospital competition using alternate competition measures, including the two competition indexes and the number of hospitals based on 90 and 75 percent radii. As shown in Table 5, the finding of a protective effect of hospital competition was quite robust. In fact, the evidence for a protective effect was even stronger when we used the competition index calculated as one minus the share of beds held by the top three hospitals or the number of hospitals than when we used the competition index calculated as one minus the Herfindahl index. Notably, AMI was the only condition for which we did not find a protective effect of competition. The estimated effects of HMO penetration did not change as we varied the competition measure.

Table 5.

Regression Results: Effects of Hospital Competition Using Alternate Measures of Competition

Odds Ratio

| Competition Measure | AMI | HIP | CVA | GIH | CHF | DM |
|---|---|---|---|---|---|---|
| 90% radius: 1 − Herfindahl index | 0.92 | 0.74** | 0.67*** | 0.82* | 0.88 | 0.81 |
| 90% radius: 1 − market share of top 3 | 0.95 | 0.81** | 0.70*** | 0.85* | 0.82*** | 0.82** |
| 90% radius: number of hospitals | 0.99 | 0.97** | 0.95*** | 0.98** | 0.97*** | 0.97** |
| 75% radius: 1 − Herfindahl index | 1.02 | 0.86* | 0.75*** | 0.94 | 0.90* | 0.93 |
| 75% radius: 1 − market share of top 3 | 0.93 | 0.80** | 0.66*** | 0.86* | 0.77*** | 0.80* |
| 75% radius: number of hospitals | 0.99 | 0.96** | 0.92*** | 0.97** | 0.95*** | 0.96** |

Notes: Statistical significance is indicated as follows: *p < 0.05; **p < 0.01; ***p < 0.001.

The regression models included HMO penetration, hospital ownership, teaching status, bed size, private high DSH, patient severity measures, and year indicators.

The odds ratios for hospital competition are for a one-unit difference in the competition index, which corresponds to the difference between a monopoly market and a market with a very large number of competitors having more or less equal market shares.

AMI, acute myocardial infarction; HIP, hip fracture; CVA, stroke; GIH, gastrointestinal hemorrhage; CHF, congestive heart failure; DM, diabetes mellitus; DSH, Disproportionate Share Hospital; HMO, Health Maintenance Organization.

Additional Sensitivity Analyses

We conducted additional sensitivity analyses to further assess the robustness of our results for hospital competition. First, we reestimated the regression models in our main analyses excluding the indicator variable for private hospitals with a high DSH percentage. The results in Table 4 were unchanged. Second, because there are only 25 metropolitan areas in California, we estimated models for 30-day mortality where we replaced the HMO penetration variable with metropolitan-area fixed effects. We found that higher hospital competition, measured as one minus the Herfindahl index based on the 90 percent radius, significantly reduced mortality for HIP (OR = 0.70, p < 0.001), CVA (OR = 0.70, p < 0.001), GIH (OR = 0.81, p < 0.05), and DM (OR = 0.69, p < 0.001).

Third, we estimated models with 90- and 180-day mortality, rather than 30-day mortality, as the outcome.10 As Table 6 shows, the effects of hospital competition and HMO penetration on 90- and 180-day mortality were similar to their effects on 30-day mortality, although the effects of competition were slightly attenuated.

Table 6. Regression Results: Effects of Hospital Competition and HMO Penetration on 90- and 180-Day Mortality for Six Medical Conditions (Odds Ratios)

Outcome/Competition Measure AMI HIP CVA GIH CHF DM
90-day mortality
Hospital competition (1 - Herfindahl, 90% radius) 0.92 0.83* 0.71*** 0.88 0.90 0.83
HMO penetration 0.91 0.92 1.02 0.80* 0.77** 1.02
180-day mortality
Hospital competition (1 - Herfindahl, 90% radius) 0.95 0.85* 0.73*** 0.92 0.94 0.90
HMO penetration 0.88 0.89 1.06 0.77** 0.83** 1.03

Notes: *p < 0.05; **p < 0.01; ***p < 0.001.

The regression models included hospital ownership, teaching status, bed size, private high DSH, patient severity measures, and year indicators.

AMI, acute myocardial infarction; HIP, hip fracture; CVA, stroke; GIH, gastrointestinal hemorrhage; CHF, congestive heart failure; DM, diabetes mellitus; HMO, Health Maintenance Organization.

Exploring the Influence of Unobserved Severity of Illness

Hospital discharge data have been criticized for lacking the clinical detail needed to capture illness severity well enough to assess hospital quality of care (e.g., Hannan et al. 1992; Pine et al. 1997). We used two indirect approaches to examine whether our findings for hospital competition might be due to higher unobserved illness severity among patients admitted to hospitals facing a low degree of competition.

First, we reestimated the models in our main analyses including an indicator for uninsured patients and an indicator for patients with insurance coverage from Medicaid or a county indigent program under the rationale that these categories may capture unobserved dimensions of health status (e.g., Hadley 2003; Parkerson et al. 2005). The results in Table 4 were unchanged.

Second, we reasoned that unobserved severity of illness was likely to be directly correlated with observed severity as captured by the severity measures in our models. In support of this view, Pine et al. (1997) found that regression models based on measures similar to ours from discharge data consistently underpredicted mortality for the sickest hospitalized patients with AMI, CVA, CHF, and pneumonia, and that adding laboratory data to the models corrected the underpredictions. Therefore, we divided the 363 hospitals in our study into high- and low-competition groups using the median value of one minus the Herfindahl index, and for each study condition we compared the two groups' predicted 30-day mortality, derived from logistic models that included only the severity measures as explanatory variables. We found that the two groups had nearly identical predicted mortality for every condition, indicating that observed illness severity did not differ systematically between high-competition and low-competition hospitals and suggesting that unobserved severity was unlikely to have differed either.
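The mechanics of this check can be sketched with synthetic, purely illustrative numbers (the paper used condition-specific logistic models fit to real discharge data; the variable names and distributions below are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical hospital-level inputs: a competition index (one minus the
# Herfindahl index) and severity-only predicted 30-day mortality for
# 363 hospitals, mimicking the study's sample size.
n_hospitals = 363
competition = rng.uniform(0.0, 1.0, n_hospitals)
predicted_mortality = rng.normal(0.12, 0.02, n_hospitals)

# Split hospitals at the median competition value, then compare the two
# groups' mean predicted mortality; similar means suggest observed
# severity does not vary systematically with competition.
median = np.median(competition)
high_mean = predicted_mortality[competition > median].mean()
low_mean = predicted_mortality[competition <= median].mean()
```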

Thirty-Day versus In-Hospital Mortality

Finally, we assessed whether using in-hospital mortality as the measure of hospital quality leads to biased estimates of the effects of hospital competition and HMO penetration on quality. To do so, we reestimated the models in our main analyses using in-hospital mortality, rather than 30-day mortality, as the dependent variable and compared the results with those in Table 4. The proportion of all deaths within 30 days that occurred during the hospital admission was 72.1 percent for AMI, 36.5 percent for HIP, 58.4 percent for CVA, 49.4 percent for GIH, 45.8 percent for CHF, and 43.4 percent for DM. The effects of hospital competition were similar in the models for in-hospital mortality and the models for 30-day mortality (Table 4), with ORs slightly higher for some conditions and slightly lower for others. However, the protective effects of HMO penetration were consistently greater in the models for in-hospital mortality. The largest differences in the effect of HMO penetration occurred for CHF (OR = 0.77, p < 0.01 with 30-day mortality; OR = 0.63, p < 0.001 with in-hospital mortality) and HIP (OR = 0.82, p > 0.05 with 30-day mortality; OR = 0.59, p < 0.01 with in-hospital mortality). These findings support the notion that using in-hospital mortality as the quality measure results in a bias toward finding higher quality in market areas with high HMO penetration.

CONCLUSION

This study examined the effects of hospital competition and HMO penetration on the quality of hospital care for six medical conditions in California. We found that hospitals that faced a higher degree of competition generally had lower mortality rates within 30 days of hospital admission. In our main analyses using a Herfindahl index to measure hospital competition, higher competition led to lower mortality for three conditions: HIP, CVA, and GIH. In sensitivity analyses that used alternate competition measures, higher competition led to lower mortality for CHF and diabetes as well. Our findings regarding the effect of hospital competition were robust to a wide range of sensitivity analyses in which we varied the explanatory variables in the regression models. Further, analyses using 90- or 180-day mortality, rather than 30-day mortality, as the outcome also found beneficial effects of hospital competition, although these effects were attenuated. Attenuation of the effects of competition on mortality over longer time intervals after admission is not surprising, since longer-term mortality is less influenced by the quality of inpatient hospital care and more influenced by postdischarge care and the natural history of the underlying condition.

Our study also found that higher HMO penetration led to lower mortality for GIH and CHF. However, we interpret our findings for HMO penetration with caution because penetration was measured at the level of metropolitan areas and California has only 25 metropolitan areas. Nonetheless, analyses that included an interaction between hospital competition and HMO penetration found that the salutary effect of competition on mortality was stronger in high-penetration areas, consistent with earlier findings for Medicare patients with acute myocardial infarction (Kessler and McClellan 2000). Pooled analyses of the six study conditions also found a beneficial effect of both hospital competition and HMO penetration on 30-day mortality.

The results of this study are consistent with the thesis that hospitals compete on "true" quality of care (i.e., processes and outcomes of care) rather than just on price or amenities, at least in a state like California where managed care is prevalent and managed care markets are mature. Thus our findings imply that the reductions in hospital costs and resource use that have resulted from price competition induced by HMOs and other managed care plans in California have not led to a deterioration in the quality of hospital care. Although our analyses were not designed to shed light on the mechanisms by which hospitals facing a high degree of competition improved quality, our failure to find an effect of hospital competition on mortality for patients with acute myocardial infarction suggests that changes in care processes may have played a role. Romano and Mutter (2004) have argued that hospitals may be relatively uninterested in improving the care of patients with acute myocardial infarction in response to competition, because these patients typically receive care at the nearest hospital with an open emergency department. Consequently, there is little opportunity for HMOs' contracting decisions to influence patient choice of hospital.

Additional findings of our study included higher 30-day mortality for acute myocardial infarction in public hospitals; lower mortality for CHF and diabetes in public hospitals; higher mortality for stroke, GIH, and diabetes in major teaching hospitals; and lower mortality for myocardial infarction, CHF, and diabetes in large hospitals with more than 400 beds. Because the finding of lower mortality for two conditions in public hospitals was unexpected (e.g., Kuhn et al. 1994; Shapiro et al. 1994), we compared the characteristics of patients admitted to public and private hospitals. We found that patients admitted to public hospitals were less severely ill than those admitted to private hospitals; specifically, patients admitted to public hospitals were much younger, had fewer comorbidities, and had substantially lower predicted probabilities of death for all the study conditions. If our severity measures failed to capture all the differences in illness severity between public and private hospital patients, the finding of lower mortality for CHF and diabetes in public hospitals could reflect unobserved differences in severity.

The finding that major teaching hospitals had higher mortality for three conditions was also unexpected. Several studies have reported better quality of care and lower mortality in major teaching hospitals compared with other hospitals (e.g., Keeler et al. 1992; Rosenthal et al. 1997; Ayanian and Weissman 2002). On the other hand, reports on hospital quality of care for myocardial infarction and pneumonia in California developed by OSHPD are not consistent with the published literature. For myocardial infarction, major teaching hospitals in California were more likely than other hospitals in metropolitan areas to be both low-mortality and high-mortality outliers (OSHPD 2002). For pneumonia, major teaching hospitals were less likely than other hospitals to be low-mortality outliers but more likely to be high-mortality outliers (OSHPD 2004). In our study, major teaching status may be partially confounded with hospital bed size, since most major teaching hospitals are large. When we repeated our analyses excluding hospital bed size as an explanatory variable, major teaching status was no longer associated with 30-day mortality.

A noteworthy strength of our study is that we employed linked hospital discharge and vital statistics data, which enabled us to use 30-day mortality, rather than in-hospital mortality, as the measure of quality of care for all patients. Previous studies that used mortality within a fixed time interval after hospital admission as the quality measure were limited to Medicare patients (e.g., Kessler and McClellan 2000; Mukamel, Zwanziger, and Tomaszewski 2001; Shen 2003). Using in-hospital mortality to assess quality may lead to biased estimates of the effects of market structure on quality because market structure may affect hospital length of stay and, consequently, the likelihood of dying in the hospital (e.g., Baker et al. 2002). In fact, we found that using in-hospital mortality leads to a bias toward finding stronger protective effects of HMO penetration. An additional strength is that we examined mortality for six medical conditions that vary in the degree of professional consensus regarding the need for hospitalization, including several for which there is a great deal of consensus. Further, our focus on a small number of conditions enabled us to include a variety of carefully selected, condition-specific severity measures in our regression models.

Our study also has several limitations. First, despite our use of multiple measures to assess patient severity, discharge data are inherently limited in their ability to capture severity, since they lack clinical detail such as laboratory and physiologic data (e.g., Pine et al. 1997). We used indirect approaches to address the concern that unobserved differences in patient severity between hospitals that faced high and low levels of competition might be responsible for our findings. These analyses suggested that patients admitted to high- and low-competition hospitals were unlikely to have differed systematically in unobserved severity.

Second, radius-based measures of hospital competition have been criticized because all hospitals inside the radius count equally whereas hospitals just outside the radius do not count at all. We addressed this concern by using competition measures based on two different radii—the 75 and 90 percent radii—which in practice led to sizable differences in the number of hospitals that contributed to the competition measures. As noted earlier, our results did not change. Radius-based competition measures have also been criticized for being endogenous, but we addressed this concern by using predicted, rather than observed, radii.

Third, we identified the effects of competition on hospital quality using cross-sectional differences in hospital quality. The resulting estimates could differ from estimates based on changes in hospital competition over time.

Fourth, our analyses were based on California and may not be generalizable to other states. Studies of the information that HMOs use to select hospitals for contracts and studies of the quality of care in hospitals used by HMO patients compared with fee-for-service patients suggest that HMOs in California may be more successful than HMOs in other states at channeling their members to high-quality hospitals (Schulman et al. 1997; Escarce et al. 1999; Erickson et al. 2000). Unfortunately, studies that have directly assessed the effect of hospital competition (or HMO penetration) on hospital quality have not examined differences across states (Kessler and McClellan 2000; Mukamel, Zwanziger, and Tomaszewski 2001; Sari 2002; Gowrisankaran and Town 2003; Shen 2003).

This study offers robust evidence that California hospitals that faced more competition provided higher quality of care for a range of medical conditions. The findings of the study also suggest that higher HMO penetration leads to better quality and that the salutary effects of hospital competition on quality are stronger in high penetration markets, but the evidence for these effects is weaker. Additional studies using linked hospital discharge and vital statistics data from other states would help determine whether our findings are generalizable. Researchers and policymakers should also be attentive to the possibility of an adverse effect on hospital quality from the “backlash” against managed care.

Acknowledgments

This research was funded by grant number P01 HS10770-01 from the Agency for Healthcare Research and Quality. We would like to thank Randy Hirscher and Jill Gurvey for expert programming assistance, Elaine Quiter for project management, and Kate Lee for administrative assistance.

NOTES

1. Similar to Escarce et al.'s (1999) findings for Florida, Erickson et al. (2000) found that patients in New York with managed care insurance were less likely than fee-for-service patients to use low-mortality hospitals for coronary artery bypass surgery.

2. We used the characteristics of the hospital to which the patient was initially admitted even if the patient was subsequently transferred to a different hospital (e.g., Kessler and McClellan 2000; Gowrisankaran and Town 2003). Transfer rates were low for all study conditions except AMI (AMI, 22.0 percent; HIP, 1.6 percent; CVA, 3.4 percent; GIH, 1.6 percent; CHF, 3.4 percent; and DM, 1.4 percent). In effect, our approach holds the admitting hospital responsible for making transfers that would improve patient outcomes.

3. Briefly, we used hospital discharge data for nine states in 1997 to determine, for each short-term general hospital in those states, the distance from the hospital to patient zip codes required to account for 75 and 90 percent of the admissions to the hospital. We then developed a regression model for the 75 and 90 percent radii as functions of hospital and market area characteristics, and we used the estimated coefficients to predict the radii for every metropolitan hospital in the United States (for details, see Gresenz, Rogowski, and Escarce [2004]).
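The idea of a patient-flow radius can be sketched as follows. This is a simplified illustration of the first step only, computing the distance that covers a given share of a hospital's admissions; the paper then predicted radii from hospital and market characteristics rather than using these observed values directly:

```python
import numpy as np

def admission_radius(distances, coverage=0.90):
    """Distance from the hospital that accounts for a given share of its
    admissions. `distances` holds one entry per admission (patient zip
    code to hospital); names and data here are illustrative."""
    d = np.sort(np.asarray(distances, dtype=float))
    k = int(np.ceil(coverage * len(d)))   # admissions needed for coverage
    return d[k - 1]

# 10 admissions at increasing distances: the 90% radius covers 9 of them,
# so one distant patient does not inflate the market area.
admission_radius([1, 2, 3, 4, 5, 6, 7, 8, 9, 50], coverage=0.90)  # -> 9.0
```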

4. Teaching hospitals were those with any interns or residents, and major teaching hospitals were those with more than 0.25 interns and residents per bed. Data on interns and residents were obtained from Medicare Cost Reports.

5. District hospitals in California are tax supported, but they resemble private nonprofit hospitals in most other ways. Because there are few district hospitals in metropolitan areas, we included them in the nonprofit category.

6. Under Medicare's Prospective Payment System, the DSH percentage is defined as the sum of two ratios: Medicare Part A SSI patient days to total Medicare patient days, and Medicaid patient days to total patient days in the hospital. Data on the DSH percentage were obtained from Medicare PPS Impact Files.
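The definition in the note is simple arithmetic; a minimal sketch, with argument names of our choosing rather than Medicare's official field names:

```python
def dsh_percentage(ssi_days, medicare_days, medicaid_days, total_days):
    """Medicare PPS DSH percentage: the SSI share of Medicare patient
    days plus the Medicaid share of all patient days."""
    return ssi_days / medicare_days + medicaid_days / total_days

# e.g., 200 SSI days of 1,000 Medicare days (0.2) plus 3,000 Medicaid
# days of 10,000 total days (0.3):
dsh_percentage(200, 1000, 3000, 10000)  # -> 0.5
```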

7. Elixhauser et al. (1998) developed a more comprehensive list of comorbidities. However, this list includes conditions, such as hypertension, paralysis, obesity, and uncomplicated diabetes, that have been found to be underreported in hospital discharge data, especially for patients who die (Iezzoni et al. 1992, 1994; Romano and Mark 1994). We updated the diagnosis codes used to define each comorbidity to ensure that new codes developed since the work of Iezzoni et al. (1994) were included, as appropriate.

8. To test the performance of these severity measures, we estimated logistic regression models for each study condition using death within 30 days as the dependent variable and the severity measures alone as explanatory variables. The c-statistics for these models ranged from 0.75 to 0.81 (with the exception of 0.70 for CHF), indicating excellent discrimination. Additionally, comparisons of predicted and observed mortality across deciles of risk showed good calibration.
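The c-statistic reported in this note is the probability that a randomly chosen death received a higher predicted risk than a randomly chosen survivor. A minimal pairwise-concordance sketch (the toy labels and probabilities are ours):

```python
import numpy as np

def c_statistic(y, p):
    """Concordance (c) statistic: among all pairs of one death (y=1) and
    one survivor (y=0), the share where the death received the higher
    predicted probability; ties count one-half."""
    y = np.asarray(y)
    p = np.asarray(p, dtype=float)
    pos, neg = p[y == 1], p[y == 0]
    diffs = pos[:, None] - neg[None, :]   # all death-survivor pairs
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size

# Perfectly separated predictions give c = 1.0; 0.5 is chance level.
c_statistic([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])  # -> 1.0
```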

9. Kessler and McClellan (2000) and Gowrisankaran and Town (2003) have argued that measures of hospital competition based on patient flows may be endogenous to hospital quality, since hospitals with higher quality may draw patients from longer distances. We based our competition measures on predicted rather than observed radii, as described earlier, in order to avoid endogeneity, and we tested whether this strategy worked by estimating additional logistic models for 30-day mortality that included the hospital's predicted radius as an explanatory variable. The coefficient of the predicted radius was zero for all the study conditions. Moreover, none of the coefficients of hospital competition changed in magnitude, and the statistical significance of the findings for competition was strengthened. We also reestimated our models using the Kessler–McClellan approach to developing measures of hospital competition. We obtained similar point estimates for the effects of competition for CHF (OR = 0.76, p < 0.01), HIP (OR = 0.81, p > 0.10), CVA (OR = 0.72, p < 0.01), and DM (OR = 0.89, p > 0.10). The point estimates differed for GIH (OR = 1.06, p > 0.10) and AMI (OR = 1.25, p < 0.05), although only the estimate for AMI reached statistical significance.

10. We limited the samples in the analyses of 90- and 180-day mortality to admissions that began by September 30, 1999 and June 30, 1999, respectively, so we could assess the outcomes without censoring. Ninety- and 180-day mortality rates were as follows: AMI, 16.5 and 19.2 percent; HIP, 11.9 and 16.1 percent; CVA, 21.2 and 25.0 percent; GIH, 9.7 and 13.2 percent; CHF, 15.5 and 22.1 percent; and DM, 6.8 and 10.1 percent.

Supplementary material

APPENDIX A. Condition-Specific Severity Measures (hesr0042-0682-s1.pdf, 12.9 KB)

REFERENCES

  1. Ayanian JZ, Weissman JS. Teaching Hospitals and Quality of Care: A Review of the Literature. Milbank Quarterly. 2002;80(3):569–93. doi: 10.1111/1468-0009.00023.
  2. Baker DW, Einstadter D, Thomas CL, Husak SS, Gordon NH, Cebul RD. Mortality Trends during a Program that Publicly Reported Hospital Performance. Medical Care. 2002;40(10):879–90. doi: 10.1097/00005650-200210000-00006.
  3. Baker LC, Phibbs CS. Managed Care, Technology Adoption, and Health Care: The Adoption of Neonatal Intensive Care. RAND Journal of Economics. 2002;33(3):524–48.
  4. Bundorf MK, Schulman KA, Stafford JA, Gaskin D, Jollis JG, Escarce JJ. Impact of Managed Care on the Treatment, Costs, and Outcomes of Fee-for-Service Medicare Patients with Acute Myocardial Infarction. Health Services Research. 2004;39(1):131–52. doi: 10.1111/j.1475-6773.2004.00219.x.
  5. Chassin MR, Brook RH, Park RE, Keesey J, Fink A, Kosecoff J, Kahn K, Merrick N, Solomon DH. Variations in the Use of Medical and Surgical Services by the Medicare Population. New England Journal of Medicine. 1986;314(5):285–90. doi: 10.1056/NEJM198601303140505.
  6. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity Measures for Use with Administrative Data. Medical Care. 1998;36(1):8–27. doi: 10.1097/00005650-199801000-00004.
  7. Erickson LC, Torchiana DF, Schneider EC, Newburger JW, Hannan EL. The Relationship between Managed Care Insurance and Use of Lower-Mortality Hospitals for CABG Surgery. Journal of the American Medical Association. 2000;283(15):1976–82. doi: 10.1001/jama.283.15.1976.
  8. Escarce JJ, Van Horn RL, Pauly MV, Williams SV, Shea JA, Chen W. Health Maintenance Organizations and Hospital Quality for Coronary Artery Bypass Surgery. Medical Care Research and Review. 1999;56(3):340–62. doi: 10.1177/107755879905600304.
  9. Feldman R, Chan HC, Kralewski J, Dowd B, Shapiro J. Effects of HMOs on the Creation of Competitive Markets for Hospital Services. Journal of Health Economics. 1990;9(2):207–22. doi: 10.1016/0167-6296(90)90018-x.
  10. Gaskin DJ, Escarce JJ, Schulman K, Hadley J. The Determinants of HMOs' Contracting with Hospitals for Bypass Surgery. Health Services Research. 2002;37(4):963–84. doi: 10.1034/j.1600-0560.2002.61.x.
  11. Gaskin DJ, Hadley J. The Impact of HMO Penetration on the Rate of Hospital Cost Inflation, 1985–1993. Inquiry. 1997;34(3):205–16.
  12. Gittelsohn A, Powe NR. Small Area Variations in Health Care Delivery in Maryland. Health Services Research. 1995;30(2):295–317.
  13. Gowrisankaran G, Town RJ. Competition, Payers and Hospital Quality. Health Services Research. 2003;38(6, part 1):1403–21. doi: 10.1111/j.1475-6773.2003.00185.x.
  14. Gresenz CR, Rogowski J, Escarce JJ. Updated Variable-Radius Measures of Hospital Competition. Health Services Research. 2004;39(2):417–30. doi: 10.1111/j.1475-6773.2004.00235.x.
  15. Hadley J. Sicker and Poorer—the Consequences of Being Uninsured: A Review of the Research on the Relationship between Health Insurance, Medical Care Use, Health, Work, and Income. Medical Care Research and Review. 2003;60(2 suppl):3S–75S. doi: 10.1177/1077558703254101.
  16. Hannan EL, Kilburn H Jr, Lindsey ML, Lewis R. Clinical versus Administrative Data Bases for CABG Surgery: Does It Matter? Medical Care. 1992;30(10):892–907. doi: 10.1097/00005650-199210000-00002.
  17. Heidenreich PA, McClellan M, Frances C, Baker LC. The Relation between Managed Care Market Share and the Treatment of Elderly Fee-for-Service Patients with Myocardial Infarction. American Journal of Medicine. 2002;112(3):176–82. doi: 10.1016/s0002-9343(01)01098-1.
  18. Iezzoni LI, Foley SM, Daley J, Hughes J, Fisher ES, Heeren T. Comorbidities, Complications, and Coding Bias: Does the Number of Diagnosis Codes Matter in Predicting In-Hospital Mortality? Journal of the American Medical Association. 1992;267(16):2197–203. doi: 10.1001/jama.267.16.2197.
  19. Iezzoni LI, Heeren T, Foley SM, Daley J, Hughes J, Coffman GA. Chronic Conditions and Risk of In-Hospital Death. Health Services Research. 1994;29(4):435–60.
  20. Joskow PL. The Effects of Competition and Regulation on Hospital Bed Supply and the Reservation Quality of the Hospital. Bell Journal of Economics. 1980;11(2):421–47.
  21. Keeler EB, Melnick G, Zwanziger J. The Changing Effects of Competition on Non-Profit and For-Profit Hospital Pricing Behavior. Journal of Health Economics. 1999;18(1):69–86. doi: 10.1016/s0167-6296(98)00036-8.
  22. Keeler EB, Rubenstein LV, Kahn KL, Draper D, Harrison ER, McGinty MJ, Rogers WH, Brook RH. Hospital Characteristics and Quality of Care. Journal of the American Medical Association. 1992;268(13):1709–14.
  23. Kessler DP, McClellan MB. Is Hospital Competition Socially Wasteful? Quarterly Journal of Economics. 2000;115(4):577–615.
  24. Kuhn EM, Hartz AJ, Krakauer H, Bailey RC, Rimm AA. The Relationship of Hospital Ownership and Teaching Status to 30- and 180-Day Adjusted Mortality Rates. Medical Care. 1994;32(11):1098–108. doi: 10.1097/00005650-199411000-00003.
  25. McMahon LF Jr, Wolfe RA, Tedeschi PJ. Variation in Hospital Admissions among Small Areas: A Comparison of Maine and Michigan. Medical Care. 1989;27(6):623–31. doi: 10.1097/00005650-198906000-00005.
  26. Melnick GA, Zwanziger J, Bamezai A, Pattison R. The Effects of Market Structure and Bargaining Position on Hospital Prices. Journal of Health Economics. 1992;11(3):217–33. doi: 10.1016/0167-6296(92)90001-h.
  27. Mukamel DB, Zwanziger J, Tomaszewski KJ. HMO Penetration, Competition, and Risk-Adjusted Hospital Mortality. Health Services Research. 2001;36(6, part 1):1019–35.
  28. OSHPD Healthcare Quality and Analysis Division. Report on Heart Attack Outcomes in California 1996–1998, User's Guide. Vol. 1. Sacramento, CA: California Office of Statewide Health Planning and Development; 2002.
  29. OSHPD Hospital Outcomes Center. Report on Hospital Outcomes for Community-Acquired Pneumonia in California, 1999–2001. Sacramento, CA: Healthcare Quality and Analysis Division, California Office of Statewide Health Planning and Development; 2004.
  30. Parkerson GR Jr, Hammond WE, Michener JL, Yarnall KS, Johnson JL. Risk Classification of Adult Primary Care Patients by Self-Reported Quality of Life. Medical Care. 2005;43(2):189–93. doi: 10.1097/00005650-200502000-00013.
  31. Pine M, Norusis M, Jones B, Rosenthal GE. Predictions of Hospital Mortality Rates: A Comparison of Data Sources. Annals of Internal Medicine. 1997;126(5):347–54. doi: 10.7326/0003-4819-126-5-199703010-00002.
  32. Rainwater JA, Romano PS. What Data Do California HMOs Use to Select Hospitals for Contracting? American Journal of Managed Care. 2003;9(8):553–61.
  33. Robinson JC. HMO Market Penetration and Hospital Cost Inflation in California. Journal of the American Medical Association. 1991;266(19):2719–23.
  34. Robinson JC, Luft HS. The Impact of Hospital Market Structure on Patient Volume, Average Length of Stay, and the Cost of Care. Journal of Health Economics. 1985;4(4):333–56. doi: 10.1016/0167-6296(85)90012-8.
  35. Romano PS, Mark DH. Bias in the Coding of Hospital Discharge Data and Its Implications for Quality Assessment. Medical Care. 1994;32(1):81–90. doi: 10.1097/00005650-199401000-00006.
  36. Romano PS, Mutter R. The Evolving Science of Quality Measurement for Hospitals: Implications for Studies of Competition and Consolidation. International Journal of Health Care Finance and Economics. 2004;4(2):131–57. doi: 10.1023/B:IHFE.0000032420.18496.a4.
  37. Rosenthal GE, Harper DL, Quinn LM, Cooper GS. Severity-Adjusted Mortality and Length of Stay in Teaching and Nonteaching Hospitals. Journal of the American Medical Association. 1997;278(6):485–90.
  38. Sari N. Do Competition and Managed Care Improve Quality? Health Economics. 2002;11(7):571–84. doi: 10.1002/hec.726.
  39. Schulman KA, Rubenstein LE, Seils DM, Harris M, Hadley J, Escarce JJ. Quality Assessment in Contracting for Tertiary Care Services by HMOs: A Case Study of Three Markets. Joint Commission Journal on Quality Improvement. 1997;23(2):117–27. doi: 10.1016/s1070-3241(16)30304-2.
  40. Shapiro MF, Park RE, Keesey J, Brook RH. The Effect of Alternative Case-Mix Adjustments on Mortality Differences between Municipal and Voluntary Hospitals in New York City. Health Services Research. 1994;29(1):95–112.
  41. Shen YC. The Effect of Financial Pressure on the Quality of Care in Hospitals. Journal of Health Economics. 2003;22(2):243–69. doi: 10.1016/S0167-6296(02)00124-8.
  42. Shortell SM, Hughes EF. The Effects of Regulation, Competition, and Ownership on Mortality Rates among Hospital Inpatients. New England Journal of Medicine. 1988;318(17):1100–7. doi: 10.1056/NEJM198804283181705.
  43. Wennberg JE. Population Illness Rates Do Not Explain Population Hospitalization Rates. Medical Care. 1987;25(4):354–9.
  44. Wennberg JE, McPherson K, Caper P. Will Payment Based on Diagnosis-Related Groups Control Hospital Costs? New England Journal of Medicine. 1984;311(5):295–300. doi: 10.1056/NEJM198408023110505.
  45. Young GJ, Burgess JF Jr, Valley D. Competition among Hospitals for HMO Business: Effect of Price and Nonprice Attributes. Health Services Research. 2002;37(5):1267–89. doi: 10.1111/1475-6773.01088.
  46. Zwanziger J, Melnick GA. The Effects of Hospital Competition and the Medicare PPS Program on Hospital Cost Behavior in California. Journal of Health Economics. 1988;7(4):301–20. doi: 10.1016/0167-6296(88)90018-5.
