Abstract
Objective
To compare the costs of physician-owned cardiac, orthopedic, and surgical single specialty hospitals with those of full-service hospital competitors.
Data Sources
The primary data sources are the Medicare Cost Reports for 1998–2004 and hospital inpatient discharge data for three of the states where single specialty hospitals are most prevalent: Texas, California, and Arizona. The latter were obtained from the Texas Department of State Health Services, the California Office of Statewide Health Planning and Development, and the Agency for Healthcare Research and Quality Healthcare Cost and Utilization Project. Additional data come from the American Hospital Association Annual Survey Database.
Study Design
We identified all physician-owned cardiac, orthopedic, and surgical specialty hospitals in these three states as well as all full-service acute care hospitals serving the same market areas, defined using Dartmouth Hospital Referral Regions. We estimated a hospital cost function using stochastic frontier regression analysis and generated hospital-specific inefficiency measures. We then applied t-tests to compare the inefficiency measures of specialty hospitals with those of full-service hospitals, making general comparisons between these classes of hospitals.
Principal Findings
Results do not provide evidence that specialty hospitals are more efficient than the full-service hospitals with whom they compete. In particular, orthopedic and surgical specialty hospitals appear to have significantly higher levels of cost inefficiency. Cardiac hospitals, however, do not appear to be different from competitors in this respect.
Conclusions
Policymakers should not embrace the assumption that physician-owned specialty hospitals produce patient care more efficiently than their full-service hospital competitors.
Keywords: Hospitals, specialty, cost, inefficiency, competition
During the last 25 years, the U.S. hospital industry has undergone dramatic changes in its competitive landscape. Such changes include the consolidation of independent hospitals into systems, the rise of a proprietary sector, a marked shift from inpatient care to ambulatory services, and the formation of hospital–physician joint ventures (Federal Trade Commission and Department of Justice 2004). The most notable recent development is the rapid rise of small single specialty hospitals (SSHs) providing cardiac, orthopedic, or surgical services. While accounting for a small percentage of U.S. hospitals, SSHs have tripled in number over the last 15 years, to a current total in operation exceeding 100, with dozens more under construction or in planning stages (United States Government Accountability Office [GAO] 2005). SSHs are concentrated in a small number of states, and most are owned wholly or in part by physicians who refer patients to them.
As their number has increased, SSHs have become the center of an intense debate about their value to the U.S. health care industry. Proponents of SSHs argue that these organizations can set a new competitive benchmark for hospital services by promoting cost efficiency, augmenting patient choices, and providing quality health care at competitive prices. In this vein, a number of business strategists have argued that hospitals can create value in health care by increased focus and specialization of services (Herzlinger 1997, 2004; Porter and Teisberg 2004). Opponents of SSHs claim that these organizations engender unfair competition by targeting patient referrals, offering services that encourage overutilization, focusing on the most profitable patients, and limiting the ability of full-service community hospitals to cross-subsidize unprofitable services (Kahn 2006).
Central to this complex debate are a number of empirical questions about SSHs that have yet to be resolved. In this paper, we address a key claim of SSH proponents: that SSHs are more cost efficient than their full-service competitor hospitals. To date, the scientific evidence addressing this claim is limited and contains conflicting findings. We examine this issue with a longitudinal statistical analysis of the cost inefficiency of SSHs as compared with their local full-service competitors, estimated over the recent period of accelerated SSH market entry. Our results do not provide evidence in support of the general argument that specialty hospitals are more efficient. On the contrary, orthopedic and surgical SSHs exhibit significantly higher levels of overall cost inefficiency, suggesting that the comparative cost question is complex and depends on factors other than whether or not a hospital has a general versus specialized service orientation.
BACKGROUND
The federal “Stark” anti-self-referral law generally prohibits a physician from referring patients for services payable under Medicare or Medicaid to a health care entity in which the physician holds a financial interest. While the Stark law prohibits physicians from holding a financial interest in hospital subdivisions, a financial interest in an entire hospital is exempt, as any pecuniary gain to physicians from referring patients to such a large entity is assumed to be too small to influence physician referrals.
Yet policy makers recognize that, due to their specialization and small size, SSHs bear greater resemblance to hospital subdivisions than to entire hospitals, and that physician-owners stand to gain financially from their referrals (Carey, Burgess, and Young 2007). In December 2003, as part of the Medicare Modernization Act, Congress declared a moratorium on payment for physician referrals of Medicare and Medicaid patients to new SSHs, and required that during the moratorium period the Centers for Medicare and Medicaid Services (CMS) and the Medicare Payment Advisory Commission (MedPAC) investigate several characteristics and consequences of SSH behavior. The mandate to MedPAC included an analysis of SSH relative costs. Using the Medicare Cost Reports (MCR), MedPAC compared costs per Medicare inpatient discharge of SSHs (19 cardiac and 55 orthopedic or surgical hospitals) with those of their competitors. Results indicated comparable average costs for cardiac hospitals but significantly higher average costs for orthopedic/surgical hospitals (Medicare Payment Advisory Commission [MedPAC] 2005, 2006). MedPAC attributed some of the higher costs at orthopedic/surgical hospitals to low inpatient volume and relatively high unused capacity.
Only two other empirical studies of which we are aware address the issue of SSH relative cost efficiency. One study found lower Medicare expenditures per beneficiary in a sample of seven cardiac hospitals relative to a comparison group of general hospitals, despite the former having a relatively higher case severity (Dobson, Haught, and Sen 2003). Another study focused on spillover effects from entry of SSHs in health care markets between 1993 and 1999. Results indicated that Medicare expenditures on overall cardiac services grew more slowly in markets where cardiac SSHs were located, because of entry-induced increases in efficiency at general hospitals (Barro, Huckman, and Kessler 2006).
The moratorium on federal reimbursement for SSHs was lifted in 2006, reinstating the stimulus for SSH development. During the moratorium, SSH development was dampened but by no means halted, because SSHs could receive payment from private payers. In this paper, we address the gap in knowledge of this growing organizational form with a comparative cost analysis of SSHs that accounts for a wider set of factors and encompasses a broader set of patients than considered previously.
METHODS
Conceptual Framework and Empirical Approach
There are two leading theoretical perspectives on the control and operation of the traditional general hospital. One views professional managers as controlling hospital resources and the means of production (Newhouse 1970); the other views physicians as controlling the hospital and treating it as their own workshop (Pauly and Redisch 1973). The formation and diffusion of the SSH form may be seen as an outgrowth of dynamic forces, as managers and physicians compete for control of resources and working conditions, with the balance of power shifting from one constituency to another over time as a result of market, regulatory, and internal political forces. Physicians who believe that the balance of control in a hospital is skewed in favor of managers may be inclined to look for other settings where they will have greater control over the resources needed to perform their work. Whether physician control of such resources will enhance or reduce operating efficiency is, as noted, a key policy question associated with the SSH debate.
In addition to the question of who controls the resources of the hospital, there is the question of who owns those resources. Another common characteristic of SSHs is their for-profit physician ownership. The incentives of physician owners are potentially conflicted. On the one hand, they directly benefit from increased hospital profits and higher degrees of cost efficiency in ways that differ from the more diffuse incentives present in general hospitals. On the other hand, these physician owners also directly benefit from greater physician productivity in their clinical practices through higher physician payments. Some of their activities to improve clinical efficiency may harm hospital cost efficiency, such as operating room scheduling set to favor surgeon efficiency and convenience rather than to minimize hospital operating costs.
To address these questions, we employ the cost function, the basic building block for assessing organizational efficiency in economics. The cost function is a model of the relationship between total cost of production and quantity of outputs produced, accounting for input prices as well as other factors that have been empirically demonstrated to account for variation in costs. When relative inefficiency is the focus of interest, recent hospital studies rely on the stochastic frontier regression analysis (SFA) model of the firm for estimation of hospital cost functions (Bradford et al. 2001; Worthington 2004; Burgess 2005). While the ordinary center of attention in statistical estimation of cost functions is the predicted relationship between observed explanatory variables and cost, SFA focuses on the unobservable, or the residual deviation between observed and predicted costs (Jacobs, Smith, and Street 2006). SFA allows the researcher to decompose the residual (error) term from the regression into two hospital-specific components (Jondrow et al. 1982). A stochastic component allows for random factors involved in producing care over which the hospital has no control, and an inefficiency component relates to care production activities over which the hospital does have control. The inefficiency measure can be interpreted as the percentage difference between the observed cost of a particular hospital and the frontier determined by the aggregate behavior of all other hospitals in the sample. Because the technology is defined by what is observed to be possible, this frontier represents the minimum technically feasible cost.
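In sketch form, and in our own notation (the equations below are not reproduced from the article itself), the composed-error cost frontier and the residual decomposition can be written as:

```latex
% Stochastic cost frontier with composed error (sketch; notation ours)
\ln C_i \;=\; f(\mathbf{y}_i, w_i, \mathbf{z}_i; \boldsymbol{\beta}) \;+\; \varepsilon_i,
\qquad \varepsilon_i \;=\; v_i + u_i,
\qquad v_i \sim N(0, \sigma_v^2), \quad u_i \ge 0 ,
```

where C_i is total cost, y_i the outputs, w_i input prices, and z_i the other cost determinants; v_i is random noise and u_i is cost inefficiency. The hospital-specific inefficiency estimate is then recovered from the residual as the conditional expectation E[u_i | ε_i] (Jondrow et al. 1982).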
Hospital-specific measures of inefficiency may be highly sensitive to the research model, the scope of services of the hospitals measured, and the specific sample of hospitals employed. As a result, the SFA technique has been heavily critiqued when used to rank hospitals or to gauge the relative inefficiency of individual hospitals (Newhouse 1994; Cremieux and Ouellette 2001). However, inefficiency scores are more plausible for making general statements at higher levels of aggregation (Chirikos and Sear 2000; Folland and Hofler 2001), and thus are well suited to assess the relative aggregate inefficiency of SSHs compared with local competitors.
Data
Our main sources of data are the MCR and the state administrative data from Texas, California, and Arizona for the years 1998 through 2004, supplemented by the American Hospital Association Annual Survey Database (AHA). The state data (discharge abstracts) were obtained from the Texas Department of State Health Services Center for Health Statistics, the California Office of Statewide Health Planning and Development, and the State Inpatient Data (Arizona) produced within the Agency for Healthcare Research and Quality's (AHRQ's) Healthcare Cost and Utilization Project. The availability of patient-level data places limitations on our study; however, the significant geographic concentration of SSHs allows us to include the state with the highest concentration of SSHs (Texas) and three of the seven states recently identified by the Government Accountability Office as containing two-thirds of the SSHs nationally (United States Government Accountability Office [GAO] 2003). These are also high-growth regions for SSH development, as 58 percent of applications to CMS between 2003 and 2005 for a determination of exemption from the moratorium (on grounds of already being under construction) came from these three states (GAO 2005). From the state hospital associations and our own web searches, we identified 34 acute care SSHs that were wholly or partially physician-owned: 24 in Texas, six in California, and four in Arizona. For each SSH, we identified as competitor hospitals all full-service acute care hospitals located in the same Hospital Referral Regions (HRRs) defined in the Dartmouth Atlas of Health Care. Acute care competitor hospitals included 260 hospitals in Texas, 46 in California, and 49 in Arizona.
Variables
The dependent variable was hospital total costs, obtained from the MCR. It excluded costs associated with capital-related investments and nonreimbursable cost centers unrelated to patient care. Total costs were expressed in 2004 dollars, and the dependent variable was measured in natural logarithm form, as is standard in the literature.
The key output variables were number of discharges and number of outpatient visits. Economists also use cost functions to measure economies of scope, a measure of efficiencies obtained by simultaneous production of more than one output. Accordingly, we included the interaction between discharges and outpatient visits. Output intensity differences not captured by number of discharges were incorporated by including average length of stay.
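In the log-linear specification described under Estimation below, the interaction term lets the cost elasticity of one output depend on the level of the other. As a sketch in our own notation (assuming a coefficient β1 on log discharges and β3 on the interaction):

```latex
% How the discharge-visit interaction captures scope economies (sketch; notation ours)
\frac{\partial \ln C}{\partial \ln D} \;=\; \beta_1 + \beta_3 \ln V ,
\qquad \beta_3 < 0 \;\Longrightarrow\; \text{inpatient care is relatively cheaper where outpatient volume is higher.}
```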
We controlled for the prices of inputs to production by including the index of local area wage rates used by Medicare for reimbursing hospitals under the Prospective Payment System. Data on prices of other inputs were unavailable; however, labor accounts for the majority of hospital expenses and local wages are probably correlated with the prices of other inputs.
Our approach to controlling for product heterogeneity at the patient level is informed by previous work attempting to meld cost and quality concepts in traditional cost function analysis (Carey and Burgess 1999). That study found that state-of-the-art quality measures such as risk-adjusted mortality rates and hospital readmission rates seemed to better capture unmeasured within-DRG case complexity. To our knowledge, ours is among the first studies to address the remaining patient heterogeneity in cost function analysis by including a within-DRG inpatient case-mix index, created by applying the 3M All-Payer-Refined DRG (APR-DRG) software product to the state inpatient databases. The APR-DRG case-mix index is superior in two respects to the Medicare case-mix index commonly used in hospital cost studies. First, the patient classification is more reflective of a comprehensive hospital patient mix than the Medicare system, which was designed for hospital reimbursement by Medicare. More important, the APR-DRGs adjust for within-DRG severity, which matters in light of a growing body of recent evidence indicating that SSHs treat lower-severity cases than their competitors (Barro, Huckman, and Kessler 2006; Cram, Rosenthal, and Vaughan-Sarrazin 2005; Mitchell 2005; Greenwald et al. 2006; Guterman 2006). Moreover, one of the most biting ongoing criticisms of the SFA approach is that its inefficiency estimates may capture unmeasured within-DRG case mix (Newhouse 1994; Burgess 2005). The APR-DRG case-mix index is an inpatient construct; however, we did adjust for outpatient severity by incorporating a measure of the proportion of outpatient visits that are surgeries.
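For illustration only, a hospital-level case-mix index of this kind is conventionally the average relative weight of the hospital's discharges. The sketch below assumes a discharge file that already carries an APR-DRG relative weight on each record; the column names are hypothetical, and the weights themselves come from the 3M grouper, which is not reproduced here.

```python
import pandas as pd

def hospital_case_mix(discharges: pd.DataFrame) -> pd.Series:
    """Severity-adjusted case-mix index per hospital: the mean APR-DRG
    relative weight of its discharges (a conventional construction;
    'hospital_id' and 'apr_drg_weight' are hypothetical column names)."""
    return discharges.groupby("hospital_id")["apr_drg_weight"].mean()

# Example with made-up records: two hospitals, three discharges each
sample = pd.DataFrame({
    "hospital_id":    ["A", "A", "A", "B", "B", "B"],
    "apr_drg_weight": [0.8, 1.1, 0.9, 2.1, 1.9, 2.3],  # would come from the APR-DRG grouper
})
print(hospital_case_mix(sample))  # A ~0.93, B ~2.10
```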
The idea that failure to account for quality in the cost function represents an omitted variables bias is long standing (Braeutigam and Pauly 1986), yet data measurement and availability problems are rife, and the majority of hospital cost function studies do not include measures of quality. Some studies have relied on structuring unobserved variation within empirical models (Gertler and Waldman 1992; Carey 1997) and others on observed variation in mortality rates. However, mortality is a relatively rare event in hospitals and can be inevitable in certain cases, so its use as a quality measure can result in incorrect assessments of provider performance (Thomas, Holloway, and Guire 1993). As noted above, poor control for quality of hospital care can be particularly serious in SFA models, making it difficult to differentiate between higher costs resulting from superior quality and higher costs resulting from slack. Employing quality of care measures in SFA studies has been an important, but recent, addition to the literature (Yaisawarng and Burgess 2006). Because our study focuses on services with many surgeries and other complex procedures, our conceptual approach to measuring quality incorporates patient safety indicators (PSIs). PSIs represent a wide range of potentially preventable events that compromise the safety of inpatient care, such as complications following surgeries, other procedures, and some medical care services. We produced PSI measures from the state inpatient data using definitions and software recently created by AHRQ (Miller et al. 2001). Adverse inpatient events tend to be cost increasing because they require additional resources to repair the damage caused by the lapse in care that led to the adverse event. This could take the form of further days of hospitalization; readmission for conditions such as fever, pneumonia, or blood infections; greater intensity of services (e.g., days spent in intensive care units); more ancillary services; and/or extra medications.
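For orientation, the PSI rates reported in Table 1 take the form of flagged events per thousand discharges at risk. A stripped-down sketch of that computation follows; the column names are hypothetical, and the AHRQ inclusion and exclusion logic that defines which discharges are "at risk" is not reproduced.

```python
import pandas as pd

def psi_rate_per_1000(discharges: pd.DataFrame, flag_col: str) -> pd.Series:
    """Events per 1,000 discharges at risk, by hospital.
    Assumes the AHRQ software has already marked each at-risk discharge
    with a 0/1 event flag in `flag_col` (hypothetical layout)."""
    grouped = discharges.groupby("hospital_id")[flag_col]
    return 1000 * grouped.sum() / grouped.count()
```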
Previous literature has shown that among full-service hospitals, local market competition is a factor in explaining cost variation (Robinson and Luft 1985; Bamezai et al. 1999). We include a Herfindahl–Hirschman Index (HHI) measure of hospital competition using the Dartmouth Health Service Area (HSA) as our market area. The HHI, calculated as the sum of the squared market shares of the individual firms competing in the same market, is a function of the number of competitors and the distribution of their relative market shares. Values of the HHI fall in the range 0 < HHI ≤ 1, where lower values signify many hospitals competing within the market and higher values signify fewer hospitals. HSAs are subdivisions of HRRs. HRRs, which are based on patient flows for coronary artery bypass graft surgery, are more appropriate market areas in which to identify SSH competitors, because many patients are expected to travel relatively long distances for major surgery. For calculation of HHIs, however, the smaller HSAs are more comparable to what has been found in previous literature to capture the general dynamics of hospital market competition (Spang, Bazzoli, and Arnould 2001). If price competition characterizes the market, the HHI measure should theoretically be positively associated with costs, as a greater number of hospitals competing with each other places downward pressure on costs. Alternatively, a negative coefficient on the HHI measure suggests that a greater number of hospitals is associated with higher hospital costs, because competition is driven more by cost-increasing services and technology.
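A compact sketch of the HHI construction appears below. Market shares here are based on discharges within an HSA, which is one reasonable choice; the share measure and the column names are assumptions for illustration rather than a description of the exact calculation used in the study.

```python
import pandas as pd

def hhi_by_market(hospital_year: pd.DataFrame) -> pd.Series:
    """Herfindahl-Hirschman Index per HSA market: sum of squared
    within-market shares. 'hsa_id' and 'discharges' are hypothetical
    names for the market identifier and the share basis."""
    shares = (hospital_year["discharges"]
              / hospital_year.groupby("hsa_id")["discharges"].transform("sum"))
    return (shares ** 2).groupby(hospital_year["hsa_id"]).sum()

# Three equal-sized hospitals in one HSA give HHI = 3 * (1/3)**2 = 0.333;
# a single hospital (monopoly market) gives HHI = 1.
```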
We also included a set of binary variables to control for ownership type (for-profit, nonprofit, and public) and for whether the hospital is a member of a multihospital system. As a control for additional costs attributable to a hospital's teaching mission, we included the ratio of full-time residents to beds. Finally, we included the number of hospital beds as a proxy measure of the level of fixed costs. Table 1 lists descriptive statistics for the variables included in the analyses by hospital type. Because of the similarities and overlap between orthopedic and surgical hospitals, we follow MedPAC and treat these two hospital types as a single group.
Table 1. Descriptive Statistics by Hospital Type

| Measures | Variable Descriptions | Full-Service Hospitals (n=975) | Orthopedic/Surgical Hospitals (n=33) | Cardiac Hospitals (n=10) |
|---|---|---|---|---|
| Costs (dependent variable) | Total hospital costs (000; dollars) | 121,742 (141,644) | 14,861 (7,389) | 49,770 (11,240) |
| Outputs | Number of discharges | 10,915 (10,640) | 904.6 (671.2) | 4,096 (1,032) |
| | Number of outpatient visits | 137,177 (156,240) | 6,636 (4,817) | 11,171 (3,834) |
| | Average length of stay (days) | 4.22 (0.96) | 2.26 (0.83) | 3.20 (0.71) |
| Input price | Index of local area wage rates | 0.968 (0.132) | 1.066 (0.191) | 0.968 (0.037) |
| Case mix | Severity-adjusted all-payer-refined case-mix index | 0.885 (0.296) | 1.088 (0.363) | 2.080 (0.225) |
| | Outpatient surgeries (as % of outpatient visits) | 4.65 | 62.7 | 12.4 |
| Patient safety indicators | Infections due to medical care (events per thousand patients at risk) | 1.65 | 0.036 | 1.33 |
| | Postoperative hemorrhage or hematoma (events per thousand patients at risk) | 1.81 | 1.30 | 6.04 |
| | Accidental puncture or laceration (events per thousand patients at risk) | 3.62 | 6.04 | 6.84 |
| Competition | Herfindahl–Hirschman index within HSA market area | 0.614 (0.352) | 0.685 (0.350) | 0.278 (0.106) |
| Ownership | % nonprofit | 56.3 | 0 | 20.0 |
| | % for-profit | 27.3 | 100 | 80.0 |
| | % public | 16.4 | 0 | 0 |
| System | % members of multihospital system | 73.3 | 45.5 | 90.0 |
| Teaching | Ratio of full-time residents to beds | 0.066 (0.297) | 0.000 (0.000) | 0.006 (0.020) |
| Hospital size | Number of beds | 203.4 (180.7) | 22.5 (13.9) | 55.6 (5.36) |

Values are means, with standard deviations in parentheses where reported. HSA, health service area.
Estimation
Hospital cost functions commonly use the translog specification for estimation. A drawback to this functional form is potential multicollinearity due to the many parameters that must be estimated, a problem that is generally mitigated by joint estimation with share equations (Li and Rosenman 2001). However, as noted by Kumbhakar (1996), the SFA approach creates difficulties with the distributional assumptions across equations. We instead used a simple log-linear specification, in which outputs and input prices enter in logarithmic form, as others have done in the literature (Carey 2003; Yaisawarng and Burgess 2006). This functional form is equivalent to a translog cost function in which the coefficients of the second-order terms are restricted to zero, avoiding the need to estimate a large number of interaction terms.
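Assembling the variables described above, the estimating equation can be sketched as follows (our notation; the article does not print the equation, so this is an interpretation of the description rather than a reproduction of it):

```latex
% Log-linear stochastic cost frontier (sketch; notation ours)
\ln C_{it} = \beta_0 + \beta_1 \ln D_{it} + \beta_2 \ln V_{it}
           + \beta_3 (\ln D_{it})(\ln V_{it}) + \beta_4 \ln \mathit{ALOS}_{it}
           + \beta_5 \ln W_{it} + \boldsymbol{\gamma}'\mathbf{z}_{it}
           + v_{it} + u_{it},
```

where D is discharges, V outpatient visits, ALOS average length of stay, W the Medicare area wage index, and z the case-mix, outpatient surgery share, PSI, competition, ownership, system, teaching, and bed-size controls; v is random noise and u ≥ 0 is cost inefficiency.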
Although SFA was originally developed in a cross-sectional context (Aigner, Lovell, and Schmidt 1977), more recent applications have extended the models to analysis of panel data, making it possible to control for unobservable hospital-specific effects (Schmidt and Lin 1984; Battese and Coelli 1988; Kumbhakar and Lovell 2000). Although our data set contains annual observations on individual hospitals for 7 years, we chose not to apply panel data techniques. Fixed effects panel models present practical problems of estimation due to the large number of parameters to be estimated and also preclude use of time-invariant covariates. More importantly, fixed effects SFA models assume that the inefficiency parameters are time invariant, and hence do not fully separate the sources of heterogeneity among hospitals (the hospital-level fixed effects) from inefficiency at the hospital level (Greene 2004, 2005). Random effects models in SFA allow for time-varying inefficiency parameters but assume no correlation between the inefficiency parameters and the observable variables, an unlikely premise in this application.
We estimated the SFA models using the FRONTIER routine in the software package LIMDEP (version 8.0). Estimation is by maximum likelihood (MLE) using ordinary least squares estimates as starting values. LIMDEP allows the researcher to choose among four distributional assumptions for the inefficiency parameter. As in most hospital SFA analyses, given that results generally are not sensitive to this assumption, we assume that the inefficiency parameter has a half-normal distribution. The random component is assumed to be normally distributed.
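Purely as an illustration of the mechanics, and not the LIMDEP code used in this study, a normal/half-normal cost frontier can be fit by maximum likelihood along the following lines. The design matrix X is assumed to hold the regressors described above, and the Jondrow et al. (1982) formula recovers the observation-level inefficiency estimates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_half_normal_cost_frontier(y, X):
    """Normal/half-normal stochastic COST frontier by MLE (illustrative sketch,
    not the FRONTIER routine in LIMDEP used for the reported results).
    y: log total cost, shape (n,); X: regressors incl. constant, shape (n, k)."""
    n, k = X.shape
    # OLS starting values, as described in the text
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta0
    start = np.concatenate([beta0, [np.log(resid.std()), 0.0]])  # log sigma, log lambda

    def negll(theta):
        beta, log_sigma, log_lam = theta[:k], theta[k], theta[k + 1]
        sigma, lam = np.exp(log_sigma), np.exp(log_lam)
        eps = y - X @ beta                     # composed error v + u
        # Cost-frontier density: f(eps) = (2/sigma) * phi(eps/sigma) * Phi(eps*lam/sigma)
        ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
              + norm.logcdf(eps * lam / sigma))
        return -ll.sum()

    res = minimize(negll, start, method="BFGS")
    beta, sigma, lam = res.x[:k], np.exp(res.x[k]), np.exp(res.x[k + 1])

    # Jondrow et al. (1982) point estimate of inefficiency, E[u | eps]
    eps = y - X @ beta
    sig_u = sigma * lam / np.sqrt(1.0 + lam ** 2)
    sig_v = sigma / np.sqrt(1.0 + lam ** 2)
    sig_star = sig_u * sig_v / sigma
    z = eps * lam / sigma
    u_hat = sig_star * (z + norm.pdf(z) / norm.cdf(z))
    return beta, sigma, lam, u_hat
```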
RESULTS
Table 2 presents the results of the SFA estimation. The MLE model converged after 39 iterations. The estimated coefficients on the output, input price, and case complexity variables are all highly significant and exhibit the expected signs. Economies of scope realized by joint production are indicated by the negative coefficient on the interaction between discharges and outpatient visits; inpatient care is less expensive in hospitals that also treat relatively high numbers of outpatients (and vice versa). Average length of stay is cost increasing, all other factors being equal.
Table 2. Stochastic Frontier Cost Function Estimates
Explanatory Variable | Coefficient | Standard Error |
---|---|---|
Log of discharges | 0.845** | 0.053 |
Log of outpatient visits | 0.248** | 0.049 |
Log of discharges × log of outpatient visits | −0.010* | 0.005 |
Log of average length of stay | 0.264** | 0.030 |
Wage index | 0.206** | 0.046 |
APR-DRG case-mix index | 0.389** | 0.028 |
Outpatient case mix | 0.894** | 0.057 |
Rate of infection due to medical care | 13.72** | 5.83 |
Rate of postoperative hemorrhage or hematoma | 9.33** | 4.04 |
Rate of accidental puncture/laceration | 19.88** | 2.48 |
Nonprofit hospital dummy variable | 0.040* | 0.020 |
Public hospital dummy variable | 0.144** | 0.024 |
Herfindahl index | 6.04E−4 | 0.024 |
System dummy variable | 0.018 | 0.017 |
Resident ratio | 0.091** | 0.010 |
Beds | 6.92E−4** | 7.77E−5 |
Constant | −6.52** | 0.457 |
N=1,018.
*p<.05.
**p<.01.
APR-DRG, all-payer-refined DRG.
There are 20 different AHRQ PSIs. Six were excluded as inappropriate to our model, either because SSH patients were not at risk for the adverse events (obstetric-related indicators) or because the patients died in the hospital (death in low-mortality DRGs, failure to rescue), which makes the relationship to cost ambiguous given our focus on quality measures that are cost increasing. We eliminated two others because of extremely low frequencies, four more on the basis of too few observations among SSHs, and two because they caused nonconvergence of the models. There was a high degree of correlation among the remaining six PSIs. Our final model includes three PSIs: infection due to medical care, accidental puncture/laceration following surgery, and postoperative hemorrhage or hematoma. These are all highly significant in explaining cost variation. Because they are risks in orthopedic and cardiac surgery as well as general surgery, they match the clinical issues in our SSH sample. We performed sensitivity analyses using different sets of PSIs; the general cost function results were not sensitive to the choice of PSIs.
Our results show no association between competition and costs, suggesting that neither form of competition is dominant in the markets that we studied. Additionally, results indicate that nonprofit hospitals were more costly than for-profit hospitals (reference class), and that public hospitals were the most costly. The relationship between system membership and cost was also insignificant. This result is consistent with the bulk of literature (Carey 2003). Finally, we tested dummy variables for year of observation. None of the effects was significantly different from zero, and we excluded these from the final analysis.
The estimated stochastic frontier cost function yields an inefficiency score for each observation, interpreted as the percentage difference between a hospital's actual cost and its minimum feasible cost, that is, the frontier formed statistically from the data sample. Inefficiency score results are displayed in Table 3. Overall, the mean inefficiency score is 0.281, meaning that on average, hospitals had costs about 28.1 percent higher than the minimum feasible costs. This value lies within the range found in other SFA analyses of U.S. hospitals, which have produced inefficiency estimates of 14–33 percent (Zuckerman, Hadley, and Iezzoni 1994; Chirikos and Sear 2000; Frech and Mobley 2000; Rosko 2001; Carey 2003).
Table 3. Mean Inefficiency Scores by Hospital Category

| Hospital Category | Mean Inefficiency Score | Number of Observations | Comparison of Means | Difference* (t-Value) |
|---|---|---|---|---|
| ALL hospitals | 0.280 | 1,018 | — | — |
| FULL service | 0.274 | 975 | — | — |
| SSH | 0.425 | 43 | FULL versus SSH | 0.151** (2.83) |
| ORTH/SURG | 0.471 | 33 | FULL versus ORTH/SURG | 0.197** (3.12) |
| CARDIAC | 0.277 | 10 | FULL versus CARDIAC | 0.003 (0.04) |
*The difference in the means is calculated as the full-service hospital group mean subtracted from the SSH group mean. Significance of the difference is determined using a two-sample t-test.
**p<.01.
SSH, single specialty hospital; ORTH, orthopedic; SURG, surgical.
Our primary interest lies in comparing the inefficiency scores of the SSHs with those of their competitors. The difference is considerable, with SSHs averaging 42.9 percent compared with an average of 27.4 percent in competitor hospitals. The higher scores for SSHs are driven by the orthopedic/surgical hospitals, which average 46.8 percent inefficiency, compared with 28.1 percent for cardiac SSHs. We applied a two-sample t-test to the means under the assumption of unequal variances. Despite the relatively small number of nonmissing observations on SSHs, the difference is significant, as seen in column 5. We also performed separate tests by SSH type. The difference between orthopedic/surgical SSHs and competitors is also highly significant. There was no significant difference between cardiac SSH inefficiency scores and those of competitors; however, there were few observations with which to detect a statistical effect. Nonetheless, it is worth noting that the mean values are very close, differing by less than a single percentage point. All of our empirical results were robust to a sensitivity analysis that excluded the 26 percent of competitor hospitals that offered no cardiac specialty services.
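The comparison in Table 3 is a two-sample (Welch) t-test on the hospital-level inefficiency scores. A sketch with simulated scores appears below; the simulated values are for illustration only and are not the study data, which come from the frontier model above.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Simulated, nonnegative inefficiency scores (illustration only, not study data)
full_service = np.abs(rng.normal(0.27, 0.15, size=975))
ssh = np.abs(rng.normal(0.43, 0.25, size=43))

# Welch's t-test: two-sample comparison without assuming equal variances
t_stat, p_value = ttest_ind(ssh, full_service, equal_var=False)
print(f"difference in means = {ssh.mean() - full_service.mean():.3f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```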
DISCUSSION
In this paper we examined a highly policy-relevant question: whether physician-owned hospitals with services limited to cardiac, orthopedic, or surgical care (SSHs) are more cost efficient than the full-service hospitals with which they compete locally. Results from a stochastic frontier cost function analysis showed orthopedic/surgical SSHs to be significantly more inefficient than their full-service hospital competitors. However, this effect was not observed among cardiac SSHs. In both respects, our results are consistent with those of MedPAC, even though we have taken a much more theoretically and empirically detailed approach to evaluating comparative hospital costs, and have expanded our analysis to include all treated patients rather than just Medicare patients.
For policy purposes, an important finding from our study is that inefficiency measures differed significantly across SSH types. In this vein, several key differences exist between cardiac and orthopedic/surgical hospitals. Cardiac hospitals are more similar to full-service hospitals in many respects than are orthopedic/surgical hospitals. They are much larger than orthopedic/surgical hospitals, falling at the 30th percentile of the bed size distribution among acute care hospitals in the U.S. (AHA 2004). The percentage of physician ownership in cardiac SSHs is only about one-half that of orthopedic/surgical SSHs (MedPAC 2005), so concerns over physician self-interest should be lower in cardiac SSHs. Finally, unlike orthopedic/surgical SSHs, most cardiac SSHs offer emergency services. The debate over specialty hospitals has focused on cardiac, orthopedic, and surgical hospitals, the three specialties subject to the moratorium, and most of the policy recommendations proposed to date target these three categories of physician-owned specialty hospitals as a group. The broad array of differences between cardiac and orthopedic/surgical hospitals, to which this study adds, suggests that policy makers should remain open to the notion that SSHs are not all alike and should not necessarily be treated in the same way.
Our study generated some additional results worthy of note. Theory predicts that for-profit hospitals have greater incentives to control costs and should therefore exhibit greater cost efficiency than other hospitals. Yet research shows mixed results for the effects of ownership on hospital costs, and overall the literature does not demonstrate systematic differences in costs between for-profit and nonprofit hospitals (Sloan 2000; Shen, Eggleston, and Schmid 2007). Our results show lower costs for the for-profit hospitals in our sample. Stratification of inefficiency scores among full-service hospitals also shows lower inefficiency in for-profit hospitals, so that among full-service hospitals in the markets we study, the for-profit incentive appears to promote cost-containing efficiencies. Because virtually all SSHs are for-profit enterprises, why the profit motive does not have the same effect on SSHs or, put the other way, why SSHs blossom in markets where for-profit hospitals are successful in containing costs is a question worth exploring in the future.
Another interesting finding was that the HHI measure of competition among hospitals was insignificant. Recent studies on the effects of competition are also mixed, with some evidence of increasing price competition among hospitals (Gift, Arnould, and DeBrock 2002) and other evidence of increased emphasis on nonprice competition (Devers, Brewster, and Casalino 2003). A powerful argument that has been put forth in support of SSHs is that, through the competitive process, they induce community hospitals to perform more efficiently. But this argument assumes robust price competition in hospital markets. If the nature of competition in hospital markets is primarily one of nonprice competition, SSHs will be less motivated to focus on productive efficiency. Recent developments in the study of hospital competition suggest a potential positive correlation between HHI measures and unobserved quality, owing to patient flows to higher-quality hospitals (Kessler and McClellan 2000). We do not test for endogeneity arising from this possibility here; however, such endogeneity would only have the effect of understating the presence of nonprice competition.
There are limitations to our analysis. Specialty hospitals are a new and growing phenomenon, and data are only beginning to materialize with enough time for these new entities either individually or collectively to achieve desired levels of efficiency. To date, SSH development has been almost exclusively in states that do not have Certificate of Need (CON) laws. If the absence of hospital CON regulation allows less-efficient hospitals to enter the market, or alternatively, if the absence of CON laws increases competition among providers, our results may not generalize to future SSH development in CON states. While our controls on quality are highly significant and account for considerable variation in cost, they are still proxy measures for hospital quality. This is a perennial problem for hospital cost function analyses; however, the measures and approaches we employ here for both case mix and quality measurement exceed those in almost all previous literature. Site visits and focus groups conducted as part of the CMS study showed that some specialty hospitals had more spacious private rooms, more comfortable surroundings, space for families, and better food. If specialty hospitals are offering service amenities that are systematically different from those of competitors, and that are unobservable, then some of what is regarded as inefficiency in our analysis may be extra costs incurred by these service offerings in specialty hospitals. Private and societal perspectives may differ on the extent to which such amenities add to value in health care by increasing quality of care versus reducing value by adding unnecessary costs in markets characterized by substantial degrees of insurance.
Our study focuses on the relative cost inefficiency of SSHs. In the context of escalating U.S. health care costs, it should be borne in mind that cost inefficiency is only one element of the effect of services on total medical care expenditures, which are also driven by quantity. We do not assess whether SSHs lead to a higher quantity of hospital services being provided. However, we do conclude from our analysis that policy makers should not adopt the assumption that physician-owned specialty hospitals produce patient care more efficiently than their full-service competitors. Further study of why specialty hospitals are not less costly than the hospitals with which they compete for the same services, as well as further effort to uncover what alternative goals SSHs may be pursuing, is an important direction for future research.
Acknowledgments
This research was supported by the Robert Wood Johnson Foundation, Grant #56468, Kathleen Carey, Principal Investigator. The authors thank Jean M. Mitchell for providing information on specialty hospitals, Susan Loveland for computer programming support, and Kyung Hye Kim for research assistance.
Joint Acknowledgement/Disclosure Statement: This research was supported by the Robert Wood Johnson Foundation Grant #56468, Kathleen Carey, Principal Investigator. Jean M. Mitchell provided the researchers with information regarding the location and opening dates of specialty hospitals. Susan Loveland provided computer programming support needed to produce the PSIs and the APR-DRG case-mix index. Kyung Hye Kim assisted in data organization and compilation. None of the co-authors have any financial or other disclosures or conflicts of interests related to the material contained in this manuscript.
Disclosures: None.
Disclaimers: The opinions expressed in this paper are those of the authors alone and do not reflect those of the Department of Veterans Affairs.
Supplementary material
The following supplementary material for this article is available online: Appendix SA1: Author matrix.
This material is available as part of the online article from http://www.blackwell-synergy.com/doi/abs/10.1111/j.1475-6773.2008.00881.x (this link will take you to the article abstract).
REFERENCES
- Aigner D, Lovell CAK, Schmidt P. Formulation and Estimation of Stochastic Frontier Production Function Models. Journal of Econometrics. 1977;6:21–37.
- American Hospital Association [AHA]. American Hospital Association Annual Survey Database. Chicago: American Hospital Association; 2004.
- Bamezai A, Zwanziger J, Melnick GA, Mann JM. Price Competition and Hospital Cost Growth in the United States (1989–1994). Health Economics. 1999;8(3):233–43.
- Barro JR, Huckman RS, Kessler DP. The Effects of Cardiac Specialty Hospitals on the Cost and Quality of Medical Care. Journal of Health Economics. 2006;25:702–21.
- Battese GE, Coelli T. Prediction of Firm-Level Technical Efficiencies with a Generalized Frontier Production Function and Panel Data. Journal of Econometrics. 1988;38:387–99.
- Bradford WD, Kleit AN, Krousel-Wood MA, Re RN. Stochastic Frontier Estimation of Cost Models within the Hospital. Review of Economics and Statistics. 2001;83(2):302–9.
- Braeutigam RR, Pauly MV. Cost Function Estimation and Quality Bias: The Regulated Automobile Insurance Industry. RAND Journal of Economics. 1986;17:606–17.
- Burgess JF. Productivity Analysis in Health Care. In: Jones A, editor. Elgar Companion to Health Economics. Cheltenham, UK: Edward Elgar Publishing; 2005. pp. 335–46.
- Carey K. A Panel Data Design for Estimation of Hospital Cost Functions. Review of Economics and Statistics. 1997;79:443–53.
- Carey K. Hospital Cost Efficiency and System Membership. Inquiry. 2003;40:25–38.
- Carey K, Burgess JF. On Measuring the Hospital Cost/Quality Trade-Off. Health Economics. 1999;8(6):509–20.
- Carey K, Burgess JF, Young G. Specialization and Physician Ownership in the U.S. Hospital Industry: Beyond the Moratorium. Health Economics, Policy and Law. 2007;2(4):409–18.
- Chirikos TN, Sear AM. Measuring Hospital Efficiency: A Comparison of Two Approaches. Health Services Research. 2000;34(6):1389–408.
- Cram P, Rosenthal GE, Vaughan-Sarrazin MS. Cardiac Revascularization in Specialty and General Hospitals. New England Journal of Medicine. 2005;352(14):1454–62.
- Cremieux PY, Ouellette P. Omitted Variable Bias and Hospital Cost. Journal of Health Economics. 2001;20:271–82.
- Devers KJ, Brewster LR, Casalino LP. Changes in Hospital Competitive Strategy: A New Medical Arms Race? Health Services Research. 2003;38(1, part 2):447–69.
- Dobson A, Haught R, Sen H. Specialty Heart Hospital Care: A Comparative Study. American Heart Hospital Journal. 2003;1(1):21–9.
- Federal Trade Commission and Department of Justice. Improving Health Care: A Dose of Competition. Washington, DC: FTC and DOJ; 2004.
- Folland ST, Hofler RA. How Reliable Are Hospital Efficiency Estimates? Exploiting the Dual to Homothetic Production. Health Economics. 2001;10:683–98.
- Frech HE, Mobley LR. Efficiency, Growth, and Concentration: An Empirical Analysis of Hospital Markets. Economic Inquiry. 2000;38(3):369–84.
- Gertler PJ, Waldman DM. Quality-Adjusted Cost Functions and Policy Evaluation in the Nursing Home Industry. Journal of Political Economy. 1992;100:1232–56.
- Gift T, Arnould R, DeBrock L. Is Healthy Competition Healthy? New Evidence of the Impact of Competition. Inquiry. 2002;39(1):45–55.
- Greene W. Distinguishing between Heterogeneity and Inefficiency: Stochastic Frontier Analysis of the World Health Organization's Panel Data on National Health Care Systems. Health Economics. 2004;13:959–80.
- Greene W. Reconsidering Heterogeneity in Panel Data Estimators of the Stochastic Frontier Model. Journal of Econometrics. 2005;126:269–303.
- Greenwald L, Cromwell J, Adamache W, Bernard S, Drozd E, Root E, Devers K. Specialty versus Community Hospitals: Referrals, Quality, and Community Benefits. Health Affairs. 2006;25(1):106–18.
- Guterman S. Specialty Hospitals: A Problem or a Symptom? Health Affairs. 2006;25(1):95–105.
- Herzlinger R. Market Driven Health Care. Reading, MA: Addison-Wesley; 1997.
- Herzlinger R. "Specialty Hospitals, Ambulatory Surgery Centers, and General Hospitals: Charting a Wise Public Policy." Conference held on September 10, 2004, Washington, DC. Waltham, MA: Council on Health Care Economics and Policy, Fall Conference (transcript).
- Jacobs R, Smith PC, Street A. Measuring Efficiency in Health Care: Analytic Techniques and Health Policy. Cambridge: Cambridge University Press; 2006.
- Jondrow J, Lovell CAK, Materov IS, Schmidt P. On the Estimation of Technical Inefficiency in the Stochastic Frontier Production Function Model. Journal of Econometrics. 1982;19:233–8.
- Kahn CN. Intolerable Risk, Irreparable Harm: The Legacy of Physician-Owned Specialty Hospitals. Health Affairs. 2006;25(1):130–3.
- Kessler DP, McClellan MB. Is Hospital Competition Socially Wasteful? Quarterly Journal of Economics. 2000;115(2):577–615.
- Kumbhakar SC. Efficiency Measurement with Multiple Outputs and Multiple Inputs. Journal of Productivity Analysis. 1996;7(2/3):225–55.
- Kumbhakar SC, Lovell CAK. Stochastic Frontier Analysis. Cambridge: Cambridge University Press; 2000.
- Li T, Rosenman R. Estimating Hospital Costs with a Generalized Leontief Function. Health Economics. 2001;10:523–38.
- Medicare Payment Advisory Commission (MedPAC). "Report to the Congress: Physician-Owned Specialty Hospitals." March 2005.
- Medicare Payment Advisory Commission (MedPAC). "Physician-Owned Specialty Hospitals Revisited." August 2006.
- Miller M, Elixhauser A, Zhan C, Meyer G. Patient Safety Indicators: Using Administrative Data to Identify Potential Patient Safety Concerns. Health Services Research. 2001;36(6, part 2):110–32.
- Mitchell JM. Effects of Physician-Owned Limited-Service Hospitals: Evidence from Arizona. Health Affairs. 2005:w481–90 (published online 25 October 2005).
- Newhouse JP. Toward a Theory of Nonprofit Institutions: An Economic Model of a Hospital. American Economic Review. 1970;60(1):64–74.
- Newhouse JP. Frontier Estimation: How Useful a Tool for Health Economics? Journal of Health Economics. 1994;13:317–22.
- Pauly M, Redisch M. The Not-For-Profit Hospital as a Physicians' Co-Operative. American Economic Review. 1973;63(1):87–100.
- Porter ME, Teisberg EO. Redefining Competition in Health Care. Harvard Business Review. 2004;82(6):64–76.
- Robinson J, Luft H. The Impact of Hospital Market Structure on Patient Volume, Average Length of Stay, and the Cost of Care. Journal of Health Economics. 1985;4:247–62.
- Rosko MD. Cost Efficiency of US Hospitals: A Stochastic Frontier Approach. Health Economics. 2001;10:539–51.
- Schmidt P, Lin T. Simple Tests of Alternative Specifications in Stochastic Frontier Models. Journal of Econometrics. 1984;24:349–61.
- Shen Y, Eggleston K, Schmid C. Hospital Ownership and Financial Performance: What Explains the Different Findings in the Empirical Literature? Inquiry. 2007;44(1):41–68.
- Sloan F. Not-for-Profit Ownership and Hospital Behavior. In: Culyer J, Newhouse JP, editors. Handbook of Health Economics. Amsterdam: North-Holland/Elsevier; 2000. pp. 1141–74.
- Spang HR, Bazzoli GJ, Arnould RJ. Hospital Mergers and Savings for Consumers: Exploring New Evidence. Health Affairs. 2001;20(1):150–8.
- Thomas JW, Holloway JJ, Guire KE. Validating Risk-Adjusted Mortality as an Indicator for Quality of Care. Inquiry. 1993;30:6–22.
- United States Government Accountability Office [GAO]. "Specialty Hospitals: Geographic Location, Services Provided, and Financial Performance." GAO-04-167. Washington, DC: GAO; 2003.
- United States Government Accountability Office [GAO]. "Specialty Hospitals: Information on Potential Facilities." GAO-05-647R. Washington, DC: GAO; 2005.
- Worthington AC. Frontier Efficiency Measurement in Health Care: A Review of Empirical Techniques and Selected Applications. Medical Care Research and Review. 2004;61(2):135–70.
- Yaisawarng S, Burgess JF. Performance-Based Budgeting in the Public Sector: An Illustration from the VA Health Care System. Health Economics. 2006;15:295–310.
- Zuckerman S, Hadley J, Iezzoni L. Measuring Hospital Efficiency with Frontier Cost Functions. Journal of Health Economics. 1994;13:255–80.