Author manuscript; available in PMC: 2014 Nov 20.
Published in final edited form as: J Health Econ. 2011 May 23;30(4):832–841. doi: 10.1016/j.jhealeco.2011.05.006

Where Would You Go for Your Next Hospitalization?

Kyoungrae Jung a,*, Roger Feldman b, Dennis Scanlon c
PMCID: PMC4238031  NIHMSID: NIHMS306976  PMID: 21665300

Abstract

We examine the effects of diverse dimensions of hospital quality – including consumers’ perceptions of unobserved attributes – on future hospital choice. We utilize consumers’ stated preference weights to obtain hospital-specific estimates of perceptions about unmeasured attributes such as reputation. We report three findings. First, consumers’ perceptions of reputation and medical services contribute substantially to utility in hospital choice. Second, consumers tend to select hospitals with high clinical quality scores even before the scores are publicized; however, the effect of clinical quality on hospital choice is relatively small. Third, satisfaction with a prior hospital admission has a large impact on future hospital choice. Our findings suggest that including measures of consumers’ experience in report cards may increase consumers’ responsiveness to publicized information, but other strategies are needed to overcome the large effects of consumers’ beliefs about other quality attributes.

Keywords: Hospital Choice, Quality Information, Stated Preference, Hospital Quality

1. Introduction

Health care providers vary in diverse dimensions of quality, such as their clinical skills and knowledge, interpersonal skills, and their patients’ health outcomes. Consumers are likely to value each of these quality dimensions when making health care decisions. If consumers can easily assess differences in these dimensions across providers, they can choose a provider based on their preferences. However, it is often difficult for consumers to observe providers’ quality in health care markets (Arrow, 1963), and it is widely recognized that increasing the information available to consumers may help them make better decisions and thereby improve the performance of health care markets.

Recent efforts to increase consumer information have focused on publicly releasing comparative information about provider quality. The information disclosed often includes clinical quality measures that are considered important by medical experts (e.g., the extent to which appropriate care is provided or hospital mortality rates). However, consumers’ responses to these reports have been small, implying that consumers may not value such information or may already know it prior to public reporting. To devise an effective approach for public disclosure, it is essential to clarify the types of quality that consumers consider relevant in making health care decisions and to estimate their relative contributions to provider choice. Despite its importance, few studies have addressed this issue.

Early studies of hospital choice reported that distance to a hospital was an important factor affecting hospital demand (Porell and Adams, 1995). After the introduction of public reporting programs, several studies incorporated quality in hospital choice models. Those studies focused on how publicized quality information – primarily hospital mortality rates – affects hospital choice, with most studies reporting weak consumer responses to the information disclosed (Mennemeyer et al., 1997; Mukamel et al., 2004/2005; Dranove and Sfekas, 2008). Scanlon et al. (2008) evaluated an employer's initiative that provided financial incentives for employees to use hospitals that adopted specific patient safety practices. They found mixed results: the impact of the initiative depended on employees’ union affiliation and medical conditions.

The studies of publicized quality information acknowledge that important attributes in hospital choice – such as reputation and amenities – may be unobserved by the researcher. Studies have attempted to control for those unmeasured attributes by including hospital-specific fixed effects in the choice model. However, fixed effects capture all unmeasured attributes with a single variable, although consumers’ valuation of each attribute may be different. For example, consumers may perceive a hospital's overall reputation and the level of medical services offered by the hospital as distinct features of hospital quality and may separately infer each type of quality. Fixed-effects estimates do not provide information about these differences.

We examine the effects of different dimensions of hospital quality in the context of a future hospital stay for a surgical procedure. Our model includes consumers’ perceptions of several unobserved hospital attributes such as reputation. To capture the unobserved hospital attributes, we take a different approach from the prior literature: we utilize stated preference data from a survey of non-hospitalized (naïve) consumers. The survey asked respondents about the “importance weights” they place on several unobserved attributes when choosing a hospital. We use interactions of those preference weights and hospital dummies as variables in a hospital choice model and interpret the coefficients of the interaction terms as the perceived amount of each unobserved attribute offered by each hospital.

Our approach is based on the method developed by Harris and Keane (1999) and used by Harris et al. (2002), which includes preference weights for unobserved choice attributes as variables in health plan choice models. In effect, this reverses the traditional choice model, which estimates parameters of a utility function based on observed attributes of the choices. But we extend Harris and Keane (1999) and Harris et al. (2002) by using the perceived attributes as variables in a second choice model, estimated on a different sample of consumers. Estimation of the second choice model enables us to determine the relative contribution of each unobserved attribute to consumer utility, which has not been done in prior research. Our work thus solves a problem noted by Harris et al. (2002, page 5): estimation of consumers’ perceptions of unobserved attributes of choices does not tell the researcher about the “strength of preferences separate from the degree of perceived differences.” We explain our two-stage conceptual framework for estimating the strength of consumers’ preferences for unobservable attributes in Section 3.

Our study also extends the hospital choice literature by introducing individual-level satisfaction ratings based on patients’ own experiences. Because certain aspects of hospital quality, such as whether the nursing staff treats patients with respect, can be evaluated only by experience, consumers may receive quality signals through contacts with providers and use these experiential signals to inform their next choices. Feldman et al. (2000) and Schultz et al. (2001) reported that experience is an important information source for consumers’ health care decisions. However, Abraham et al. (2006) found that a “bad experience” or dissatisfaction with their current health plan did not motivate consumers to switch plans. To our knowledge, no study has examined how individuals’ experiences influence their hospital choices. We estimate the impact of satisfaction on future hospital choice as a driving time trade-off (i.e., how many more minutes consumers are willing to travel to a more distant hospital if they have had a bad experience with a nearer hospital).

The rest of this paper is organized as follows. Section 2 describes the data. Section 3 specifies the model. Section 4 discusses empirical methods. Section 5 presents the results, and Section 6 concludes.

2. Data

The primary data set is a survey of employees of a large self-insured employer, or their spouses, residing in one Metropolitan Statistical Area (MSA).1 The survey was administered twice by telephone. The first round was conducted in spring 2004, and the same survey was administered to a new sample in spring 2005. In each round, the sample was randomly selected within strata defined by union status and recent hospitalization. The response rates – the ratio of completed interviews to the total number of people selected for interviews (including refusals, non-responses, and non-contacts) – were 57% and 57.8% for the first and second rounds of the survey, respectively. The cooperation rates, which exclude non-contacts from the denominator, were 70.7% and 69%. We used data from both rounds of the survey.

Hospitalized people were identified using claims data from the firm's third-party administrator. For the 2004 survey, the hospitalization occurred between July 2003 and February 2004; for the 2005 survey, the hospitalization occurred between July 2004 and April 2005. The survey asked hospitalized persons to rate their overall satisfaction with their hospital stay on a 1 to 10 scale with 1 being least satisfied (the Appendix lists all the survey questions used in this study).

The information about future hospital choice comes from a hypothetical question included in both rounds of the survey, which asked all respondents – both hospitalized and non-hospitalized – to name the hospital(s) they would be most likely to consider, in rank order, if they needed a future surgical procedure requiring an overnight hospital stay. We used their first choice in our analysis. The survey listed 25 hospitals in the MSA as possible choices. We excluded observations on nine hospitals chosen by fewer than 15 people.2 Our final sample comprised 969 hospitalized and 790 non-hospitalized respondents who named 16 hospitals as their first choice.

The survey collected information about respondents’ preferences for six hospital attributes that are likely to influence hospital choice: overall reputation, specialty medical services offered by a hospital, amenities, out-of-pocket (OOP) costs, quality ratings, and whether a hospital is included in the health plan's network.3 Respondents were asked to rate the value of each attribute in their hypothetical future hospital choice on a 1 to 10 scale (1 = not at all valuable). These “importance weights” also were collected from people who were not hospitalized during the time periods specified above.

We obtained information about “observed” hospital attributes from several data sources. First, each hospital's clinical quality scores came from the 2005 Hospital Quality Initiative (HQI) sponsored by the Centers for Medicare and Medicaid Services (CMS). HQI is a public reporting program that posts Medicare-participating hospitals’ quality scores on the internet. The quality information was first released to the public in April 2005, about a month before the second round of the survey was completed. Thus, the HQI scores were not publicly available during the survey window (except some interviews conducted in May 2005), but they may represent quality information that was known informally (e.g., through word of mouth) at the time of the surveys.

The 2005 HQI data include 17 process-of-care indicators for three clinical conditions (heart attack, heart failure, and pneumonia). Examples of these indicators are the percentages of patients receiving beta-blocker treatment after a heart attack, assessment of left ventricular function for heart failure, and pneumococcal vaccination when recommended. After excluding two indicators (the percentages of heart attack patients who received PTCA and thrombolytic drugs) that were reported by only four hospitals, we calculated the average score of the remaining 15 indicators for each hospital.4
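The aggregation step is simple but worth making concrete. The sketch below (not the authors' code; data frame and column names are hypothetical) averages the 15 retained indicators into one score per hospital, filling a missing indicator with its cross-hospital mean as described in footnote 4.

```python
# Illustrative sketch of the HQI score construction; all names are hypothetical.
import pandas as pd
import numpy as np

# hqi: one row per hospital, one column per retained process-of-care indicator
# (0-100), with NaN where a hospital did not report that indicator.
hqi = pd.DataFrame(
    {"ind_1": [98.0, 92.0, np.nan], "ind_2": [88.0, np.nan, 90.0], "ind_3": [75.0, 80.0, 85.0]},
    index=["hosp_A", "hosp_B", "hosp_C"],
)

# Replace a missing indicator with its mean across hospitals (footnote 4),
# then average the indicators within each hospital.
hqi_imputed = hqi.fillna(hqi.mean())
hqi_score = hqi_imputed.mean(axis=1).rename("hqi_score")
print(hqi_score)
```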

Second, we used Mapquest.com to calculate driving time between the centroid of a patient's residential zip code and the street address of each hospital. Finally, we obtained information about each hospital's teaching and profit status from the American Hospital Association (AHA) 2004 annual survey.

3. Conceptual Model

We develop our conceptual framework in two stages. The first stage models a future hospital choice among non-hospitalized (naïve) consumers to identify the average hospital-specific beliefs about unobserved hospital attributes. The second stage models how different dimensions of hospital quality, including consumers’ beliefs about unobserved attributes estimated from the first stage, influence future hospital choice among hospitalized persons. Our choice model in each stage is based on the expected utility theory of decision making.

3.1. Hospital-specific beliefs about unobserved attributes (stage 1)

The expected utility of naïve consumer i choosing hospital j is:

U_ij = α R_j + β E_ij + γ T_ij + κ X_j + ε_ij        (1)

R_j is hospital j's clinical quality information, E_ij is a vector of consumer i's perceptions about hospital j's unmeasured attributes such as the hospital's reputation, T_ij is travel time to hospital j from the centroid of the consumer's residential zip code, X_j represents a vector of observable characteristics of hospital j, and ε_ij is assumed to follow the type-I extreme value distribution.

We do not observe E_ij. However, we have information on the individual's preference weights (β_i) for several unobserved hospital attributes. Following Harris and Keane (1999) and Harris et al. (2002), we reverse the normal utility-based choice model by utilizing the preference weights to obtain choice-specific estimates of the unobserved attributes. The idea behind this approach is that consumers are likely to choose an alternative with more of a particular attribute that they value. For example, if consumers who report high importance weights for reputation tend to choose alternative j, this alternative can be inferred to have a better reputation than other alternatives.

A simple illustration of this approach is:

U_ij = α R_j + E_j (β_i D_j) + γ T_ij + κ X_j + ε_ij        (2)

Using interactions between the preference weights and hospital-specific intercepts (D_j), we estimate a vector of coefficients (E_j) that represents the average perceived amount of each attribute offered by each hospital, relative to a reference hospital. Note that although written as a single term, E_j collects estimates for several unobserved attributes, so the second term in equation (2) is shorthand for E_j^(1) β_i^(1) D_j + E_j^(2) β_i^(2) D_j + ... + E_j^(Q) β_i^(Q) D_j, where q = 1, ..., Q indexes the unobserved hospital attributes.

3.2. Effects of quality attributes on hospital choice (stage 2)

We now model a future hospital choice among consumers with a recent hospitalization. We begin with the same utility function as equation (1) for the expected utility of experienced consumer k choosing hospital j:

U_kj = α R_j + β E_kj + γ T_kj + κ X_j + ε_kj        (3)

Next, we indicate the experienced consumer's beliefs about hospital j's quality as:

E_kj = (1 − h) E_j + h (S_kj I_kj + p I_kj)        (4)

E_j is the vector of average beliefs about hospital j's unobservable attributes before experience, estimated from the first stage; S_kj is consumer k's satisfaction rating for hospital j based on her experience; h is the weight given to that experiential signal; and I_kj indicates whether consumer k used hospital j previously. If hospital j was used (I_kj = 1), we observe the value of S_kj, which we scale to mean zero. If hospital j was not used, I_kj = 0.

While E_j captures consumers’ hospital-specific perceptions, S_kj and I_kj incorporate individual heterogeneity in beliefs about hospital j. The consumer is likely to receive quality signals about hospital j through her experience with the hospital; S_kj captures this experiential information. In addition, a consumer who used hospital j previously may have other information or beliefs about hospital j's quality, and she may choose the hospital again if she is loyal to it. The prior-use indicator I_kj (and the parameter p) captures this individual information and inertia.

By substitution of (4), equation (3) is written as:

U_kj = α R_j + β(1 − h) E_j + β h S_kj I_kj + β h p I_kj + γ T_kj + κ X_j + ε_kj        (5)

Based on this utility function, a consumer will choose hospital j in the future if

U_kj > U_km   for all m ≠ j        (6)

We estimate equation (5) using conditional logit analysis.
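For completeness, under the type-I extreme value assumption on ε_kj, the probability that hospital j maximizes utility takes the standard conditional-logit closed form (a textbook result, stated here for clarity rather than taken from the paper):

```latex
P_{kj} \;=\; \Pr\big(U_{kj} > U_{km}\ \ \forall\, m \neq j\big)
       \;=\; \frac{\exp\!\big(V_{kj}\big)}{\sum_{m=1}^{16} \exp\!\big(V_{km}\big)},
\qquad
V_{kj} \;=\; \alpha R_j + \beta(1-h)E_j + \beta h S_{kj} I_{kj} + \beta h p I_{kj} + \gamma T_{kj} + \kappa X_j .
```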

4. Methods

4.1. Estimating naïve consumers’ perceptions about unmeasured hospital attributes (stage 1)

The first stage estimates a conditional logit model among non-hospitalized people using their preference weights data. This step is necessary to capture the unmeasured attributes offered by each hospital, which allows us to examine the effects of those attributes on choice in the second stage.

This model must be estimated with data for a different group than the hospitalized people whose information is used in the second stage. If the sample was not divided this way, estimation of a one-stage choice model for hospitalized people would extract all the information from the data: it would tell us how experienced consumers perceive unmeasured hospital attributes but it would not estimate the effects of those attributes on choice. We thus utilized data from people who were not recently hospitalized to estimate stage 1. We refer to this group as naïve consumers because their beliefs about unobserved hospital attributes are not based on a recent experience.5 Using data from this group lets us obtain estimates of consumers’ beliefs about hospital quality formed prior to experience, as indicated in equation (4).

While these non-hospitalized respondents did not actually use a hospital during the survey window, the survey asked about their hypothetical future hospital choice. The importance weights on hospital attributes in choosing a hospital also are available for this group. As described earlier, importance weights were obtained for six hospital attributes: overall reputation, medical services, amenities, OOP costs, quality rating, and in-network. However, we did not use the importance weights for quality and in-network because quality was measured by the HQI score and all hospitals in the choice set were eligible for in-network coverage. We included the weights for the remaining four attributes in the model.

While most people in our data had no cost-sharing for inpatient care, we used the preference weights on OOP costs because the firm introduced benefit changes during the survey window that led some hospitals to have cost-sharing. Before July 2004, all union beneficiaries had zero coinsurance for inpatient care; in July 2004, a new scheme introduced 5% coinsurance for union beneficiaries, which was waived if the beneficiary chose a hospital that complied with certain patient safety standards, such as computerized physician order entry.6

We normalized the importance weights to add to 1.0 within an individual. This normalization accounts for individual differences in scoring propensity – a tendency to score all questions high or low.7 Each normalized weight represents the relative importance of a particular attribute to the individual. Using relative weights is meaningful for our choice setting where consumers have to make trade-offs among multiple attributes to select a hospital.
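A minimal sketch of this within-person normalization follows (assumed column names; per Table 2, footnote c, each rating is divided by the sum of the respondent's ratings across all rated items, and which items enter that denominator beyond the four weights used in the model is our reading of the footnote, not spelled out in the text).

```python
# Illustrative normalization of importance weights; all names are hypothetical.
import pandas as pd

def normalize_weights(survey: pd.DataFrame, rated_cols: list[str],
                      used_cols: list[str]) -> pd.DataFrame:
    """Divide each rating by the respondent's total across all rated items."""
    denom = survey[rated_cols].sum(axis=1)
    return survey[used_cols].div(denom, axis=0).add_prefix("w_")

# Example with the four attributes retained in the choice model:
# rated = ["reputation", "medical_services", "amenities", "oop_cost",
#          "quality_rating", "in_network"]
# used = ["reputation", "medical_services", "amenities", "oop_cost"]
# weights = normalize_weights(survey, rated, used)
```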

We then created interaction terms between the normalized weight of each attribute and hospital dummy variables. The coefficients of the interaction terms can be interpreted as the amount of the unmeasured attribute offered by each hospital, relative to a reference hospital. For example, a positive coefficient on the interaction between the importance weight for medical services and the dummy variable for hospital 1 implies that consumers perceive that hospital 1 provides better medical services than the reference hospital. A positive coefficient on the interaction between the importance weight for out-of-pocket costs and a hospital dummy means that hospital is perceived to have low out-of-pocket costs relative to the reference hospital (a desirable feature). The magnitude of the coefficients indicates the relative position of each hospital: a hospital with a larger coefficient is perceived to have more of the desirable attribute.

Other explanatory variables in the first-stage choice model include travel times between the centroids of respondents’ zip codes and the street address of each hospital, each hospital's HQI quality score, profit status, and teaching status. We also interacted the HQI score with an indicator for the second round of the survey (2005 survey) to account for temporal changes in information about hospital clinical quality.
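To make the stage-1 setup concrete, the sketch below shows one way the interactions and the conditional logit could be implemented. It is not the authors' code: choices are stacked in "long" format (one row per respondent-hospital pair), the four normalized weights are interacted with hospital dummies (hospital 15 omitted as the reference), and the McFadden conditional-logit log-likelihood is maximized directly. All data frame and column names are hypothetical.

```python
# Sketch of the stage-1 conditional logit with weight-by-hospital interactions.
import numpy as np
import pandas as pd
from scipy.optimize import minimize


def fit_conditional_logit(df, xcols, chooser="resp_id", chosen="chosen"):
    """Maximize the conditional-logit log-likelihood on long-format data."""
    X = df[xcols].to_numpy(float)
    y = df[chosen].to_numpy(float)
    groups = pd.factorize(df[chooser])[0]      # one group per choice set
    n_groups = groups.max() + 1

    def neg_loglik(beta):
        v = X @ beta
        vmax = np.zeros(n_groups)
        np.maximum.at(vmax, groups, v)         # within-group max for stability
        expv = np.exp(v - vmax[groups])
        denom = np.zeros(n_groups)
        np.add.at(denom, groups, expv)         # within-group sum of exp(v)
        logprob = (v - vmax[groups]) - np.log(denom)[groups]
        return -np.sum(y * logprob)            # y = 1 only on the chosen row

    res = minimize(neg_loglik, np.zeros(X.shape[1]), method="BFGS")
    return pd.Series(res.x, index=xcols)


def add_weight_interactions(long, weights, ref_hospital=15):
    """Create weight-by-hospital-dummy interactions; returns the new columns."""
    cols = []
    for w in weights:
        for h in sorted(long["hospital"].unique()):
            if h == ref_hospital:              # hospital 15 is the reference
                continue
            col = f"{w}_x_h{h}"
            long[col] = long[w] * (long["hospital"] == h).astype(float)
            cols.append(col)
    return cols


# Usage, given a long-format DataFrame `long` with columns resp_id, hospital,
# chosen, drive_time, hqi, survey_2005, for_profit, teaching, and the four
# normalized weights repeated on each of the respondent's 16 rows:
# long["hqi_x_2005"] = long["hqi"] * long["survey_2005"]
# inter = add_weight_interactions(long, ["w_rep", "w_med", "w_amen", "w_oop"])
# xcols = ["drive_time", "hqi", "hqi_x_2005", "for_profit", "teaching"] + inter
# betas = fit_conditional_logit(long, xcols)
```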

4.2. Estimating the effects of different dimensions of hospital quality on choice (stage 2)

We estimated equation (5) with conditional logit analysis to examine how diverse dimensions of hospital quality influence future hospital choice among hospitalized persons. As discussed, hospital clinical quality was measured by HQI scores, and satisfaction ratings were obtained from the survey. We used the coefficients of the interaction terms between preference weights and hospital dummies, estimated from the first stage, to capture consumers’ beliefs about unobserved attributes. We standardized these coefficients to facilitate comparisons of relative contributions to consumer utility across different attributes, including satisfaction ratings, which we scaled to mean zero. Because the second-stage model included estimated parameters as explanatory variables, we obtained standard errors by bootstrapping with 500 repetitions. The model included the same observed explanatory variables as in the first-stage model.
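The paper reports bootstrapped standard errors with 500 repetitions but does not spell out the resampling scheme, so the hedged sketch below re-estimates both stages on each resample of respondents (drawn with replacement within each sample) and takes standard errors from the spread of the stage-2 coefficients. The fitting functions and column names are the hypothetical ones from the stage-1 sketch above.

```python
# Sketch of a two-stage bootstrap for the second-stage standard errors.
import numpy as np
import pandas as pd


def bootstrap_two_stage(naive_long, hosp_long, fit_stage1, fit_stage2,
                        n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    naive_ids = naive_long["resp_id"].unique()
    hosp_ids = hosp_long["resp_id"].unique()
    draws = []
    for _ in range(n_boot):
        # resample whole respondents (choice sets), not individual rows
        s1_ids = rng.choice(naive_ids, size=len(naive_ids), replace=True)
        s2_ids = rng.choice(hosp_ids, size=len(hosp_ids), replace=True)
        s1 = pd.concat([naive_long[naive_long["resp_id"] == i].assign(resp_id=k)
                        for k, i in enumerate(s1_ids)], ignore_index=True)
        s2 = pd.concat([hosp_long[hosp_long["resp_id"] == i].assign(resp_id=k)
                        for k, i in enumerate(s2_ids)], ignore_index=True)
        beliefs_b = fit_stage1(s1)       # re-estimate perception parameters
        draws.append(fit_stage2(s2, beliefs_b))
    return pd.DataFrame(draws).std(ddof=1)   # bootstrap standard errors


# se = bootstrap_two_stage(naive_long, hosp_long, fit_stage1, fit_stage2)
```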

We next examined some special cases where consumers’ reliance on their own satisfaction ratings or beliefs may be small. For example, patients who follow their physicians’ recommendations in selecting a hospital may not value or use signals from their experience or beliefs. We examined this possibility as a sensitivity analysis for a subgroup of respondents who highly value “physician recommendation” in choosing a hospital. Information on the value of physician recommendation was available from the survey. We did not use this variable in the first stage because it represents respondents’ preferences for an information source rather than for a hospital attribute. This information offers a unique opportunity to examine whether consumers’ reliance on physician recommendation influences the impact of their own satisfaction ratings or beliefs on future hospital choice.

Second, patients who are admitted to a hospital through the Emergency Department (ED) may not use the same hospital for a future admission because their prior choices were limited by emergent circumstances. However, their satisfaction ratings at the ED may have a large impact on future choice if they had a good or bad experience at the ED. We examined this possibility using a separate analysis by whether the prior admission was initiated through the ED.

5. Results

The first and second columns of Table 1 present the numbers of non-hospitalized and hospitalized people who chose each hospital as their hypothetical future choice. The third column reports the number of hospitalized respondents who actually used the hospital.8 The distribution of the number of choices across hospitals is very similar in all three groups. We used hospital 15, which has the shortest average travel time and the largest share among respondents in all groups, as the reference hospital in the choice models.

Table 1.

Hospital summary information

Hospital | Future 1st choice by non-hospitalized people a (%) | Future 1st choice by hospitalized people a (%) | Actual choice by hospitalized people a (%) | Average driving time (minutes) | HQI quality score b | Satisfaction rating among users: mean | Satisfaction rating among users: standard deviation
1 27 (3.4) 34 (3.5) 39 (4.6) 34.3 82 8.03 2.59
2 46 (5.8) 60 (6.2) 59 (7.0) 36.8 77 8.10 2.13
3 31 (3.9) 37 (3.8) 39 (4.6) 40.6 70 7.59 2.52
4 23 (2.9) 31 (3.2) 36 (4.3) 35.5 70 7.67 2.52
5 21 (2.7) 33 (3.4) 33 (3.9) 90.6 71 8.79 1.73
6 86 (10.9) 92 (9.5) 87 (10.4) 32.7 70 8.47 1.68
7 74 (9.4) 85 (8.8) 57 (6.8) 53.8 77 8.14 2.22
8 46 (5.8) 54 (5.6) 54 (6.4) 34.8 65 8.22 2.02
9 50 (6.3) 61 (6.3) 34 (4.1) 43.0 71 8.18 2.37
10 17 (2.2) 17 (1.8) 16 (1.9) 42.4 73 7.25 3.13
11 104 (13.2) 136 (14.0) 106 (12.6) 36.2 78 8.58 1.63
12 31 (3.9) 37 (3.8) 29 (3.5) 34.5 76 8.07 2.59
13 19 (2.4) 37 (3.8) 30 (3.6) 44.1 69 8.30 2.39
14 36 (4.6) 56 (5.8) 48 (5.7) 35.5 63 8.65 1.47
15 132 (16.7) 144 (14.9) 130 (15.5) 29.7 74 7.93 1.93
16 47 (5.9) 55 (5.7) 42 (5.0) 34.3 67 8.19 1.53
Total 790 (100) 969* (100) 839* (100)
Mean 41.2 72.1 8.11
a

Hospitalization status was based on the past one year period and was identified using claims data supplied by the firm's third-party administrator.

b

Hospital Quality Initiative (HQI) scores are clinical quality information released by the Center for Medicare and Medicaid Services (CMS). This information was not publicly available during the study period.

*

The discrepancy between the two total numbers reflects the number of employees who used a hospital that is not included in the choice set for future use.

The fifth column reports the HQI quality scores, and the last column shows satisfaction ratings by respondents who actually used each hospital. The satisfaction scores lie within a relatively narrow range, from 7.25 to 8.79, but three hospitals chosen by fewer respondents for future choice than actual choice all had below-average satisfaction ratings.

Table 2 presents descriptive statistics of the study population. The mean age of both the hospitalized and non-hospitalized groups is about 52. The non-hospitalized group has higher proportions of college-educated respondents, males, and families with annual incomes above $70,000. Thirty-two percent of hospitalized respondents reported that they would switch to a different hospital for a future hospital stay. Non-hospitalized respondents gave the highest preference weight to the specialty medical services offered by a hospital and the second-highest weight to the reputation of a hospital. Hospital amenities were the least important attribute among those listed.

Table 2.

Descriptive Statistics of Study Population

Individual characteristics | Non-hospitalized people a (N=790): mean, SD b | Hospitalized people a (N=969): mean, SD
Age 51.9 8.3 52.8 9.1
Female 0.55 0.49 0.60 0.49
Income (>$70,000) 0.50 0.49 0.41 0.49
College educated 0.45 0.49 0.37 0.48
Union 0.47 0.49 0.54 0.49
Reporting they would switch to a different hospital for future use (hospitalized group) 0.32 0.46
Importance weights on unmeasured hospital attributes
Original scale c: mean, SD | Normalized weights c: mean, SD
Reputation 8.41 1.50 0.11 0.018
Medical services 8.78 1.49 0.12 0.020
Amenities 6.62 2.26 0.10 0.025
Out-of-pocket cost 7.55 2.19 0.08 0.024
a

Hospitalization status was based on the past one-year period and was identified using claims data supplied by the firm's third-party administrator.

b

Standard Deviation

c

The importance weights were originally measured on a 1 to 10 scale (1= not at all important). We normalized the weights within an individual by dividing a weight on a specific attribute by the sum of responses for all attributes.

Table 3 reports the results of the first-stage model that estimates non-hospitalized respondents’ beliefs about unmeasured attributes. Non-hospitalized people perceive hospitals to differ along a few key dimensions. Three of the fifteen coefficients of interactions between hospital dummies and the importance weight on reputation are significantly greater than zero, suggesting these hospitals are perceived to have better reputation than the reference hospital. Non-hospitalized people also perceive that hospital 5 offers more medical services than the reference hospital. Several estimates capturing OOP costs are significant and negative, implying that these hospitals are perceived to have higher OOP costs than the reference hospital. No coefficients of the amenity weight interactions were significant, indicating that naïve respondents do not perceive differences in amenities across hospitals. We include coefficients on the interaction terms for reputation, medical services and OOP costs in the second-stage choice model as hospital-specific estimates of consumers’ beliefs about those attributes.

Table 3.

Hypothetical Hospital Choice among Non-hospitalized People

Coefficient Standard Error
Measured Hospital Characteristics
    Driving time −0.099*** 0.004
    HQI score 0.040 0.118
    HQI score*2005 survey −0.030 0.016
    For-profit −0.259 2.484
    Teaching −5.922*** 2.220
Unmeasured Attribute Importance
    Reputation
        Hospital 1 0.876 12.468
        Hospital 2 10.319 9.958
        Hospital 3 −10.813 11.786
        Hospital 4 2.619 12.389
        Hospital 5 −21.547 16.292
        Hospital 6 3.948 8.261
        Hospital 7 −3.024 10.549
        Hospital 8 −8.581 10.024
        Hospital 9 7.136 9.168
        Hospital 10 −6.788 14.940
        Hospital 11 22.210*** 7.570
        Hospital 12 19.468* 10.264
        Hospital 13 −8.510 13.817
        Hospital 14 31.507*** 11.347
        Hospital 16 12.841 9.642
    Medical services
        Hospital 1 −4.032 12.148
        Hospital 2 −2.819 8.180
        Hospital 3 3.450 10.188
        Hospital 4 3.660 10.271
        Hospital 5 38.388*** 11.865
        Hospital 6 4.838 6.802
        Hospital 7 3.146 8.183
        Hospital 8 8.891 8.506
        Hospital 9 0.137 7.545
        Hospital 10 −12.869 11.548
        Hospital 11 −3.109 6.310
        Hospital 12 −10.957 8.465
        Hospital 13 −4.309 10.306
        Hospital 14 13.360 10.571
        Hospital 16 −1.700 8.073
    Amenities
        Hospital 1 13.185 9.733
        Hospital 2 2.345 6.772
        Hospital 3 6.467 8.938
        Hospital 4 −5.796 8.476
        Hospital 5 −5.162 11.918
        Hospital 6 −0.498 5.498
        Hospital 7 −1.931 6.935
        Hospital 8 0.415 6.944
        Hospital 9 1.351 6.848
        Hospital 10 6.361 10.969
        Hospital 11 −0.571 5.184
        Hospital 12 −3.424 7.454
        Hospital 13 11.774 10.908
        Hospital 14 3.902 7.719
        Hospital 16 −7.264 6.551
    Out-of-pocket costs
        Hospital 1 −25.135*** 8.599
        Hospital 2 −15.583** 7.172
        Hospital 3 −12.790 8.099
        Hospital 4 −12.326 8.662
        Hospital 5 5.006 11.603
        Hospital 6 −8.338 5.867
        Hospital 7 2.890 7.781
        Hospital 8 −9.624 6.891
        Hospital 9 −12.820** 6.394
        Hospital 10 3.542 12.400
        Hospital 11 −15.343*** 5.357
        Hospital 12 −13.922* 7.633
        Hospital 13 −8.556 9.856
        Hospital 14 1.385 7.925
        Hospital 16 −7.549 6.891
Number of choosers 790
Number of observations 12,640
Log-likelihood −1575.85
Pseudo R2 0.2805

Hospital 15 is the reference group

*

p<0.1

**

p<0.05

***

p<0.01

To check the validity of our results, we examined characteristics of hospitals that have significant coefficients on the preference-weight variables. One of the three hospitals with positive and significant coefficients on reputation weights has been named among the “Best Hospitals in America” by US News and World Report and all three hospitals are listed in the top 5 in their MSA. This suggests that consumers may form perceptions about hospital reputation from publicly available information sources and that our estimates represent perceived reputational differences among hospitals.

Further, all three hospitals with positive and significant coefficients on reputation weights are affiliated with hospital systems. We are not aware of any prior study that examined how system affiliation influences hospital choice. However, Dranove and Shanley (1995) reported that hospitals joining systems appeared to gain reputational benefits by differentiating themselves from other hospitals with similar attributes. We could have included a dummy indicator of system affiliation in the model. However, if we found a positive coefficient on the indicator, we would not know what consumers actually valued from system affiliation. Our approach directly shows that system affiliation has “branding” effects on hospital choice.

Hospital 5, which was perceived to offer more specialty medical services, had the second largest number of specialty services identified in the AHA data (13 of 17 specialty services; another hospital had 14 services, and the average was 10). This indicates that people who reported that specialty medical services are important in their hospital choices selected a hospital that offered more specialty services.

We did not use indicators of services from the AHA data in our model because the survey question did not specify what surgical procedure would be needed during the future hospitalization. It was thus unclear what services to include and how to interpret their coefficients, if included. Results on each specialty service (e.g., cardiac procedure) would not be informative when a specific reason for a hospitalization was not presented.

Hospitals with significant coefficients for OOP cost were those that did not meet the patient safety criteria to have hospital coinsurance waived for union beneficiaries after July 1, 2004, implying that our estimated cost parameters reflect hospitals’ OOP costs. Separate analysis by union status confirmed that a few significant coefficients on cost weights became insignificant for non-union beneficiaries who had zero coinsurance.

Finally, we explored a model that adds hospital fixed effects.9 It is not standard practice to include fixed effects in a choice model with preference-weight data; instead, the model lets each choice have an intercept for each unobserved attribute whose preference data are available (Harris and Keane, 1999). However, to account for any residual attributes that are not captured by the four preference weights, we added hospital fixed effects and found that this did not change the coefficients of the interactions between preference weights and hospital dummies. This finding suggests that our estimates are robust after controlling for any other hospital-specific attributes.

We now turn to the results of the second-stage choice model for hospitalized respondents. Table 4 reports maximum likelihood estimates for four specifications of the model, which employ different sets of controls. Model I uses only measured hospital characteristics, including the HQI clinical quality score. Model II adds consumers’ beliefs about reputation, medical services, and OOP costs, obtained from the coefficients in the first stage regression. Model III adds individual satisfaction ratings plus an indicator of prior use of the hospital. Standard errors in Models II and III are bootstrapped. For comparison, column IV presents estimates from a model that uses hospital fixed effects to capture consumers’ beliefs about unobserved attributes, as is common in the literature. Because parameters of a choice model are identified relative to the variance of the utility function (i.e., coefficients represent both scale and taste effects), it is difficult to directly compare coefficients across these models (Long, 1997). Thus, we discuss changes in the coefficients on variables of interest relative to the coefficient of driving time. The ratios of these coefficients are invariant to the scale parameter.
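The scale point can be made explicit. In a logit, coefficients are identified only up to a positive scale factor that is inversely related to the error variance, so dividing by the driving-time coefficient removes the unidentified scale (a standard property of discrete-choice models, stated here for clarity):

```latex
\hat{\beta}_q = \lambda \beta_q, \quad \hat{\gamma} = \lambda \gamma
\;\;\Longrightarrow\;\;
\frac{\hat{\beta}_q}{\hat{\gamma}} = \frac{\lambda \beta_q}{\lambda \gamma} = \frac{\beta_q}{\gamma},
```

where λ is the model-specific scale factor and γ is the driving-time coefficient; the ratio is free of λ and therefore comparable across Models I–IV.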

Table 4.

Future Hospital Choice among Hospitalized People

I II III IV
Measured Hospital Characteristics
    Driving time −0.073*** (0.003) −0.081*** (0.006) −0.045*** (0.004) −0.047*** (0.034)
    HQI quality 0.092*** (0.012) 0.086*** (0.015) 0.063*** (0.020)
    HQI quality*2005 survey −0.021 (0.014) −0.019 (0.015) −0.021 (0.022)
    Teaching 0.699*** (0.174) −2.290*** (0.360) −2.013*** (0.493)
    For-profit −1.880*** (0.206) −0.625** (0.291) −0.416 (0.401)
Consumers’ perceptions a
    Reputation 0.722*** (0.096) 0.617*** (0.123)
    Medical services 0.969*** (0.097) 0.695*** (0.106)
    Out-of-pocket cost 0.392*** (0.070) 0.390*** (0.092)
Experiential measures
    Recent use indicator 3.253*** (0.141) 3.271*** (0.091)
    Individual Satisfaction rating 0.820*** (0.128) 0.818*** (0.092)
Number of choosers (observations) 969 (15,504) 969 (15,504) 969 (15,504) 969 (15,504)
Log-likelihood −2231.6 −2131.4 −1289.9 −1274.4
Pseudo R2 0.1694 0.2067 0.5199 0.5258

Standard errors are in parentheses. Standard errors for Models II and III are bootstrapped with 500 repetitions.

a

These variables are hospital-specific estimates of unmeasured attributes, obtained from the choice model of non-hospitalized people.

*p<0.10

**

p<0.05

***

p<0.01.

Estimates from Model I, which includes observed hospital characteristics only, are consistent with basic expectations about hospital choice. Consumers prefer closer, teaching, and not-for-profit hospitals. The positive coefficient on the unpublicized HQI score implies that consumers with a prior hospitalization tend to select hospitals with high clinical quality before its public disclosure. This is consistent with Mukamel et al. (2004/2005), who reported that hospital users seemed to correctly infer hospital mortality rates prior to the release of “report cards” on mortality. However, it is different from the insignificant effect of the HQI score in the first-stage analysis among non-hospitalized people (Table 3). This difference may indicate that people with a prior hospitalization were able to obtain information about clinical quality through their experience, or they may have searched for informal information for their prior hospitalization. The interaction term between the HQI score and the 2005 survey variable is not significant, implying no secular change in consumers’ informal information.

Model II adds estimates of consumers’ beliefs about unmeasured hospital attributes. The sign of the teaching status coefficient changes from positive to negative, suggesting that teaching status captures unmeasured quality differences in Model I. After controlling for unmeasured hospital quality, teaching status may reflect discomfort with using a teaching hospital, such as the possibility of being seen by medical residents. The effect of for-profit status on choice relative to driving time becomes smaller than in Model I. However, the impact of the HQI quality score on hospital choice relative to driving time is similar to Model I, confirming that consumers tend to choose hospitals with high clinical quality even before CMS publicly released this information. The positive and significant coefficients on consumers’ beliefs about reputation and medical services indicate that hospital choice is influenced by these attributes, even after controlling for consumers’ informal information about clinical quality. This may be because the HQI score is based on specific clinical measures for only a few conditions, while reputation and medical services capture perceptions about general quality based on a broader set of services.

Model III adds the experiential measures – the prior-use indicator and individual satisfaction ratings. For-profit status loses its significance, while teaching status remains significant and negative. The coefficients of the HQI score and the consumers’ beliefs variables remain positive and significant. Although these coefficients are slightly smaller than in Model II, the coefficient of driving time also decreases substantially, so the impacts of the HQI score and consumers’ perceptions relative to driving time are not reduced.

Column IV presents the results from a model that uses hospital fixed effects instead of observed hospital characteristics. The coefficients of all variables included in both Models III and IV are very similar, and the contributions of prior use and satisfaction ratings to utility relative to driving time are almost the same. This consistency increases our confidence in the estimates and suggests that our approach of using preference weights and the fixed-effects method capture consumers’ prior beliefs in a similar way, although the model incorporating consumers’ perceptions adds information about important unobserved attributes.

We calculated marginal effects from the estimates in Model III, our favored specification. First, consumers’ perceptions of medical services and hospital reputation contribute substantially to consumer utility. One standard deviation increases in medical services and reputation would increase a hospital's market share from 20% to 31% and 29%, respectively. The large effect of prior beliefs reported in the literature may have captured consumers’ perceptions about these attributes (Chernew et al., 2008). The coefficient of OOP cost indicates consumers tend to choose a hospital that is perceived to have low cost. However, the impact of perceived OOP costs on hospital choice is smaller than those of perceptions about reputation or medical services.
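As a back-of-envelope check on these magnitudes (our calculation, not the authors' exact procedure), the own-variable marginal effect in a conditional logit evaluated at a 20% base share is P(1 − P)β, which with the Model III coefficients approximately reproduces the reported shares:

```latex
\frac{\partial P_j}{\partial x_{jq}} \;=\; P_j\,(1-P_j)\,\beta_q,
\qquad
\Delta P_j^{\text{med}} \approx 0.20\times0.80\times0.695 \approx 0.11 \;\;(20\%\rightarrow 31\%),
\qquad
\Delta P_j^{\text{rep}} \approx 0.20\times0.80\times0.617 \approx 0.10 \;\;(20\%\rightarrow \text{about }30\%).
```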

Second, a hospital with a base market share of 20% would increase its market share by about 5 percentage points for a one standard deviation increase in the HQI score. This suggests that consumers already use hospitals with high clinical quality before public reporting; however, its impact on hospital choice is smaller than those of perceptions about reputation or medical services.

Third, satisfaction ratings matter for future hospital choice. A one standard deviation increase in the satisfaction rating would increase a hospital's market share from 20% to 33%. The marginal effect – in terms of a travel time trade-off – indicates that a consumer would drive 18.6 minutes farther to use a hospital with a one standard deviation better satisfaction rating.
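The same approximation applied to the satisfaction coefficient gives 0.20 × 0.80 × 0.820 ≈ 0.13 (20% to 33%), and the travel-time trade-off is the ratio of the satisfaction and driving-time coefficients in Model III; with the rounded coefficients in Table 4 this is roughly 18 minutes, consistent with the 18.6 minutes reported from the unrounded estimates:

```latex
\Delta T \;=\; \frac{\beta_{\text{satisfaction}}}{\lvert \gamma_{\text{time}} \rvert}
         \;\approx\; \frac{0.820}{0.045} \;\approx\; 18 \text{ minutes per one-SD improvement in satisfaction.}
```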

Finally, the estimated marginal effect of prior use is 64 percentage points (evaluated at the mean values of the covariates). This reflects strong persistence in hospital choice. We are not aware of any prior study that has examined patient-level inertia in hospital choice. However, Cutler et al. (2004) found that high-performing hospitals did not experience an increase in market share even after they were publicly identified as high-quality providers, although hospitals with poor quality lost some demand among relatively healthy patients, who may have been better able to search for information and to travel to other hospitals.

Sensitivity Analysis

A complicating factor in examining hospital choice is that patients do not choose a hospital alone; instead, that decision is made jointly with referring physicians. Because our study looks at a hypothetical future hospital choice, bias from not considering this joint decision might be small. However, if some consumers highly value their physicians’ suggestions and know which hospital their physician would recommend, they may choose that would-be-recommended hospital, even for a hypothetical choice. In this case, the impact of their experience or other beliefs on hospital choice is likely to be small, while the effect of prior use may be large.

We explored this possibility using the importance weights on “physician recommendation” when choosing a hospital. We normalized each survey respondent's weight on physician recommendation by dividing it by the sum of the respondent's responses for hospital attributes and physician recommendation. We then divided the sample into three groups based on the normalized weights and analyzed the top and bottom tertiles separately.
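A small sketch of this split follows (assumed column names; per the text, the denominator is the sum of the respondent's ratings on the hospital attributes plus physician recommendation, and the exact set of attribute items in that sum is our assumption).

```python
# Illustrative tertile split on the normalized physician-recommendation weight.
import pandas as pd

def physician_rec_tertiles(survey: pd.DataFrame) -> pd.Series:
    attr_cols = ["reputation", "medical_services", "amenities", "oop_cost",
                 "quality_rating", "in_network", "physician_rec"]
    denom = survey[attr_cols].sum(axis=1)
    w_md = survey["physician_rec"] / denom
    # labels: 0 = bottom tertile, 1 = middle, 2 = top
    return pd.qcut(w_md, q=3, labels=[0, 1, 2])

# survey["md_tertile"] = physician_rec_tertiles(survey)
```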

Table 5 presents estimates from this subgroup analysis. As expected, we found a larger inertia effect in the top tertile (high valuation of physician recommendation) than in the bottom tertile; the impact of prior use relative to driving time is also larger in the top tertile. The larger contribution of satisfaction ratings to hospital choice in the bottom tertile appears to suggest that consumers who do not rely on physician recommendation place more weight on their own satisfaction ratings. However, the impacts of these variables on choice were not significantly different between the two groups and are similar to those in the overall analysis, indicating that our findings are robust to accounting for consumers’ valuations of physician recommendation in hospital choice.

Table 5.

Results of sensitivity analysis

Weights on physician recommendation: top tertile | Weights on physician recommendation: bottom tertile | Admission through emergency room: yes | Admission through emergency room: no
Measured Hospital Characteristics
    Driving time −0.052*** (0.006) −0.045*** (0.005) −0.047*** (0.005) −0.044*** (0.004)
Consumers’ perceptions a
    Reputation 0.677*** (0.233) 0.774*** (0.202) 0.783*** (0.174) 0.467*** (0.159)
    Medical services 0.931*** (0.232) 0.690*** (0.175) 0.691*** (0.163) 0.704*** (0.157)
    Out-of-pocket cost 0.330* (0.174) 0.562*** (0.162) 0.557*** (0.137) 0.244*** (0.113)
Experiential measures
    Recent use indicator 3.578*** (0.289) 3.142*** (0.259) 3.091*** (0.214) 3.356*** (0.190)
    Individual satisfaction rating 0.796*** (0.236) 1.029*** (0.239) 0.884*** (0.216) 0.779*** (0.182)
Number of choosers (observations) 315 (5,040) 322 (5,152) 417 (6,672) 547 (8,752)
Log-likelihood −379.67 −446.88 −609.27 −673.07
Pseudo R2 0.5653 0.4994 0.4730 0.5562

All models include each hospital's teaching/profit status, HQI score, and an interaction term between HQI score and 2005 survey. Standard errors in all models (in parentheses) are bootstrapped with 500 repetitions.

a

These variables are hospital-specific estimates of unmeasured attributes, obtained from the choice model of non-hospitalized people.

*

p<0.10

**p<0.05

***

p<0.01.

Next, we analyzed subgroups of patients with a scheduled admission versus an admission through the ED. Patients admitted through the ED may have a restricted set of choices, not based on factors they would otherwise consider, implying they may switch to another hospital for a future use. However, if ED services are delivered quickly and are well-managed, patients may rely on their satisfaction with an ED-initiated admission for a future hospital use.

About 43% of the respondents were hospitalized via the ED (N=417).10 The average satisfaction rating among ED users was lower than among those with planned hospitalizations (7.84 vs. 8.33; p < 0.001). Because the coefficients on driving time were similar in both groups (-0.045 vs. -0.044), we discuss differences in the actual coefficients between these two groups. As expected, the impact of prior use was smaller among ED users than among non-ED users (Table 5). The coefficient on the satisfaction rating was higher among ED users than non-ED users. However, these estimates did not differ significantly between the two groups.

Finally, we checked whether the estimated beliefs about reputation and medical services obtained from the first stage simply represented the popularity of each hospital in the second-stage analysis. We estimated Model III – our preferred model – including the percent of non-hospitalized respondents who named the hospital as their future choice, in addition to the beliefs estimates. This share variable is likely to control for any unobserved hospital quality that is not captured by those beliefs among non-users. The coefficient of the share variable was insignificant and including it did not change the impacts of beliefs.

6. Discussion and conclusion

Researchers have proposed that development of initiatives to increase consumer information requires clarification of what quality aspects consumers consider relevant and use for their health care choices (Haas-Wilson, 1994; Feldman et al., 2000; Harris and Buntin, 2008). However, no research has estimated the relative impact of different aspects of quality on hospital choice. Our study contributes to this discussion by introducing hospital quality measures that have not been used in the literature. The main findings of the study are as follows.

First, we found that consumers perceive differences in reputation, medical services, and out-of-pocket costs across hospitals, but not in amenities. The second-stage choice model showed that medical services and reputation had large impacts on future hospital choice. Prior studies may have captured consumers’ combined perceptions about these attributes with hospital fixed effects. By examining the relative contributions of different aspects of unobserved hospital quality to choice, our study provides more complete information about the role of consumers’ beliefs in hospital choices.

Second, our analysis suggests that consumers tend to use hospitals with better clinical quality scores before the scores are publicly reported. However, the contribution of this informal information about clinical quality to hospital choice is small, compared with those of consumers’ perceptions about reputation or medical services.

Finally, we found large and positive effects of prior use and individuals’ satisfaction ratings from their own experience on future hospital choice. The significant effect of satisfaction ratings is different from Abraham et al. (2006), probably because that study looked at whether a “bad experience” led to health plan switching rather than physician or hospital switching. A health plan consists of many contracted providers with whom consumers make primary contacts. Thus, if a patient's rating of her health plan was low due to a bad experience with a physician, she may seek another physician without switching plans. Our finding of persistency in hospital choice appears to be consistent with “inertia” effects reported in health plan choice (Jin and Sorenson, 2006).

Our results should be interpreted in the context of the choice setting we studied. First, our model is based on a hypothetical future choice. Using actual sequential hospital choices might provide better estimates of the impacts of consumers’ perceptions or individual experience on hospital choice, and data with more than one follow-up would allow estimation of learning effects over time from hospitalizations. However, using a hypothetical future choice helped us avoid a potential problem: people with repeated hospital stays may be severely ill and thus likely to choose a hospital from a limited choice set. Further, repeated hospitalizations are not common among working-age people, resulting in insufficient power for that analysis.

Second, we used stated preference data from non-hospitalized respondents to estimate consumers’ beliefs about unobserved hospital quality. We referred to this group as “naïve” consumers because they did not have a recent prior hospital stay. However, this population may not be as naïve as we assume. They may have used a hospital in the choice set before the survey window and may have formed beliefs about hospital quality based on that experience. It is also possible that they heard about the quality of the hospitals from users. We could not control for the possibility that respondents had “ever” used a hospital or acquired quality information from their co-workers. However, the importance weights from non-hospitalized respondents were obtained through prospective questions, minimizing the possibility that they reflect the quality of a particular hospital at which a respondent was treated. Using data from respondents at the same firm also helped control for worksite factors that may affect hospital choice, because non-hospitalized respondents are exposed to the same environment as hospital users.

Third, we estimated only the average belief about each unmeasured hospital attribute rather than individual-specific beliefs. However, we estimated separate parameters for beliefs about different attributes. This improves on the fixed-effects approach, which summarizes the average beliefs about all attributes with a single value. Further, individual heterogeneity in beliefs is partially incorporated by the prior use indicator, which captures information about the hospital available to a user.

Recent public reporting programs have been expanded to include quality indicators based on patients’ experiences. For example, the Centers for Medicare and Medicaid Services began to release Consumer Assessment of Healthcare Providers and Systems (CAHPS) data for hospitals in 2008. The CAHPS data include patients’ overall satisfaction ratings with the hospital experience. Although the impact of releasing CAHPS-type information on hospital choice has yet to be examined, our finding of the positive impact of individual satisfaction ratings is encouraging given this trend. If consumers do not have their own experience, they may turn to report cards containing information on satisfaction ratings from other users’ experience.

In addition to disclosing relevant information to the public, other strategies are needed to increase the use of quality information, considering the large effects of consumers’ beliefs. Mechanisms designed to disseminate quality information effectively may help increase consumers’ awareness and use of the information. It is reported that less than a quarter of consumers are aware of publicly available comparative quality information (Harris and Buntin, 2008). While most quality information is available via the internet, the rate of using the internet as a source of provider quality information is low (Harris and Buntin, 2008). As we have seen in the health plan “report card” movement, employers’ efforts to provide employees with hospital quality information in an easy-to-read format, along with financial incentives to choose a high-quality provider, may improve access to and use of the information. Ensuring that physicians are informed about public quality information also may help if consumers choose hospitals based on their physicians’ recommendations and physicians incorporate such information into their referrals.

Our study contributes to the literature on consumer information and choice by assessing the role of consumers’ knowledge and beliefs about different dimensions of hospital quality in making a future hospital choice. To develop effective ways to increase consumer information about hospital quality and achieve its ultimate goal – improving the performance of health care markets – many questions remain to be answered: How do consumers make an initial choice of hospital? How do consumers form their beliefs about hospital quality? What types of information would lead to cost control, use of appropriate care, and improvements in health and satisfaction? Continued exploration of these issues will help achieve efficient operation of health care markets and ensure the provision of good-quality care in all its dimensions.

Acknowledgements

This project was supported by grants 2 U18 HS13680 and 2 RO1 HS010730-04 from the Agency for Health Care Research and Quality (AHRQ). The authors thank Donald Miller for assistance with data preparation. We are grateful to Richard Lindrooth and seminar participants at the Pennsylvania State University, University of Pennsylvania, University of Minnesota, and American Society of Health Economists and American Economic Association conferences for their helpful comments.

Appendix

Table A1.

Survey questions used in the study

Experience rating for hospitalized people:
    “Using a scale of 1 to 10, with 1 being very dissatisfied and 10 being very satisfied. Please rate your overall satisfaction with the care provided at this hospital.”
Hypothetical hospital choice:
    “Please tell me the name of the hospital you are most likely to consider if you need to have a surgical procedure requiring an overnight hospital stay.”
    “Are there others you would consider using?”
Preference weights for unmeasured hospital attributes:
    “On a scale of 1 to 10, with 1 being not at all valuable and 10 being extremely valuable, please rate each item.”
    “The next time you decide which hospital to use for inpatient services, how valuable would you find: The hospital's overall reputation?”
    “The next time you decide which hospital to use for inpatient services, how valuable would you find: The specialty medical services offered by the hospital, for example cardiac bypass surgery?”
    “The next time you decide which hospital to use for inpatient services, how valuable would you find: The amenities offered by the hospital, for example, private rooms and convenient parking?”
    “The next time you decide which hospital to use for inpatient services, how valuable would you find: Your expected out-of-pocket costs for the hospital stay?”
Preference weights for information sources:
    “On a scale of 1 to 10, with 1 being not at all valuable and 10 being extremely valuable, please rate each item.”
    “The next time you decide which hospital to use for inpatient services, how valuable would you find: Your physician's recommendation of the hospital?”

Footnotes


1. Only one household member (either the employee or his/her spouse) was included in the survey. Spouses were surveyed only if they were covered by the employer’s insurance plan. Dependent children were not included.

2. These hospitals tended to be located farther away from most respondents’ residences than the other hospitals.

3. Consumers usually pay less when using hospitals in their health plans’ network.

4. We assigned the mean values of three indicators about smoking cessation counseling to four hospitals with missing values of these indicators.
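As a minimal sketch of the imputation step described above (the data and column names are hypothetical, not the study’s variables), mean imputation of the three indicators could be done as follows:

    import numpy as np
    import pandas as pd

    # Hypothetical hospital-level data: three smoking-cessation counseling
    # indicators, with missing values for some hospitals.
    hospitals = pd.DataFrame({
        "hospital_id": [1, 2, 3, 4, 5, 6],
        "smoke_ind1": [0.90, 0.85, np.nan, 0.95, 0.78, np.nan],
        "smoke_ind2": [0.80, 0.75, np.nan, 0.88, 0.70, np.nan],
        "smoke_ind3": [0.70, 0.72, np.nan, 0.81, 0.66, np.nan],
    })

    smoke_cols = ["smoke_ind1", "smoke_ind2", "smoke_ind3"]

    # Replace each missing indicator with the across-hospital mean of that
    # indicator, mirroring the mean-value assignment described in the footnote.
    hospitals[smoke_cols] = hospitals[smoke_cols].fillna(hospitals[smoke_cols].mean())

    print(hospitals)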

5. Naïve consumers may have been hospitalized before the survey window; however, we do not have information on such hospital use. We only know they were not hospitalized within the past 12-month period.

6. We might have controlled for OOP cost using an interaction term among a hospital’s compliance status, the survey period, and a respondent’s union status. However, it was not clear how to construct this term: the benefit change was made during the second survey window, whereas compliance was based on information from the first survey window, and hospitals may have changed their compliance status over time.
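For concreteness, the candidate control described above would be a three-way interaction of the schematic form (the notation is illustrative, not taken from the paper)

\[ \text{Compliant}_j \times \text{SecondWindow}_t \times \text{Union}_i , \]

where each factor is an indicator variable. The difficulty noted in the footnote is that \(\text{Compliant}_j\) is measured only from first-window information and may change over time, so the term is not well defined for the period in which the benefit change applied.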

7. Absolute (raw) values of the weights may represent the strength of preferences. However, because preference weights do not have a natural unit, respondents may perceive the scale differently. This is a well-recognized issue in using rating-scale techniques to measure health state utilities (Torrance, 1986). If we used absolute weights to reflect the strength of preferences, we would constrain both the change in utility between adjacent categories and the strength of utility for the same weight value to be equal across all individuals. To check whether our results are sensitive to rescaling, we re-estimated our models using absolute weights and found few changes in the results.
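To make the rescaling issue concrete, here is a minimal sketch of one common way such within-respondent rescaling could be implemented; the data, column names, and the specific rule (dividing each respondent’s ratings by that respondent’s own mean rating) are illustrative assumptions rather than the transformation used in the paper.

    import pandas as pd

    # Hypothetical survey data: one row per respondent, with raw 1-10
    # importance ratings for four unmeasured attributes (names illustrative).
    raw = pd.DataFrame({
        "respondent_id": [1, 2, 3],
        "reputation":    [9, 5, 10],
        "services":      [8, 5, 10],
        "amenities":     [4, 5, 6],
        "oop_cost":      [7, 5, 8],
    })

    attrs = ["reputation", "services", "amenities", "oop_cost"]

    # Divide each respondent's ratings by that respondent's own mean rating,
    # so values above 1 flag attributes the respondent rates as relatively
    # important; this removes respondent-specific use of the 1-10 scale.
    rescaled = raw.copy()
    rescaled[attrs] = raw[attrs].div(raw[attrs].mean(axis=1), axis=0)

    print(rescaled)

Under a rule of this kind, a respondent who rates every attribute 5 and one who rates every attribute 10 receive identical relative weights, which is precisely the scale-use variation the footnote’s rescaling is meant to absorb.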

8. The discrepancy between the totals for future and actual use among hospitalized respondents reflects the number of respondents who actually used a hospital that is not included in the choice set for future use.

9. We did not include hospital fixed effects in our primary model because we would not know how to interpret their coefficients. In theory, if the survey had asked respondents about their preferences for “generic quality” (all other attributes not mentioned), we could have used that weight together with a vector of hospital fixed effects to identify the generic quality offered by each hospital. However, we do not believe this would be informative.
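As a purely schematic illustration (not the paper’s estimated specification), a discrete-choice model with hospital fixed effects could be written as

\[ U_{ij} = x_{ij}'\beta + \delta_j + \varepsilon_{ij}, \]

where \(x_{ij}\) collects respondent–hospital terms such as distance and the weighted quality perceptions, and \(\delta_j\) is a hospital fixed effect. Because \(\delta_j\) absorbs every hospital attribute that is constant across respondents, its estimate would bundle “generic quality” together with all other unmeasured hospital-level characteristics, which is the interpretation problem noted in the footnote.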

10. The rate of emergent hospitalizations seemed large, but we verified it with the health plan’s staff. We also examined diagnosis codes for these hospitalizations and found most codes represented emergent cases.

References

  1. Abraham JM, Feldman R, Carlin C, Christianson J. The effect of quality information on consumer health plan switching: evidence from the Buyers Health Care Action Group. Journal of Health Economics. 2006;25(4):762–781. doi: 10.1016/j.jhealeco.2005.11.004.
  2. Arrow KJ. Uncertainty and the welfare economics of medical care. American Economic Review. 1963;53(5):941–973.
  3. Beaulieu ND. Quality information and consumer health plan choices. Journal of Health Economics. 2002;21(1):43–63. doi: 10.1016/s0167-6296(01)00126-6.
  4. Chernew M, Gowrisankaran G, Scanlon DP. Learning and the value of information: evidence from health plan report cards. Journal of Econometrics. 2008;144:156–174.
  5. Cutler DM, Huckman RS, Landrum MB. The role of information in medical markets: an analysis of publicly reported outcomes in cardiac surgery. American Economic Review. 2004;94(2):342–346. doi: 10.1257/0002828041301993.
  6. Dafny L, Dranove D. Do report cards tell consumers anything they don't already know? The case of Medicare HMOs. RAND Journal of Economics. 2008;39(3):790–821. doi: 10.1111/j.1756-2171.2008.00039.x.
  7. Dranove D, Sfekas A. Start spreading the news: a structural estimate of the effects of New York hospital report cards. Journal of Health Economics. 2008;27(5):1201–1207. doi: 10.1016/j.jhealeco.2008.03.001.
  8. Dranove D, Shanley M. Cost reductions or reputation enhancement as motives for mergers: the logic of multihospital systems. Strategic Management Journal. 1995;16:55–74.
  9. Feldman R, Christianson J, Schultz J. Do consumers use information to choose a health care provider system? Milbank Quarterly. 2000;78(1):47–77. doi: 10.1111/1468-0009.00161.
  10. Haas-Wilson D. The relationships between the dimensions of health care quality and price: the case of eye care. Medical Care. 1994;32(2):175–182. doi: 10.1097/00005650-199402000-00008.
  11. Harris KM, Keane MP. A model of health plan choice: inferring preferences and perceptions from a combination of revealed preference and attitudinal data. Journal of Econometrics. 1999;89:131–157.
  12. Harris K, Schultz J, Feldman R. Measuring consumer perceptions of quality differences among competing health benefit plans. Journal of Health Economics. 2002;21(1):1–17. doi: 10.1016/s0167-6296(01)00098-4.
  13. Harris K, Buntin MB. Choosing a health care provider: the role of quality information. Research Synthesis Report No. 14. Robert Wood Johnson Foundation; 2008. pp. 1–25.
  14. Jin GZ, Sorensen AT. Information and consumer choice: the value of publicized health plan ratings. Journal of Health Economics. 2006;25(2):248–275. doi: 10.1016/j.jhealeco.2005.06.002.
  15. Kolstad JT, Chernew ME. Quality and consumer decision making in the market for health insurance and health care services. Medical Care Research and Review. 2008;66(1):28S–52S. doi: 10.1177/1077558708325887.
  16. Long JS. Regression Models for Categorical and Limited Dependent Variables. Sage Publications; 1997.
  17. Mennemeyer ST, Morrisey MA, Howard LZ. Death and reputation: how consumers acted upon HCFA mortality information. Inquiry. 1997;34(2):117–128.
  18. Mukamel DB, Weimer DL, Zwanziger J, Gorthy SH, Mushlin AI. Quality report cards, selection of cardiac surgeons, and racial disparities: a study of the publication of the New York State Cardiac Surgery Reports. Inquiry. 2004/2005;41(4):435–446. doi: 10.5034/inquiryjrnl_41.4.435.
  19. Porell FW, Adams EK. Hospital choice models: a review and assessment of their utility for policy impact analysis. Medical Care Research and Review. 1995;52(2):158–195. doi: 10.1177/107755879505200202.
  20. Scanlon DP, Chernew ME, McLaughlin CG, Solon G. The impact of health plan report cards on managed care enrollment. Journal of Health Economics. 2002;21(1):19–41. doi: 10.1016/s0167-6296(01)00111-4.
  21. Scanlon D, Lindrooth R, Christianson JB. Steering patients to safer hospitals? The effect of a tiered hospital network on hospital admissions. Health Services Research. 2008;43:1849–1867. doi: 10.1111/j.1475-6773.2008.00889.x.
  22. Schultz J, Call TK, Feldman R, Christianson J. Do employees use report cards to assess health care provider systems? Health Services Research. 2001;36(3):508–530.
  23. Torrance GW. Measurement of health state utilities for economic appraisal: a review. Journal of Health Economics. 1986;5:1–30. doi: 10.1016/0167-6296(86)90020-2.
