Health Services Research. 2011 Oct;46(5):1646–1662. doi: 10.1111/j.1475-6773.2011.01279.x

Understanding Variations in Medicare Consumer Assessment of Health Care Providers and Systems Scores: California as an Example

Donna O Farley 2, Marc N Elliott 1, Amelia M Haviland 2, Mary Ellen Slaughter 2, Amy Heller 3
PMCID: PMC3207197  PMID: 21644970

Abstract

Objective

To understand reasons why California has lower Consumer Assessment of Healthcare Providers and Systems (CAHPS) scores than the rest of the country, including differing patterns of CAHPS scores between Medicare Advantage (MA) and fee-for-service, effects of additional demographic characteristics of beneficiaries, and variation across MA plans within California.

Study Design/Data Collection

Using 2008 CAHPS survey data for fee-for-service Medicare beneficiaries and MA members, we compared mean case mix adjusted Medicare CAHPS scores for California and the remainder of the nation.

Principal Findings

California fee-for-service Medicare had lower scores than non-California fee-for-service on 11 of 14 CAHPS measures; California MA had lower scores only for physician services measures and higher scores for other measures. Adding race/ethnicity and urbanicity to the case mix adjustment improved California's standing on all measures in both MA and fee-for-service. Within the MA plans, one large plan accounted for the positive performance in California MA; other California plans performed below national averages.

Conclusions

This study shows that the mix of fee-for-service and MA enrollees, demographic characteristics of populations, and plan-specific factors can all play a role in observed regional variations. As value-based payment approaches, further study of successful MA plans could generate lessons for enhancing patient experience for the Medicare population.

Keywords: Patient experience of care, Geographic variations, Medicare


The Medicare program, administered by the Centers for Medicare and Medicaid Services in the U.S. Department of Health and Human Services, has collected and reported data on beneficiaries' experiences of care in private health plans, Medicare Advantage (MA), since 1997 and for beneficiaries in traditional fee-for-service Medicare since 2000 (Landon et al. 2004). These data have been collected through the Medicare Consumer Assessment of Healthcare Providers and Systems (Medicare CAHPS) survey instrument.

Variations in Medicare CAHPS scores have been well-documented geographically (Zaslavsky, Zaborski, and Cleary 2004), between fee-for-service Medicare and MA plans (Landon et al. 2004), and across MA plans (Zaslavsky, Zaborski, and Cleary 2004), but the factors driving these variations have not yet been identified. Research has found that Medicare spending is not consistently associated with better quality, access, or patient satisfaction with care (Fisher et al. 2003a,b; Baicker and Chandra 2004; Landon et al. 2004; Dowd et al. 2009).

The Patient Protection and Affordable Care Act of 2010 directs the Centers for Medicare and Medicaid Services to establish a quality bonus program that would pay bonuses to MA plans for performance on clinical quality and enrollee experience of care (U.S. Congress 2010). It also has value-based purchasing provisions for many types of fee-for-service Medicare providers. Under such provisions, variations in CAHPS performance would have financial consequences for MA plans and fee-for-service providers via a system of quality bonuses associated with overall performance of “four stars” or higher on a five-star scale.

This study examines some factors that might underlie observed regional variations in CAHPS scores. We use the California experience as a case study because California is a large state and has historically been one of the lower CAHPS performers. California Medicare CAHPS scores have been lower for health plan services, while ratings of doctors have been closer to the overall national averages (Zaslavsky, Zaborski, and Cleary 2004). Such differences would affect California plans' payments under a quality bonus system. Using 2010 star ratings, we estimate that California plans attained the "four star" threshold for 18 percent of their eligible CAHPS measures, compared with 44 percent of measures for non-California MA plans.

BACKGROUND

The Medicare program consists of traditional fee-for-service Medicare, in which a majority of beneficiaries are enrolled, as well as an enrollment option for MA health plans. The Centers for Medicare and Medicaid Services contracts with several hundred managed care plans, which must fulfill reporting and performance requirements, including patient experience of care. There is no uniformly defined MA plan benefit beyond a specified minimum requirement; capitation rates vary widely across counties, and health plans vary substantially in plan structure, benefits offered, and additional services to beneficiaries (Hurley, Grossman, and Strunk 2003; Medicare Payment Advisory Commission 2003; Hurley, Strunk, and Grossman 2005). These variations among MA plans could result in different experiences across plans for enrolled beneficiaries.

The Medicare CAHPS surveys contain global-rating items, additional single-item measures, and items that are grouped to form composite scores for several domains (Goldstein et al. 2001). The CAHPS survey scores used for public reporting are case mix adjusted to control for systematic differences in response tendency associated with respondent characteristics (Martino et al. 2009; Agency for Healthcare Research and Quality 2010).

California has substantial managed care penetration and many California plans participate in the MA program (22 in 2007, 40 in 2008), with 34 percent of California Medicare beneficiaries enrolled. California's MA plans include several very large plans as well as many smaller plans. Some plan sponsors also serve markets in other parts of the country, while others are unique to California. The MA plans in California offer differing benefit structures, models of care, and monthly premiums for enrolled beneficiaries (California Health Care Foundation 2003).

RESEARCH QUESTIONS

We performed analyses to address the following research questions:

  1. To what extent do overall differences in CAHPS performance between California and the rest of the country hold within the fee-for-service and MA sectors?

  2. Are there additional stable characteristics of beneficiaries that would explain observed score differences between California and the rest of the country, but which are not included in the standard CAHPS case mix adjustments?

  3. Are there substantial differences in CAHPS scores among California MA plans that suggest that unique plan-level characteristics might be affecting observed California/non-California differences in scores?

We identified two factors as potential case mix adjustors (for question #2) that met the criteria of stability, potential contribution to differences in CAHPS scores, and measurability. The first is beneficiary race/ethnicity. Previous research demonstrates that race/ethnicity is strongly and consistently associated with CAHPS scores; Asians tend to report lower scores than others, and Latinos tend to report higher global ratings but lower composite scores (Lurie et al. 2003; Weech-Maldonado et al. 2004, 2008; Elliott et al. 2009; Goldstein et al. 2010). These two groups constitute a substantially larger proportion of California residents than of the nation as a whole. The second is beneficiary location in urban versus rural areas, for which score differences may reflect response tendency (a legitimate basis for adjustment) or true differences in care (which is not) (O'Neill 2004; Reschovsky and Staiti 2005; Lutfiyya et al. 2007).

In examining differences in CAHPS scores among California MA plans, we noted that a large proportion of all California MA enrollees are in a single large MA plan, “Plan-A,” which has distinct structure and operating characteristics. We compared CAHPS scores for Plan-A to the rest of California's Medicare health plans.

METHODS

Our analyses used CAHPS survey data for national samples of both members of MA plans and fee-for-service Medicare beneficiaries (some of whom were also enrolled in prescription drug plans). We performed the same analyses for 2007 and 2008 survey data, but we report only the latter here, given similar results.

Data and Measures

A total of 408,020 MA and fee-for-service beneficiaries completed the 2008 Medicare CAHPS survey. The survey used a stratified random sampling plan, with contracts (hereafter “plans”) serving as strata for MA beneficiaries and for fee-for-service beneficiaries enrolled in prescription drug plans. States served as strata for fee-for-service beneficiaries not enrolled in a prescription drug plan. The survey was administered by two waves of mailings with telephone follow-up of nonrespondents. The overall response rate was 61 percent (56 percent for fee-for-service, 65 percent for MA).

We analyzed the five global ratings (personal physician or nurse, specialist, all health care received, experience with Medicare or the MA plan, and experience with prescription drug coverage); additional single items on getting a flu shot, getting a pneumonia shot, and ease of paperwork; and six composite measures of reported care: doctor communication (four items), ease of getting needed care (two items), getting care quickly (two items), customer service (three items), ease of getting needed drugs (three items), and getting information on drugs (two items) (Goldstein et al. 2001). The global-rating items had 0–10 response scales; most other items had four response options (never, sometimes, usually, always).

Each CAHPS composite score was calculated as the average of the responses for the items within the composite. To facilitate comparisons across measures of health care experiences, we linearly transformed all CAHPS scores to a 0–100 scale and then calculated weighted mean scores, weighted by the number of beneficiaries in the relevant contracts and states to account for the sample design and nonresponse.
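To make the scoring concrete, the minimal sketch below (Python with pandas) rescales a 0–10 rating and two never/sometimes/usually/always report items to a 0–100 range, averages items into a composite, and computes a weighted mean. The column names are hypothetical, and only two of the four doctor communication items are shown; this is an illustration of the arithmetic, not the production scoring code.

```python
import numpy as np
import pandas as pd

# Toy extract with hypothetical column names (not the actual Medicare CAHPS
# field names); the real doctor communication composite has four items.
df = pd.DataFrame({
    "rating_health_care": [9, 7, 10, 8],   # global rating, 0-10 scale
    "doc_listened": [4, 3, 4, 2],          # 1=never ... 4=always
    "doc_explained": [4, 4, 3, 3],
    "weight": [1.2, 0.8, 1.0, 1.5],        # design/nonresponse weight
})

# Linear transformation of every item to a 0-100 possible range.
df["rating_health_care_100"] = df["rating_health_care"] / 10 * 100
for item in ["doc_listened", "doc_explained"]:
    df[item + "_100"] = (df[item] - 1) / 3 * 100

# A composite score is the average of the rescaled items within the composite.
df["doctor_communication"] = df[["doc_listened_100", "doc_explained_100"]].mean(axis=1)

# Weighted mean score for reporting.
print(np.average(df["doctor_communication"], weights=df["weight"]))
```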

We case mix adjusted CAHPS measures to control for systematic differences in survey response tendency related to respondents' characteristics. Standard Medicare CAHPS case mix adjustment controls for age, education, overall self-rated health, self-rated mental health, an indicator of dual eligibility for Medicaid, and Low Income Subsidy receipt (Zaslavsky et al. 2001; Martino et al. 2009).

Analyses

We first estimated linear regressions to calculate overall (combined fee-for-service and MA) mean Medicare CAHPS scores with standard case mix adjustment for California respondents and for the remainder of the country, using models with a California indicator and the case mix fixed effects noted above. We also estimated overall CAHPS scores adjusted for differences in MA/fee-for-service enrollment by adding an MA indicator to the model, to assess the effect on scores of the difference in MA penetration between California and the rest of the nation. We tested the statistical significance of the differences in scores for each measure. We also calculated case mix adjusted scores (without the MA indicator) for each of 53 states and territories (hereafter "states") to determine California's ranking among states on each CAHPS measure (1 = highest, 53 = lowest).
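A sketch of this modeling step is shown below, using a synthetic respondent-level file and weighted least squares from statsmodels as one plausible way to fit such models; all variable names and data values are illustrative assumptions, not the actual analysis file or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic respondent-level file standing in for the CAHPS analysis data.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "doctor_communication": rng.normal(88, 10, n).clip(0, 100),  # 0-100 score
    "california": rng.integers(0, 2, n),
    "ma_enrollee": rng.integers(0, 2, n),
    "age_group": rng.integers(0, 6, n),
    "education": rng.integers(0, 5, n),
    "general_health": rng.integers(1, 6, n),
    "mental_health": rng.integers(1, 6, n),
    "dual_eligible": rng.integers(0, 2, n),
    "low_income_subsidy": rng.integers(0, 2, n),
    "weight": rng.uniform(0.5, 2.0, n),
})

# Standard Medicare CAHPS case mix adjustors (illustrative variable names).
standard_adjustors = ("C(age_group) + C(education) + C(general_health) + "
                      "C(mental_health) + dual_eligible + low_income_subsidy")

# Overall model: the coefficient on `california` is the case mix adjusted
# CA vs. non-CA difference for this CAHPS measure.
m1 = smf.wls("doctor_communication ~ california + " + standard_adjustors,
             data=df, weights=df["weight"]).fit()

# Adding an MA indicator further adjusts for differences in MA penetration.
m2 = smf.wls("doctor_communication ~ california + ma_enrollee + " + standard_adjustors,
             data=df, weights=df["weight"]).fit()

print(round(m1.params["california"], 2), round(m2.params["california"], 2))
```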

To test research question #1, we stratified the data into MA and fee-for-service and estimated, within each type, regression models predicting each CAHPS measure from a dummy variable for coverage in California and standard case mix adjustors.

To test research question #2, we expanded the case mix adjustment to add adjustors for race/ethnicity and urbanicity (referred to as “full adjustment”). The race/ethnicity indicators—Hispanic, Native American, black, Asian or Pacific Islander, and multiracial—were derived from a Hispanic ethnicity question and a list of races allowing multiple endorsements (Goldstein et al. 2010). Four urbanicity indicator variables were based on Beale codes (Economic Research Service 2004) that define level of rural or urban status on a scale of 1 (most urban, 53 percent of population, omitted) to 9 (most rural, 1 percent of population), with values of 5–9 (12 percent of population) pooled (Butler and Beale 1994). We estimated models with these additional case mix adjustors (full adjustment models), again fitting separate models by coverage type (MA, fee-for-service). We also tested the separate effects of race/ethnicity and urbanicity by assessing changes in R2 for models that include each set individually, compared with models using only the standard case mix adjustment.
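Continuing the synthetic setup from the previous sketch (this fragment assumes `df`, `rng`, `standard_adjustors`, and the imports above are already defined), the following illustrates the full-adjustment contrast and the R² comparison within one coverage type. The added race/ethnicity and Beale-code indicators are hypothetical 0/1 columns for demonstration only.

```python
# Add hypothetical race/ethnicity and pooled Beale-code urbanicity indicators
# (Beale code 1, most urban, is the omitted reference category).
for col in ["hispanic", "black", "asian_pi", "native_american", "multiracial",
            "beale_2", "beale_3", "beale_4", "beale_5_9"]:
    df[col] = rng.integers(0, 2, len(df))

full_adjustors = (standard_adjustors +
                  " + hispanic + black + asian_pi + native_american + multiracial"
                  " + beale_2 + beale_3 + beale_4 + beale_5_9")

# Fit separately by coverage type; shown here for the MA stratum only.
ma = df[df["ma_enrollee"] == 1]
base = smf.wls("doctor_communication ~ california + " + standard_adjustors,
               data=ma, weights=ma["weight"]).fit()
full = smf.wls("doctor_communication ~ california + " + full_adjustors,
               data=ma, weights=ma["weight"]).fit()

# CA coefficient under full adjustment, and the R-squared gain from the
# added adjustors relative to the standard case mix model.
print(round(full.params["california"], 2), round(full.rsquared - base.rsquared, 4))
```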

To test research question #3, we examined effects of individual California health plans on the difference between MA plan scores for the state and the rest of the country. We fit two additional models to test the effect of large Plan-A on the California MA scores. In one model, we included observations only for Plan-A, and in the other model we included observations only for other California plans. These two models compared each of these groups to all MA plans in the rest of the country.
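Continuing the same synthetic setup (and adding a hypothetical `plan_id` column; "PLAN_A" is a placeholder, not an actual contract identifier), a sketch of the two contrasts, each compared against the same group of non-California MA plans:

```python
# Assign a hypothetical plan identifier within the synthetic MA stratum.
ma = ma.copy()
ma["plan_id"] = np.where((ma["california"] == 1) & (rng.random(len(ma)) < 0.5),
                         "PLAN_A", "OTHER")

non_ca = ma[ma["california"] == 0]
plan_a = ma[(ma["california"] == 1) & (ma["plan_id"] == "PLAN_A")]
other_ca = ma[(ma["california"] == 1) & (ma["plan_id"] != "PLAN_A")]

# Each contrast keeps the same non-CA comparison group; only the CA side varies.
for label, ca_group in [("Plan-A only", plan_a), ("CA without Plan-A", other_ca)]:
    subset = pd.concat([ca_group, non_ca])
    fit = smf.wls("doctor_communication ~ california + " + full_adjustors,
                  data=subset, weights=subset["weight"]).fit()
    print(label, round(fit.params["california"], 2))
```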

RESULTS

Comparisons of sample demographics and other variables used in the analysis, for California and the rest of the nation, are presented in Table 1. These data highlight the higher percentages of Asian and Hispanic populations and greater concentration in urban areas in California, compared with the rest of the nation.

Table 1.

Characteristics of 2008 Medicare Beneficiaries in California and the Rest of the Nation, by Medicare Advantage (MA) or Fee-for-Service (FFS) Status*

Columns: Sample Characteristic; California MA; California FFS; Rest of Nation MA; Rest of Nation FFS
Sample size 26,209 10,989 216,013 154,809
Age (%)
 18–64 5.6 15.1 10.3 15.8
 65–69 19.7 25.3 22.4 23.6
 70–74 23.5 19.5 23.7 20.4
 75–79 21.4 15.6 19.7 16.4
 80–84 16.4 12.9 13.9 13.0
 85+ 13.5 11.5 10.0 10.8
Urbanicity (by Beale codes)
 1 Metro ≥ 1m population 85.5 66.1 57.1 41.1
 2 Metro 250k–1 million 12.5 20.0 21.7 20.5
 3 Metro < 250k 1.6 8.3 8.2 12.2
 4 Urban ≥ 20k, adjacent metro 0.2 2.5 4.5 7.3
 5–9 small urban and rural 0.3 3.1 8.4 18.7
 Beale code missing 0.0 0.1 0.0 0.2
Race/ethnicity
 White 57.6 55.2 64.6 73.3
 African American 5.0 4.6 8.3 8.0
 Hispanic 13.2 15.5 10.1 4.8
 Asian 7.8 11.9 1.9 1.5
 Native American 0.3 0.4 0.3 0.6
 Multirace 1.3 1.8 1.4 1.8
 Race unknown 14.9 10.5 13.5 10.0
Education
 Less than high school graduate 18.2 22.2 26.5 21.2
 High school graduate 26.8 23.1 36.7 35.4
 Less than bachelor degree 30.8 28.2 21.4 23.8
 Bachelor degree 10.2 11.9 7.3 8.9
 More than 4 years college 13.9 14.6 8.2 10.8
General health status
 Excellent 10.4 8.2 9.1 7.0
 Very good 30.3 24.4 27.0 23.8
 Good 36.2 34.8 36.1 35.0
 Fair 19.0 24.3 22.4 25.5
 Poor 4.2 8.3 5.4 8.8
Mental health status
 Excellent 32.5 30.1 31.0 28.6
 Very good 35.0 30.4 32.3 31.4
 Good 23.5 25.0 25.7 26.1
 Fair 7.4 11.4 9.1 11.0
 Poor 1.6 3.2 1.8 2.8
Low income subsidy deemed 11.0 2.5 17.2 2.9
Proxy answered questions 3.5 3.8 3.4 3.8
Received help from proxy 11.0 15.1 11.4 11.3
*Data were weighted to represent the enrolled population of each contract (MA, prescription drug plan, or FFS unenrolled) by county combination, followed by a raking procedure (log-linear weights by iterative proportional fitting) to match the weighted sample and enrolled populations within each contract (or the FFS unenrolled category) on gender, age, race/ethnicity, Medicaid, Low Income Subsidy, and Special Needs Plan status, prescription drug enrollment, and zip-code-level distributions of income, education, and race/ethnicity (Deming and Stephan 1940; Purcell and Kish 1980).
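The raking step described in this footnote can be illustrated with a minimal iterative proportional fitting routine. The sketch below uses only NumPy and toy two-margin data; it is an illustration of the general technique under stated assumptions, not the weighting code used for the survey.

```python
import numpy as np

def rake(weights, groups, targets, n_iter=50, tol=1e-8):
    """Minimal raking (iterative proportional fitting): adjust unit weights so
    weighted margins match known population totals for each grouping variable.
    `groups` is a list of integer label arrays (one per margin); `targets` is a
    list of matching arrays of population totals."""
    w = weights.astype(float).copy()
    for _ in range(n_iter):
        max_shift = 0.0
        for labels, totals in zip(groups, targets):
            for level, target in enumerate(totals):
                mask = labels == level
                current = w[mask].sum()
                if current > 0:
                    factor = target / current
                    w[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1))
        if max_shift < tol:
            break
    return w

# Toy example with two margins (e.g., gender and a coarse age grouping); the
# actual procedure rakes to many more margins within each contract.
base_w = np.ones(6)
gender = np.array([0, 0, 0, 1, 1, 1])
age = np.array([0, 1, 1, 0, 0, 1])
w = rake(base_w, [gender, age],
         [np.array([400.0, 600.0]), np.array([500.0, 500.0])])
print(w.round(1), w.sum())
```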

In the comparative results presented here, all differences are statistically significant (p<.05) unless otherwise noted. As shown in Table 2, California CAHPS scores in 2008 were generally lower than scores for the rest of the country. California ranked poorly among states in the physician services and immunization domains, and its rankings for the other CAHPS domains ranged from above average to low (column 2). Adjusting scores for MA penetration reduced but did not eliminate the differences.

Table 2.

Case Mix Adjusted Mean CAHPS Scores for Medicare Beneficiaries, California (CA) versus Non-CA, 2008

Columns: CAHPS Outcomes; California Rank among States; Original Scores (CA, Non-CA, Difference: CA−Non-CA); Scores Adjusted for MA Enrollment (CA, Non-CA, Difference: CA−Non-CA)
Physician services
 Global rating of personal physician or nurse (0–10) 45 87.46 88.19 −0.73*** 88.81 89.57 −0.77***
 Global rating of specialist (0–10) 39 86.95 87.54 −0.59** 88.13 88.63 −0.50*
 Global rating of health care (0–10) 41 82.16 82.77 −0.61*** 83.54 84.12 −0.58***
 Doctor communication (four-item composite) 47 87.05 88.01 −0.96*** 88.42 89.38 −0.96***
 Ease of getting needed care (two-item composite) 44 83.19 84.57 −1.38*** 83.91 85.14 −1.23***
Immunizations
 Getting a flu shot in the previous year 40 63.22 63.68 −0.46 64.78 65.22 −0.44
 Getting a pneumonia shot 48 51.79 55.39 −3.60*** 55.61 59.33 −3.71***
Plan services
 Global rating of MA plan or fee-for-service (0–10) 27 81.68 81.94 −0.26 83.01 83.43 −0.42*
 Getting care quickly (two-item composite) 26 67.42 66.90 0.52* 69.20 68.82 0.39
 Ease of paperwork (stand-alone report item) 14 70.89 69.28 1.61* 70.46 70.96 −0.50
 Customer service (three-item composite) 12 75.43 74.40 1.03* 73.38 74.21 −0.83
Part D prescription drugs
 Global rating of prescription drug coverage (0–10) 14 81.63 80.64 0.99*** 82.2 81.64 0.56**
 Ease of getting needed drugs (three-item composite) 41 88.59 89.03 −0.44* 87.7 88.45 −0.75***
 Getting information on drugs (two-item composite) 36 79.45 79.93 −0.48 79.2 79.98 −0.79*

The original scores are case mix adjusted using the standard CAHPS case mix adjustments; scores adjusted for MA enrollment are also adjusted for MA enrollment status. Response scores are linearly rescaled to a 0–100 possible range to facilitate comparisons across measures.

California's rank among the 50 states, District of Columbia, Puerto Rico, and Virgin Islands (total = 53), where rankings are from 1 = highest to 53 = lowest.

*p<.05; **p<.01; ***p<.001.

CAHPS, Consumer Assessment of Healthcare Providers and Systems; MA, Medicare Advantage.

The results in Table 3 reveal differences in California's standing relative to the rest of the nation within MA and within fee-for-service. Under standard case mix adjustment (columns 2 and 3), California MA tended to be above the national MA average, with the consistent exception of physician services, whereas California fee-for-service tended to be below the national fee-for-service average in all domains, though less markedly than California MA for physician services.

Table 3.

California (CA) versus Non-CA Differences in CAHPS Scores for Medicare Beneficiaries in Medicare Advantage (MA) Plans and Fee-for-Service (FFS), with Standard and Full Case Mix Adjustments, 2008

Columns: CAHPS Outcomes; Standard Case Mix Adjustment (MA, FFS); Full Adjustment with Race/Ethnicity and Urbanicity§ (MA, FFS). All values are differences: CA−Non-CA.
Physician services
 Global rating of personal physician or nurse −0.93*** −0.68** −0.80*** −0.55*
 Global rating of specialist −1.53*** −0.09 −1.17*** 0.13
 Global rating of health care −0.89*** −0.39 −0.39* 0.01
 Doctor communication (four-item composite) −1.13*** −0.88*** −0.79*** −0.52*
 Ease of getting needed care (two-item composite) −2.47*** −0.69* −1.47*** 0.28
Immunizations
 Getting a flu shot in the previous year 3.36*** −2.18*** 3.91*** −1.27*
 Getting a pneumonia shot 1.43*** −6.22*** 3.31*** −3.35***
Plan services
 Global rating of MA plan or Medicare (if FFS) 0.35* −0.86*** 0.50*** −0.88**
 Getting care quickly (two-item composite) 0.90*** 0.12 1.94*** 1.06**
 Ease of paperwork (stand-alone report item) 1.75*** −2.49* 2.32*** −1.66
 Customer service (three-item composite) 0.01 −1.79* 0.43 −0.97
Part D prescription drugs
 Global rating of prescription drug coverage 1.57*** −0.7* 1.77*** −0.28
 Ease of getting needed drugs (three-item composite) 1.43*** −2.82*** 2.17*** −0.90**
 Getting information on drugs (two-item composite) 1.13** −2.31*** 1.82*** −1.20

Response scores are linearly rescaled to a 0–100 possible range to facilitate comparisons across measures.

Case mix adjusted using the standard CAHPS case mix adjustments.

§Case mix adjusted using the standard CAHPS adjustments plus adjustments for race/ethnicity and urbanicity.

*p<.05; **p<.01; ***p<.001.

CAHPS, Consumer Assessment of Healthcare Providers and Systems.

Full case mix adjustment (columns 4 and 5) improved California's standing relative to the rest of the nation on all 14 measures for both MA and fee-for-service, when compared with standard case mix adjustment. However, California MA's standing relative to the national average improved by one point or more for only three measures (pneumonia immunization, getting care quickly, and getting needed care).

For California fee-for-service, scores on all but one CAHPS measure had been lower than the national average under standard adjustment, and these scores moved closer to the national average under full adjustment. The exception was getting care quickly, which had a nonsignificant positive difference under standard adjustment that became larger and statistically significant under full adjustment.

In subsequent analyses (not shown), we found that race/ethnicity was more important than urbanicity in predicting CAHPS measures (in terms of individual-level R2) and in explaining differences of California from the rest of the nation.

Because the additional case mix adjustors had only a modest effect on California CAHPS scores, we looked within the MA plans in California to explore how a single large plan with substantial California MA enrollment (Plan-A) might be affecting the performance observed for the California MA sector.

As shown in Table 4, Plan-A compared much more favorably to the national MA average than the rest of California plans. It was markedly above the national MA average for all domains except physician services, where it was similar to the national average. In contrast, averages for other California plans fell below the national MA averages, except for Part D measures, which slightly exceeded the national average. Thus, some portion of the effects on CAHPS scores for California plans appeared to be related to the nature of the organization and operating styles of the individual plans. The sizes of these differences, in combination with Plan-A's large California market share, had a large effect on overall comparisons of California MA to national MA.

Table 4.

Effect of “Plan-A” on CAHPS Scores for Beneficiaries in Medicare Advantage (MA) Plans, California (CA) versus Non-CA, Full Case Mix Adjustments, 2008

Columns: CAHPS Outcomes; All CA Plans; Only "Plan-A"; Without "Plan-A". All values are MA differences: CA−Non-CA.
Physician services
 Global rating of personal physician or nurse −0.80*** −0.17 −1.27***
 Global rating of specialist −1.17*** 0.31 −2.25***
 Global rating of health care −0.39* 0.39 −1.00***
 Doctor communication (four-item composite) −0.79*** 0.31 −1.62***
 Ease of getting needed care (two-item composite) −1.47*** −0.58 −2.32***
Immunizations
 Getting a flu shot in the previous year 3.91*** 9.14*** −0.53
 Getting a pneumonia shot 3.31*** 10.48*** −2.86***
Plan services
 Global rating of MA plan or Medicare (if fee-for-service) 0.50*** 1.57*** −0.40*
 Getting care quickly (two-item composite) 1.94*** 3.99*** 0.25
 Ease of paperwork (stand-alone report item) 2.32*** 4.31*** −0.88
 Customer service (three-item composite) 0.43 1.90*** −1.10**
Part D prescription drugs
 Global rating of prescription drug coverage 1.77*** 2.87*** 0.82***
 Ease of getting needed drugs (three-item composite) 2.17*** 3.76*** 0.74***
 Getting information on drugs (two-item composite) 1.82*** 3.53*** 0.20

Response scores are linearly rescaled to a 0–100 possible range to facilitate comparisons across measures.

Case mix adjusted using the standard CAHPS adjustments plus adjustments for race/ethnicity and urbanicity. The non-CA comparison group for all three differences is all non-CA MA plans; only the CA plans vary.

*p<.05; **p<.01; ***p<.001.

CAHPS, Consumer Assessment of Healthcare Providers and Systems.

DISCUSSION

The results of this study highlight the complexity that often underlies the apparently simple average measures used to monitor performance across organizations or geographic areas. In this case, we examined performance on CAHPS patient experience-of-care measures in the Medicare program, with U.S. states as the geographic areas being compared. In 2008, California ranked near the bottom among states for physician services and immunizations, had mixed rankings for Part D measures, and had average-to-above-average performance in plan services. These differences were generally small (an exception being the nearly 4-percentage-point gap in pneumococcal immunization), but many were statistically significant. These results frame our three research questions about the factors that may underlie them.

1. To what extent do overall differences in CAHPS performance between California and the rest of the country hold within the fee-for-service and MA sectors?

We found strong differences between the MA and fee-for-service sectors. In the immunization, plan services, and Part D domains, California MA consistently exceeded the national MA average, whereas California fee-for-service generally lagged the national average. Both sectors were below average for physician services, but California MA more so. These results are consistent with other studies that have found performance to vary across different quality measures (Miller and Luft 2002; Landon et al. 2004; Gillies et al. 2006).

These differences were not unexpected because the two health care models are quite different. Beneficiaries in fee-for-service Medicare have standard Medicare benefits and can choose their physicians and other providers freely. Beneficiaries enrolled in the MA plans have expanded benefits and the plans more actively manage their care processes and access to providers, with some plans essentially locking the beneficiary into receiving care from a single group practice. Taken as a whole, California MA plans may exhibit the traditional management characteristics of managed care more strongly than other Medicare managed care nationally.

2. Are there additional stable characteristics of the beneficiaries or market that would explain observed score differences between California and the rest of the country, but which are not included in the standard CAHPS case mix adjustments?

Given the large differences between California and the rest of the country in race/ethnicity and urbanicity, we anticipated that their addition as case mix adjustors might be sufficient to explain the differences. While the additional adjustors improved California scores on almost all measures, they did not fully explain the lower California scores in MA and they explained fewer than half the lower California scores in fee-for-service. The most important contributor was race/ethnicity; urbanicity had much weaker effects, making the question of whether urbanicity captured response tendency or quality of care moot for this application. The higher prevalence of Asians (who tend to provide the lowest ratings and reports of care) and Hispanics (who tend to provide high ratings but low reports) in California contributed to understating performance in California relative to the rest of the nation (Weech-Maldonado et al. 2004, 2008).

3. Are there plan-specific differences in CAHPS scores that indicate that unique plan-level characteristics might be affecting observed California/non-California differences in scores?

CAHPS performance in California was affected by organizational and operational factors among the MA plans that have no counterpart in the fee-for-service Medicare sector. In particular, a single California health plan substantially altered the average MA CAHPS scores for California. This plan had strong scores on all the CAHPS measures except physician services, where it resembled the national MA average. When its data were removed from the CAHPS data for California MA plans, the average scores for the remaining plans dropped substantially, so that they were below the national MA average on all but the Part D domains. Further, the remaining plans' differences from national MA means in the physician services domain were larger than the corresponding differences in the fee-for-service sector; this was not the case for the other CAHPS domains. Thus, one cannot simply conclude that all MA plans in California perform poorly on patient experience of care. Rather, it is important to look within the group at individual health plans to better characterize patterns of performance across plans. The existence of a large plan with consistently above-average performance also limits the extent to which unmeasured factors specific to California's population are likely to explain the observed differences.

Clinical care processes have been found to vary across individual health plans or across types of plans, and it is reasonable to expect that similar variations would be occurring for patient experience of care. For example, differences in inpatient care utilization have been found for beneficiaries with severe chronic diseases who are in fee-for-service Medicare versus health plans (Revere and Sear 2004), as have variations across Medicaid health plans in pediatric asthma care (Dombkowski et al. 2005). Health plan ownership has been found to be associated with risk sharing processes, utilization of hospital inpatient care, catastrophic case management, and drug formularies (Ahern and Molinari 2001). Conversely, a study that examined the effects of health plan delivery system on clinical quality and patient experience of care (using CAHPS) found that the type of delivery system used affected many clinical measures, but not the CAHPS measures (Gillies et al. 2006).

Other likely sources of differences are plans' varying approaches to working with their contracted medical practices that, in turn, can affect how patients experience the care they receive from physicians in those practices. For example, studies have found that physicians were more likely to change their clinical practices if they received care management tools from a medical group or group/staff model health plan (Haggstrom and Bindman 2007), that the structure of a health plan is related to the duration of office visits by elderly patients (Hu and Reuben 2002), and that a health plan's method of paying physicians can affect patients' experiences of care (Scoggins 2002).

The available data did not permit detailed examination of the effects of particular organization-specific factors on MA plan performance on CAHPS measures. Further, our analysis focused on Medicare fee-for-service and MA plans in just one state. Additional investigations of a broader set of health plans, both Medicare and commercial, are needed to identify factors that affect variations in performance across the broader health plan population. Both qualitative and quantitative methods may help unravel the dynamics of service delivery within a number of health plans, drawing upon existing theory and published research in the organizational behavior and health services literature.

Overall, our case study results suggest some areas for improvement. The evidence of low immunization rates in California, in both fee-for-service and MA, compared with the rest of the country suggests the need for quality improvement efforts. In addition, our findings on the role of race/ethnicity case mix adjustment in the average differences between California CAHPS scores and those for the rest of the country suggest that adding these adjustors deserves consideration in some contexts. Finally, any examination of variations in performance across Medicare fee-for-service and MA will need to consider variations across MA plans.

CONCLUSION

This study demonstrates that a variety of factors may contribute to observed differences across geographic areas in performance on CAHPS measures. The factors involved for California were individual demographics not used in the standard case mix adjustment model, differences between fee-for-service Medicare and the MA plans, and plan-specific differences in performance across MA plans. Further, the positive performance by one MA plan had a masking effect on the overall average CAHPS scores for all California MA plans. Although such diversity probably exists for most comparisons, the contributing factors are likely to vary depending on the specific situation. For California, future research is needed to uncover specific sources of performance gaps. In addition, further study of successful MA plans could generate general lessons for providing good patient experience to California's Medicare population and perhaps to those in other states, such as New York, with historically lagging Medicare beneficiary experience.

Acknowledgments

Joint Acknowledgment/Disclosure Statement: This study was funded by CMS contract HHSM-500-2005-00028I to RAND. The authors would like to thank Elizabeth Goldstein, Ph.D., for advice and support and Aneetha Ramadas, A.B., for assistance with the preparation of the manuscript.

Disclosures: None.

Disclaimers: None.

Supporting Information

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

hesr0046-1646-SD1.doc (60.5KB, doc)

Table SA1: Standard Errors for Differences in CAHPS Scores between California and Rest of the Nation, Standard and Full-Adjustment Case Mix.

Table SA2: Standard Errors for Differences in CAHPS Scores between California and Rest of the Nation, California Plans, Plan-A Only, Plans Without Plan-A.

hesr0046-1646-SD2.doc (62KB, doc)

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

REFERENCES

  1. Agency for Healthcare Research and Quality. 2010. "Instructions for Analyzing CAHPS Data," p. 5 [accessed December 17, 2010]. Available at https://www.cahps.ahrq.gov/cahpskit/files/2015_Instructions_for_Analyzing_Data.pdf
  2. Ahern M, Molinari C. Impact of HMO Ownership on Management Processes and Utilization Outcomes. American Journal of Managed Care. 2001;7(5):489–97.
  3. Baicker K, Chandra A. Medicare Spending, the Physician Workforce, and Beneficiaries' Quality of Care. Health Affairs. 2004;W4:184–97. doi: 10.1377/hlthaff.w4.184.
  4. Butler MA, Beale CL. Rural-Urban Continuum Codes for Metro and Nonmetro Counties, 1993 (Staff Report No. 9425). Washington, DC: Agriculture and Rural Economy Division, Economic Research Service, U.S. Department of Agriculture; 1994.
  5. California Health Care Foundation. California Medicare HMOs: Declining Benefits and Rising Costs. Oakland, CA: California Health Care Foundation; 2003.
  6. Deming WE, Stephan FF. On a Least Squares Adjustment of a Sampled Frequency Table When the Expected Marginal Totals Are Known. Annals of Mathematical Statistics. 1940;11(4):427–44.
  7. Dombkowski KJ, Cabana MD, Cohn LM, Gebremariam A, Clark SJ. Geographic Variation of Asthma Quality Measures within and between Health Plans. American Journal of Managed Care. 2005;11(12):765–72.
  8. Dowd BE, Kralewski JE, Kaissi AA, Irrgang SA. Is Patient Satisfaction Influenced by the Intensity of Medical Resource Use by Their Physicians? American Journal of Managed Care. 2009;15(5):e16–21.
  9. Economic Research Service. 2004. "Briefing Rooms: Measuring Rurality: Rural-Urban Continuum Codes" [accessed June 30, 2009]. Available at http://www.ers.usda.gov/Briefing/Rurality/RuralUrbCon/
  10. Elliott MN, Haviland A, Kanouse DE, Hambarsoomian K, Hays RD. Adjusting for Subgroup Differences in Extreme Response Tendency When Rating Health Care: Impact on Disparity Estimates. Health Services Research. 2009;44(2, part 1):542–61. doi: 10.1111/j.1475-6773.2008.00922.x.
  11. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The Implications of Regional Variations in Medicare Spending. Part 1: The Content, Quality, and Accessibility of Care. Annals of Internal Medicine. 2003a;138(4):273–87. doi: 10.7326/0003-4819-138-4-200302180-00006.
  12. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The Implications of Regional Variations in Medicare Spending. Part 2: Health Outcomes and Satisfaction with Care. Annals of Internal Medicine. 2003b;138(4):288–98. doi: 10.7326/0003-4819-138-4-200302180-00007.
  13. Gillies RR, Chenok KE, Shortell SM, Pawlson G, Wimbush JJ. The Impact of Health Plan Delivery System Organization on Clinical Quality and Patient Satisfaction. Health Services Research. 2006;41(4):1181–99. doi: 10.1111/j.1475-6773.2006.00529.x.
  14. Goldstein E, Cleary PD, Langwell KM, Zaslavsky AM, Heller A. Medicare Managed Care CAHPS®: A Tool for Performance Improvement. Health Care Financing Review. 2001;22(3):101–7.
  15. Goldstein E, Elliott MN, Lehrman WG, Hambarsoomian K, Giordano LA. Racial, Ethnic and Gender Differences in Patients' Perceptions of Inpatient Care. Medical Care Research and Review. 2010;67(1):74–92. doi: 10.1177/1077558709341066.
  16. Haggstrom DA, Bindman AB. The Influence of Care Management Tools on Physician Practice Change across Organizational Settings. Joint Commission Journal on Quality and Patient Safety. 2007;33(11):672–80. doi: 10.1016/s1553-7250(07)33077-8.
  17. Hu P, Reuben DB. Effects of Managed Care on the Length of Time That Elderly Patients Spend with Physicians during Ambulatory Visits: National Ambulatory Medical Care Survey. Medical Care. 2002;40(7):606–13. doi: 10.1097/00005650-200207000-00007.
  18. Hurley RE, Grossman JM, Strunk BC. Medicare Risk Contracting/Medicare Contracting Risk: A Life-Cycle View from Twelve Markets. Health Services Research. 2003;38(1, part 2):395–417. doi: 10.1111/1475-6773.00122.
  19. Hurley RE, Strunk BC, Grossman JM. Geography and Destiny: Local-Market Perspectives on Developing Medicare Advantage Regional Plans. Health Affairs. 2005;24(4):1014–21. doi: 10.1377/hlthaff.24.4.1014.
  20. Landon BE, Zaslavsky AM, Bernard SL, Cioffi MJ, Cleary PD. Comparison of Performance of Traditional Medicare versus Medicare Managed Care. Journal of the American Medical Association. 2004;291(14):1744–52. doi: 10.1001/jama.291.14.1744.
  21. Lurie N, Zhan C, Sangl J, Bierman AS, Sekscenski ES. Variation in Racial and Ethnic Differences in Consumer Assessments of Health Care. American Journal of Managed Care. 2003;9(7):502–9.
  22. Lutfiyya NM, Bhat DK, Gandlu SR, Nguyen C, Weidenbacher-Hoper VL, Lipsky MS. A Comparison of Quality of Care Indicators in Urban Acute Care Hospitals and Rural Critical Access Hospitals in the United States. International Journal for Quality in Health Care. 2007;19(3):141–9. doi: 10.1093/intqhc/mzm010.
  23. Martino SC, Elliott MN, Cleary PD, Kanouse DE, Brown JA, Spritzer KL, Heller A, Hays RD. Psychometric Properties of an Instrument to Assess Medicare Beneficiaries' Prescription Drug Plan Experiences. Health Care Financing Review. 2009;30(3):41–53.
  24. Medicare Payment Advisory Commission. Market Variation: Implications for Beneficiaries and Policy. Report to Congress. Washington, DC: MedPAC; 2003. pp. 19–38.
  25. Miller RH, Luft HS. HMO Plan Performance Update: An Analysis of the Literature. Health Affairs. 2002;21(4):63–86. doi: 10.1377/hlthaff.21.4.63.
  26. O'Neill L. The Effect of Insurance Status on Travel Time for Rural Medicare Patients. Medical Care Research and Review. 2004;61(2):187–202. doi: 10.1177/1077558704263798.
  27. Purcell NJ, Kish L. Postcensal Estimates for Local Areas (or Domains). International Statistical Review. 1980;48(1):3–18.
  28. Reschovsky JD, Staiti AB. Access and Quality: Does Rural America Lag Behind? Health Affairs. 2005;24(4):1128–39. doi: 10.1377/hlthaff.24.4.1128.
  29. Revere L, Sear A. Differences in U.S. Hospital Service Utilization between Traditional Medicare and Medicare HMO Patients. Journal of Health and Human Services Administration. 2004;27(3):347–71.
  30. Scoggins JF. The Effect of Practitioner Compensation on HMO Consumer Satisfaction. Managed Care. 2002;11(4):49–52.
  31. U.S. Congress. The Patient Protection and Affordable Care Act, P.L. 111-148. Washington, DC: Government Printing Office; 2010.
  32. Weech-Maldonado R, Elliott MN, Morales LS, Spritzer KL, Marshall GN, Hays RD. Health Plan Effects on Patient Assessments of Medicaid Managed Care among Racial/Ethnic Minorities. Journal of General Internal Medicine. 2004;19(2):136–45. doi: 10.1111/j.1525-1497.2004.30235.x.
  33. Weech-Maldonado R, Elliott MN, Oluwole A, Schiller KC, Hays RD. Survey Response Style and Differential Use of CAHPS Rating Scales by Hispanics. Medical Care. 2008;46(9):963–8. doi: 10.1097/MLR.0b013e3181791924.
  34. Zaslavsky AM, Zaborski LB, Cleary PD. Plan, Geographical, and Temporal Variation of Consumer Assessments of Ambulatory Health Care. Health Services Research. 2004;39(5):1467–84. doi: 10.1111/j.1475-6773.2004.00299.x.
  35. Zaslavsky AM, Zaborski LB, Shaul DJA, Cioffi MJ, Cleary PD. Adjusting Performance Measures to Ensure Equitable Plan Comparisons. Health Care Financing Review. 2001;22(3):109–26.
