Health Services Research. 2002 Aug;37(4):985–1007. doi: 10.1034/j.1600-0560.2002.62.x

Effects of CAHPS Health Plan Performance Information on Plan Choices by New Jersey Medicaid Beneficiaries

Donna O Farley, Pamela Farley Short, Marc N Elliott, David E Kanouse, Julie A Brown, Ron D Hays
PMCID: PMC1464003  PMID: 12236394

Abstract

Objective

To assess the effects of CAHPS health plan performance information on plan choices and decision processes by New Jersey Medicaid beneficiaries.

Data Sources/Study Setting

The study sample comprised all new Medicaid cases statewide that chose Medicaid health plans during April 1998. The study used state data on health maintenance organization (HMO) enrollments and survey data for a subset of these cases.

Study Design

An experimental design was used, with new Medicaid cases randomly assigned to experimental or control groups. The experimental group received a CAHPS report along with the standard enrollment materials, and the control group did not.

Data Collection

The HMO enrollment data were obtained from the state in June 1998, and evaluation survey data were collected from July to October 1998.

Principal Findings

No effects of CAHPS information on HMO choices were found for the total sample. Further examination revealed that only about half the Medicaid cases said they received and read the plan report, and that one HMO had a dominant Medicaid market share but low CAHPS performance scores. The subset of cases who read the report and did not choose this dominant HMO chose HMOs with higher CAHPS scores, on average, than did those in an equivalent control group.

Conclusions

Health plan performance information can influence plan choices by Medicaid beneficiaries, but will do so only if they actually read it. These findings suggest a need for enhancing dissemination of the information as well as further education to encourage informed choices.


As the move to managed care has limited the set of providers available through each health plan and introduced strong incentives to alter the process of health care, the stakes associated with health plan choices have increased enormously for consumers. In this context, both having a choice of plans and making the right choice are important to consumers. At the same time, policymakers and large group purchasers are seeking effective ways to provide consumers the information they need to choose their plans wisely. These purchasers include state Medicaid programs, virtually all of which now cover Medicaid benefits through managed care programs for at least some enrollee groups. Yet purchasers also want to know whether their investments in the development and dissemination of reports on health plan performance are providing consumers with information that is useful to them.

This paper reports on a demonstration and evaluation that studied the effect of distributing health plan performance information to newly enrolled Medicaid beneficiaries. The outcomes of interest were plan choices and consumer perceptions of the enrollment and decision-making process. The study was part of the Consumer Assessment of Health Plans Study (CAHPS), initiated in 1995 by the Agency for Health Care Policy and Research (now the Agency for Healthcare Research and Quality, AHRQ) to help consumers choose plans by giving them information from a survey of plan members. Specifically, this study evaluates the impact of a statewide consumer report produced from the CAHPS 1.0 Medicaid managed care survey that was fielded by the New Jersey Medicaid Office of Managed Health Care in 1997. The New Jersey Medicaid evaluation was part of a larger CAHPS evaluation effort designed by RAND, the Research Triangle Institute, Harvard Medical School, and Westat (Crofton, Lubalin, and Darby 1999).

Many states now require poverty-related Medicaid beneficiaries to enroll in a managed care plan, and growing numbers of states are extending mandatory managed care to Supplemental Security Income recipients. Currently, all but two Medicaid programs have some form of mandatory or voluntary managed care program. As of June 1998, 585 health plans participated in Medicaid, serving more than 16 million beneficiaries, or 54 percent of the nation's Medicaid population (Health Care Financing Administration 1999). Medicaid beneficiaries typically are asked to choose from a set of options that includes one or more health maintenance organizations (HMOs), and sometimes a primary care case management plan. In states with mandatory managed care, beneficiaries who do not make a voluntary choice are assigned to a health plan by the Medicaid office (a process known as auto-assignment).

New Jersey Medicaid CAHPS Demonstration

During 1996 and 1997, the New Jersey Medicaid program phased in mandatory HMO enrollment for AFDC and other welfare-related beneficiaries in 17 of its 21 counties.1 As of February 1998, 91.4 percent of these beneficiaries were enrolled in Medicaid HMOs. The state contracted with a private firm to manage the Medicaid HMO enrollment process and assist beneficiaries in choosing their plans. The enrollment contractor operated an 800 number where Medicaid beneficiaries could get answers to questions about plans and find out which physicians were enrolled in which plans. The contractor also sent “health benefit coordinators” into county welfare offices and the community to assist beneficiaries as they chose and enrolled in HMOs.

The state conducted a CAHPS survey of Medicaid HMO enrollees in the mandatory managed care program for the first time in July–October 1997.2 Interviews were conducted with 5,878 enrollees in 10 Medicaid HMOs, using a mixed-mode survey design (Fowler et al. 1999). Respondents with telephone numbers known to the state were interviewed by telephone; otherwise, mail questionnaires were sent.

The New Jersey Medicaid office subsequently published a seven-page brochure, “Choosing an HMO,” that compared the Medicaid HMOs with respect to the consumer ratings and experiences reported in the CAHPS survey. This brochure was designed to be included in the package of HMO enrollment materials mailed to new Medicaid enrollees. The starting point for the state's brochure was the CAHPS Version 1.0 report template that was designed for reporting survey results to privately insured populations. The template was modified to make the information more accessible to a Medicaid population by minimizing the amount of text, reducing the reading level, and consolidating the survey results into two tables for ease of HMO comparisons. Following the CAHPS convention for comparative ratings, two stars were shown for plans with survey results that were not significantly different from the average for all other Medicaid plans in the state; one star for plans that scored significantly lower than average, and three stars for plans that scored significantly higher than average.3
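The star-assignment convention lends itself to a compact illustration. The following is a minimal sketch in Python, not the state's actual procedure: the two-sided z-test and 1.96 cutoff are our assumptions, and as note 3 explains, New Jersey also required a practically significant difference before assigning one or three stars.

```python
# Hypothetical sketch of the CAHPS star convention described above: two stars
# if a plan's score is not significantly different from the average of the
# other Medicaid plans, one if significantly lower, three if significantly
# higher. The z-test and 1.96 cutoff are illustrative assumptions; New Jersey
# also required a practically significant difference (see note 3).
def assign_stars(plan_mean: float, others_mean: float, se_diff: float) -> int:
    z = (plan_mean - others_mean) / se_diff
    if z <= -1.96:
        return 1  # significantly below average
    if z >= 1.96:
        return 3  # significantly above average
    return 2      # not significantly different from average
```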

Conceptual Framework and Hypotheses

Empirical research has shown that many factors play an important role in health plan choices, including the services covered, premiums and out-of-pocket costs to the consumer, maintaining established relationships with providers, and freedom of provider choice (Mechanic et al. 1990; Marquis and Rogowski 1991; Davis et al. 1995; Scanlon et al. 1997; Sainfort and Booske 1996; Gibbs et al. 1996; Tumlinson et al. 1997). Although there is some evidence that consumers are likely to consider information about plan performance when it is available, the empirical evidence is mixed about how they use it and its relative importance in their decision making (Scanlon et al. 1997; Marshall et al. 2000). When making health plan choices, consumers seem to give a lower priority to considerations of quality and service than to the scope and generosity of coverage, provider choice, or premium costs (Sainfort and Booske 1996; Castles et al. 1997; Knutson et al. 1997; Robinson and Brodie 1997; Tumlinson et al. 1997; Chernew and Scanlon 1998). However, Sainfort and Booske (1996) found that consumers’ use of plan performance information tends to increase as they are exposed to the information and learn how to interpret it.

Three aspects of consumer behavior are particularly critical in determining the effect of an information intervention like CAHPS:

  • How much consumers pay attention to information about health plan choices;

  • Which health plans consumers choose, and in the case of Medicaid, whether they choose a plan or allow themselves to be auto-assigned to a plan by the state; and

  • Whether and how they weigh differences among plans in making their choices.

All three of these behaviors are influenced by the characteristics, preferences, and attitudes of different consumers, by the characteristics of health plan options available to them, and by the costs and benefits of acquiring different types of information. Moreover, there is a feedback loop between behavior and attitudes about the plan options—the very act of gathering and considering information, and weighing available options, is likely to change a consumer's behavior.

Evaluating the effect of CAHPS reports on health plan choices is analogous to evaluating the effectiveness of a clinical intervention. The effectiveness of a clinical intervention depends on its efficacy in the treated population and the proportion of the target population reached for treatment. Considering the three consumer behaviors identified above within this framework, we designed our evaluation of the Medicaid CAHPS demonstration in New Jersey to test the following hypotheses:

  H1. Medicaid consumers who are mailed a CAHPS report will choose a plan more often, instead of being auto-assigned.

  H2. Medicaid consumers who are mailed a CAHPS report will choose plans that perform better according to the CAHPS survey of plan members.

  H3. Medicaid consumers who are mailed a CAHPS report will feel more positive about their plan choice and the enrollment experience.

  H4. A large percentage of Medicaid consumers who are mailed a CAHPS report will notice the report and read it.

  H5. If the first three hypotheses are rejected because only a small percentage of consumers notice and read the CAHPS report (that is, Hypothesis 4 is rejected), then the effects predicted in Hypotheses 1–3 will at least be evident among consumers who notice and read the report.

Methods

Evaluation Design

The evaluation employed a randomized, experimental design. The total sample consisted of all new Medicaid cases who were mailed HMO enrollment materials during a four-week period from March 25 to April 15, 1998. All of these cases were processed for a June 1 effective date for HMO enrollment. Cases are the family units that qualify for Medicaid coverage. Medicaid-eligible family units that include an adult are referred to as adult cases, and those in which only children are Medicaid-eligible are child cases. New Jersey requires all members of each Medicaid case to enroll in the same HMO. Based on whether the last digit of the case ID was odd or even, half the cases were randomly assigned to an experimental group that received the CAHPS report, and half were assigned to a control group that did not receive the report. As shown in Table 1, there were 2,568 cases in the control group and 2,649 cases in the report group.4 The control group received the standard mailing of Medicaid enrollment materials. The mailing to the report group included the standard materials and the CAHPS report.
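The assignment rule reduces to a parity check on the case ID. A minimal sketch follows; the text does not say which parity mapped to which group, so the mapping below is arbitrary.

```python
# Minimal sketch of the randomization rule: assignment by the parity of the
# last digit of the Medicaid case ID. Which parity received the report is not
# stated in the text; the mapping here is an arbitrary illustration.
def assign_group(case_id: str) -> str:
    return "report" if int(case_id[-1]) % 2 == 0 else "control"

assert assign_group("1004732") == "report"   # even final digit
assert assign_group("1004733") == "control"  # odd final digit
```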

Table 1.

Sample Sizes for the CAHPS New Jersey Medicaid Outcome Evaluation

                        All Counties                 Counties with 4–5 HMOs     Counties with 6+ HMOs
                 Total     Report    No Report       Report      No Report      Report      No Report
April enrollees  5,217     2,649     2,568             503          472          2,146        2,096
Surveyed
  Number         2,550     1,763       787             502          235          1,261          552
  Percentage*    48.9%     66.6%     30.6%           100.0%        49.8%         58.8%        26.3%
Not surveyed     2,667       886     1,781               1          237            885        1,544

* Denotes the sampling fraction for April enrollees in each group in each county.

We used data from two sources to test the evaluation hypotheses. After the April enrollees made their health plan choices, or were auto-assigned to a plan, the New Jersey Medicaid office supplied us with a data file that identified plan choices, auto-assignment, and demographics for the full sample of 5,217 April enrollees. In addition, as shown in Table 1, a sample of 2,550 cases (1,763 from the report group, 787 from the control group) was surveyed after the enrollment process for the April cohort to gather self-reported data on their decision-making processes and how they used the plan performance report. The survey was conducted from July through October 1998.

For the survey, we oversampled cases in the report group and in counties where five or fewer Medicaid HMOs were available. The report group was oversampled because we anticipated that a fraction of this group would not recall receiving or reading the CAHPS report, and we wanted to test differences in effects between those who did and did not read the report. Oversampling in counties with fewer HMOs allowed us to test differences in the effect of the CAHPS report for consumers with many choices versus relatively few choices. This sampling design resulted in a modest design effect, which we adjusted for in computing standard errors in all analyses of the survey data.

Evaluation Survey

The evaluation survey collected data about Medicaid beneficiaries’ perceptions of the HMO enrollment process and the extent to which cases in the report group actually noticed and read the CAHPS report. The interview was conducted with the family member who was responsible for choosing a plan for the case, usually the mother. Respondents were asked questions about which HMO features were important to them, the importance of choosing a health plan, the types of information they used, how hard it was to choose the best plan, and their confidence in their choices. If respondents in the report group recalled receiving the CAHPS report, they were asked about the extent to which they read the report, their views on the usefulness of the report, and how much they trusted the information it contained.

The evaluation survey was conducted by telephone if the telephone number was available and by mail otherwise. All of the interviews were conducted in English with the adult family member or an adult proxy responsible for choosing a plan for child cases. Cases were tracked actively to find contact information. We obtained a 43 percent completion rate (number of completes/total sample) and a 57 percent response rate (after eliminating ineligible cases due to death, language problems, inability to find contact information, or no eligible respondent).5 We completed 757 surveys for the report group and 341 surveys for the control group.

This response rate compares favorably to those for other surveys of Medicaid populations, which typically do not exceed 50 percent (Gold et al. 1995; Donat et al. 1995; Brown et al. 1999). Yet it is lower than generally acceptable response rates for surveys of other populations (i.e., >70 percent). Because of pervasive problems with available contact information from state Medicaid programs, higher response rates are not likely to be achieved for Medicaid populations. Therefore, we address issues of nonresponse bias carefully in our analysis and interpretation of findings.

We did our best to correct for nonresponse bias using the Medicaid administrative data for the evaluation survey sample. Logistic regression was used to predict response probabilities based on race/ethnicity, gender, case type (adult versus child), age, and county of residence. African Americans and members of other racial/ethnic groups were less likely to respond than non-Hispanic whites (odds ratios of 0.72 for African Americans and 0.33 for other race/ethnicity, p < 0.01 in each case). Cases with an adult were less likely to respond than child-only cases (odds ratio of 0.79, p < 0.05). There was also significant variation by county of residence. The model had a moderately high concordance of 65.0 percent between predicted probabilities that individuals would respond to the survey and whether they actually did.6

To correct for these observable nonresponse biases, a nonresponse weight that was inversely proportional to the predicted probability of response was generated for each survey respondent. These weights and the design weights were applied so that the weighted characteristics of the responding sample approximated those of the April Medicaid enrollees.
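A minimal sketch of this weighting step, assuming the survey sample sits in a pandas DataFrame; the column names, the statsmodels formula, and the mean-1 normalization are our assumptions rather than the authors' code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch: fit a logistic response-propensity model on the survey sample and
# form nonresponse weights inversely proportional to the predicted response
# probability. Column names (responded, race_eth, female, adult_case, age,
# county) are hypothetical stand-ins for the predictors named in the text.
def nonresponse_weights(df: pd.DataFrame) -> pd.Series:
    model = smf.logit(
        "responded ~ C(race_eth) + female + adult_case + age + C(county)",
        data=df,
    ).fit(disp=0)
    p_respond = model.predict(df)
    weights = 1.0 / p_respond  # inverse-probability weighting
    # Normalize to mean 1 (a common convention; the paper does not specify).
    return weights * len(weights) / weights.sum()
```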

Analysis Plan

We shaped our analyses to test the hypotheses presented above, using either the administrative data for the full sample or the survey data. Analyses that utilized only the administrative data yielded the most powerful tests of CAHPS report effects because the April enrollee sample was much larger than the survey sample. However, these data only provided information on Medicaid enrollment decisions and basic demographic characteristics of the case heads, which we supplemented with survey data to explore factors contributing to the observed enrollment outcomes.

We used the administrative data for all April enrollees to compare auto-assignment rates (Hypothesis 1) and plan choices (Hypothesis 2) for the full control and report groups. Because the original experimental design assigned cases randomly into the group mailed CAHPS reports and the group not mailed CAHPS reports, we could perform straightforward analyses that compared the average values of outcome measures for the two groups. Then we used the survey data to test for effects of the CAHPS report on consumers’ attitudes about their choices and the enrollment process (Hypothesis 3), and to estimate how many of the people who were mailed a CAHPS report actually remembered receiving it and read it (Hypothesis 4). We compared average values of outcome measures for the report and control groups in the survey sample, applying design effect and nonresponse weights to calculate weighted averages.

We learned from the evaluation survey that only 49 percent of the report group remembered receiving and reading the CAHPS report, as identified from responses to a survey question and related follow-up question.7 Respondents who reported they received the report (i.e., those who noticed it) and at least glanced through it were identified as the “receptive” subgroup. A separate analysis was performed for this subgroup to test the extent to which CAHPS plan performance information might affect HMO choices for receptive beneficiaries who noticed and read the report (Hypothesis 5), and to compare these effects to those observed for the overall sample.

Analyses revealed that the receptive respondents differed from the rest of the report group (and the control group) on a number of characteristics. Unlike the initial random assignment to the control and report groups, within the report group, there was a self-determined split between subjects who noticed and read the report and those who did not. As a result, differences in outcomes between this subgroup and the control group could be the result of selection bias, not the CAHPS report. That is, the same factors that influenced whether subjects reported they received and read the reports might also play a role in determining the outcomes we were testing.

Given the different characteristics observed for the receptive report subgroup, we sought to establish an equivalent subgroup within the control group—those who would have noticed and read the report if it had been sent to them—to allow us to test CAHPS effects for the receptive subgroup. We employed the “proportionate propensity weighting” technique to create this subgroup (Hirano et al. 2000). When data are available to build a strong model predicting entry into an intervention subgroup, this technique is a particularly appealing and intuitive means of identifying an equivalent subgroup within the control group of a randomized experiment.

Using survey data for the report group survey sample, we fit a logistic regression model that predicted having noticed and read the report. We generated predicted probabilities that members of the control group survey sample would be receptive to (i.e., notice and read) the report if they had been in the report group. Propensity weights that were directly proportional to the predicted probabilities were used to re-weight respondents in the control group to make them equivalent to the receptive subgroup of the report group. This procedure modeled a control group equivalent to the experimental subgroup on observed characteristics, and it corrected for unobserved characteristics to the extent they were correlated with the observed characteristics.

A rich set of information was available from the administrative and survey data for estimating the propensity model. We tested a number of variables as we specified the model, including age, race/ethnicity, gender, education, county of residence, self-reported health status, whether the respondent had a usual provider, importance of access to the provider's office, and propensity to use non-CAHPS sources of information on HMOs (defined below).
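As a sketch of how such a propensity model and the resulting control-group weights might be constructed (all column names are hypothetical stand-ins for the candidate predictors above; the specification actually retained appears in Table 2):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of proportionate propensity weighting: fit a model of noticing and
# reading the report on the report-group survey sample, then weight control
# respondents in proportion to their predicted probability of having been
# "receptive". All column names are hypothetical.
def receptive_propensity_weights(report_df: pd.DataFrame,
                                 control_df: pd.DataFrame) -> pd.Series:
    model = smf.logit(
        "read_report ~ age + C(race_eth) + female + education + C(county)"
        " + health_status + has_usual_provider + access_importance"
        " + info_sources",
        data=report_df,
    ).fit(disp=0)
    p = model.predict(control_df)  # predicted propensity to notice and read
    return p * len(p) / p.sum()    # weights proportional to p, mean 1
```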

The evaluation was further influenced by the dominance of one HMO that historically had a much larger share of Medicaid enrollment than any other HMO, but scored the lowest on the CAHPS measures. This HMO was available in all the New Jersey counties that had mandatory Medicaid HMO enrollment, and it attracted nearly 30 percent of the April enrollees. We tested to see if consumers in the report group chose plans with higher CAHPS ratings after we excluded those who chose the dominant plan. We tested this response for the entire April sample and the receptive subgroup.

Finally, we estimated a logistic regression model with enrollment in the dominant HMO as the outcome variable, to explore which factors might be influencing New Jersey Medicaid beneficiaries to choose this HMO despite being exposed to CAHPS information that showed it to perform poorly. The factors we identified as likely to influence preferences for this HMO were the strength of the HMO's market share, a beneficiary's desire to keep a usual provider, the importance to the beneficiary of plan performance dimensions measured by CAHPS, and other beneficiary characteristics.8 The market share variable was measured as the previous-year market share of the dominant HMO in the respondent's county of residence. The usual-provider predictor was a dummy variable based on a survey question on whether the respondent had and wanted to keep a usual provider. The importance of CAHPS dimensions for choice was measured using a derived index variable, defined below. Variables for beneficiary characteristics were being age 35 or older, Hispanic ethnicity, excellent or good self-reported health status, and not completing high school.

We estimated the logistic regression model for the report subgroup with the greatest exposure to the information: the receptive subgroup who reported they had received and read the CAHPS report. We also estimated the model for the combined sample of receptive consumers in the report and control groups, adding an indicator for the report group to test for independent CAHPS effects on choice of the dominant HMO after controlling for other factors.

Variables

Variables used in the analyses included those obtained directly from the New Jersey Medicaid data files (e.g., HMO enrollments, case household size), the outcome evaluation survey (e.g., noticed and read CAHPS report, importance of usual provider), and derived variables. We define here the key variables derived for the analysis.

Standardized CAHPS Rating of Selected HMO

We summarized the assessment of each Medicaid HMO's performance on the CAHPS survey by counting the total number of “stars” that the plan received on all CAHPS dimensions. The counts ranged from 20 to 29 stars. The star charts in the CAHPS report were based on an HMO's performance compared to the average for all the Medicaid HMOs in the state. Not all Medicaid HMOs were available in every county, however, so we wanted to compare the CAHPS star count for each case's selected HMO to the average star counts for the HMOs available in the case's county of residence. Thus, we subtracted the average star count for all the Medicaid HMOs in the county from the star count of the selected HMO. The resulting standardized CAHPS ratings ranged from −8.40 (well below the county average) to 6.26 (well above the county average).
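A minimal sketch of this computation, with made-up plan names and star totals:

```python
# Sketch of the standardized CAHPS rating: the selected HMO's total star
# count minus the average star count of the HMOs available in the case's
# county. Plan names and star totals below are made up for illustration.
def standardized_rating(selected: str, available: list[str],
                        stars: dict[str, int]) -> float:
    county_avg = sum(stars[h] for h in available) / len(available)
    return stars[selected] - county_avg

stars = {"HMO_A": 27, "HMO_B": 22, "HMO_C": 23}
print(standardized_rating("HMO_A", ["HMO_A", "HMO_B", "HMO_C"], stars))  # +3.0
```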

Propensity to Use Non-CAHPS Information Sources

This measure quantifies the extent to which consumers sought information from a variety of possible sources, other than the CAHPS performance information, as they were choosing a Medicaid HMO. Survey respondents were asked if they obtained information about HMOs by: (1) talking to a health benefits coordinator, (2) calling an 800 number for information, (3) talking to family members or friends, (4) getting information directly from HMOs, or (5) getting information from their doctors. The propensity to use non-CAHPS information sources was measured as the number of “yes” responses to these questions.

Importance of CAHPS Dimensions in Choosing a Health Plan

The CAHPS survey asked respondents to assess plan performance along seven dimensions (e.g., doctors and nurses who communicate well, how easy to get referrals to a specialist). The New Jersey Medicaid CAHPS report described each HMO's performance on these dimensions. In our evaluation survey, we asked respondents to rate how important each of these dimensions was to them for choosing a health plan, on a scale from 1 (not at all important) to 4 (very important). We derived a summary measure of the importance of the CAHPS dimensions by averaging respondents’ ratings for the seven dimensions.
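Both this importance index and the information-source count defined above reduce to simple row-wise operations. A sketch with hypothetical survey column names:

```python
import pandas as pd

# Sketch of two derived variables (column names are hypothetical):
# info_src_1..info_src_5 hold 1/0 answers to the five information-source
# questions; imp_dim_1..imp_dim_7 hold 1-4 importance ratings for the seven
# CAHPS dimensions.
def derive_indices(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    src = [f"info_src_{i}" for i in range(1, 6)]
    dim = [f"imp_dim_{i}" for i in range(1, 8)]
    out["info_propensity"] = df[src].sum(axis=1)    # 0-5 count of "yes"
    out["cahps_importance"] = df[dim].mean(axis=1)  # mean of seven ratings
    return out
```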

Importance of Access to Provider's Office

Two survey questions asked respondents how important it was to choose an HMO with convenient provider office locations or with off-hours availability of health care, including emergency care. We used the average of responses to these two questions as a measure of the importance of physical access to care.

Perceived Differences in HMO Quality

Two questions in the evaluation survey assessed each consumer's perceptions of how much the available HMOs differed in “how good care is” and “getting care you need.” We averaged the values for these variables to obtain a global measure of perceived differences in performance among HMOs.

Market Shares of Plans

We calculated the Medicaid market share of each HMO in each county as of July 1997, almost a year prior to the April enrollment (and the enrollment decisions being modeled). The county-specific market share of the dominant HMO was included in analyses that examined enrollment in the dominant HMO.

Results

Who Received and Read the CAHPS Report?

As we have noted, about half the cases mailed a CAHPS report said they received and read the report. (Very few respondents reported that they received the report and did not read it.) The results of our multivariate logistic regression, presented in Table 2, indicate that those with a propensity to seek out and use information on the HMOs (i.e., those who consulted larger numbers of non-CAHPS information sources) were significantly more likely to notice and read the CAHPS report. The positive coefficient on the propensity variable suggests that respondents who used information considered the CAHPS report to be an additional source, not a substitute.

Table 2.

Logistic Regression for Propensity to Notice and Read the CAHPS Report

Variable Estimated Coefficient Standard Error Odds Ratio
Age 25–34 −0.29 0.21 0.75
Age 35 or older −0.44# 0.23 0.64
Female 0.43 0.34 1.54
Hispanic −0.08 0.20 0.92
Did not complete high school −0.07 0.22 0.93
Some college −0.35# 0.21 0.70
Self-rated health excellent or very good 0.66* 0.30 1.93
Self-rated health good 0.36 0.32 1.43
Has a usual provider, but not important to keep 0.38# 0.22 1.46
Has a usual provider, and wants to keep 0.02 0.23 1.02
Importance of access to provider's office (1–4) −0.07 0.17 0.93
Propensity to use information (non-CAHPS) (0–5) 0.30** 0.06 1.35
In county with 4 plans 0.53 0.40 1.70
In county with 7 or more plans −0.22 0.19 0.80
Importance of CAHPS dimensions for plan performance (1–4) −0.17 0.22 0.84
Extent to which ratings are informative about care (1–3) 0.33# 0.18 1.39
Perceived differences in HMO quality (1–4) 0.00 0.12 1.00
Missing indicator for informative ratings −1.14* 0.57 0.32
Missing indicator for perceived HMO differences 0.16 0.44 1.17
Intercept −1.23 0.84
# p < 0.10; * p < 0.05; ** p < 0.01, all for two-sided tests. Model concordance was 67.4%.

The other variables that were significant predictors of noticing and reading the CAHPS report were respondent characteristics. Those in excellent or very good health were more likely to notice and read the report than those in poorer health. Respondents age 35 or older and those with some college education were less likely to notice and read the CAHPS report, although these effects were only marginally significant (p < 0.10). The belief that member ratings are informative about quality of care also was marginally associated (positively) with noticing and reading the CAHPS report, although the direction of causality is not clear.

Two variables were notable in their lack of significant effects on noticing and reading the CAHPS report. One of these was the perceived importance of the CAHPS performance dimensions. This result suggests that respondents who felt these measures were important may not have been aware of what the Medicaid CAHPS report contained, and that for those who read the report, the CAHPS information did not change their perceptions of the importance of these aspects of plan performance. The other nonsignificant predictor was the number of Medicaid HMOs from which beneficiaries could choose in their county of residence.

The multiple logistic regression shown in Table 2 yielded a moderately high concordance of 67.4 percent between respondents actually saying they noticed and read the report and the probabilities predicted by the model that they would do so. These are the probabilities we used as propensity weights to define an equivalent control group for the receptive report subgroup.

Effects of the CAHPS Report on Plan Choices

We found no significant differences in the plan choices of the total sample of April enrollees in the control and report groups, and thus rejected both Hypothesis 1 and Hypothesis 2. Estimated rates of voluntary HMO choice were similar for the report and control groups, as shown in Table 3, with more than two-thirds of each group making a choice (68 percent in the report group and 69 percent in the control group). The mean standardized CAHPS rating for the HMOs that were selected was close to zero in each group, signifying that neither group chose HMOs with star counts that deviated noticeably from the county average. The two groups also enrolled in the dominant HMO at similar rates (28 percent and 27 percent for the report and control groups, respectively). Among those who did not choose the dominant HMO, the mean values of the standardized CAHPS rating were similar (and positive, since the county average included the low star count for the dominant HMO).

Table 3.

Plan Choices for April Enrollees and Receptive Subgroups

                                                      Mean or Proportion       Sample Size
                                                      Report      Control      Report    Control
April enrollees
  Proportion choosing a plan                           0.68        0.69        2,649     2,568
  For those who chose a plan:
    Standardized CAHPS rating of plan selected        −0.03        0.03        1,813     1,775
    Proportion selecting the dominant HMO              0.28        0.27        1,813     1,775
    Standardized CAHPS rating of selected plan,
      for those not selecting the dominant HMO         1.80        1.73        1,253     1,255
Receptive subgroup
  Proportion choosing a plan                           0.95        0.96          334       341
  For those who chose a plan:
    Standardized CAHPS rating of plan selected         0.62#       0.00          318       327
    Proportion selecting the dominant HMO              0.25#       0.32          318       327
    Standardized CAHPS rating of selected plan,
      for those not selecting the dominant HMO         2.58**      1.81          232       226

# p < 0.10; * p < 0.05; ** p < 0.01, all for one-sided tests of the difference between the report and control groups.

Note: 36% of survey respondents who were auto-assigned according to administrative records reported that they had chosen a plan; 10% of respondents who chose a plan according to administrative records reported that they had been auto-assigned.

Given that only half of the survey respondents reported they received and read the CAHPS report (Hypothesis 4), we tested the extent to which a CAHPS effect might be observed for this subgroup who noticed and read the information in the report (Hypothesis 5), an effect that would not extend to those in the report group who did not read it. We compared the receptive report subgroup to the equivalent control subgroup with respect to auto-assignment rates into HMOs as well as three aspects of choice for those who voluntarily selected an HMO: (1) choice of plans with higher CAHPS ratings for the entire subgroup, (2) enrollment in the dominant HMO with poor CAHPS ratings, and (3) choice of plans for those who did not enroll in the dominant HMO.

There was no difference in auto-assignment rates for the receptive report subgroup and equivalent controls. High percentages of both groups voluntarily chose a plan (95–96 percent). These strikingly high rates of voluntary choice for both groups are an indicator of the predictive validity of the logistic regression used to construct the equivalent control group for the receptive subgroup. The variable for voluntary choice was not included in the propensity model, and the rate of voluntary choice was much lower for other members of the control group, suggesting that the procedure did well in matching on unobserved characteristics as well as observed ones.

On average, the receptive report subgroup chose HMOs with higher standardized CAHPS ratings than those chosen by the equivalent control group (average standardized CAHPS ratings of 0.62 versus 0.00, respectively), and a smaller proportion of the receptive report subgroup selected the dominant HMO (25 percent versus 32 percent). However, both differences were only marginally significant (p < 0.10, using the one-sided tests implied by the hypotheses).

The only significant CAHPS effect we found was for cases in the receptive report subgroup that did not enroll in the dominant HMO. These cases chose HMOs with an average standardized CAHPS rating of 2.58, compared to 1.81 for the equivalent control group. This difference of 0.77 stars above the average star count for the equivalent control group is large in relation to the maximum possible deviation of 6.26 stars, and it is statistically significant (two-sided, p < 0.05).

Perceptions of the Enrollment Process

According to Hypothesis 3, dissemination and use of the CAHPS report will make consumers feel more positive about the plans they choose and the enrollment process. We tested this hypothesis in three different samples of the report and control groups: all respondents to the evaluation survey, the subgroup of respondents who were receptive to the CAHPS report (and equivalent controls), and the subgroup of receptive respondents who chose a plan instead of being auto-assigned. We found no differences between the report and control groups along these dimensions.

Choosing the Dominant Health Plan

A substantial 25 percent of the consumers in the receptive subgroup persisted in choosing the dominant HMO over other plans that performed better on CAHPS measures. We present in Table 4 the results of the logistic regression model we estimated to identify factors that were associated with this choice. The sample for this analysis was the receptive report subgroup (those who reported they received and read the CAHPS report), and the binary dependent variable was choice of the dominant HMO versus another plan. The model was a strong predictor of dominant HMO choice, with an 85.3 percent concordance between actual choices and the predicted probabilities.

Table 4.

Logistic Regression for Choice of the Dominant Medicaid HMO for Receptive Subjects Who Read Reports and Chose a Plan

Predictor Variable Estimated Coefficient Standard Error Odds Ratio
Age 35 or older −2.95** 0.49 0.05
Hispanic 1.02* 0.41 2.77
Self-rated health excellent or very good −0.16 0.41 0.85
Did not complete high school 0.78* 0.39 2.18
Has and wants to keep a usual provider −0.96* 0.39 0.38
Index of importance of CAHPS dimensions in choice (1–4) −0.68# 0.40 0.51
Previous market share of dominant plan, per 10 percentage points 0.38** 0.10 1.46
Intercept −0.18 1.75
# p < 0.10; * p < 0.05; ** p < 0.01, all for two-sided tests. Model concordance was 85.3%.

An important factor in enrollment in the dominant HMO was the HMO's prior-year market share in the consumer's county of residence. For each 10 percentage point increase in the dominant HMO's county market share, the odds that a respondent selected the plan were 1.46 times higher. Demographic factors were also significant predictors. Individuals who were age 35 or older and those who wanted to keep their usual provider were less likely to choose the dominant HMO. Conversely, Hispanics were more likely to choose this plan, as were those who did not complete high school. A portion of these effects may reflect underlying plan features preferred by these population groups that we could not identify with the data available to us. The importance that consumers assigned to the CAHPS performance dimensions was negatively associated with enrollment in the dominant HMO, but only marginally significant (p < 0.10).
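The odds ratios in Table 4 are the exponentiated logit coefficients, so the market share effect can be checked directly; the 20-point extrapolation below is our own illustration:

```python
import math

# The Table 4 odds ratio for market share is exp(coefficient):
print(math.exp(0.38))      # ~1.46 per 10-percentage-point share increase
# Our illustrative extrapolation: a 20-point difference in county market
# share implies odds about exp(2 * 0.38) ~ 2.14 times higher.
print(math.exp(2 * 0.38))  # ~2.14
```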

When we added an indicator for the report group, and estimated the logistic regression model shown in Table 4 for the combined sample of receptive consumers in the report and control groups, the report group indicator was not statistically significant. Thus, our multivariate analysis rejected the hypothesis that receptive consumers in the report group were less likely to enroll in the dominant HMO.

Discussion

The overall results of this evaluation are clear: For the Medicaid population as a whole, we found no evidence that the CAHPS report reduced auto-assignment rates, influenced plan choices, or modified consumers’ perceptions of the enrollment process. These results are important because they suggest that the substantial effort required to measure enrollees’ experiences with Medicaid health plans and disseminate the results may have little or no discernible effect on the Medicaid plan choices of other consumers.

The implications of these negative findings, however, depend on the reasons for the failure to observe an effect. Within an efficacy/effectiveness framework, the effectiveness of distributing a CAHPS report to Medicaid consumers depends on two factors: (1) the percentage of consumers who actually read the report (are exposed to treatment) and (2) the average effect of the CAHPS information on the subset of consumers who read it (efficacy in the treated population). Our subgroup analyses attempt to shed some light on the extent to which the demonstrated lack of effectiveness implies lack of efficacy as opposed to inadequate exposure in the treated population.
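In round numbers, this framework says the population-wide effect is the product of reach and efficacy. A back-of-the-envelope illustration, where the efficacy figure is purely hypothetical:

```python
# Illustrative arithmetic for the reach-times-efficacy decomposition above.
reach = 0.5        # share who received and read the report (from the survey)
efficacy = 0.10    # hypothetical average effect among readers, any units
print(reach * efficacy)  # 0.05: the population-wide effect is halved by reach
```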

The proportion of the population reached may be defined as the proportion who received a CAHPS report and read it. Although we have no direct measure for this exposure, only half of the beneficiaries who were mailed a Medicaid CAHPS report remembered receiving and reading it. If we take this as a reasonable proxy for exposure, then only half of the intended population was reached.

A number of factors could contribute to this result. Some respondents who were not aware of the CAHPS report probably did not receive it because of incorrect addresses or delivery problems. Of those who did receive it, some likely did not notice it, others ignored it, and others looked at it but forgot about it by the time of the evaluation survey. In any case, given the low rate of exposure, the report's effect on consumers who saw it would have to be substantial for it to have a noticeable effect on the New Jersey Medicaid population as a whole.

In recent laboratory experiments that simulated the enrollment process, both privately insured and Medicaid consumers who read CAHPS reports were more likely to choose health plans that performed better according to the reports (Spranca et al. 2000; Kanouse et al. 2000). Thus, the efficacy of the CAHPS reports has been demonstrated under ideal laboratory conditions that include guaranteed exposure with ample time to read and assimilate the report and a structured situation that forces a decision immediately after exposure to the information. Under real world conditions, consumers who read the CAHPS report may not read it thoroughly, and they may not make a decision immediately after reading it or have the report on hand when they do. For all these reasons, CAHPS reports would be likely to have smaller effects on plan choices in the field than in the laboratory even for those exposed.

We tested for efficacy under field conditions by comparing the plan choices of New Jersey Medicaid beneficiaries who noticed and read the CAHPS report to the choices of a control group selected for equivalence on other characteristics. The overall effect on average CAHPS ratings for HMOs chosen by this receptive subgroup was only marginally significant, and the CAHPS report had only a weak effect in reducing enrollment in the dominant HMO. Yet we found some evidence for CAHPS effects for a subset of the receptive beneficiaries who rejected the dominant HMO with lower CAHPS ratings and chose some other HMO. These individuals chose HMOs with better CAHPS ratings at a significantly higher rate than the equivalent control group.

We found no effect of CAHPS information on consumers’ perceptions of the enrollment processes, including their assessments of the importance of making a choice and the difficulty of the decision-making tasks. Those who received CAHPS information did not find it any easier to choose a plan, nor were they unduly burdened by the challenge of dealing with this additional information.

Our analysis suggests that the New Jersey Medicaid recipients who paid attention to the CAHPS report were more actively engaged in the consumer role. They obtained information about plan choices from more sources, including but not limited to CAHPS, and they were rarely auto-assigned to a default plan. Providing these proactive consumers with a CAHPS report had little effect on their auto-assignment rate because they were strongly disposed to make their own choices anyway.

Our analysis of the overall effects of CAHPS on plan choices includes the entire April 1998 sample of enrollees, and is therefore not subject to nonresponse bias. However, our assessment of other outcomes, and the results for subgroups, are derived from survey data for which possible nonresponse bias must be considered. Although our 57 percent response rate is fairly typical of studies of the Medicaid population, which includes many people who are difficult to contact, it leaves room for bias in the results due to differences between those who completed the survey and those who did not. We attempted to reduce or eliminate this bias through the use of nonresponse weights. The variables available to us for predicting survey response were reasonably informative, yielding a 65 percent concordance between predicted and actual response to the survey. With this concordance, it seems likely that the observable characteristics we were able to use captured a substantial share of the factors contributing to nonresponse, and therefore that the nonresponse weights should have adjusted acceptably for nonresponse bias.

To the extent that nonresponse bias remains in the survey data, it is likely to bias results in the direction of finding a CAHPS effect and overestimating its magnitude. Characteristics that predispose consumers to be survey responders are likely to be positively correlated with the characteristics that predispose them to read a CAHPS report (e.g., having good contact information and being willing to attend to a survey or report related to Medicaid). If so, then the survey respondents would tend to include higher proportions of consumers who were exposed to CAHPS than are present in the overall population, making it easier to find a CAHPS effect. Despite this potential for bias, we found no CAHPS effect for the total survey sample. We did find an effect within the receptive subgroup, where we would expect nonresponse bias to be smaller than for the total sample. Thus, while it is important to consider the possible effects of nonresponse bias, these do not seem likely to be substantial in this study nor to affect its conclusions.

Our evaluation findings have both programmatic and methodological implications. On the programmatic side, Medicaid programs that provide their clients with CAHPS information to promote well-informed, proactive plan choices need to be sure that the information reaches as much of the intended audience as possible. This study was not able to distinguish failures of delivery from failures to capture recipients’ interest and attention, but efforts to make the report more appealing and more likely to be read may prove worthwhile; research is needed in this area. In addition, educational initiatives are needed to motivate beneficiaries to use and think about the information that is available to them. Such interventions will need to focus on increasing familiarity with CAHPS performance information, trust in the information, and the ability to interpret it easily.

On the methodological side, the fact that consumers who are receptive to CAHPS reports have distinctive attributes may confound attempts to evaluate the effects of reading the report. For example, if we had not modeled an equivalent control group for the receptive report subgroup, we might have incorrectly concluded that noticing and reading the CAHPS report caused a dramatic reduction in the auto-assignment rate. Instead, it is more likely that proactive consumers who made their own choice of plans were also motivated to pay attention to the CAHPS report.

This evaluation also shows it is unwise to draw general conclusions about the effects of information interventions like the CAHPS reports from a single demonstration site. Many factors can modify the effect of a CAHPS report, including the accuracy of the mailing addresses maintained by the Medicaid agency, the relative size and quality of the plans that are available, and the design and layout of the CAHPS report. In New Jersey, for example, the presence of a dominant HMO that had low CAHPS scores may have attenuated the effect of CAHPS ratings on Medicaid HMO choices.

We have little information on why this HMO was dominant. We can speculate that new beneficiaries may perceive its large Medicaid market share to be a sign of quality, or they may be more likely to know people enrolled in it. We do know that choice of provider tends to be more important to people than choice of health plan. Anecdotally, we were told that the HMO contracts with many physicians; many people may therefore be able to keep their physicians by enrolling in it, and others may know of providers associated with it (even if they did not already have a relationship with one of them). Beneficiaries were encouraged by New Jersey Medicaid to call an 800 number to ask whether particular providers were in an HMO.

Among the receptive subgroup of respondents who rejected the dominant HMO, we found that significantly more of those who received the CAHPS report chose HMOs with higher ratings. The small size of this subgroup and the absence of a significant overall effect leave the usefulness of this information somewhat in doubt. A difference of this magnitude found on a wider scale—for example, among all those who remembered receiving a report—would more clearly establish the value in making this information available. With ongoing exposure and education, consumers may increasingly incorporate CAHPS into the information they consider when choosing health plans. Moreover, to the extent that health plans believe that CAHPS will influence their enrollments or their reputations, they can be expected to take performance information seriously. Longitudinal studies of the course of health plan ratings and enrollments over time are needed to test for these effects, which may depend only partly on individual consumer behavior. Meanwhile, the results of this study suggest that expectations of direct and immediate effects on the plan choices of Medicaid consumers should be modest at best until the formidable challenges of effective dissemination are met.

Acknowledgments

The New Jersey Medicaid Office of Managed Health Care, and its enrollment contractor, Foundation Health Federal Services (now Maximus), were active partners in the conduct of the demonstration and, specifically, in this outcome evaluation. We extend our appreciation to Lou Bodian, Dan Walsky, and Margaret Sabin, all with the Medicaid Office of Managed Health Care during the demonstration, whose support allowed us to perform this evaluation. We also gratefully acknowledge the support of Christine Crofton and Charles Darby, our project officers at the Agency for Healthcare Research and Quality.

Notes

1. HMO enrollment is not required in the remaining four counties because they have very small Medicaid populations.

2. In the previous year, the state had surveyed Medicaid managed care enrollees using a different questionnaire.

3. Significant differences may be measured as statistical significance or practical significance (magnitude of difference). New Jersey Medicaid required significant differences of both types before assigning one or three stars to an HMO.

4. These cases excluded those enrolling in one HMO because a Medicaid sanction action prohibited this HMO from enrolling new beneficiaries during the study period.

5. Approximately 6 percent of the sample was lost due to language problems. These respondents also would have had trouble using the CAHPS report because it was printed only in English.

6. Concordance is the proportion of times, over all pairs of observations that match an event (1) with a non-event (0), that the predicted probability of the event happening is higher for the event observation (1) than for the non-event observation (0); a short code sketch of this computation follows these notes.

7. The first question described the CAHPS report and then asked “Did you get this report?” A respondent who answered “Yes” was asked the follow-up question, “How carefully did you read the star report?” with response options of “never looked at it,” “just glanced through it,” “read parts of it,” or “read most or all of it carefully.”

8. Previous research has shown all of these factors to be predictors of plan choices. Other than county-level Medicaid market share, no data were available on HMO characteristics for use in this model.
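Note 6's pairwise definition of concordance translates directly into a computation. A minimal sketch; tie handling varies across implementations, and tied pairs are excluded here:

```python
# Sketch of the concordance statistic defined in note 6: among all pairs of
# one responder (y = 1) and one non-responder (y = 0), the share of pairs in
# which the responder has the higher predicted probability. Tied pairs are
# excluded here; implementations differ on how they count ties.
def concordance(y: list[int], p: list[float]) -> float:
    pairs = [(pi, pj) for pi, yi in zip(p, y) if yi == 1
                      for pj, yj in zip(p, y) if yj == 0]
    wins = sum(pi > pj for pi, pj in pairs)
    ties = sum(pi == pj for pi, pj in pairs)
    return wins / (len(pairs) - ties) if len(pairs) > ties else float("nan")

print(concordance([1, 0, 1, 0], [0.8, 0.3, 0.6, 0.7]))  # 3 of 4 pairs: 0.75
```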

This research was supported through cooperative agreement No. 5U18HS09204-05, “Consumer Assessment of Health Plans Study” (CAHPS), from the Agency for Healthcare Research and Quality (AHRQ).

References

  1. Brown J, Nederend SE, Hays RD, Short PF, Farley DO. “Special Issues in Assessing Care of Medicaid Recipients.” Medical Care. 1999;37(3, supplement):MS79–88. doi: 10.1097/00005650-199903001-00009.
  2. Castles A, Goodwin P, Damberg C. “Consumer Use of Quality of Care Information: An Evaluation of California Consumer HealthScope.” In: Abstract Book of the Association for Health Services Research. Vol. 14. Washington, DC: Association for Health Services Research; 1997. pp. 171–2.
  3. Chernew M, Scanlon DP. “Health Plan Report Cards and Insurance Choice.” Inquiry. 1998;35(1):9–22.
  4. Crofton C, Lubalin J, Darby CS. “Foreword.” Medical Care. 1999;37(3, supplement):MS1–9. doi: 10.1097/00005650-199903001-00001.
  5. Davis K, Collins KS, Schoen C, Morris C. “Choice Matters: Enrollees' Views of Their Health Plans.” Health Affairs. 1995;14(2):99–112. doi: 10.1377/hlthaff.14.2.99.
  6. Donat PL, Selby-Harrington ML, Quade D, Brastauskas BS. “Obtaining Telephone Numbers for a Rural Medicaid Population: Issues for Outreach and Research.” Public Health Nursing. 1995;12(3):165–70. doi: 10.1111/j.1525-1446.1995.tb00005.x.
  7. Fowler FJ, Gallagher PM, Nederend S. “Comparing Telephone and Mail Responses to the CAHPS Survey Instrument.” Medical Care. 1999;37(3, supplement):MS41–9. doi: 10.1097/00005650-199903001-00005.
  8. Gibbs DA, Sangl JA, Burrus B. “Consumer Perspectives on Information Needs for Health Plan Choice.” Health Care Financing Review. 1996;18(1):55–73.
  9. Gold M, Hadley J, Eisenhower D, Hall J, Metcalf C, Nelson L, Chu K, Strouse R, Colby D. “Design and Feasibility of a National Medicaid Access Survey with State-Specific Estimates.” Medical Care Research and Review. 1995;52(3):409–30. doi: 10.1177/107755879505200305.
  10. Health Care Financing Administration. “Medicaid Managed Care State Enrollment–June 1998.” 1999. Available at: http://www.hcfa.gov/medicaid/plansum8.htm. Accessed December 1999.
  11. Hirano K, Imbens GW, Ridder G. “Estimation of Average Treatment Effects Using the Estimated Propensity Score.” Technical Working Paper. Cambridge, MA: National Bureau of Economic Research; 2000.
  12. Kanouse DE, Spranca ME, Uhrig J, Elliott MN, Short PF, Farley DO, Hays RD. “Effects of CAHPS Reports on Medicaid Recipients' Choice of Health Plans in a Laboratory Setting.” Paper presented at the annual meeting of the Association for Health Services Research; Los Angeles, CA; 2000.
  13. Knutson DJ, Dahms N, Kind E, McGee J, Finch M, Fowles JB. “The Effect of Health Plan Report Cards on Consumer Knowledge, Attitudes and Plan Choice: A Quasi-Experimental Evaluation.” In: Abstracts of the Association for Health Services Research. Vol. 14. Washington, DC: Association for Health Services Research; 1997. p. 184.
  14. Marquis MS, Rogowski JA. Participation in Alternative Health Plans. RAND report R-4105-HCFA. Santa Monica, CA: RAND; 1991.
  15. Marshall MN, Shekelle PG, Brook RH. “The Public Release of Performance Data: What Do We Expect to Gain? A Review of the Evidence.” Journal of the American Medical Association. 2000;283(14):1866–74. doi: 10.1001/jama.283.14.1866.
  16. Mechanic D, Ettel T, Davis D. “Choosing Among Health Insurance Options: A Study of New Employees.” Inquiry. 1990;27(1):14–23.
  17. Payne JW, Bettman JR, Johnson EJ. The Adaptive Decision Maker. New York: Cambridge University Press; 1993.
  18. Robinson S, Brodie M. “Understanding the Quality Challenge for Health Consumers: The Kaiser/AHCPR Survey.” Joint Commission Journal on Quality Improvement. 1997;23(5):239–44. doi: 10.1016/s1070-3241(16)30313-3.
  19. Sainfort F, Booske BC. “Role of Information in Consumer Selection of Health Plans.” Health Care Financing Review. 1996;18(1):31–54.
  20. Scanlon DP, Chernew M, Lave JR. “Consumer Health Plan Choice: Current Knowledge and Future Directions.” Annual Review of Public Health. 1997;18:507–28. doi: 10.1146/annurev.publhealth.18.1.507.
  21. Spranca ME, Kanouse DE, Short PF, Farley DO, Hays RD. “Do Consumer Reports of Health Plan Quality Affect Health Plan Selection?” Health Services Research. 2000;35(5, Pt 1):933–47.
  22. Tumlinson A, Hendricks A, Stone EM, Mahoney P, Bottigheimer H. “Choosing a Health Plan: What Information Will Consumers Use?” Health Affairs. 1997;16(3):229–38. doi: 10.1377/hlthaff.16.3.229.
