Abstract
This article describes preliminary results from a natural experiment that tested the impact of report cards on employees. As part of the 1995 enrollment process, some members of the State of Minnesota Employee Group Insurance Program received report cards on the plans offered to them, and others did not. Both groups of employees had a chance to review a second community-wide report card covering all Minnesota plans that had been distributed by an independent organization through local newspapers. Both groups were surveyed before and after they made their health plan selections. We compare the likelihood of seeing, the intensity of reading, and the perceived helpfulness of the first, employer-specific report card with the second, community-wide report card for consumers who make plan selections.
Introduction and Background
Supporters of a managed competition approach to health care reform have argued that consumers need more and better information to make truly informed choices among health plans. In theory, creating better informed consumers will encourage health plans to compete on quality of care and enrollee satisfaction, in addition to cost (Enthoven, 1993; Hibbard and Weeks, 1987).
National debate over health care reform focused attention on the types and amounts of information available to assist consumers in making health care choices, as well as the ability of consumers to process and act on that information (Sofaer and Hurwicz, 1993). Some have suggested that informing and protecting consumers should be an end in itself and not just a means to a working marketplace (Sofaer, 1993). Others have stressed that a minimal number of reasonably informed consumers is sufficient to encourage competition among providers (Pauly, 1987). Virtually all health benefit programs give employees spreadsheets that provide factual information in a format to encourage comparison of health plans. These spreadsheets include information on premiums, employee out-of-pocket contributions to premiums, benefit coverage, and provider network. Additional information on providers in a health plan network is distributed by the employee benefit manager or health plan in response to employee requests.
In the private sector, large purchasers of health care have initiated efforts to expand information available to potential health plan enrollees to include quality and member satisfaction measures (Cronin, 1995; Business & Health, 1995). These efforts have traditionally involved single purchasers and individual health plans. Typically, a large firm collects data from its employees about satisfaction with services provided by each health plan. Employers often request that health plans supplement these data with information about services provided to the employed group, such as immunization rates, other measures of preventive care, and outcomes that can be measured using administrative claims data (Jordahl, 1992). The resulting information may be used by the employee to select among health plans.
In recent years, coalitions of employers have formed, in part, to standardize data for comparison of health plans by employers and to enhance the leverage that employers can exert on health plans in contract negotiations (Epstein, 1995). In response to these efforts, some health plans have begun to work closely with employer groups to design data collection instruments and forms for displaying the results (Jordan, Straus, and Bailit, 1995). An important example of this type of collaboration is the Health Plan Employer Data and Information Set (HEDIS) effort, which began in 1992 and is now under the sponsorship of the National Committee for Quality Assurance (Packer-Tursman, 1993; National Committee for Quality Assurance, 1993). HEDIS measures include areas of plan performance such as rates of preventive services, measures of appropriateness of care, and patient satisfaction (Hibbard and Jewett, 1996).
In parallel with these highly visible national efforts, some large employers began experimenting with comparative health plan report cards that could be used in conjunction with spreadsheet information by employees for selecting a health care plan from among the set of plans offered by employers. These report cards typically contained data from employee surveys, but varied widely in their comprehensiveness, sophistication, and presumed usefulness to consumers. Most often, the report cards compared health plans on the basis of access to care, quality of communication, health plan administrative procedures, and overall ratings of satisfaction (McGee and Knutson, 1994).
At present, there is ongoing, widespread experimentation with various methods of collecting information about health plans and providing summary comparisons for consumer and employer use (Bushick, 1996). Considerable attention has been focused on the technical aspects of surveying consumers and designing report cards (Agency for Health Care Policy and Research, 1995). Evaluation of consumer report cards has centered primarily on qualitative information derived from focus groups (Lavisso-Mourey, 1994; National Committee for Quality Assurance, 1995; Walker, Hubbard, and Garfinkel, 1996). Results of these focus group evaluations indicate that consumers are interested in report cards, but that they would like the report cards to focus on basic, not “fancy,” information and survey results about their specific medical condition or “people like themselves” (Firman, 1995). In addition, consumers are interested in knowing more about the reliability and validity of the information provided to them (U.S. General Accounting Office, 1995).
Although the report card movement began with employer-specific report cards, industry leaders and policy makers have begun to realize that producing unique employer-specific report cards that repeatedly measure the same health plan from the perspective of different employer groups may be inefficient. Report cards are costly to prepare because they require administration of the survey, construction of an analytic dataset, analysis of the survey, packaging of the findings in a format that is easily understood, and dissemination of findings to employees. Even when previously developed surveys are available, these questionnaires require review and possible modification or elaboration before they are acceptable to employers. Although no studies have documented the cost of producing report cards on a regular basis, the cost is significant to the individual employer (e.g., the Minnesota Department of Employee Relations survey cost $145,000 in 1995) and, from a societal viewpoint, will increase in the aggregate as more employers adopt the report card process. An alternative approach would involve the development of a community-wide report card, based on surveys of the overall population of health plan enrollees, with the findings broadly disseminated through the mass media. While this approach could result in lower aggregate costs associated with report card-related activities, consumers may not find community-wide report cards as helpful as employer-specific report cards.
This article will contrast the utility of report cards prepared by employers for the benefit of their own employees with community-wide report cards prepared for the general public. We use data assembled for a larger study designed to investigate whether report cards improve consumers' knowledge about health plans, affect consumers' attitudes towards health plans, and influence consumers' choice of health plans. In this larger study, survey data were collected from different samples drawn from the 60,000 employees enrolled in the State of Minnesota Employee Group Insurance Program (“the Program”). The Program has been identified as a model for managed competition and has been a pioneer in the development and dissemination of report card information to employees (Dowd and Feldman, 1994/1995). In 1991, a survey-based report card was mailed to all Program employees before the fall open enrollment period, with the exception of employees of the University of Minnesota. The University did not wish to incur the additional expense of distributing the report card. In 1993, and most recently in 1995, a revised form of the report card was distributed to potential enrollees, again excluding University employees. In 1995, the report card was a single sheet 7 inches by 25.5 inches, folded to 8½ by 11 inches, printed in two colors. It included a summary star chart as well as 14 graphs of survey results. The summary star chart indicated that one plan was significantly below average on 12 of 14 measures. The graphs indicated that all plans had generally positive ratings.
In October 1995, a report card was disseminated by the Minnesota Health Data Institute (MHDI) to the general public through a newspaper supplement. MHDI was created by the Minnesota legislature in 1993 as a public-private partnership to carry out activities related to health plan and provider performance measurement, electronic data exchange, and data privacy. This effort was MHDI's initial attempt at measuring and disseminating health plan performance information. The report card contained information on 46 health plans, including publicly funded programs. The six-color newspaper supplement was 16 pages long, with separate sections comparing private health insurance (health maintenance organizations [HMOs], point-of-service, indemnity), Medicare (HMOs, fee-for-service), and State health programs (Medical Assistance, General Assistance, MinnesotaCare, Minnesota Comprehensive Health Association). Among the HMOs, one plan was consistently better than average on 13 of 21 measures. (This plan was not available to the study population. Furthermore, one plan available only to State employees, the State Health Plan, was not included in the MHDI report.)
In this article, we evaluate the relative impact on consumers of these two report card efforts. For simplicity, we will refer to the report card generated by the State of Minnesota Department of Employee Relations as the employer-specific report card and the report card generated by the MHDI as the community-wide report card. The non-University State employees will be referred to as State employees, and the University State employees will be referred to as University employees. For each report card, we asked whether the employee had seen it, how intensely the employee had read it, and how helpful the report card information was in selecting a health plan. State employees were asked about both report cards. University employees were only asked about the community-wide report card, since they did not receive an employer-specific report card. For these analyses, respondents with single coverage and respondents with family coverage were combined. Because information on some independent variables was available only in the pre-enrollment survey, we restricted our analysis to those cases with both pre-enrollment and postenrollment information.
Methods
Study Design
In effect, the way in which the two report cards were disseminated created a natural experiment. Two groups of employees chose among the same health plans and received the same spreadsheet information on those plans. One group (State employees) received both the community-wide report card and the employer-specific report card, while the second group (University employees) received only the community-wide report card. To take advantage of this natural experiment, we used a Solomon four-group design to address the larger study questions (Campbell and Stanley, 1963). Data were collected before and after the open enrollment periods, as shown in Table 1.
Table 1. Data Collection Times.
| Employee Group | Time 1 (Pre-Enrollment) | Time 2 (Enrollment) | Time 3 (Postenrollment) |
|---|---|---|---|
| State Employees (1) | O1 | X | O2 |
| State Employees (2) | — | X | O2 |
| University Employees (3) | O1 | — | O2 |
| University Employees (4) | — | — | O2 |
SOURCE: Knutson, D.J., Fowles, J.B., Finch, M., et al., 1996.
NOTES: Time 1 indicates the period immediately preceding the open enrollment period for all study samples. Time 2 indicates the open enrollment period. Time 3 indicates the period immediately following the open enrollment period. O1 and O2 indicate the administration of a survey. X indicates the distribution of the report card to State employees.
Study Population
Two study samples were drawn, based upon whether the employee had a single or family coverage policy. For each type of coverage, four samples were surveyed (lines 1 through 4 in Table 1). Two samples were formed through random sampling of State employees in the Minneapolis-St. Paul metropolitan area, all of whom received the employer-specific report card, and the other two samples were formed through random sampling of University employees in the Twin Cities, none of whom received the employer-specific report card. One State and one University sample were surveyed before open enrollment. All samples were surveyed at postenrollment.
The University employee population had a much higher proportion of faculty than the State employee population. (There were some faculty at State and community colleges in the State employee population.) To reduce potentially large differences in educational levels between the two groups, we excluded faculty members from both samples. We required that subjects be active, full-time employees because these employees are eligible for health coverage. They also had to work and reside in the seven-county Minneapolis-St. Paul metropolitan area because the six health plans covered in the employer-specific report card were available in this geographic area. Some employee subgroups were not included in the survey, including employees who were involved with conducting the study or who were atypical from the perspective of health benefit eligibility. These groups included Department of Employee Relations staff and members of the State legislature. Employees whose status with respect to these criteria changed during the study period were dropped from the study. Additionally, employees who changed from a single policy to a family policy or vice versa were eliminated. During respondent screening, we eliminated University employees whose spouse was employed by the State (in which case a report card would have been sent to the household). Administrative data indicated which plan the employee was enrolled in for calendar year 1995. Employees who stated that they belonged to a health plan that did not match the plan listed in the administrative data were dropped from the study.
The response rate was 74 percent for the pre-enrollment survey and 85 percent for the postenrollment survey. The response rate calculation included all refusals and eligible non-contacts in the denominator. In the overall study, the number of respondents varied among the eight samples, ranging from 385 to 431. For the pre-enrollment and postenrollment samples used in this article, combining single and family respondents, there were 820 State employees and 802 University employees.
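For concreteness, the denominator rule just described corresponds to a response-rate calculation of the following form. This is a sketch of the computation as we read it from the text; the symbols C, R, and N are our own shorthand for completed interviews, refusals, and eligible non-contacts, not terms from the survey protocol.

```latex
% Response rate with refusals and eligible non-contacts in the denominator
% (C = completed interviews, R = refusals, N = eligible non-contacts; illustrative symbols)
\text{response rate} = \frac{C}{C + R + N}
```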
Data Sources
The analysis relied principally on data collected through telephone surveys of State and University employees. Pre-enrollment and postenrollment surveys were conducted immediately before and after open enrollment, which was held between October 1 and October 31, 1995, for State employees and between October 16 and November 15, 1995, for University employees. The telephone questionnaire collected data related to the primary study questions, as well as information on employee and household characteristics expected to influence health plan preference and choice based on past published studies.1
Other data were taken from secondary sources. Health plan membership for all sample members for the years 1994, 1995, and 1996 was provided by the State Department of Employee Relations. Descriptors of the various plans offered to employees in the sample (e.g., premium cost, co-pay and deductible amounts, and specific coverage) were abstracted from the enrollment packets distributed to all State and University employees in September 1995.
Dependent Variables
For this analysis, we focused on three dependent variables for each of the two report cards with information pertaining to these variables collected in the postenrollment surveys (see the “Study Design” section). The first variable addressed whether the respondent remembered seeing the report card (yes, no, or not sure). The second variable, defined only for those who had seen the report card, measured the intensity of processing the reported information (read most or all of it, read parts of it, just glanced through it, or never really looked at it). The third variable measured the respondent's perception of the helpfulness of the report card in deciding whether to stay with or switch health plans (extremely helpful, very helpful, somewhat helpful, not very helpful, or not at all helpful).
Independent Variables
Independent variables used in these analyses included age, gender, educational level, presence of chronic disease in the family, single or family coverage, switched or considered switching health plans, whether the respondent or spouse worked in a clinic or doctor's office, the general likelihood of using objective ratings such as consumer reports for choosing services, perceived importance of the health plan decision, and confidence in health plan choice. Measures of central tendency for both the dependent and independent variables for the State and University employees are included in Table 2.
Table 2. Selected Characteristics of State and University Employees.
| Variable | State Employees (n = 820) | University Employees (n = 802) |
|---|---|---|
| Independent Variables | | |
| Age (Mean) | ***46.1 (8.9) | 42.8 (9.5) |
| Gender (Percent Female) | ***52 | 66 |
| Educational Level (Percent) *** | ||
| 8th Grade or Less | 0 | 0 |
| Some High School | 1 | 1 |
| High School Graduate or GED | 21 | 8 |
| Some College or Technical | 25 | 26 |
| College Graduate | 30 | 34 |
| Post-Graduate or Professional Degree | 23 | 32 |
| Presence of Chronic Disease in Family (Percent Yes) | *61 | 55 |
| Single or Family Coverage (Percent Single) | ns48 | 48 |
| Switched or Considered Switching (Percent Who Switched)1 | **21 | 16 |
| Self or Spouse Working in Clinic (Percent Yes) | ***13 | 42 |
| Likelihood of Using Objective Ratings to Select a Service (Mean)2 | **2.54 (1.18) | 2.37 (1.12) |
| Importance of Health Plan Decision (Mean)3 | ns2.39 (1.20) | 2.45 (1.25) |
| Confidence in Health Plan Choice (Mean)4 | ns1.49 (0.83) | 1.47 (0.87) |
| Dependent Variables | ||
| Saw Employer-Specific Report Card (Percent Yes) | 76 | NA |
| Intensity of Reading Employer-Specific Report (Mean)5 | 1.77 (0.89) | NA |
| Degree of Helpfulness of Employer-Specific Report Card for Decision (Mean)6 | 3.32 (1.01) | NA |
| Saw Community-Wide Report Card (Percent Yes) | ns25 | 27 |
| Intensity of Reading Community-Wide Report Card (Mean)7 | **2.31 (0.92) | 2.04 (0.92) |
| Degree of Helpfulness of Community-Wide Report Card for Decision (Mean)8 | **3.87 (0.92) | 3.60 (1.00) |
*** p ≤ .001.
** p ≤ .01.
* p ≤ .05.
ns No significant difference between the groups.
1 Five-point scale: 1 = switched; 2 = considered switching a lot; 5 = did not consider switching at all.
2 Five-point scale: 1 = definitely would; 5 = definitely would not.
3 Five-point scale: 1 = extremely important; 5 = not at all important.
4 Four-point scale: 1 = very confident; 4 = not very confident.
5 Four-point scale: 1 = read most or all of it; 4 = never really looked at it.
6 Five-point scale: 1 = extremely helpful; 5 = not at all helpful.
7 Four-point scale: 1 = read most or all of it; 4 = never really looked at it.
8 Five-point scale: 1 = extremely helpful; 5 = not at all helpful.
NOTES: NA is not applicable. Numbers in parentheses are standard deviations.
SOURCE: Knutson, D.J., Fowles, J.B., Finch, M., et al., 1996.
Statistical Analysis
There were three stages in the analysis. First, descriptive statistics for each variable were generated. Next, relationships among all independent and dependent variables were explored at the bivariate level, using chi-square or analysis of variance (ANOVA) tests as appropriate. Finally, for the multivariate analyses, the independent variables were selected on the basis of their significance in the bivariate analyses and their theoretical importance. Variables significant at the 0.01 level or better were entered into the multinomial logit regression models. The estimated models were then used to evaluate the simultaneous contributions of these variables in predicting the likelihood that an employee saw the report card, the intensity of reading it, and the perceived helpfulness of the report card in selecting a health plan. Odds ratios (OR) and 95-percent confidence intervals are reported.
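As an illustration of this modeling step, the sketch below fits one such multinomial logit and converts the estimated coefficients to odds ratios with 95-percent confidence intervals. It is not the authors' code: the data frame, the column names (for example, saw_card, age, educ_postgrad), and the use of the statsmodels library are assumptions made for the example.

```python
# Hedged sketch of the multivariate step described above (not the authors' code).
# Assumes a pandas DataFrame `df` with illustrative column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm


def fit_report_card_model(df: pd.DataFrame, outcome: str, predictors: list):
    """Fit a multinomial logit for one report card outcome and return
    odds ratios with 95-percent confidence intervals."""
    X = sm.add_constant(df[predictors])     # predictors retained from the bivariate screen
    model = sm.MNLogit(df[outcome], X)      # categorical outcome, e.g., saw / did not see / not sure
    result = model.fit(disp=False)

    odds_ratios = np.exp(result.params)     # exponentiated coefficients = odds ratios
    conf_int = np.exp(result.conf_int())    # 95-percent confidence intervals on the OR scale
    return result, odds_ratios, conf_int


# Example call with hypothetical column names:
# result, ors, cis = fit_report_card_model(
#     df, outcome="saw_card",
#     predictors=["age", "female", "educ_college", "educ_postgrad",
#                 "chronic_disease", "family_coverage"])
```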
Results
First, we compare the characteristics of State and University employees. Then we compare State employees' evaluation of the community-wide report card with University employees' evaluation of the same report card. Finally, we compare State employees' evaluation of their own employer-specific report card with University employees' evaluation of the community-wide report card.
Differences Between State and University Respondents
The State and University respondents differed in several ways. State employees were somewhat older than University employees (mean age 46 years versus 43 years), and more were male (48 percent versus 34 percent). Even after excluding all faculty from the sample, State employees still had a somewhat lower average educational level: 23 percent of State employees had postgraduate or professional degrees, compared with 32 percent of University employees. State employees were more likely to have a chronic disease in the family (61 percent versus 55 percent) and were much less likely to work, or have a spouse who worked, in a hospital or clinic (13 percent versus 42 percent). This difference reflects the fact that University hospital and clinic employees are included in the University group. State employees were somewhat more likely to have switched plans from 1995 to 1996 (21 percent versus 16 percent). State employees were also less likely to say that they used objective ratings like Consumer Reports to select services. There was no difference between State employees and University employees in their ratings of the importance of the health plan decision or their degree of confidence in their health plan choice (Table 2).

Because the State and University employees differed on some characteristics that may be related to the effect of report cards, we included these characteristics as independent variables in our multivariate analyses. We also tested the need to include a "propensity score" as an independent variable in the estimated models. The propensity score is the probability that an individual is found in a particular group, and it is used to detect bias in estimated intervention effects (Rosenbaum and Rubin, 1983). The propensity score was not significant in any of the estimated models, suggesting no bias in the estimated intervention effects.
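The propensity-score check can be sketched as follows. This is our illustration rather than the authors' implementation; the data frame and column names (is_state_employee and the covariate list) are hypothetical.

```python
# Hedged sketch of the propensity-score check described above (not the authors' code).
import statsmodels.api as sm


def add_propensity_score(df, covariates, group_col="is_state_employee"):
    """Estimate each respondent's probability of being in the State employee
    group from observed covariates and attach it as an extra covariate."""
    X = sm.add_constant(df[covariates])
    ps_model = sm.Logit(df[group_col], X).fit(disp=False)  # logistic model of group membership
    out = df.copy()
    out["propensity_score"] = ps_model.predict(X)          # fitted membership probabilities
    return out


# The score would then be entered as one more predictor in each report card model;
# a nonsignificant coefficient is read, as in the text, as evidence against
# selection bias in the estimated report card effects.
```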
Evaluation of the Community-Wide Report Card
Only about 25 percent of either group, State or University employees, reported seeing the community-wide report card. The likelihood of seeing the community-wide report card was somewhat higher for older (OR = 1.03) and more highly educated respondents (college graduate OR = 1.80; postgraduate OR = 2.34). Seeing the report card was not significantly affected by place of employment, gender, having a spouse who worked in a clinic, or preference for using objective ratings (Table 3).
Table 3. Comparison of State Employees' With University Employees' Rating of the Community-Wide Report Card.
| Independent Variables | Saw Community-Wide Report Card (n=1,488) | | Intensity of Reading Community-Wide Report Card (n=366) | | Degree of Helpfulness of Community-Wide Report Card (n=364) | |
|---|---|---|---|---|---|---|
| | Odds Ratio | 95-Percent Confidence Interval | Odds Ratio | 95-Percent Confidence Interval | Odds Ratio | 95-Percent Confidence Interval |
| State or University1 | 0.85 | 0.66, 1.11 | 0.60 | 0.39, 0.93 | 0.58 | 0.37, 0.89 |
| Age | 1.03 | 1.02, 1.04 | 1.00 | 0.98, 1.02 | 1.01 | 0.99, 1.04 |
| Sex2 | 0.99 | 0.76, 1.27 | 1.04 | 0.68, 1.58 | 0.70 | 0.46, 1.07 |
| Education: Some College3 | 1.37 | 0.89, 2.10 | 0.93 | 0.44, 1.98 | 1.34 | 0.64, 2.83 |
| Education: College Graduate | 1.80 | 1.19, 2.72 | 1.46 | 0.71, 2.98 | 0.66 | 0.32, 1.33 |
| Education: Post-Graduate | 2.34 | 1.55, 3.56 | 1.53 | 0.75, 3.12 | 0.61 | 0.30, 1.23 |
| Intensity of Reading Community-Wide Report Card | — | — | — | — | 0.36 | 0.27, 0.46 |
| Self or Spouse Work in Clinic4 | 0.88 | 0.67, 1.17 | 0.72 | 0.45, 1.14 | 1.16 | 0.73, 1.84 |
| Less Likely to Use Objective Ratings | 0.92 | 0.83, 1.03 | 1.06 | 0.88, 1.27 | 0.89 | 0.74, 1.07 |
| Presence of Chronic Disease in Family5 | 0.96 | 0.74, 1.24 | 1.08 | 0.71, 1.63 | 0.77 | 0.51, 1.17 |
| Single or Family6 | 1.22 | 0.95, 1.57 | 1.25 | 0.82, 1.91 | 0.72 | 0.47, 1.10 |
| Switched Health Plans 1995 to 19967 | 1.37 | 0.97, 1.93 | 1.52 | 0.87, 2.67 | 0.79 | 0.45, 1.39 |
| Considered Switching a Lot | 1.19 | 0.64, 2.22 | 0.56 | 0.20, 1.59 | 0.79 | 0.29, 2.16 |
| Considered Switching a Fair Amount | 1.39 | 0.92, 2.10 | 1.33 | 0.68, 2.60 | 1.44 | 0.74, 2.82 |
| Considered Switching a Little | 1.24 | 0.92, 1.68 | 0.89 | 0.54, 1.47 | 1.12 | 0.68, 1.85 |
| Decreased Importance of Health Plan Decision | 0.93 | 0.83, 1.04 | 0.74 | 0.61, 0.90 | 0.87 | 0.72, 1.05 |
| Decreased Confidence in Health Plan Decision | 0.85 | 0.70, 1.02 | 0.95 | 0.70, 1.29 | 0.93 | 0.68, 1.26 |
1 Reference category was “University.”
2 Reference category for sex was “male.”
3 Reference category for education was “high school graduate or less.”
4 Reference category for working in a clinic was “yes.”
5 Reference category for presence of chronic disease was “no.”
6 Reference category for single or family was “single.”
7 Reference category for switching was “did not consider switching.”
SOURCE: Knutson, D.J., Fowles, J.B., Finch, M., et al., 1996.
Among those who reported seeing the community-wide report card, State employees were less likely than University employees to have read most or all of it (25 percent versus 37 percent). The likelihood of more intense reading was influenced by whether or not the respondent was a State employee (OR = 0.60) and decreasing importance of the health plan decision (OR = 0.74). It was not influenced by age, gender, educational level, having a spouse who worked in a clinic, or preference for using objective ratings.
Among those who reported seeing the community-wide report card, State employees were less likely to find it helpful for choosing a plan (5 percent versus 11 percent extremely or very helpful, p < 0.01). In the logistic regression, State employees were much less likely to find the report helpful (OR = 0.58), as were those who read it less intensely (OR = 0.36).
Community-Wide Versus Employer-Specific Report Cards
The differences between the two groups in their evaluations of the community-wide report card could simply reflect the fact that State employees had their own report card, which they considered more relevant to their enrollment choice. Therefore, our second analysis compared the State employees' evaluation of their employer-specific report card with the University employees' evaluation of the community-wide report card, the only one available to them. To perform this analysis, we created new versions of the dependent variables in which “seeing the report card” took the values of seeing the employer-specific report card for State employees and seeing the community-wide report card for University employees. Similarly, “intensity of reading the report card” and “helpfulness of the report card in selecting a health plan” took the values for the employer-specific report card for State employees and the values for the community-wide report card for University employees (Table 4).
Table 4. Comparison of State Employees' Rating of the Employer-Specific Report Card With University Employees' Rating of the Community-Wide Report Card.
| Independent Variables | Saw Report Card (n=1,449) | | Intensity of Reading Report Card (n=715) | | Degree of Helpfulness of Report Card (n=711) | |
|---|---|---|---|---|---|---|
| | Odds Ratio | 95-Percent Confidence Interval | Odds Ratio | 95-Percent Confidence Interval | Odds Ratio | 95-Percent Confidence Interval |
| State or University1 | 9.46 | 7.18, 12.46 | 1.83 | 1.30, 2.59 | 1.11 | 0.79, 1.57 |
| Age | 1.01 | 1.00, 1.03 | 1.02 | 1.00, 1.04 | 1.00 | 0.99, 1.02 |
| Sex2 | 1.19 | 0.92, 1.55 | 0.87 | 0.64, 1.18 | 0.62 | 0.46, 0.84 |
| Education: Some College3 | 1.99 | 1.33, 2.97 | 1.09 | 0.68, 1.73 | 0.99 | 0.63, 1.59 |
| Education: College Graduate | 2.26 | 1.53, 3.36 | 1.02 | 0.65, 1.61 | 0.54 | 0.34, 0.84 |
| Education: Post-Graduate | 2.39 | 1.59, 3.59 | 1.31 | 0.81, 2.11 | 0.51 | 0.32, 0.82 |
| Intensity of Reading Report Card | — | — | — | — | 0.41 | 0.34, 0.49 |
| Self or Spouse Work in Clinic4 | 1.09 | 0.82, 1.44 | 0.77 | 0.52, 1.13 | 1.09 | 0.75, 1.59 |
| Less Likely to Use Objective Ratings | 0.94 | 0.84, 1.04 | 0.86 | 0.75, 0.97 | 0.81 | 0.71, 0.91 |
| Presence of Chronic Disease in Family5 | 1.00 | 0.77, 1.29 | 0.96 | 0.71, 1.30 | 0.82 | 0.61, 1.11 |
| Single or Family6 | 1.37 | 1.06, 1.77 | 1.10 | 0.81, 1.48 | 0.78 | 0.58, 1.05 |
| Switched Health Plans 1995 to 19967 | 2.05 | 1.44, 2.91 | 1.35 | 0.90, 2.02 | 1.39 | 0.93, 2.06 |
| Considered Switching a Lot | 2.17 | 1.18, 4.00 | 1.82 | 0.91, 3.60 | 0.74 | 0.38, 1.41 |
| Considered Switching a Fair Amount | 1.48 | 0.96, 2.26 | 1.40 | 0.84, 2.33 | 2.00 | 1.22, 3.28 |
| Considered Switching a Little | 1.42 | 1.05, 1.92 | 0.91 | 0.63, 1.31 | 1.31 | 0.91, 1.88 |
| Decreased Importance of Health Plan Decision | 0.98 | 0.87, 1.09 | 0.85 | 0.74, 0.98 | 0.91 | 0.79, 1.04 |
| Decreased Confidence in Health Plan Decision | 0.87 | 0.73, 1.03 | 0.87 | 0.72, 1.06 | 0.84 | 0.70, 1.02 |
1 Reference category was “University.”
2 Reference category for sex was “male.”
3 Reference category for education was “high school graduate or less.”
4 Reference category for working in a clinic was “yes.”
5 Reference category for presence of chronic disease was “no.”
6 Reference category for single or family was “single.”
7 Reference category for switching was “did not consider switching.”
SOURCE: Knutson, D.J., Fowles, J.B., Finch, M., et al., 1996.
State employees were much more likely to have seen the employer-specific report card than University employees were to have seen the community-wide report card (76 percent versus 27 percent). State employees who saw the report card were also more likely to say that they read most or all of it than were University employees who saw the community-wide report card (49 percent versus 37 percent, p < 0.001). When we controlled for all variables simultaneously, this finding was unchanged (OR = 1.83). People with higher educational levels were more likely to report seeing the report card (some college OR = 1.99, college graduate OR = 2.26, and postgraduate OR = 2.39). Those who switched health plans from 1995 to 1996 (OR = 2.05) or who had considered switching “a lot” (OR = 2.17) or “a little” (OR = 1.42) were also more likely to have seen a report card. The type of coverage also played a role; those with family coverage were more likely to have seen a report card than those with single coverage (OR = 1.37).
State employees who reported seeing the employer-specific report card were more likely than University employees (who saw only the community-wide report card) to say that they read their report card intensely (OR = 1.83). Increasing age was associated with more intense reading (OR = 1.02), while employees who were less likely to use objective ratings to select services read less intensely (OR = 0.86). Those who thought the health plan decision was less important also read less intensely (OR = 0.85).
When we compared the helpfulness of the two report cards for those who saw them, we found no significant difference between State employees and University employees, once other characteristics had been controlled. Women were less likely to find the report card helpful than men (OR = 0.62). Those with a college degree or a postgraduate degree were much less likely to find the report card helpful (OR = 0.54 and 0.51 respectively). Employees who were less likely to use objective ratings to select services found the report card less helpful (OR = 0.81). Those who thought about switching health plans “a fair amount” found their report card more helpful than those who did not consider switching (OR = 2.00). Those who read their report with less intensity were less likely to find it helpful (OR = 0.41).
Role of Chronic Illness
The presence of chronic illness in the household was not related to the use and perceived usefulness of the report cards. Some readers may assume that households with chronic illness might be more receptive to the report card because of their greater expected need for health care services. On the other hand, these households may not find the report card especially useful because they desire condition-specific information. Our results support this latter interpretation.2
Discussion
Our results highlight important differences in the impact of two types of report cards. First, the employer-specific report card was much more likely to be seen than the community-wide report card. Second, the employer-specific report card was read more intensely than the community-wide report card. Third, after controlling for differences in reading intensity, there was no difference in the perceived helpfulness of the two report cards by those who saw them.
What characteristics of the report cards might be responsible for these findings? These report cards can be compared in terms of their content, the population that provided the content, each readership's prior experience with report cards, and the dissemination methods.
Comparison of Content
The content of the report cards was similar in many ways. Both report cards used consumer evaluations, and both focused on similar dimensions: access to care, quality of communication, health plan administrative procedures, and overall ratings of satisfaction. The content differed somewhat in that the employer-specific report card had separate results for primary and specialty care as well as for children and adults. Although the subject matter was similar, the actual plans that were compared differed. The employer-specific report card included only six health plans from which the employee could choose. In contrast, the community-wide report card included 46 plans, many of which, such as Medicare and Medicaid, were not relevant to the choices of the population studied here.
Comparison of Populations
The populations surveyed to generate the information reflected in the two report cards were somewhat different. The population for the employer-specific report card consisted of State employees only. Thus, for State employee readers, the respondents who evaluated the plans worked for the same employer as the individual selecting a plan. On the other hand, information used to construct the community-wide report card was based on survey responses from a random sample of each health plan's enrolled population; therefore, survey respondents included individuals who were not State or University employees.
Comparison of Prior Experience
State employees and University employees approached their respective report cards with different levels of prior experience. Because State employees had received report cards in 1991 and 1993, the format was familiar to many of them. University employees had no prior experience with any report card, because the report card produced by MHDI was the first public initiative to disseminate health plan report card information in Minnesota.
Comparison of Dissemination Approaches
Dissemination methods can affect both the likelihood of seeing the information as well as the relevance of the information. The dissemination of the two report cards differed dramatically both in terms of medium and context. The State as an employer distributed its report card, along with other enrollment information, directly to the employee's home. The material was received at the time the choice of health plan was being made, and was accompanied by additional information, e.g., premium costs and provider network. In contrast, the community-wide report card, although delivered in the open enrollment season, was not specifically included as part of the materials received by employees from their employer to assist in making health plan choice decisions. It was disseminated through the newspaper, and dissemination may not have been complete (Minnesota Health Data Institute, 1996). Due to financial constraints, the supplement was not included in all newspapers.
Each of these differences (content, population, degree of prior experience, and dissemination) may contribute to the perceived relevance of the report card. It is not possible in this analysis to disentangle the relative contributions of each of these differentiating characteristics. Logically, one might think that prior experience would heighten the usefulness of report card information. However, when we examined the relationship between employees' length of employment with the State and their perceived usefulness of the employer-specific report card, we found no significant relationship.
Those who read the report card more thoroughly found it more helpful regardless of whether the report card was employer-specific or community-wide. This finding highlights the importance of developing a better understanding of what motivates consumers to attend to this type of information. Disseminating this information in the explicit context of health plan selection and enrollment processes may be one way to increase attention. Our findings indicate that the method of distribution is strongly related to the likelihood of seeing the report card.
Recommendations
It may be possible to increase the relevance of a community-wide report card through changes in the dissemination process. As with the MHDI initiative, an independent organization could collect data and develop comparative information on all the health plans in a community. However, the distribution channels could be tailored, both for employers and for individuals. The comparative information could be made available to employers, who could use it, in turn, to produce report cards that apply specifically to the health plans they offer their employees. The employer could control the timing of dissemination and also supply other health plan selection information, particularly information about price. Subject to the availability of technology, the information could also be made available through computer networks directly to consumers, who could browse through it based on whatever selection criteria were relevant to them. With this approach, price information could not be presented alongside the ratings because each employer's prices would differ. However, consumers would control the timing of their access to the information and the methods used to select relevant information.
Either strategy, focused on employers or directly on consumers, should preserve the potential scale economies and quality control achieved by centralizing data collection, but should also increase the relevance and simplify the information for those who are going to use it. If employer-specific population-based measures are not critical to the relevance of report cards to consumers, the results of our comparison may indicate the degree of attention that could be achieved using a community-wide data collection process harnessed to an employer-specific versus a consumer-specific dissemination plan.
Study Limitations
Since the study was not based on a randomized trial, it is not surprising that the State employees differed from the University employees in their characteristics. While we controlled for observed differences in the multivariate analyses, it is possible that some unmeasured characteristics were responsible for the observed differences in the report card effects. However, we used a large number of independent measures, and few showed significant differences between the groups.
The generalizability of the study findings is limited by the nature of the setting. The study population was relatively well educated and was drawn from an employed population in the Twin Cities, a mature managed care market. As a consequence, we cannot be sure how these results would apply to significantly different populations, especially those in public programs, such as Medicare and Medicaid, and in emerging managed care markets.
Other factors may also have influenced our findings. For instance, State employees' ratings of the community-wide report card may have been lower than University employees' ratings because, in the telephone survey, questions about the community-wide report card directly followed questions about the employer-specific report card. State employees may have felt the need to differentiate their ratings of the two report cards and responded by artificially lowering their appraisal of the community-wide report card. Also, the study addressed only report cards based on individual ratings of plan characteristics, such as satisfaction and access. It did not address the impact of report cards that include performance-based measures, such as immunization rates and other measures of the technical quality of care.
While the study results must be interpreted in light of these limitations, the research is an important first attempt to assess, using quantitative methods and a well-developed research design, the critical issue of consumer response to report cards. More research is required to understand the effect of such report cards on different populations, and also to test consumer responsiveness to alternative dissemination strategies. In particular, demonstrations that focus on new dissemination techniques of community-wide information to employers or consumers could provide valuable insights for policymakers.
Acknowledgments
We appreciate the efforts of Mary Kvanbeck in assembling the database, and Elizabeth Fowler for help with the literature review.
Footnotes
The research presented in this article was supported by the Health Care Financing Administration (HCFA) under Grant Number 18-P-90601/5-01. David J. Knutson, Jinnet B. Fowles, Elizabeth A. Kind, and Susan Adlis are with the Institute for Research and Education, HealthSystem Minnesota. Michael Finch is with the University of Minnesota School of Public Health. Jeanne McGee is with McGee & Evers Consulting, Inc. Nanette Dahms is with the Minnesota Department of Employee Relations. The opinions expressed are those of the authors and do not necessarily reflect those of HCFA, HealthSystem Minnesota, the University of Minnesota School of Public Health, McGee & Evers Consulting, Inc., or the Minnesota Department of Employee Relations.
Survey items included: satisfaction with 1995 health plan; ratings of cost and quality of available health plans; perceived knowledge about health plan options; actual knowledge of health plan characteristics; ratings of the importance of health plan and provider characteristics; physician attachment; proclivity to change plans; attention to own health; past utilization (employee and covered household members); expected utilization (employee and covered household members); importance of the decision to select a health plan; factors influencing the selection of the 1996 plan; information seeking behavior in shopping for a general service; information seeking behavior in selecting the 1996 health plan; general health status (employee and covered household members); chronic illness burden (employee and covered household members); use of and opinion regarding health plan comparison materials; and employee and family demographics.
This issue is explored in greater detail in the forthcoming final report on this study.
Reprint Requests: David J. Knutson, Institute for Research and Education, HealthSystem Minnesota, 3800 Park Nicollet Boulevard, Minneapolis, Minnesota 55416.
References
- Agency for Health Care Policy and Research and the Robert Wood Johnson Foundation. Consumer Survey Information in a Reforming Health Care System: Conference Summary. Rockville, MD: August 1995. AHCPR Pub. No. 95-0083.
- Bushick B. Health Plan Report Cards: Current Issues and Implications for Physicians. The Medical Journal of Allina. 1996;5(1):36–40.
- Business & Health. The Quest for Accountability. 1995;13(12):9. Business & Health Special Report.
- Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally; 1963.
- Cronin C. Using Health Care Quality Information: Employer Case Studies. In: Agency for Health Care Policy and Research and the Robert Wood Johnson Foundation: Consumer Survey Information in a Reforming Health Care System: Conference Summary. Rockville, MD: August 1995. AHCPR Pub. No. 95-0083.
- Dowd B, Feldman R. Premium Elasticities of Health Plan Choice. Inquiry. 1994/1995 Winter;31:438–444.
- Enthoven AC. The History and Principles of Managed Competition. Health Affairs. 1993;12(Supplement):24–48. doi: 10.1377/hlthaff.12.suppl_1.24.
- Epstein A. Performance Reports on Quality-Prototypes, Problems, and Prospects. New England Journal of Medicine. 1995;333(1):57–61. doi: 10.1056/NEJM199507063330114.
- Firman J. Information for Consumers to Select Plans and Providers. In: Agency for Health Care Policy and Research and the Robert Wood Johnson Foundation: Consumer Survey Information in a Reforming Health Care System: Conference Summary. Rockville, MD: August 1995. AHCPR Pub. No. 95-0083.
- Forsyth B, Burnbauer L. Information Needs for Consumer Choice: Draft Cognitive Testing Report. Research Triangle Park, NC: Research Triangle Institute; 1996.
- Hibbard JH, Jewett JJ. What Type of Quality Information Do Consumers Want in a Health Care Report Card? Medical Care Research and Review. 1996;53:28–47. doi: 10.1177/107755879605300102.
- Hibbard JH, Weeks EC. Consumerism in Health Care: Prevalence and Predictors. Medical Care. 1987;25(11):1019–1032. doi: 10.1097/00005650-198711000-00001.
- Jordahl G. HMOs and Employees Unite to Collect Outcomes Data. Business and Health. 1992 Jun;10(7):44–50.
- Jordan HS, Straus JH, Bailit MH. Reporting and Using Health Plan Performance Information in Massachusetts. Joint Commission Journal on Quality Improvement. 1995 Apr;21(4):167–177. doi: 10.1016/s1070-3241(16)30137-7.
- Lavisso-Mourey R. Information for Consumers to Select Plans and Providers. In: Agency for Health Care Policy and Research and the Robert Wood Johnson Foundation: Consumer Survey Information in a Reforming Health Care System: Conference Summary. Rockville, MD: August 1995. AHCPR Pub. No. 95-0083.
- McGee J, Knutson D. Health Care ‘Report Cards’: What About Consumers' Perspectives? The Journal of Ambulatory Care Management. 1994;17(4):1–14. doi: 10.1097/00004479-199410000-00003.
- Minnesota Health Data Institute. 1995 Consumer Survey–You and Your Health Plan: Draft Evaluation Report. Minneapolis, MN: June 3, 1996.
- National Committee for Quality Assurance. Health Plan Employer Data and Information Set and User's Manual, Version 2.0. Washington, DC: 1993.
- National Committee for Quality Assurance. NCQA Consumer Information Project: Focus Group Report. Washington, DC: 1995.
- Packer-Tursman J. A Report Card on Quality Accountability. HMO Magazine. 1993;34(3):46–54.
- Pauly M. Taxation, Health Insurance, and Market Failure in the Medical Economy. Journal of Economic Literature. 1987;24(2):629–675.
- Rosenbaum PR, Rubin DB. The Central Role of the Propensity Score in Observational Studies for Causal Effects. Biometrika. 1983;70(1):41–55.
- Sofaer S, Hurwicz M. When Medical Group and HMO Part Company: Disenrollment Decisions in Medicare HMOs. Medical Care. 1993;31:808–821. doi: 10.1097/00005650-199309000-00006.
- Sofaer S. Informing and Protecting Consumers Under Managed Competition. Health Affairs. 1993;12(Supplement):76–86. doi: 10.1377/hlthaff.12.suppl_1.76.
- U.S. General Accounting Office. Employers and Individual Consumers Want Additional Information on Quality. Washington, DC: 1995. Pub. No. GAO/HEHS-95-201.
- Walker J, Hubbard M, Garfinkel S. Beneficiary Information, Education and Marketing Strategy: Draft Materials Testing Report. Research Triangle Park, NC: Research Triangle Institute; 1996.
