Abstract
To determine the effect of survey-based health plan report cards on employees as they selected a health plan during the 1995 open enrollment, the authors surveyed two groups of Minnesota State employees, one that received the report card and one that did not. Both groups were surveyed before and after the enrollment period. The authors looked for report card effects on relative changes in the employees' knowledge of health plan benefits and their ratings of quality and cost attributes, as well as on their plan choice, rates of switching plans, and willingness to pay higher premiums. The only report card effect found was an increase in perceived knowledge among employees with single coverage.
Introduction
Managed care systems can compete on quality only when consumers have information about health plan quality. One method to inform consumers is report cards, i.e., documents that describe and compare managed care plans on a variety of performance measures related to health care. Report cards are made available to consumers at the time of their enrollment decision. Although the goal of report cards is to assist employees in their choice of health plans, little is known about the ways in which these reports may affect decision-making. Does the information contained in report cards make any difference in employees' knowledge about health plan benefits, attitudes toward health plans, or enrollment decisions? These are important questions, not only because they address some of the fundamental assumptions about the role of consumers in the managed competition model (Enthoven, 1993) and the role of health services research in providing accurate information about health care services (Eisenberg, 1998), but also because their answers will have important implications for how resources are allocated in producing and distributing these report cards.
This study takes advantage of a natural experiment to compare two groups of employees from the State of Minnesota Employee Group Insurance Program (SEGIP). In 1991, 1993, and 1995, the SEGIP produced report cards that used employee survey information to compare health plan choices. These report cards were distributed to SEGIP enrollees during each open enrollment period. One group of employees in the SEGIP, the University of Minnesota, did not receive the report cards, although they participated in the same enrollment process and had the same choice of health plans and the same premiums as other State employees.
Background
No one has tested the effect of report cards on knowledge, but several studies have assessed what consumers know about their health insurance coverage in the absence of report cards. Marquis, Davies, and Ware (1983) compared the results of five such studies. In each study, employees were asked if certain services were covered by their insurance. There was a great deal of misperception on the part of consumers, even though they presumably had received information from their insurers. This study also found that consumers did not know the amount they paid for premiums. More than 40 percent of the employees made errors greater than 25 percent; in one study, 31 percent of the employees made errors greater than 75 percent. There is similar evidence that elderly individuals are ill-informed about their insurance (Federa and Oettinger, 1991; Daniel Yankelovich Group, 1990).
To become informed, consumers rely heavily on information provided by friends, relatives, and neighbors when selecting health plans. The importance of lay referral in the evaluation of health care providers has been clear since the late 1950s (Rudd and Glanz, 1990). Even for doctor-shoppers, the lay network seems to be an important source of information (Rudd and Glanz, 1990). The weight that consumers give to report card information, relative to other information sources, is questioned by the results of a survey conducted by the Harvard Community Health Plan (1993). This survey found that consumers placed a lower value on the type of information contained in most report cards compared with the recommendations of friends, relatives, and coworkers (Robinson and Brodie, 1997).
Other authors have reviewed factors influencing health plan choice, including Hellinger (1982), Wilensky and Rossiter (1986), Luft and Miller (1988), and Mechanic (1989). Most of these studies have focused on issues related to adverse or favorable selection into health maintenance organizations (HMOs). Feldman et al. (1988) estimated the demand for health plans by employees in 17 firms in the Twin Cities. They found that employees were very sensitive to out-of-pocket premiums, controlling for other plan characteristics. Dowd and Feldman (1994-95) examined the relationship between the characteristics of Medicare beneficiaries and their choice of health plan in the Twin Cities during 1988. These authors' analysis found a relatively complex relationship between the enrollees' characteristics and their choice of a health plan. Mechanic's (1989) summary of the literature on health plan choice in the pre-report card era found that the continuity of the doctor-patient relationship, cost, and the special needs of the enrollee or a family member were the most important factors in the selection of a health plan. It is not clear whether the information contained in report cards is regarded as important enough, relative to these considerations, to influence the choice of a health plan.
Most of the literature assessing health plan report cards has been limited to the results of focus groups or individual interviews. The purpose of the focus groups typically has been to determine what information consumers want to have or what reporting formats are most understandable (Gibbs, Sangl, Burrus, 1996; Jewett and Hibbard, 1996; SHW, Inc., 1996; Hibbard and Jewett, 1997; Moskowitz, 1997; Robinson and Brodie, 1997; Sofaer, 1997). These studies have often been conducted as part of the development and evaluation of a specific report card. One recent study was conducted using a survey of randomly selected health care employees (Tumlinson et al., 1997). Tumlinson et al. showed that employees are interested in cost and benefit information but less so in plan performance on standardized measures of quality such as overall satisfaction. Sainfort and Booske (1996) found that comparative information on health plans, including quality measures, was used differently by subjects with different preferences and backgrounds. This study of Wisconsin State employees, presented in a simulation setting with information on hypothetical health plan choices, also found that information use influenced preferences, choice, and attitudes about the choice process. This finding indicates that, at least under simulated conditions, consumers are influenced by the type of information typically provided. The authors also found that the use of a computerized decision support tool was more influential than the spreadsheet format typically used to provide consumers with comparative health plan information.
In another recently reported cross-sectional study of health plan choice in a large employer multiple choice setting, Chernew and Scanlon (1998) found weak and often counterintuitive relationships between the choice of plans and the plans' quality ratings on a number of report card measures. They concluded that “employees do not appear to respond strongly to plan performance measures.” Additionally, these authors concluded that “the negative correlations between some plan performance measures and plan choice may reflect several potential problems with the construction of health plan report cards.” This study did not survey employees and, because of its cross-sectional design, could not directly assess the impact of report card information use on health plan choice.
Questions regarding the actual influence of report cards on employees' knowledge of, attitudes about, and choice of health plans have not been reported. In this study, we explored eight areas; specifically, we asked if survey-based report cards influenced employees':
Knowledge of health plan benefits.
Perceived knowledge of health plan benefits.
Preferences for quality over cost.
Ratings of the quality of health plans.
Consideration of switching health plans.
Rate of switching health plans.
Reasons for selecting their health plans.
Willingness to incur premium contributions.
Study Setting
To answer the study questions, we compared two groups of employees from the SEGIP. The SEGIP enrolls 57,000 employees statewide, with 144,000 covered lives including dependents. It has been identified nationally as a model for managed competition (Feldman and Dowd, 1993) and has been cited for its ability to constrain premium increases. The SEGIP is also a pioneer in the development and dissemination of consumer report card information to employees.
The health plans offered to State employees in the Twin Cities included the following:
Plan A, a staff model HMO product.
Plan B, a group model HMO product.
Plan C, a mixed independent practice association (IPA) group-model HMO product.
Plan D, an employer-sponsored HMO-like product.
Plan E, an IPA HMO product.
Plan F, a new product for 1996 that combined the networks of the Plan A and Plan B products.
Plan G, a traditional employer-sponsored preferred provider organization (PPO)-like product.
In 1996 there were a number of structural and premium changes made by health plans that created some enrollment volatility. For example, some plans offered new products with smaller provider networks. Also, the traditional lowest cost plan was replaced by a different lowest cost plan.
Each year around September 15, employees receive an enrollment packet. This packet contains a spreadsheet comparing coverage options and premiums for each of the health plans offered by the State and describes major changes that have occurred since the last enrollment. Enrollment takes place between October 1 and October 31. If an employee does not make a change by October 31, the employee (and dependents) continues in the same health plan. Comparative report cards were included in the enrollment packets in 1991, 1993, and 1995.
From 1991 to 1995, the report card went through a number of changes. The information in the 1991 report card was obtained from a telephone interview of employees, using a questionnaire adapted from the Group Health Association of America (GHAA). The 1991 report card used graphs to summarize employees' ratings about quality of care, availability of care, and quality of customer service. To evaluate this report card, individual interviews were conducted with 79 State employees after open enrollment (McGee and Hunter, 1992). Based on this evaluation, both the 1993 questionnaire and the report format were substantially revised. The employee survey was repeated in 1995, using essentially the same questionnaire and methodology as in 1993. Following extensive pretesting of data display options, the 1995 report card format was designed to include a Consumer Reports-style grid showing plan comparisons. In addition, the 1995 report card was expanded to include data measuring changes from 1993 to 1995 for quality issues that had been targeted by the State for health plan improvement.
The 1995 report card was a six-fold, three-color, 8½ by 11-inch brochure that opened to poster size, titled, “Health Plans and Medical Care: What Employees Think.” It displayed results of 10 satisfaction scales: overall satisfaction, health plan paperwork, health plan responsiveness, doctor's office customer service, technical quality, communications, after-hours access, wait times, problems with access or quality, and problems finding a satisfactory primary care doctor. Each of the two quality scales, technical quality and communications, was reported separately for adults' primary care and children's primary care. Specialty care was reported for adults and children combined. The report card did not include technical performance data as represented, for example, by Health Plan Employer Data and Information Set (HEDIS) measures.
The six health plans that were available in 1995 were compared on these satisfaction scales in two formats: stars and bars. The star display showed significant differences at the P = 0.05 level. Three stars indicated significantly above average results, two stars indicated no difference among the plans, and one star indicated significantly below average results on the particular measure. The bar graphs showed the percentage of plan enrollees who rated their satisfaction as excellent, very good, good, or fair and poor for each measure. Across several measures of satisfaction, two plans had above average ratings, and one plan had consistently below average ratings. On the overall satisfaction measure, plan ratings ranged on the dissatisfied (somewhat, very, or extremely) response from 4 percent (3 plans) to 10 percent (1 plan). The range of the extremely satisfied response was 14 percent (1 plan) to 28 percent (1 plan).
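The report card does not say which statistical procedure produced the star categories, so the sketch below should be read only as one plausible way to assign them: each plan's share of favorable ratings is compared with that of all other plans combined using a two-proportion z-test at p = 0.05. The plan names and counts are invented for illustration.

```python
# Purely illustrative sketch of assigning star categories; the report card's
# actual procedure is not described. All plan names and counts are made up.
from statsmodels.stats.proportion import proportions_ztest

# (favorable responses, total respondents) per plan -- hypothetical numbers
plans = {
    "Plan A": (410, 500),
    "Plan B": (330, 500),
    "Plan C": (395, 500),
}

total_fav = sum(fav for fav, n in plans.values())
total_n = sum(n for fav, n in plans.values())

for name, (fav, n) in plans.items():
    rest_fav, rest_n = total_fav - fav, total_n - n
    stat, p = proportions_ztest([fav, rest_fav], [n, rest_n])
    if p < 0.05:
        stars = "***" if fav / n > rest_fav / rest_n else "*"
    else:
        stars = "**"  # not significantly different from the other plans
    print(f"{name}: {stars} (p = {p:.3f})")
```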
The report card also contained background information, including why and how the survey was done, how to read the graphs and interpret the results, and who sponsored the project.
Although the relative cost of health plans was not part of the report card, the employee could judge from accompanying material that the lowest rated plan was usually more costly than other options, as measured by employees' annual premium contributions. The report card was mailed to all State employees except those who worked for the University of Minnesota. Legally, the University of Minnesota is an autonomous system. Even though the State allows the University to participate in the SEGIP, it does not cover the cost of the enrollment materials for University employees. The University uses the same plan spreadsheets as the SEGIP and mails them to its employees at University expense. Because of budget constraints, the University has chosen not to purchase and distribute the State's report card to its employees. In this article, we refer to the University employees as the control group and the State employees as the intervention group.
Methods
Study Design
We used a quasi-experimental, nonequivalent control group design (Campbell and Stanley, 1963) to address the study questions (Table 1). Questionnaire data were collected by telephone from intervention group employees and control group employees before and after the enrollment period.
Table 1. Study Design, by Enrollment Time and Study Group: Minnesota, 1995.
Study Group | Pre-Enrollment | Enrollment | Post-Enrollment |
---|---|---|---|
Intervention | O¹ | X² | O |
Intervention | — | X | O |
Control | O | — | O |
Control | — | — | O |

¹ O is the administration of a survey.
² X is the distribution of the report card to intervention group employees.
SOURCE: Knutson et al., Minneapolis, 1997.
Sample
We stratified the control and intervention samples by employees with family coverage and those with single coverage. One sample of each coverage type from the intervention and the control groups (a total of four samples) was surveyed before the open enrollment period. To allow us to evaluate a possible pretest effect, the remaining four samples were not surveyed before enrollment. All eight samples were surveyed at post-enrollment.
We defined eligible employees as those who worked full time, because only these employees qualified for health coverage. Further, employees had to work and reside in the seven-county Minneapolis-St. Paul metropolitan area.
We excluded faculty members from the intervention (State) and control (University) groups to avoid potentially large differences in educational levels. If an employee's employment status changed during the study period, he or she was dropped from the study. Additionally, employees who changed from a single policy to a family policy or vice versa were eliminated. Control group employees whose spouse was employed by the State were excluded because those households would have received a report card.
There was an error in the initial sample identification for employees in the intervention group. The pre-enrollment sample unintentionally excluded individuals who had switched plans in 1995 and also excluded those who had been hired between April 1994 and March 1995. To correct for this problem, we added questions to the post-enrollment-only survey to provide as much information as could be validly obtained about the pre-enrollment characteristics of the employees. There was no way, however, to obtain pre-enrollment knowledge levels and attitudes in the post-enrollment survey for these missing employees. We included a variable in the multi-variate analyses indicating whether a respondent had switched plans in 1995 and a variable that captured the length of his or her employment. This analytic approach helped to control for these potential differences.

To determine if the pre-enrollment survey had sensitized employees by drawing their attention to consumer information during the enrollment process, we compared the intervention respondents who had been surveyed only at post-enrollment with intervention respondents who had been surveyed both at pre-enrollment and post-enrollment. We performed the same analysis for respondents in the control group. We found no statistically significant differences between respondents who had been surveyed at pre-enrollment and their counterparts (intervention or control) who had been surveyed only at post-enrollment, allowing us to conclude that there had been no pretest sensitization.
Data Sources
The primary source of data was telephone interviews of State employees. In addition, we obtained administrative data from the SEGIP. A set of independent variables was developed based on the theoretical and empirical literature already described. These variables, included as survey items, were:
Satisfaction with 1995 health plan.
Ratings of cost and quality of available health plans.
Perceived knowledge about health plan options.
Actual knowledge of health plan characteristics.
Ratings of the importance of health plan and provider characteristics.
Physician attachment.
Proclivity to change plans.
Attention to own health.
Past utilization (employee and covered household members).
Expected utilization (employee and covered household members).
Importance of the decision to select a health plan.
Factors influencing the selection of the 1996 plan.
Information-seeking behavior in shopping for a general service.
Information-seeking behavior in selecting the 1996 health plan.
General health status (employee and covered household members).
Chronic illness burden (employee and covered household members).
Use of and opinion regarding health plan comparison materials.
Employee and covered household demographics.
The dependent variables reported were:
Change in knowledge of health plan benefits from pre-enrollment to post-enrollment.
Change in perceived level of knowledge of health plan benefits from pre-enrollment to post-enrollment.
Change in the relative importance of cost and quality health plan attributes.
Change in ratings of the quality of employee's own plan.
Change in ratings of the quality of other plans.
Influence on the degree to which switching plans was considered.
Influence on employees to switch health plans or stay with their current plan.
Change in employees' premium contribution.
Refer to Table 2 for background items for the dependent variables.
Table 2. Background Items for the Dependent Variables, by Coverage Type and Study Group: Pre-Enrollment and Post-Enrollment.
Item | Single: Intervention | Single: Control | Family: Intervention | Family: Control |
---|---|---|---|---|
Percent | | | | |
Knowledge of Health Plan Benefits | | | | |
Health Education Benefit | | | | |
Pre-Enrollment | ||||
Yes | 64 | 56 | 68 | 63 |
No | 10 | 12 | 9 | 10 |
Don't Know | 26 | 32 | 24 | 27 |
Post-Enrollment | ||||
Yes | 57 | 63 | 60 | 69 |
No | 7 | 6 | 9 | 5 |
Don't Know | 36 | 31 | 32 | 26 |
Urgent Care Copayments | ||||
Pre-Enrollment | ||||
All of the Cost | 45 | 44 | 55 | 54 |
Some | 34 | 34 | 33 | 35 |
None | 1 | 2 | 1 | 1 |
It Depends | 0 | 0 | 0 | 0 |
Don't Know | 20 | 21 | 10 | 10 |
Post-Enrollment | ||||
All of the Cost | 40 | 41 | 52 | 55 |
Some | 39 | 41 | 35 | 35 |
None | 1 | 1 | 1 | 1 |
It Depends | 1 | 1 | 1 | 1 |
Don't Know | 19 | 16 | 12 | 7 |
Hospital Coverage Benefit | ||||
Pre-Enrollment | ||||
All of the Cost | 50 | 48 | 65 | 60 |
Some | 35 | 37 | 26 | 32 |
None | 0 | 0 | 0 | 0 |
It Depends | 0 | 0 | 0 | 0 |
Don't Know | 15 | 15 | 8 | 8 |
Post-Enrollment | ||||
All of the Cost | 54 | 52 | 67 | 66 |
Some | 33 | 36 | 25 | 27 |
None | 0 | 0 | 0 | 0 |
It Depends | 1 | 1 | 1 | 1 |
Don't Know | 12 | 11 | 7 | 7 |
Referral Requirement to See Specialist | ||||
Pre-Enrollment | ||||
Yes | 71 | 78 | 82 | 84 |
No | 14 | 10 | 10 | 8 |
Don't Know | 15 | 11 | 8 | 7 |
Post-Enrollment | ||||
Yes | 76 | 76 | 77 | 80 |
No | 14 | 14 | 16 | 12 |
Don't Know | 11 | 10 | 7 | 8 |
Prescription Coverage Benefit | ||||
Pre-Enrollment | ||||
Yes | 65 | 71 | 71 | 73 |
No | 9 | 8 | 7 | 5 |
Don't Know | 26 | 21 | 22 | 22 |
Post-Enrollment | ||||
Yes | 66 | 65 | 71 | 68 |
No | 13 | 13 | 12 | 11 |
Don't Know | 21 | 22 | 17 | 20 |
Perceived Knowledge of Health Plan Benefits | ||||
Pre-Enrollment | ||||
A Great Deal | 6 | 5 | 4 | 9 |
A Fair Amount | 35 | 37 | 39 | 43 |
A Little | 36 | 39 | 39 | 35 |
Almost Nothing or Nothing At All | 24 | 18 | 18 | 12 |
Post-Enrollment | ||||
A Great Deal | 5 | 5 | 7 | 8 |
A Fair Amount | 40 | 38 | 45 | 43 |
A Little | 34 | 37 | 32 | 35 |
Almost Nothing or Nothing At All | 21 | 19 | 15 | 14 |
Preferences for Quality Versus Cost | | | | |
Importance of Customer Service Quality Relative to Premium | | | | |
Pre-Enrollment | ||||
Quality More Important | 33 | 30 | 32 | 27 |
No Difference | 46 | 41 | 49 | 46 |
Quality Less Important | 22 | 28 | 19 | 27 |
Post-Enrollment | ||||
Quality More Important | 28 | 23 | 31 | 29 |
No Difference | 48 | 50 | 49 | 42 |
Quality Less Important | 24 | 27 | 20 | 29 |
Importance of Waiting for Appointments Relative to Premium | ||||
Pre-Enrollment | ||||
Appointment More Important | 38 | 30 | 34 | 31 |
No Difference | 40 | 45 | 45 | 43 |
Appointment Less Important | 23 | 25 | 21 | 26 |
Post-Enrollment | ||||
Appointment More Important | 31 | 29 | 30 | 28 |
No Difference | 48 | 47 | 48 | 48 |
Appointment Less Important | 21 | 24 | 22 | 24 |
Quality of Health Plans, Mean Ratings¹ | | | | |
Pre-Enrollment | ||||
Own 1995 Health Plan | 7.8 | 7.9 | 7.8 | 7.8 |
Other 1995 Health Plans | 6.8 | 6.8 | 7.0 | 6.9 |
Post-Enrollment | ||||
Own 1996 Health Plan | 7.9 | 7.9 | 7.8 | 7.9 |
Other 1996 Health Plans | 6.9 | 6.9 | 7.0 | 7.0 |
Rate of Switching Plans | ||||
Switched Plans 1995-96 | ||||
Yes, Switched | 19 | 13 | 20 | 17 |
No, Did Not Switch | 81 | 87 | 80 | 83 |
Considered Switching Plans 1995-96 (Post-Enrollment) | ||||
A Lot | 4 | 4 | 5 | 4 |
A Fair Amount | 11 | 10 | 12 | 9 |
A Little | 28 | 26 | 23 | 26 |
Not at All | 34 | 40 | 29 | 36 |
Don't Know | 0 | 0 | 0 | 0 |
Reasons for Selecting 1996 Health Plan | ||||
Importance of Decision (Post-Enrollment) | ||||
Extremely Important | 22 | 20 | 24 | 27 |
Very Important | 32 | 31 | 40 | 41 |
Somewhat Important | 28 | 29 | 24 | 19 |
Not Very Important | 11 | 14 | 6 | 7 |
Not at All Important | 6 | 6 | 4 | 5 |
Don't Know | 1 | 2 | 1 | 1 |
Quality in Decision (Post-Enrollment) | ||||
Very Big Reason | 29 | 28 | 29 | 32 |
Big Reason | 45 | 49 | 46 | 50 |
Small Reason | 7 | 7 | 5 | 5 |
Not a Reason | 18 | 16 | 19 | 13 |
Don't Know | 1 | 0 | 1 | 0 |
Cost in Decision (Post-Enrollment) | ||||
Very Big Reason | 23 | 24 | 25 | 23 |
Big Reason | 36 | 36 | 37 | 39 |
Small Reason | 15 | 15 | 13 | 12 |
Not a Reason | 25 | 25 | 24 | 25 |
Don't Know | 1 | 0 | 0 | 0 |
¹ Scale from 1 (low) to 10 (high).
SOURCE: Knutson et al., Minneapolis, 1997.
Administrative data included the employee's date of birth, gender, date of hire, the health plan in which he or she was enrolled in 1994, 1995, and 1996, and whether the employee had selected family or single coverage for each of these years. We also obtained the 1995 and 1996 employee premium rates for each health plan (Table 3).
Table 3. State and University Employees' Annual Premium Contributions, by Coverage Type and Health Plan: 1995 and 1996.
Health Plan | Single: 1995 | Single: 1996 | Family: 1995 | Family: 1996 |
---|---|---|---|---|
Dollars per Year | | | | |
Plan A | $129 | $110 | $378 | $438 |
Plan B (1995) | 492 | NA | 1,311 | NA |
Plan F (1996) | NA | 272 | NA | 871 |
Plan C | 176 | 209 | 690 | 919 |
Plan D | 393 | 306 | 911 | 1,009 |
Plan E | 0 | 0 | 252 | 572 |
Plan G | NA | 0 | NA | 247 |
NOTE: NA = Not applicable.
SOURCE: Knutson et al., Minneapolis, 1997.
Analysis
Respondents in the intervention and control groups were compared on all variables included in the questionnaire. Significant differences between the groups were found with respect to age, gender, educational level, income, presence of chronic medical condition in family, whether the employee (or spouse) worked in a medical setting, and 1995 health plan. These characteristics, together with the employees' length of enrollment and whether they switched health plans in 1995, were included in subsequent multi-variate analyses as control variables. For a description of selected characteristics, refer to Table 4.
Table 4. Selected Employee Characteristics: Pre-Enrollment.
Characteristic | Single: Intervention (n = 396) | Single: Control (n = 385) | Family: Intervention (n = 424) | Family: Control (n = 417) |
---|---|---|---|---|
Mean Age in Years (SD) | 45.6 (9.2) | *41.2 (10.4) | 43.9 (8.5) | *41.6 (8.6) |
Percent | ||||
Sex | ||||
Male | 35.1 | *25.4 | 60.4 | *42.7 |
Female | 64.9 | 74.6 | 39.6 | 57.3 |
Education | ||||
High School Graduate or Less | 22.3 | *8.1 | 21.1 | *9.3 |
Some College or Vocational | 24.8 | 25.5 | 26.2 | 26.0 |
College Graduate | 32.6 | 35.9 | 26.6 | 31.5 |
Post-Graduate | 20.4 | 30.5 | 25.9 | 33.2 |
Income | ||||
Less Than $20,000 | 1.3 | *6.6 | 0.5 | 3.0 |
$20,000-$39,999 | 43.9 | 49.3 | 21.6 | 25.1 |
$40,000-$59,999 | 28.9 | 21.3 | 42.0 | 33.0 |
$60,000-$79,999 | 14.5 | 13.6 | 24.0 | 22.1 |
$80,000 or More | 11.4 | 9.1 | 11.9 | 16.9 |
Married | ||||
Yes | 34.3 | 28.9 | 91.3 | 89.4 |
No | 65.7 | 71.1 | 8.7 | 10.6 |
Children Under 25 Years | ||||
None | 79.3 | 85.2 | 17.0 | 19.2 |
1 Child | 11.8 | 6.0 | 23.4 | 27.3 |
2 Children | 6.3 | 7.0 | 40.1 | 36.7 |
3 or More Children | 2.6 | 1.8 | 19.5 | 16.8 |
Time Employed | ||||
2 Years or Less | 3.3 | *16.6 | 2.8 | *12.5 |
3 Years or More | 96.7 | 83.5 | 97.1 | 87.5 |
Employee or Spouse Work in Health Care | ||||
Yes | 10.4 | *40.4 | 14.6 | *43.9 |
No | 89.6 | 59.6 | 85.4 | 56.1 |
Employee Health Status | ||||
Excellent | 19.8 | *27.0 | 16.5 | *31.2 |
Very Good | 41.0 | 40.5 | 45.3 | 43.8 |
Good | 29.9 | 27.0 | 31.1 | 21.2 |
Fair or Poor | 9.4 | 5.5 | 7.1 | 3.9 |
Anyone in Family Hospitalized During Past Year | ||||
Yes | 12.4 | 13.5 | 26.5 | 24.8 |
No | 87.6 | 86.5 | 73.5 | 75.2 |
1995 Health Plan | ||||
Plan A | 50.5 | 41.6 | 61.3 | 52.0 |
Plan B | 9.6 | 8.3 | 8.5 | 7.2 |
Plan C | 9.8 | 4.9 | 7.3 | 5.3 |
Plan D | 17.4 | 11.4 | 16.0 | 15.4 |
Plan E | 12.6 | 33.8 | 6.8 | 20.1 |
Satisfaction With Health Plan | ||||
Very Satisfied | 38.7 | 39.3 | 42.8 | 45.0 |
Satisfied | 52.7 | 54.1 | 52.0 | 48.9 |
Dissatisfied or Very Dissatisfied | 8.6 | 6.6 | 5.3 | 6.0 |
Remember Seeing the Report Card | ||||
Yes | 67.0 | NA | 72.0 | NA |
No | 23.0 | NA | 22.0 | NA |
Don't Know | 3.0 | NA | 2.0 | NA |
Missing | 7.0 | NA | 5.0 | NA |
How Much of Report Card Was Read | ||||
Most or All | 31.0 | NA | 36.0 | NA |
Parts of It | 19.0 | NA | 18.0 | NA |
Just Glanced | 14.0 | NA | 16.0 | NA |
Never Really Looked | 2.0 | NA | 2.0 | NA |
Don't Know | 0.0 | NA | 0.0 | NA |
Missing | 7.0 | NA | 5.0 | NA |
Not Applicable | 26.0 | NA | 24.0 | NA |
* p ≤ 0.05.
NOTE: SD is standard deviation. NA is not applicable.
SOURCE: Knutson, et al., 1997.
We initially used bivariate analyses to test for differences between the intervention and control groups. We then used multi-variate analyses when the bivariate analyses revealed statistically significant differences.
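As a rough illustration of this two-step strategy, the sketch below cross-tabulates study group against a binary outcome, tests the table with a chi-square statistic, and, only when that bivariate test is significant, fits a logistic regression with control variables. The data frame and all column names (`outcome`, `intervention`, `age`, `female`, `plan_1995`) are hypothetical placeholders, not the study's actual file.

```python
# A minimal sketch of the bivariate-then-multivariate strategy described above,
# assuming a pandas DataFrame `df` with one row per respondent.
# `outcome` and `intervention` are 0/1 indicators; all names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

def bivariate_then_multivariate(df: pd.DataFrame) -> None:
    # Step 1: bivariate test of study group against the outcome.
    table = pd.crosstab(df["intervention"], df["outcome"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

    # Step 2: refit with control variables only when the bivariate test is significant.
    if p < 0.05:
        model = smf.logit(
            "outcome ~ intervention + age + female + C(plan_1995)", data=df
        ).fit(disp=False)
        print(model.summary())
```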
Results
There were 3,573 completed telephone interviews. The response rate was 74 percent for the pre-enrollment survey and 85 percent for the post-enrollment survey. The number of respondents among the samples ranged from 385 to 431.
Non-respondents were compared with respondents on age, gender, and health plan enrollment (available from administrative data); no statistically significant differences were found.
Knowledge of Health Plan Benefits
We used five measures of health plan knowledge:
Whether the employee's health plan offers health education programs (yes or no).
How much the employee's health plan pays for urgent care (all, some, or none).
How much the employee's health plan pays for hospitalizations (all, some, or none).
Whether the employee's health plan requires a referral to see a specialist (yes or no).
Whether the five health plans offer the same or different coverage for prescriptions (different or same).
Based on a chi-square analysis comparing the intervention and the control employees' knowledge of each benefit, there was no discernible effect of the report card on the employee's absolute knowledge at post-enrollment; that is, there was no difference in absolute knowledge between the intervention and control groups (data not reported here).
Using the chi-square statistic, we compared the intervention with the control employees for any change in knowledge from pre-enrollment to post-enrollment of each health plan benefit. Changes in knowledge were classified as better, worse, or no change. As can be seen in Table 5, there was no discernible effect of the report card on changes in knowledge scores, either for employees with single coverage or employees with family coverage. The changes for the intervention employees were not different from the changes for the control employees for knowledge of any benefit. Between two-thirds and three-quarters of knowledge scores were unchanged for any of the five items. Approximately equal proportions got better and got worse, perhaps reflecting random variation.
Table 5. Change in Employees' Knowledge of Benefits, by Coverage Type and Study Group: Pre-Enrollment to Post-Enrollment.
Benefit | Single: Intervention | Single: Control | Family: Intervention | Family: Control |
---|---|---|---|---|
Percent | ||||
Education Programs | ||||
Better | 17.3 | 21.9 | 15.6 | 15.9 |
Unchanged | 71.1 | 68.2 | 72.4 | 76.4 |
Worse | 11.7 | 9.9 | 12.0 | 7.7 |
Urgent Care | ||||
Better | 19.3 | 20.9 | 17.0 | 16.6 |
Unchanged | 58.4 | 56.7 | 63.0 | 68.0 |
Worse | 22.3 | 22.4 | 20.0 | 15.4 |
Hospitalization | ||||
Better | 13.4 | 12.8 | 13.9 | 12.8 |
Unchanged | 69.5 | 74.7 | 72.6 | 77.4 |
Worse | 17.0 | 12.5 | 13.4 | 9.9 |
Referral to Specialist | ||||
Better | 17.5 | 13.9 | 10.8 | 13.0 |
Unchanged | 65.7 | 72.2 | 75.7 | 75.7 |
Worse | 16.8 | 13.9 | 13.4 | 11.3 |
Pharmacy Coverage | ||||
Better | 14.2 | 11.8 | 12.9 | 9.7 |
Unchanged | 69.2 | 67.6 | 68.3 | 73.4 |
Worse | 16.5 | 20.5 | 18.8 | 16.9 |
SOURCE: Knutson et al., Minneapolis, 1997.
Perceived Knowledge of Health Plan Benefits
Even if knowledge levels as measured by the survey's knowledge questions did not change, employees who received the report card may have been more likely to perceive that their knowledge had changed. We measured change in perceived knowledge with the following item, asked at both pre-enrollment and post-enrollment:
“Overall, how much do you feel that you know about the five health plans offered by the [State/University] to employees in the Twin Cities Metro area and how these plans compare with each other? 1 = a great deal, 2 = a fair amount, 3 = a little, 4 = almost nothing or nothing at all” (pre-enrollment version).
A gain in perceived knowledge was defined as responding to a higher category at post-enrollment compared with pre-enrollment, for example, if a respondent reported that he or she knew “a little” at pre-enrollment and a “fair amount” at post-enrollment.
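A small sketch of deriving this change category from the 1-4 response codes quoted above (1 = a great deal through 4 = almost nothing or nothing at all), so that a numerically lower code at post-enrollment counts as a gain. The column names `pre_knowledge` and `post_knowledge` are hypothetical.

```python
import pandas as pd

# Response codes follow the item quoted above:
# 1 = a great deal, 2 = a fair amount, 3 = a little, 4 = almost nothing or nothing at all.
def knowledge_change(row: pd.Series) -> str:
    if row["post_knowledge"] < row["pre_knowledge"]:
        return "gain"  # a lower code means a higher perceived-knowledge category
    if row["post_knowledge"] > row["pre_knowledge"]:
        return "loss"
    return "same"

# Example from the text: "a little" (3) at pre-enrollment, "a fair amount" (2) afterward.
example = pd.Series({"pre_knowledge": 3, "post_knowledge": 2})
print(knowledge_change(example))  # -> gain
```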
There was a significant difference in the change in perceived knowledge between the intervention and control employees with single coverage (chi-square 8.5, p < 0.05) but not for employees with family coverage. At the bivariate level, intervention employees with single coverage were more likely to report a gain in perceived knowledge (Table 6).
Table 6. Change in Employees' Perceived Knowledge, by Coverage Type and Study Group: Pre-Enrollment to Post-Enrollment.
Direction of Change | Single: Intervention | Single: Control | Family: Intervention | Family: Control |
---|---|---|---|---|
Percent | ||||
Gain in Knowledge | 24.9 | 16.7 | 25.1 | 21.3 |
Stayed the Same | 52.4 | 60.2 | 55.7 | 55.6 |
Loss in Knowledge | 22.6 | 23.2 | 19.2 | 23.2 |
SOURCE: Knutson et al., Minneapolis, 1997.
This finding, however, may have been attributable to previously described differences in characteristics between the intervention and control groups. To explore this initial result further, we conducted a pairwise logistic regression analysis. The dependent variable indicated whether an employee reported a gain in perceived knowledge; employees reporting a gain were first compared with those whose perceived knowledge stayed the same and then with those reporting a decrease. The control variables were the characteristics that differed between the intervention and control samples and those that corrected for the sampling error: age, gender, educational level, income, presence of chronic medical condition in family, whether the employee (or spouse) worked in a medical setting, 1995 health plan, whether the employee had switched health plans the previous year, and the number of years the employee had been with the employer. Binary variables were used for the health plan in 1996 (with Health Plan A used as the reference group) and for the intervention or control group (with the control group used as the reference group). Results of the logistic regressions confirmed the initial finding: Intervention employees with single coverage were twice as likely to report a gain in perceived knowledge (odds ratio [OR] 1.93, 95-percent confidence interval [CI] 1.23, 3.04).
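For concreteness, here is a minimal sketch of one of the pairwise logistic regressions described above (gain in perceived knowledge versus staying the same), written with the statsmodels formula interface. The data frame and every column name are hypothetical placeholders for the study's variables; the specification simply mirrors the control variables listed in the text, with Plan A as the reference plan.

```python
# Sketch only: hypothetical column names standing in for the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_gain_vs_same(df: pd.DataFrame):
    # Keep only the two outcome categories compared in this pair of the analysis.
    sub = df[df["change"].isin(["gain", "same"])].copy()
    sub["gained"] = (sub["change"] == "gain").astype(int)

    model = smf.logit(
        "gained ~ intervention + age + female + education + income"
        " + chronic_condition + medical_worker"
        " + C(plan_1996, Treatment(reference='Plan A'))"
        " + switched_1995 + years_employed",
        data=sub,
    ).fit(disp=False)

    # Express coefficients as odds ratios with 95-percent confidence intervals.
    odds_ratios = np.exp(model.params).rename("OR")
    ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
    print(pd.concat([odds_ratios, ci], axis=1))
    return model
```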
Preferences for Quality Versus Cost
Because report cards focus on measures of health plan quality, we hypothesized that receiving a report card could change the recipient's relative weighting of the importance of health plan quality characteristics compared with cost. In other words, the intervention group might place increased importance on quality attributes relative to cost at post-enrollment when compared with the control group. To test the hypothesis that there was a greater shift in quality ratings relative to cost ratings, we examined the change in the relative importance of the two quality attributes represented in the report card and one cost attribute. If our hypothesis were true, we would see a greater increase in ratings of the quality attributes from pre-enrollment to post-enrollment than in the rating of the cost attribute.
There were nine health plan attributes rated for importance in the original questionnaire. In this analysis we report only the two quality attributes that were directly related to the content of the report card and the one attribute related to cost. Employees were asked, “How important is:
The quality of customer service you get from your health plan?
The length of time between making an appointment and actually getting in to see the doctor?
Keeping the amount of the health insurance premium that you personally have to pay as small as possible?”
Although one could argue that the two quality attributes are minor compared with premiums, respondents rated each attribute independently of the other attributes and therefore were not forced to trade off preferences.
Responses to these questions were reported on a five-point Likert-type scale where 1 = extremely important, 2 = very important, 3 = somewhat important, 4 = not very important, and 5 = not at all important. Because of the infrequent use of the last three categories (“somewhat important,” “not very important,” and “not at all important”), these responses were combined for the bivariate analyses.
As shown in Table 7, there were no differences between intervention and control employees with single coverage. Intervention employees with family coverage, however, were more likely to report an increase in the relative importance of the quality of customer service compared with cost from pre-enrollment to post-enrollment (chi-square = 7.7, p < 0.05).
Table 7. Change in Preferences for Quality and Cost, by Coverage Type and Study Group: Pre-Enrollment to Post-Enrollment.
Change in Attribute | Single: Intervention | Single: Control | Family: Intervention | Family: Control |
---|---|---|---|---|
Percent | ||||
Customer Service Versus Cost | ||||
Cost More Important | 13.7 | 11.8 | 13.1 | 15.7 |
Unchanged | 47.5 | 50.8 | 42.8 | 49.5 |
Customer Service More Important | 38.8 | 37.4 | 44.2 | 34.8 |
Time Between Making Appointment and Visit Versus Cost | ||||
Cost More Important | 11.9 | 12.7 | 11.5 | 10.6 |
Unchanged | 52.8 | 50.7 | 49.6 | 52.5 |
Time More Important | 35.3 | 36.7 | 38.8 | 36.9 |
SOURCE: Knutson et al., Minneapolis, 1997.
As in the previous analysis, this finding may have been attributable to differences in the characteristics of the intervention and control populations. We again conducted a pairwise logistic regression analysis. The dependent variable captured the change in the relative importance of cost versus customer service from pre-enrollment to post-enrollment; employees for whom cost became relatively more important were first compared with those whose relative ratings were unchanged and then with those for whom customer service became relatively more important. The control variables were the same as in the previous analysis. In the multi-variate analysis, there was no difference between intervention and control employees with family coverage (increase in cost rating versus stayed the same: OR 1.11, 95-percent CI 0.79, 1.58; increase in customer service rating versus stayed the same: OR 1.02, 95-percent CI 0.60, 1.74).
Ratings of the Quality of Health Plans
Because the purpose of the report card is to provide information on comparative health plan quality, it should have an effect on employees' ratings of health plan quality. Specific dimensions of consumer attitudes regarding health plan quality have been addressed in the literature (Ware and Snyder, 1975). The concept of health care quality, however, is “so broad and multifaceted that the issue becomes obfuscated and confused” (O'Connor and Bowers, 1990). But ultimately the many dimensions of quality and the preferences of specific consumers can be expressed through the employee's rating of each health plan's overall quality. If there are differences in the health plans presented in the report card that are meaningful to employees, the differences should influence their ratings of the plans.
A number of effects of the report card on employees' ratings of health plan quality are possible. For those who believed that the quality of the available plans differed greatly, the report card could reveal greater similarity than expected. For those who viewed all plans other than their own as lower in quality, the report card could demonstrate that other plans were more similar to their own plan than expected. For those who believed that the quality of all available plans was about the same, the report card could reveal greater variation in quality than expected. Any of these effects can be detected as differences in the relative rating changes between the control and intervention groups.
To measure this possible effect, we asked subjects to rate the quality of each health plan on a scale from 1 to 10, with 1 being the lowest quality and 10 the highest. The question “Based on whatever impressions you have, please rate the overall quality of (name of health plan),” was asked about each of the health plans.
Most employees had no personal experience (and none had recent experience) with plans other than their own. Therefore, their opinions about the quality of the other plans were more likely to be influenced by the report card than were their opinions about their own plan. To test the effect of the report card on employees' ratings of health plans, we analyzed the difference between intervention and control employees in the magnitude and direction of changes in the mean quality ratings between pre-enrollment and post-enrollment, differentiating the plan only on whether or not it was the employee's plan in 1995. This analysis included only those employees who rated each of the plans in both the pre-enrollment and the post-enrollment surveys. Mean quality ratings for the pre-enrollment and the post-enrollment survey are displayed in Table 8.
Table 8. Employee Ratings of the Quality of Their Own and Other Health Plans, by Coverage Type and Study Group: Pre-Enrollment and Post-Enrollment.
Study Period | Single: Intervention | Single: Control | Family: Intervention | Family: Control |
---|---|---|---|---|
Mean Score¹ | | | | |
Pre-Enrollment | ||||
Quality of Own Plan | 7.84 | 7.86 | 7.84 | 7.78 |
Quality of Other Plans | 6.79 | 6.84 | 7.04 | 6.87 |
Post-Enrollment | ||||
Quality of Own Plan | 7.90 | 7.93 | 7.85 | 7.89 |
Quality of Other Plans | 6.93 | 6.93 | 6.99 | 6.99 |
¹ Scale from 1 (low quality) to 10 (high quality).
SOURCE: Knutson et al., Minneapolis, 1997.
We found no statistically significant differences between the intervention and control groups on mean quality ratings for employee's own 1995 plan at pre-enrollment. Respondent ratings for 1995 plans other than their own were lower than ratings for their own plan, as might be expected, and again there was no difference between the intervention and control mean quality ratings. Ratings of other plans and their own plans did not change significantly at post-enrollment, and there was no statistically significant difference between the intervention and control groups.
We then conducted a subanalysis of quality ratings, limited to those employees who switched plans in 1996 (n = 282). To define the employees' ratings of their own plan for this analysis, we averaged the ratings of their 1995 and 1996 plan selections. Although it can be argued that employees who switched health plans in 1996 had not had any significant experience with their 1996 plan, an attempt to rationalize their selection may have influenced their ratings of their 1996 plan.
When we excluded both the 1995 and 1996 plans of employees who switched plans from the pre-enrollment analysis, the mean quality ratings for plans other than their own did not change significantly. The mean ratings for employees with family coverage were 6.95 for the intervention group and 6.82 for the control group; for those with single coverage, the mean ratings were 6.78 for the intervention group and 6.81 for the control group.
Similarly, at post-enrollment the overall ratings of other plans remained unchanged when the employee's 1995 and 1996 plans were excluded from the definition of other plan. There was no statistically significant difference between the intervention and the control groups. At post-enrollment, the mean ratings for employees with family coverage were 6.87 for the intervention group and 6.91 for the control group; for those with single coverage, the mean ratings at post-enrollment were 6.85 for the intervention group and 6.86 for the control group.
In addition to looking at ratings of quality at pre-enrollment and post-enrollment, we looked at the change in ratings from pre-enrollment to post-enrollment. For employees who switched plans, mean quality ratings of other and own plans at post-enrollment did not significantly change from pre-enrollment ratings.
These findings indicate that the report card did not influence employees' ratings of health plan overall quality. This result was found for respondent ratings of their own 1995 health plan and, most importantly, for other plans. It was consistent for those who switched plans in 1996 and for those who remained with their 1995 plan.
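The article does not name the specific test behind these change-score comparisons, so the sketch below shows one straightforward way to make them: compute each employee's post-enrollment rating minus the pre-enrollment rating and compare the mean change between groups with a two-sample t-test. All column names are hypothetical.

```python
# Sketch of a change-score comparison for the quality ratings; column names are
# hypothetical and the choice of a t-test is an assumption, not the study's stated method.
import pandas as pd
from scipy.stats import ttest_ind

def compare_rating_change(df: pd.DataFrame, rating: str = "other_plans") -> None:
    # Keep only employees who rated the plans in both surveys, as in the analysis.
    both = df.dropna(subset=[f"pre_{rating}", f"post_{rating}"]).copy()
    both["change"] = both[f"post_{rating}"] - both[f"pre_{rating}"]

    intervention = both.loc[both["intervention"] == 1, "change"]
    control = both.loc[both["intervention"] == 0, "change"]

    t_stat, p_value = ttest_ind(intervention, control)
    print(f"mean change, intervention: {intervention.mean():.2f}")
    print(f"mean change, control:      {control.mean():.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```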
Degree to Which Non-Switchers Considered Switching
An analysis of plan switchers is presented later, but here we were interested in whether the report card would influence respondents to consider switching even if they did not actually switch plans. The post-enrollment survey included an item that asked respondents who had not switched plans, "During the past open enrollment, how much did you consider switching to another plan?" We conducted bivariate analysis comparing the control and intervention groups and found a significant difference only for those with single coverage (chi-square = 8.64, p = 0.034). This relationship disappeared in follow-up logistic regression, in which we dichotomized the variable, combining the response "a lot" with "a fair amount" and combining "a little" with "not at all." Variables that were strongly related to the degree to which switching was considered, however, were membership in one particular plan (OR = 2.93, p = 0.01) and, not surprisingly, satisfaction with the current health plan. Those who reported any response other than being "very satisfied" with their health plan were more than four times as likely to have considered switching a lot or a fair amount during open enrollment (OR = 4.32, p = 0.001), compared with those who reported "very satisfied."
Rate of Switching Health Plans
We addressed the question of whether the report card may have influenced the rate of switching health plans during open enrollment. There are several reasons why people switch health plans. One nonmedical reason is that people change jobs. Factors influencing switching related to medical care, but not directly related to plan performance, may include switching to keep an existing physician who has transferred to a competing health plan. Switching may also occur because of personal dissatisfaction with the current plan's performance or the awareness of good performance of an alternative health plan.
The report card could increase switching if it reinforces a negative perception of the current plan or it demonstrates the superior performance of an alternative plan. It is this latter type of influence that seems most plausible. That is, the report card offers employees an opportunity to compare health plans that they have experienced with plans that they have not experienced.
Bivariate analysis showed that the intervention group with family coverage did not switch more frequently than the control group (20.0 percent versus 17.3 percent). The intervention group with single coverage, however, did switch more frequently than the control group (19.2 percent versus 12.7 percent, p < 0.05). This difference was not supported in a logistic regression analysis that included the standard control variables differentiating the intervention from the control group and a variable indicating the employee's level of satisfaction with his or her 1995 health plan. In this multi-variate analysis, however, males with single coverage were more likely to switch than females. Also, for employees with single coverage, 1995 enrollment in two of the health plans was strongly related to the likelihood of switching (Health Plan C, OR = 5.3, 95 percent CI = 2.50, 11.20; Health Plan D, OR = 15.4, 95 percent CI = 8.49, 28.75). For employees with family coverage, the only factor that was related to switching was 1995 enrollment in these same plans (Health Plan C, OR = 13.8, 95 percent CI = 6.50, 29.52; Health Plan D, OR = 44.5, 95 percent CI = 24.72, 83.30).
For employees with single coverage, the level of satisfaction with their 1995 plan was strongly related to the likelihood of switching plans. Those who were dissatisfied or very dissatisfied were 10 times more likely to switch plans than those who were very satisfied (OR = 10.1, 95 percent CI = 4.40, 24.09). Even those who reported that they were satisfied were more likely to switch (OR = 2.1, 95 percent CI = 1.27, 3.67). For employees with family coverage, satisfaction with their 1995 plan was also related to the rate of switching but to a lesser degree. Those who were dissatisfied or very dissatisfied were more likely to switch than those who were very satisfied (OR = 4.1, 95 percent CI = 1.69, 9.84). However, the satisfied employees were not more likely to switch plans (Table 9).
Table 9. Percent of Persons Switching Plans From 1995 to 1996, by Type of Coverage and Plan.
Plan | Single: Intervention (n = 76) | Single: Control (n = 49) | Family: Intervention (n = 85) | Family: Control (n = 72) |
---|---|---|---|---|
Percent | ||||
Plan A | 9.5 | 6.9 | 8.1 | 2.8 |
Plan B | 15.8 | 12.5 | 25.0 | 10.0 |
Plan C | 38.5 | 26.3 | 38.7 | 36.4 |
Plan D | 52.2 | 56.8 | 61.8 | 70.3 |
Plan E | 0.0 | 3.1 | 3.4 | 11.9 |
Plan F | NA | NA | NA | NA |
Plan G | NA | NA | NA | NA |
NOTE: NA is not applicable.
SOURCE: Knutson et al., Minneapolis, 1997.
Reasons for Selecting 1996 Health Plan
We asked all employees to identify their reasons for selecting their 1996 health plan. We listed a number of possible reasons and also offered them the opportunity to list their own reasons. In a formal content analysis process, three independent judges combined the responses to the open-ended question into three categories: cost, quality, and miscellaneous. Any differences in the initial classification were resolved by consensus. The miscellaneous category included convenience, physician attachment, and inertia. To evaluate the influence of the report card, we were primarily interested in identifying those employees for whom either cost or quality was a major influence on their 1996 decision, because these factors are independent attributes of the plan, in contrast to situational factors such as convenience of location.
Because the report card provides information on the quality of available health plans, it might be assumed that quality would be reported as a reason for selecting a plan more often by the intervention employees. Bivariate analysis indicated no significant difference in the proportion of intervention and control employees who reported quality as an influencing factor. This finding was true for employees with family or single coverage. Similarly there was no significant difference in the proportion of intervention and control employees who reported cost as an influencing factor. Finally, there was no significant difference in the proportion of intervention and control employees who reported one or more of the miscellaneous reasons as factors influencing their choice.
Willingness to Incur Premium Contributions
The report card may affect the willingness of employees to contribute to premiums. Employees in our study were required to pay the marginal premium for selecting a higher priced health plan, that is, the employer's contribution covered only the premium level of the lowest cost plan. The intervention and control groups had the same health plan employee contribution requirements. Any effect of the report card on price sensitivity should be detected by comparing the relative change in employee premium contributions between the intervention and the control groups.
Previous studies found that employees in the Twin Cities are price-sensitive with respect to their health plan choices (Feldman et al., 1988; Dowd and Feldman, 1994-95). Neither of these studies directly examined the influence of report cards on the employee's willingness to contribute to health plan premiums. If employees perceive that the report card demonstrates a variation in quality among plans, then this perception could decrease their price sensitivity. If the employees perceive that the report card shows that most health plans are generally of equal quality, then employees may view health care as essentially a commodity, thus increasing their price sensitivity. These potentially different mechanisms make hypothesizing about the direction of the effect of report cards on price sensitivity difficult. Under either condition, however, one could expect a report card influence to be detected through the magnitude of the relative change in actual employee premium contributions between the intervention and the control groups, regardless of the direction of the difference. Because we knew the amount of employees' premium contributions in both 1995 and 1996, we calculated a change in the contribution between 1995 and 1996 for each respondent. It is this change in employee premium that is the dependent variable for this analysis.
We analyzed the difference in the magnitude and direction of change in average employee premium contribution comparing the intervention with the control employees. There was no difference between intervention or control respondents' change in premium contribution between 1995 and 1996 for employees with family or with single coverage. In regression analysis, the only variable that was related to a difference in the change in employee contribution was whether the employee had switched plans in 1995 (p < 0.001). This significant relationship may have been attributable to large changes in the premium ranking among plans in 1995 and again in 1996.
Repeating this regression analysis for only those employees who had switched health plans between 1995 and 1996, we found that, for employees with family coverage and for those with single coverage, there was no difference between the intervention and control groups in the change in employee premiums. The higher the educational level of employees with single coverage, the greater the reduction in premium costs obtained by switching health plans in 1996. The only variable in the model that was significantly related to an increase in 1996 premiums was whether the respondent had switched health plans in 1995 (p < 0.05 for employees with family coverage; p < 0.001 for employees with single coverage). This result, too, is likely attributable to the substantial plan pricing volatility in 1995 and 1996.
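As a rough sketch of the premium-change regression described above, the snippet below computes the 1995-to-1996 change in each employee's annual contribution and regresses it on the intervention indicator and control variables with ordinary least squares. The data frame and every column name are hypothetical; in practice the premium amounts would be joined from Table 3 on plan and coverage type.

```python
# Sketch only: hypothetical column names; the control set shown is an assumption
# loosely mirroring the controls named in the text, not the exact model.
import pandas as pd
import statsmodels.formula.api as smf

def premium_change_model(df: pd.DataFrame):
    df = df.copy()
    # Dependent variable: change in the employee's annual premium contribution.
    df["premium_change"] = df["premium_1996"] - df["premium_1995"]

    model = smf.ols(
        "premium_change ~ intervention + age + female + education + income"
        " + switched_1995 + years_employed",
        data=df,
    ).fit()
    print(model.summary())
    return model
```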
Discussion
Using a natural experiment, we attempted to detect the effect of a report card that was created from a survey of employees' opinions of health plans. This report card could be considered one of the best examples of its type. Minnesota State employees were highly experienced with this form of information. The report card had been updated and distributed to the intervention employees three times over 6 years but not to the control employees. The report card was mailed directly to the home of intervention employees as part of their enrollment packet. We looked for an influence of the report card on changes in employees' knowledge of health plan benefits, changes in preferences for health plan attributes (namely quality dimensions versus cost dimensions), changes in ratings of available health plans' overall quality, and choice of plans. We also analyzed the influence of the report card on the extent to which quality or cost was reported as a reason for selecting the 1996 plan. We conducted separate analyses for those who switched plans during the 1996 enrollment period. We also compared employees' premium contributions between groups to determine whether the report card influenced the amount of the contribution. We conducted bivariate and multi-variate analyses. The multi-variate analysis included variables that controlled for differences in the characteristics of the study groups, variables that are known to be related to health plan choice, and variables theoretically relevant to the specific analyses. By analyzing the responses of employees who switched health plans, we investigated report card effects in populations where these effects were most likely to be present. We conclude that the report card had few discernible effects on employees' knowledge, attitudes, or choice of health plans. The only impact we found was on how knowledgeable employees with single coverage perceived themselves to be about the health plans.
In a longitudinal study such as this, there is always the possibility of contamination by some external event. About the same time that the State of Minnesota distributed its report card to employees in enrollment materials sent to their homes, the Minnesota Health Data Institute disseminated a somewhat similar comparison of health plans as a supplement in the local newspaper. To track this event, we asked employees whether they saw and read this communitywide report card. Only about one-quarter of both the intervention and control employees reported seeing the newspaper report card. We compared those intervention employees who saw or read both report cards with those who saw only their employer's report card on all dependent measures. No statistically significant difference was found in their evaluations of the State of Minnesota employee report card. We concluded that the communitywide report card had no discernible influence on the effect of the employer-sponsored report card (Knutson et al., 1996).
In the setting for this study, what explanations could be offered for the lack of influence of this report card on employees?
Characteristics of Setting
Minnesota has a relatively high proportion of enrollment in managed care. This acceptance of managed care, together with the belief that health care in the State is of generally high quality, may explain the overall high satisfaction with all of the available plans. It is also possible that differences among health plans on report card measures, even when statistically significant, were not large enough to be relevant to employees. Indeed, the majority of members of all the Twin Cities plans were satisfied with their plans on all report card measures. How meaningful such differences are to consumers will ultimately be determined through further research in markets with greater differences among plans than exist in the Twin Cities.
In addition, the employees in the intervention group were highly experienced with the report card information; they had received a report card three times over 6 years. In this study, however, length of employment was not related to differences in the use or impact of the report card, as would be expected if the report card were more useful to those who are new to the market.
Characteristics of the Population
The study population consisted of State employees and University employees, excluding faculty. Employees had a range of educational and income levels. This population could differ from other employed populations, thus limiting generalizability. We believe, however, that it is representative of employed populations in a managed competition setting.
Measures
It is possible that this study did not adequately test the potential influences of the report card, either in its method or its content. We chose measures related to straightforward assumptions about the potential impact of report cards on employees. We included measures of virtually all factors known to influence health plan choice as control variables. We achieved adequate response rates and a sufficient number of employees in each group for our analyses. There were very few missing responses. We controlled for known sources of bias in our analyses. Nonetheless, measurement imprecision may have reduced our sensitivity to a report card effect. This possibility would be greater for the attitudinal measures than for the behavioral measures related to plan choice.
Characteristics of the Design
Study results may be questioned if the study design is weak or the data are incomplete. Our study design is the strongest available without randomization. We analyzed changes from pre-enrollment to post-enrollment in the intervention group, comparing these changes with those of a non-randomized control group.
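As a hedged illustration of this pre/post, intervention-versus-control comparison, the sketch below shows one way such a change-score analysis could be run in Python with statsmodels. The variable names (knowledge_pre, knowledge_post, intervention) and the data are hypothetical, and the study's actual models included additional control variables.

```python
# A minimal, hypothetical sketch of the pre/post change-score comparison between
# the intervention (report card) and control groups; data are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

employees = pd.DataFrame({
    "knowledge_pre":  [3.1, 2.8, 3.5, 3.0, 2.9, 3.2, 3.4, 2.7],  # pre-enrollment rating
    "knowledge_post": [3.4, 3.0, 3.6, 3.1, 2.8, 3.3, 3.5, 2.6],  # post-enrollment rating
    "intervention":   [1, 1, 1, 1, 0, 0, 0, 0],                  # 1 = received the report card
})

# Change from pre- to post-enrollment for each employee; the report card effect
# is the coefficient on the intervention indicator (the full models would also
# include controls for group differences and known plan-choice factors).
employees["change"] = employees["knowledge_post"] - employees["knowledge_pre"]
effect = smf.ols("change ~ intervention", data=employees).fit()
print(effect.params["intervention"])
```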
Nature of Quality Measures
Another reason for the apparent lack of report card impact may be the intrinsic difficulty employees have in evaluating certain aspects of health care quality. The content of the current version of the report card is arguably relevant to most health care employees. Because most health plan members have routine office visits, measures related to these experiences are relatively easy to capture. But does this content address the most important qualities of health care to consumers, qualities that they fundamentally value but take for granted or cannot easily evaluate?
In general, judging the quality of a service is more difficult than judging the quality of a good (O'Connor and Bowers, 1990). Health care quality is particularly difficult to evaluate. The typical consumer survey-based report card is focused on functional quality, such as access to care and courtesy of staff—features that most health care consumers have experienced. Technical quality, such as long-term outcomes of treatment, is even more difficult to evaluate than functional quality, yet it may be the most important component of health care quality.
Report cards may need to focus more on the difficult task of reporting technical quality, possibly by assessing the clinical processes and outcomes, as well as the experiences and the attitudes of those who have chronic illnesses or have had recent serious medical events. Technical quality measured using utilization or medical record data, such as HEDIS quality indicators, may in the end be more important and useful to consumers than consumer survey-based information. If, however, survey-based information is to include more technical quality content, then questionnaire development, sampling, and data collection will become much more difficult.
In summary, we hypothesized that:
- The report card would improve employees' knowledge of health plans, in part by increasing their attention to objective plan attributes such as benefits.
- The report card would increase the importance of quality dimensions among a list of plan attributes known to influence choice, including cost and convenience.
- Much of the information processing and valuation could be summarized in respondent ratings of the overall quality of all plans available to them.
- The report card would influence the rate of switching plans or the degree to which switching was considered.
- The report card would influence the reasons employees reported for selecting their 1996 plan, whether remaining with their 1995 plan or switching.
- The report card would influence employees' willingness to pay the marginal premium for higher priced health plans.
We found none of these effects.
Although these results are disappointing, they suggest that we reconsider our initial model of the effects of report cards. Our theoretical understanding of the role of report cards is incomplete. The simple model of economic utility theory is not sufficient to explain the lack of measurable response to systematic, written information about plan quality. We need to enrich our theoretical approaches to this complex problem, turning to decision theory, communication theory, and other social psychological paradigms.
In addition to weaknesses in our theoretical understanding, there may also be objective characteristics of the market that impede our ability to detect an influence of report cards. For example, the reporting unit in this report card is the health plan, but consumers tell us repeatedly that they are more interested in information provided at the clinic or, even better, at the individual physician level. At a minimum, if we compare plans, we ought to evaluate report card effects among plans that do not have such broadly overlapping provider networks. Similarly, we should test report cards in markets where we have evidence of greater variation in health plan quality. Report cards may also have greater significance to consumers who face a large annual premium. Lastly, we should consider that we may not have the most effective dissemination practices; we have not yet tested any kind of mediated dissemination, using formal or informal agents. In short, it is too soon to declare report cards a failure.
Acknowledgments
We acknowledge the substantial contributions of the other study investigators: Michael Finch, Ph.D., Center for Health Policy and Evaluation, United HealthCare; Jeanne McGee, Ph.D., McGee & Evers Consulting, Inc.; and Nanette Dahms, M.S.W., Department of Medicine, Medical School, University of Minnesota.
Footnotes
David J. Knutson, Elizabeth A. Kind, Jinnet B. Fowles, and Susan Adlis are with HealthSystem Minnesota. This study was funded by the Health Care Financing Administration (HCFA) Grant Number 18 P 90601/5. The views expressed herein are those of the authors and do not necessarily reflect the opinions of HealthSystem Minnesota or HCFA.
Reprint Requests: David J. Knutson, HealthSystem Minnesota, 3800 Park Nicollet Boulevard, Minneapolis, Minnesota 55416. E-mail: knutsd@hsmnet.com.
References
- Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally College Publishing Company, American Educational Research Association; 1963.
- Chernew M, Scanlon DP. Health plan report cards and insurance choice. Inquiry. 1998 Spring;35(1):9–22.
- Daniel Yankelovich Group, Inc. Long Term Care in America: Public Attitudes and Possible Solutions, Report of Study Findings. Washington, DC: Jan 1990. Prepared for the American Association of Retired Persons.
- Dowd B, Feldman R. Premium elasticities of health plan choice. Inquiry. 1994-95 Winter;31(4):438–444.
- Eisenberg JM. Health services research in a market-oriented health care system. Health Affairs. 1998 Jan-Feb;17(1):98–108. doi: 10.1377/hlthaff.17.1.98.
- Enthoven AC. The history and principles of managed competition. Health Affairs. 1993;12(Supp):24–48. doi: 10.1377/hlthaff.12.suppl_1.24.
- Federa RD, Oettinger NL. Beyond catastrophic insurance: The future of public funding for long-term care. Topics in Health Care Finance. 1991 Summer;17(4):22–31.
- Feldman R, Dowd B. The effectiveness of managed competition in reducing the costs of health insurance. In: Helms RB, editor. Health Policy Reform: Competition and Controls. Washington, DC: The American Enterprise Institute Press; 1993.
- Feldman R, Finch M, Dowd B, et al. The demand for employment-based health insurance plans. Journal of Human Resources. 1988 May;24(1):115–142.
- Gibbs DA, Sangl JA, Burrus B. Consumer perspectives on information needs for health plan choice. Health Care Financing Review. 1996 Fall;18(1):55–73.
- Harvard Community Health Plan, Inc. Annual Report: Keeping score: How does health care measure up? Brookline, MA: 1993.
- Hellinger FJ. Perspectives on Enthoven's consumer choice health plan. Inquiry. 1982 Fall;19(3):199–210.
- Hibbard JH, Jewett JJ. Will quality report cards help consumers? Health Affairs. 1997 May-Jun;16(3):218–228. doi: 10.1377/hlthaff.16.3.218.
- Jewett JJ, Hibbard JH. Comprehension of quality care indicators: Differences among privately insured, publicly insured, and uninsured. Health Care Financing Review. 1996 Fall;18(1):75–94.
- Knutson DJ, Fowles JB, Finch M, et al. Employer-specific versus community-wide report cards: Is there a difference? Health Care Financing Review. 1996 Fall;18(1):111–125.
- Luft HS, Miller RH. Patient selection in a competitive health care system. Health Affairs. 1988 Summer;7(3):97–119. doi: 10.1377/hlthaff.7.3.97.
- Marquis MS, Davies AR, Ware JE. Patient satisfaction and change in medical care provider: A longitudinal study. Medical Care. 1983 Aug;21(8):821–829. doi: 10.1097/00005650-198308000-00006.
- McGee J, Hunter M. Employee response to health benefits survey results brochure: Findings from fall 1992 interviews. Dec 28, 1992. Final report to State of Minnesota Department of Employee Relations.
- Mechanic D. Consumer choice among health insurance options. Health Affairs. 1989 Spring;8(1):138–148. doi: 10.1377/hlthaff.8.1.138.
- Moskowitz DB, editor. Can health plan report cards spur competition on quality? Marketplace, supplement to Medicine and Health. 1997 Jul 7.
- O'Connor SJ, Bowers MR. An integrative overview of the quality dimension: Marketing implications for the consumer-oriented health care organization. Medical Care Review. 1990 Summer;47(2):193–219. doi: 10.1177/107755879004700204.
- Robinson S, Brodie M. Understanding the quality challenge for health consumers: The Kaiser/AHCPR survey. The Joint Commission Journal on Quality Improvement. 1997 May;23(5):239–244. doi: 10.1016/s1070-3241(16)30313-3.
- Rudd J, Glanz K. How individuals use information for health action: Consumer information processing. In: Glanz K, Lewis FM, Rimer BK, editors. Health Behavior and Health Education: Theory, Research, and Practice. San Francisco: Jossey-Bass Publishers; 1990.
- Sainfort F, Booske BC. Role of information in consumer selection of health plans. Health Care Financing Review. 1996 Fall;18(1):31–54.
- Sofaer S. How will we know if we got it right? Aims, benefits, and risks of consumer information initiatives. The Joint Commission Journal on Quality Improvement. 1997 May;23(5):258–264. doi: 10.1016/s1070-3241(16)30316-9.
- SHW, Inc. Consumers want report cards to tell them how health plans really work, but data is hard to get. State Health Watch. 1996 Sep;3(9):1.
- Tumlinson A, Bottigheimer H, Mahoney P, et al. Choosing a health plan: What information will consumers use? Health Affairs. 1997 May-Jun;16(3):229–238. doi: 10.1377/hlthaff.16.3.229.
- Ware JE, Snyder MK. Dimensions of patient attitudes regarding doctors and medical care services. Medical Care. 1975 Aug;13(8):669–682. doi: 10.1097/00005650-197508000-00006.
- Wilensky GR, Rossiter LF. Patient self-selection in HMOs. Health Affairs. 1986 Spring;5(1):66–80. doi: 10.1377/hlthaff.5.1.66.