Health Care Financing Review. 1995 Summer;16(4):155–173.

Surveying Consumer Satisfaction to Assess Managed-Care Quality: Current Practices

Marsha Gold, Judith Wooldridge
PMCID: PMC4193529  PMID: 10151887

Abstract

Growing interest in using consumer satisfaction information to enhance quality of care and promote informed consumer choice has accompanied recent expansions in managed care. This article synthesizes information about consumer satisfaction surveys conducted by managed-care plans, government and other agencies, community groups, and purchasers of care. We discuss survey content, methods, and use of consumer survey information. Differences in the use of consumer surveys preclude one instrument or methodology from meeting all needs. The effectiveness of plan-based surveys could be enhanced by increased information on alternative survey instruments and methods and new methodological studies, such as ones developing risk-adjustment methods.

Introduction

Managed-care plans are a substantial and growing share of the health insurance market (Gabel et al., 1994). Because managed care integrates financing with service delivery, overseeing quality and access to health care within individual plans is very important (Kongstvedt, 1993). Some of this can be done by formal assessment of clinical quality using medical records, administrative systems, or similar information. However, these sources are not well suited to measuring the perceptions of health plan customers. For identifying consumer perspectives, surveys are a useful tool, providing more systematic data to complement information from grievance systems and other sources of consumer feedback. Consumer surveys are receiving increased attention (Agency for Health Care Policy and Research, forthcoming) as a component of Total Quality Management and Continuous Quality Improvement to enhance quality of care and service (James, 1994; Press, Ganey, and Malone, 1992; Inguanzo, 1992; Kritchevsky and Simmons, 1991; Berwick, 1989). Though some controversy exists about the role of consumer information in monitoring quality (Goldfield, Pine, and Pine, 1991), most researchers, policymakers, and managers agree that consumer satisfaction is an important measure of quality and, hence, of system and health plan performance (Cleary and McNeil, 1988; Davies and Ware, 1988; Press, 1994a). However, because many of these applications are operational, they are poorly documented in the published literature, a shortcoming we aim to remedy in this article.

As more of the population enrolls in managed care, there has been an increasing policy focus on use of consumer satisfaction surveys to provide information to purchasers and consumers to assist them in making choices among plans. This article discusses the types of agents collecting and disseminating consumer satisfaction information for these purposes.

Focus and Approach

This article discusses the nature and use of consumer surveys for generating information on satisfaction with individual health plans, including health maintenance organizations (HMOs), other managed-care products such as preferred provider organizations (PPOs) and point-of-service (POS) arrangements, and traditional indemnity insurance. We summarize current knowledge about how widely surveys are used or encouraged by diverse parties such as individual managed-care plans, government and voluntary oversight agencies, community and consumer groups, and public and private purchasers of care. Next, we review the different kinds of surveys (e.g., all enrollees, system users, users of particular services), survey content, and survey methods. Then, we consider outstanding issues relevant to using consumer surveys to assess plan quality and particularly to compare health plans. We end with brief conclusions on surveys as tools for assessing care and recommend three types of activities to better support such efforts.

Our approach to analyzing surveys on consumer satisfaction with health care plans was shaped largely by the data available to us. Information about surveys of consumer satisfaction with managed-care plans is evolving rapidly and is not part of the formal literature. Furthermore, most surveys are intended to address operational needs rather than research objectives. As a result, this article relies heavily on information from the trade press and unpublished materials. These were obtained by reviewing materials we had, making calls to plans known or thought to be involved in survey efforts, and referencing bibliographies and collections maintained in the Group Health Association of America (GHAA) library.

Our methods generated information on the most publicized and broad-based surveys as of mid-1994. This article is not intended to provide a complete inventory of surveys. Furthermore, it contains only limited information on the nationwide prevalence of the approaches illustrated. These are not major constraints, as the focus of the article is conceptual, stressing methodology, purpose, and illustrative applications rather than empirical results.

Nature and Uses of Consumer Surveys

Managed-Care Organizations: Internal Management

Although consumer surveys are used more widely today by managed-care plans than in the past, more established HMOs have long used such surveys. These surveys were generally initiated to support internal plan activities related to marketing and quality assurance (Kongstvedt, 1993). Even though many plans developed their approaches independently, there are now several examples of collective efforts by plans similar in management or philosophy.

A recently completed national survey of managed-care plans documents the widespread use of consumer surveys by managed-care plans. More than 95 percent of the HMOs and about 55 percent of the PPOs surveyed report that they use consumer surveys to monitor care (Gold et al., 1995). Survey data are used to measure enrollee satisfaction and support such functions as: strategic planning and marketing; improving quality; provider profiling and payment; and responding to employer requests. The more sophisticated plans use consistent survey methods over time to monitor trends and issues requiring management attention.

Surveys of new enrollees and disenrollees are most likely to be conducted to support strategic planning and marketing. Some plans also survey area residents not enrolled in the plan in order to establish external benchmarks and identify opportunities for or barriers to growth. Plans will occasionally target special groups (such as smokers or pregnant women) to assess their special needs and their satisfaction with services.

Surveys of consumer satisfaction with various aspects of health care and health insurance are an important input into efforts to improve quality and retain plan members (Packer-Tursman, 1994). Such surveys capture information on users and non-users of care and are, therefore, valuable sources of information on barriers to access. Plans vary in whether they ask individuals to answer questions about satisfaction with services they may not have used but about which they may have an opinion. To elicit information on how users of services perceive that care, plans may survey a sample of users of specific services (most often physician visits or hospitalizations, but also ancillary services) on their satisfaction with that service encounter. The success of using surveys to support quality improvement depends both on creating a structure through which results are reviewed, changes are identified, and improvements are charted, and on staff commitment to these activities (Kritchevsky and Simmons, 1991).

In recent years, more surveys have been conducted to assess performance of individual providers or provider groups (as opposed to the plan as a whole). These techniques have been pioneered by network and independent practitioner association (IPA) models which, because of dispersed physician practices, have a greater desire for information that promotes socialization to the norms of care management and assists in network management. Provider-specific surveys are being used to profile physician practices and compare peer profiles, to modify payment rates or provide bonuses, and to identify outliers—on the high and low ends—for closer review and potential exclusion from the network. GHAA (1993) reports that in 1992, 60 of the 326 plans responding to its annual HMO industry survey used consumer satisfaction measures to adjust primary-care physician payments. Among larger plans, the use of such techniques is even more common, with survey results being used to adjust physician compensation and as input to decisions on physician contract renewals (Gold et al., 1995). The most well-known applications of surveys involve sampling panel members of each physician or provider group to assess patient satisfaction, and using the results in provider payment calculations (Morain, 1992).

Plans are also using surveys of individual provider performance to develop provider report cards that may be made available to plan members. U.S. Healthcare, for example, develops report cards for individual practices by surveying users of primary care, users of specialists, and hospital users. These surveys focus on specific encounters or practices. At least one Kaiser plan uses member surveys to develop quantitative ratings of physician and non-physician providers, which are provided to facilities and physicians. Physicians are given respondents' comments as part of the plan's performance development system.

Some managed-care plans enhance their ability to use survey data by participating in consortium survey efforts, typically involving other plans affiliated in some way. For example, since 1989, the Blue Cross/Blue Shield Association has done annual national benchmarking of consumer satisfaction which member plans can use to interpret their performance. United HealthCare, through its Center for HealthCare Policy and Evaluation, has similarly developed a system to generate a performance measure that plans can use to benchmark themselves relative to others in the system, as well as to respond to external interests. The HMO Group, which consists of 30 prepaid group practices, supports the regular exchange of data and information used to measure, report, and improve performance. They have sponsored a consumer survey biennially since 1988, again primarily as a benchmarking tool that can be used to identify quality-improvement initiatives and opportunities at the plan level. Kaiser Central develops member and non-member surveys that can be used to establish benchmarks and to compare the Kaiser plans with each other.

Managed-Care Organizations: External Purposes

Though consumer surveys had their origin in internal operations, they are increasingly being applied for external uses. In particular, purchasers are requesting information from consumer surveys to help monitor plan performance, select plans to be offered, and facilitate employee choice. In 1991, 88 percent of the HMOs in the GHAA's (1992) annual HMO industry survey reported receiving requests for consumer satisfaction information from employers. Some HMOs probably meet these requests using data already collected for internal purposes, but plans may initiate new studies to fill in the gaps or tailor information to employer specifications.

More recently, several plans have developed plan “report cards” for external audiences, including purchasers, current or potential enrollees, and a broader national audience. These report cards typically include measures from administrative data, special studies, and consumer surveys. United HealthCare, U.S. Healthcare, and Kaiser Health Plan of Northern California are producing report cards (Zablocki, 1994).

The issue of consistency in data definitions and measures, both across plans and across the kinds of requests made by purchasers, has created an interest in developing standardized tools that plans can use to respond to employer requests. The most prominent current effort was recently completed under the sponsorship of the National Committee for Quality Assurance (NCQA), which is the main accrediting body for HMOs.

NCQA's effort was based on version 2.0 of the Health Plan Employer Data and Information Set (HEDIS) (National Committee for Quality Assurance, 1993). HEDIS 2.0 is a standardized list of about 60 measures covering quality; access and patient satisfaction; membership and utilization; and finance. It does not mandate a standardized measure of consumer satisfaction, though its appendix provides as examples both the second edition of the GHAA's Consumer Satisfaction Survey and the Employee Health Care Value Survey, which includes most of the GHAA instrument along with other batteries.

NCQA's pilot project involved 21 health plans from across the country selected for diversity of model type, size, geographic location, and type of information system (National Committee for Quality Assurance, 1995). The goal was to develop a report card based on a subset of HEDIS performance measures consistently defined across plans and audited by NCQA, with the experience serving to refine HEDIS 2.0. The pilot moved beyond HEDIS 2.0 in the area of consumer satisfaction, sponsoring a survey using the GHAA Consumer Satisfaction Survey instrument (second edition) with the intention of identifying a subset of the items that will be meaningful to consumers. Although HEDIS 2.0 was developed chiefly to serve the needs of commercial insurers, NCQA (with support from HCFA and several States) has a grant from the Packard Foundation to develop an adaptation suitable for measuring the care received by Medicaid enrollees in managed care.

Public Accountability, Oversight, and Community Assessment

The chief mechanisms for ensuring public accountability and oversight of health plans are State licensure, voluntary certification as a federally qualified HMO, and accreditation by voluntary organizations. Rarely do the regulatory entities or their agents conduct plan-based consumer surveys. However, they sometimes require or encourage plans to conduct surveys and verify that this and other requirements are met. These verification activities vary in nature and extent.

Accreditation programs for managed-care plans have become much more established over the past few years. NCQA currently is the principal HMO accrediting body (Gold et al., 1995). NCQA requires that plans have mechanisms to protect and enhance membership satisfaction with their services, including membership satisfaction surveys, studies of reasons for disenrollment, and evidence that the organization uses this information to improve the quality of its service. Relevant documentation (that is, results of member satisfaction and disenrollment surveys) is reviewed by an NCQA team during the onsite review for accreditation. NCQA also requires, as part of a managed-care organization credentialing system, a periodic performance appraisal of providers. This appraisal includes information from quality-assurance activity, risk and utilization management, member complaints, and member satisfaction surveys. Current NCQA accreditation requirements do not require plans to be capable of producing HEDIS 2.0. However, we have been told that plans believe they will ultimately need to provide HEDIS 2.0 for accreditation, and thus are gearing up for it as part of their accreditation activities.

Some plan-based consumer surveys have been sponsored independently by consumer and community organizations, occasionally with external funding. Two examples are the plan-specific consumer satisfaction survey information on 46 plans that was included as part of a detailed report on HMOs and other managed-care products in Consumer Reports (Consumers Union, 1992) and the Central Iowa Health Survey, funded by the John A. Hartford Foundation. The latter was a pilot study for the population-provided-data component of the patient-centered Community Health Management Information System (CHMIS), which forms the core of the John A. Hartford Foundation's Community Health Management Initiative launched in 1991 (Allen, 1993). CHMIS is intended to develop a blended data set incorporating claims, survey, and other kinds of data from competing organizations at multiple levels, including health plans, hospitals, and doctors' offices.

As enrollment in managed care expands, oversight is likely also to expand and, with it, the use of surveys. The recent health reform debate emphasized oversight of managed-care plans, including proposals for centrally collected consumer satisfaction data. The Clinton Administration's Health Security Act, for example, called for AHCPR to administer a consumer survey on access, use of services, health outcomes, and patient satisfaction by plan and by State (Title V.A, section 5004). Consumer satisfaction surveys have been built into some State reform efforts as well. Two States undertaking extensive reforms—Minnesota and Washington—are working through public-private partnerships to find ways to disseminate information on quality of care, including information from consumer surveys. A 1994 survey of senior State officials sponsored by the Robert Wood Johnson Foundation found that fewer than 10 States were then involved in developing consumer satisfaction data by health plan, and that most such efforts were at an early stage. However, 73 percent of those responding perceived that data on health system and health plan performance were very important for health reform (Gold, Burnbauer, and Chu, 1995).

Commercial, Medicare, and Medicaid Markets

Purchaser-sponsored surveys represent a relatively new trend, and sponsors are, for the most part, the largest purchasers. Some surveys are sponsored by a single purchaser, and others involve groups of purchasers. The broader the coalition of purchasers, the smaller the distinction between this approach and community-based approaches. So far, most of these surveys are sponsored by employers rather than by Medicare or Medicaid—however, this could change.1

The distinguishing feature of purchaser-sponsored surveys is that they involve estimates of satisfaction specific to the purchaser's population rather than to the health plan's enrollment overall. Leading examples include: the Bank of America/Bay Area Business Group on Health (1994);2 Minnesota State Employees (State of Minnesota Joint Labor-Management Committee on Health Plans, 1993a, 1993b); the Federal Employees Health Benefits Program (Francis and the Center for the Study of Services, 1994); and the employer consortium of Xerox, GTE, and Digital Equipment Corporation (Allen et al., 1994). These employer-sponsored surveys represent three private employers, one State government, and the Federal Government.

Management consultants and survey research firms are the other major sponsors of surveys aimed at the employer market. Potentially the largest such effort, the approach developed by the National Research Corporation (NRC) (1994) rests on a methodology that involves an ongoing panel drawing on 200,000 volunteer households. NRC also conducts customized surveys for a number of managed-care plans (e.g., CIGNA and Family Health Plan) and markets plan-specific results by geographic areas. Other firms, such as Novalis (Ribner and Stewart, 1993) and Towers Perrin (HMO Managers Letter, 1992a, 1992b, 1994), have conducted surveys of employee satisfaction with health plans, but results are rarely plan-specific.

Externally sponsored consumer surveys are used less extensively in publicly financed programs such as Medicare and Medicaid, although this is changing as managed-care enrollment in these programs grows. Medicare does not routinely generate plan-based consumer information for use in monitoring managed-care plans, although it has mounted the continuing Medicare Current Beneficiary Survey. Periodic surveys that do not involve plan-specific estimates have been used in sponsored evaluations (Brown et al., 1993) and to address such specific programmatic issues as disenrollment (Porell et al., 1992). A recent HCFA initiative recommended using validated surveys to evaluate quality of care and patient satisfaction with various aspects of the care provided by managed-care plans (Delmarva Foundation for Medical Care, Inc., 1994).

Consumer surveys generating plan-specific estimates are not currently common among Medicaid programs, though their use is growing. Because of the shared Federal-State structure of Medicaid, States are more likely than the Federal Government to sponsor plan-specific consumer surveys, although Federal interest in this area has expanded, particularly for demonstration projects involving broad-based reforms. An early example of State use of surveys for the Medicaid population comes from California, which sponsored a 13-plan survey for 3 years (in the mid-1970s) to monitor prepaid health plan quality in response to highly publicized problems (Ware et al., 1981). Consumer information has been used in some national evaluations (e.g., the Arizona Health Care Cost-Containment System) and will be used to support evaluations of 1115 waiver programs now being implemented. Some State Medicaid programs include consumer surveys as part of their quality monitoring activity. As of September 1992, 8 of 25 Medicaid agencies surveyed required HMOs to conduct patient satisfaction surveys, and 7 conducted their own surveys to assess recipient satisfaction (Office of the Inspector General, 1992). More recent efforts include a survey of Medicaid recipients in Maryland that the State is fielding with Robert Wood Johnson Foundation funding, as well as consumer surveys conducted by States participating in the demonstration of the Medicaid Managed-Care Quality Assurance Reform Initiative (Felt, 1995). We know of no efforts to use plan-based estimates from these surveys to support beneficiary choice, and Medicaid has no parallel to the existing Medicare Current Beneficiary Survey. However, in 1994, the Physician Payment Review Commission recommended that Congress fund such a survey based on research showing its feasibility for generating State-based estimates (Gold et al., 1995).3

Medicare and Medicaid may become more involved in sponsoring plan-based surveys to generate consumer information. HCFA contracted in 1994 for a study in which prototypes of consumer information materials will be developed. The data-based approaches are likely to involve the use of surveys (Research Triangle Institute, 1994).

Survey Focus, Content, and Methods

Variations in Survey Focus

Surveys of consumer satisfaction with health plans vary in several ways, the most important of which are illustrated in Figure 1. First, surveys differ in terms of the population they are intended to represent. That population may be in a given geographic area, in a particular plan, or in the specific purchaser's share of the plan. The Novalis survey is an example of a geographically-based survey that provides estimates of how satisfaction varies by type of plan, though it is not market-specific. Community-based efforts, such as the Central Iowa Health Survey, and plan-based surveys provide plan-specific estimates. Most purchaser surveys focus on the employer-specific population in a plan.

Figure 1. Varying Features of Samples for Consumer Satisfaction Surveys of Managed-Care Plans.

Surveys differ according to whether they focus on all those eligible for the plan or on service users only. Within each of the three types of populations (geographic, plan-specific, and employer-specific), we find surveys that focus either on all eligibles or on users. The focus may have important implications for the results and how they are interpreted. The distinction is particularly important for surveys involving PPOs, because use in itself may be an important measure of satisfaction. Even for HMOs, surveys with the same questions may yield dissimilar estimates depending on whether all enrollees or only users respond. Both approaches persist largely because there are strong opinions, but no consensus, among survey developers about how information on satisfaction with use should be collected. It is possible that this occurs in part because developers have different goals for the survey—marketing, performance evaluation, or quality improvement.
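
A simple hypothetical calculation shows how much the choice of focus can matter even when the questions are identical. The sketch below (Python, with invented enrollees and an invented 0-100 rating) contrasts an all-eligibles estimate with a users-only estimate for the same plan:

```python
from statistics import mean

# Invented illustration: the same satisfaction question yields different
# plan-level estimates depending on who is included in the denominator.
# Each tuple is (used services this year?, satisfaction rating on 0-100).
enrollees = [(True, 80), (True, 90), (False, 50), (False, 60), (True, 85)]

all_eligibles = mean(rating for _, rating in enrollees)           # 73.0
users_only = mean(rating for used, rating in enrollees if used)   # 85.0
print(all_eligibles, users_only)
```

In this made-up example, the users-only survey reports markedly higher satisfaction because it drops non-users, whose lower ratings may reflect access barriers rather than experience with care.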

Although Figure 1 helps to create a basic understanding of survey population and focus, it simplifies reality. First, people move from one category to another over time, so changes in satisfaction may reflect changes in population composition as much as changes in plan quality or access; moreover, moves across categories vary among plans and types of populations (e.g., Medicaid versus commercial enrollees), creating a potential source of bias in trend estimates. Second, the unit of analysis may not always be the person but may be a user of a particular service or provider, two “targets” common among internal surveys designed to support plan management efforts. Finally, “population” may be variously defined. Estimates may be based on a sample of all individuals in one of the three categories or only on those of a particular type (e.g., insured individuals only or commercial group enrollees only). In addition, items may be framed to capture information on the household, the insurance unit, the subscriber, the respondent, or a child.

Item Content

Research studies since 1980 on consumer satisfaction and other performance measures were recently summarized by Miller and Luft (1994). Their analysis highlights the importance of item content, because the studies found that satisfaction varies for different dimensions of care.

Many current surveys designed to develop plan-based measures of satisfaction are based on the GHAA Consumer Satisfaction Survey instrument. This instrument was based on others, beginning with satisfaction measures developed in the 1970s with grants from the National Center for Health Services Research and Development (Ware and Snyder, 1975; Ware et al., 1983) that were adapted first for the Health Insurance Experiment (Davies et al., 1986) and later for the Medical Outcomes Study (Marshall et al., 1993). Table 1 summarizes the evolution of these related satisfaction measures.

Table 1. Evolution of Consumer Satisfaction Surveys.

PSQ-I (developed 1972-75; 80 items)
Sources: Literature reviews, content review of earlier instruments, and item generation studies produced a pool of 2,300 items (1-3).
Content: Accessibility and convenience; availability of services; continuity of care; finances; interpersonal aspects; technical quality; facilities; and general satisfaction.
Response scale: Strongly agree; agree; not sure; disagree; strongly disagree.
Major changes from source: NA.
Primary use: Multiple tests to identify dimensions of care and services; test scaling assumptions, score reliability, response bias, and validity.

PSQ-II (developed 1972-75; 68 items)
Sources: Same as PSQ-I and results of PSQ-I studies. Items were revised to emphasize or clarify the object of measurement, improve score distributions, and reduce ambiguity (2-7).
Content: Same as PSQ-I.
Response scale: Strongly agree; agree; not sure; disagree; strongly disagree.
Major changes from source: Shorter than PSQ-I; more focused on empirically confirmed dimensions of care.
Primary use: Multiple field tests to replicate methodological studies; describe health care attitudes of adults across practices, cities, counties, and States.

PSQ-43 (developed 1971-77; 43 items)
Sources: 42 PSQ-II items and the “crisis in health care” item from CHAS-NORC (2, 3, 8-10).
Content: Same as PSQ-I; the additional item does not assess attitudes toward own medical care and services.
Response scale: Strongly agree; agree; not sure; disagree; strongly disagree.
Major changes from source: Shorter than PSQ-II; retains fundamental concepts.
Primary use: Support assessments of health care (along with other batteries) in omnibus surveys; used in this way to compare health insurance plans in the HIE; develop norms for the U.S. population in the CHAS-NORC survey.

PSQ-III (developed 1984-85; 50 items)
Sources: PSQ-II items; pilot tests of new items written to distinguish financial and physical access (11-12).
Content: Interpersonal manner; communication; technical quality; financial security; time spent with physician; access to care; and general satisfaction.
Response scale: Strongly agree; agree; not sure; disagree; strongly disagree.
Major changes from source: New items on financial security.
Primary use: Medical Outcomes Study.

VSQ (developed 1985-86; 9 items)
Sources: PSQ-III (13-15).
Content: Physical access; telephone access; office wait; appointment wait; time spent with physician; communication; interpersonal aspects; technical quality; and overall care.
Response scale: Excellent; very good; good; fair; poor.
Major changes from source: Reduced in length to one item per concept; uses EVGFP response scale.
Primary use: Medical Outcomes Study.

GHAA CSS, First Edition (developed 1987-88; 35 items)
Sources: Based on PSQ-III items, rewritten to be used with the EVGFP response scale. (The satisfaction battery is 1 of 3 included in the entire survey; the others capture prior use of and experience with the plan and sociodemographics.) (16)
Content: Access; finances; technical quality; communication; choice; continuity; interpersonal care; outcomes; overall care; and general satisfaction.
Response scale: Excellent; very good; good; fair; poor.
Major changes from source: Uses EVGFP response scale; collapsed PSQ-III items to yield a shorter survey while retaining content; added outcomes.
Primary use: Made available to GHAA member plans and employers for use in producing plan-level estimates for employers.

GHAA CSS, Second Edition (developed 1991; 35 care and services items, 14 plan items)
Sources: Same care and services battery as GHAA CSS (first edition); content of the new satisfaction battery based on review of the literature, individual plan surveys, and focus groups (14).
Content: As in GHAA CSS (first edition), with the following additional items: services covered; information from plan; paperwork; costs of care; and overall plan.
Response scale: Excellent; very good; good; fair; poor.
Major changes from source: Addition of a battery to yield ratings of selected managed-care plan features, in response to requests from plans and employers.
Primary use: Same as GHAA CSS (first edition).
2. Ware, J.E., Snyder, M.K., and Wright, W.R.: Development and Validation of Scales to Measure Patient Satisfaction With Health Care Services: Volume I of a Final Report. Part A: Review of Literature, Overview of Methods, and Results Regarding Construction of Scales. Pub. No. PB-288-329. Springfield, VA. National Technical Information Service, 1976a.

3. Ware, J.E., Snyder, M.K., and Wright, W.R.: Development and Validation of Scales to Measure Patient Satisfaction With Health Care Services: Volume I of a Final Report. Part B: Results Regarding Scales Constructed From the Patient Satisfaction Questionnaire and Measures of Other Health Care Perceptions. Pub. No. PB-288-330. Springfield, VA. National Technical Information Service, 1976b.

4. Ware, J.E., Wright, W.R., Snyder, M.K., and Chu, G.C.: Consumer Perceptions of Health Care Services: Implications for the Academic Medical Community. Journal of Medical Education 50(9):839-848, 1975.

5. Doyle, B.J., and Ware, J.E.: Physician Conduct and Other Factors That Influence Patient Satisfaction. Journal of Medical Education 52(10):793-801, 1977.

6. Ware, J.E.: Effects of Acquiescent Response Set on Patient Satisfaction Ratings. Medical Care 16(4):327-336, 1978.

8. Aday, L.A., Andersen, R., and Fleming, G.V.: Health Care in the United States: Equitable for Whom? Beverly Hills. Sage Publications, 1980.

9. Marquis, M.R., Davies, A.R., and Ware, J.E.: Patient Satisfaction and Change in Medical Care Provider. Medical Care 21(8):821-829, 1983.

10. Davies, A.R., Ware, J.E., Brook, R.H., and Peterson, J.: Consumer Acceptance of Prepaid and Fee-for-Service Medical Care: Results From a Randomized Control Trial. Health Services Research 21(3):429-452, 1986.

11. Safran, D., Tarlov, A.R., and Rogers, W.: Primary Care Performance in Fee-for-Service and Prepaid Health Care Systems: Results From the Medical Outcomes Study. Journal of the American Medical Association 271(20):1579-1586, 1994.

13. Hays, R.D., and Ware, J.E.: Methods for Measuring Patient Satisfaction With Specific Medical Encounters. Medical Care 26(4):393-402, 1988.

NOTES: PSQ is Patient Satisfaction Questionnaire. VSQ is Visit Satisfaction Questionnaire. GHAA is Group Health Association of America. CSS is Consumer Satisfaction Survey. CHAS-NORC is Center for Health Administration Studies—National Opinion Research Center. HIE is Health Insurance Experiment. EVGFP is excellent, very good, good, fair, or poor.

SOURCE: Davies, A.: Personal communication, 1994.

The GHAA battery has subsequently been used by the Health Institute at the New England Medical Center in the Iowa Health Survey and other projects. In addition to batteries from the GHAA Consumer Satisfaction Survey, the Iowa survey instrument included a modified version of the inpatient hospital quality trends battery, which measures satisfaction with the most recent hospitalization (if within 3 months) (Meterko, Nelson, and Rubin, 1990), and the Visit Satisfaction Questionnaire (VSQ), which captures satisfaction with the last physician visit (if within 4 months) (Rubin et al., 1993).4 In addition to consumer satisfaction, the Iowa survey also measures health status through the SF-36 short-form health survey (Ware and Sherbourne, 1992; Ware et al., 1993) and included a pilot test of enrollees' ratings of management of care and coverage. The package of instruments is intended to be a reference set of batteries to be used individually or together for different purposes. This set of batteries has since been followed by the Employee Health Care Value Survey (EHCVS). The EHCVS satisfaction battery includes most items in the second edition of the GHAA Consumer Satisfaction Survey, augmented by a set of questions on the management of care and coverage (partially pilot-tested in the Iowa survey). The EHCVS also includes the SF-36 and items on health risk behavior drawn from previous survey instruments.

Though GHAA's interest in sponsoring Davies and Ware (1991) to develop the Consumer Satisfaction Survey instrument was to promote consistency across surveys, most users have modified the instrument by adding and dropping items, adapting items to specific encounters or providers, and modifying the satisfaction categories. For example, The HMO Group survey instrument incorporates questions from the GHAA Consumer Satisfaction Survey but includes additional questions on prescriptions, lab tests, ease of choosing a primary-care physician, and hospital care. There is also a module on out-of-plan visits. Many of these modifications reflect differences in philosophy and opinion about how certain methodological issues should be handled. The shortening of the instrument by omitting items may be intended to reduce respondent burden. It may also reflect a narrower set of purposes and individual user views on what is most valuable. Although these adaptations, particularly the omission of items, make it impossible to compare plans on the nine scales, plans can be compared on matching retained items.

Adaptations of the GHAA instrument also illustrate differences in opinion about whether individuals should be asked to rate features of care they have not used; the relative emphasis on ratings of aspects of care versus reports on actual experiences; for whom the respondent should answer (e.g., self versus family); and whether satisfaction should be requested by proxy for children.

There are other bodies of work on consumer satisfaction or related measures of health plans. For example, the Bank of America survey instrument includes satisfaction ratings and factual reports on process and outcomes of care (e.g., Does this plan offer all the health services you need? How would you categorize the attitudes of doctors, nurses, and support staff serving you under this medical plan? In the past year, have you had any illness or bad reaction caused by medicine your physician prescribed?). The instrument also has items that solicit information on health behaviors that may serve as markers of adverse selection based on incidence of health risks (e.g., smoking, stress). It is distinguished mainly by its emphasis on the reporting of events rather than ratings of satisfaction, though both are included. The former have intuitive appeal to some purchasers, consumers, and health plan members. Current work is underway to identify how surveys, particularly those with consumers as the intended audience, can be better grounded in an understanding of what information consumers really use to make decisions. For example, some say that knowing which providers are affiliated with a plan is more important to consumers than is satisfaction information (Winslow, 1994).

Because survey instruments have evolved independently, plans vary considerably in the instruments they use (Table 2). However, the availability of the GHAA survey has contributed to some consistency in use of instruments among plans that have recently initiated surveys. Of the 21 survey instruments we obtained from managed-care plans, 10 draw on the GHAA satisfaction battery in whole or in part, though 3 modified the rating system (either using response categories from “satisfied” to “dissatisfied” or inventing new rating systems, such as 1 to 10 representing “unacceptable” to “excellent”). Some added items—e.g., covering access to specialist care in greater detail and satisfaction with facility appearance, staff demeanor and dress, and ease of parking. The length of these instruments varied from the 47-item GHAA survey (for a first-time baseline survey of plan satisfaction) to 9 items for a survey of satisfaction with specialist care.

Table 2. Summary of Content of Plan-Based Consumer Surveys.

Satisfaction With Aspect of Care or Service | Number of Plan-Based Surveys in Which Included
Overall Quality and Satisfaction | 21
Interpersonal Aspects | 18
Communication or Information | 18
Timeliness of Services | 16
Intention to Recommend Organization | 16
Technical Aspects | 14
Time Spent With Providers | 14
Access and Availability of Services | 13
Intention to Use Organization Again | 11
Satisfaction With Outcomes of Care | 8
Choice or Continuity | 8
Financial Aspects and Billing | 8
Physical Environment | 6

SOURCE: Gold, M., and Wooldridge, J.: Derived from 21 plan-based survey instruments collected from managed-care organizations.

Methodological Practices and Issues

Frequency, Mode, and Response Rates

Of the plans for which we have information, many reported fielding key surveys either on an ongoing basis or annually. Plan use of surveys appears to be growing, particularly as more plans aim for NCQA accreditation, as survey models become more available, and as examples of applications become more publicized. However, the sophistication, uses, and methods of surveys vary considerably across plans—for example, we identified instances of quota rather than random sampling.

Although in-person studies of satisfaction are sometimes conducted—mostly in focus groups—the predominant modes of administering plan-based surveys are telephone, mail, and mail with telephone followup. Of the 21 surveys for which we have mode information, 12 were administered by mail or mixed mode, and 9 were administered by telephone.

The advantages of mail surveys are lower expense and greater anonymity. Press (1994b) stresses the importance of anonymity in collecting objective measures of satisfaction with hospital stays, citing differences in satisfaction levels between the two modes. The disadvantage of mail surveys is that they generally yield lower response rates (often less than 50 percent, though rates increase with followup mailings). Plans reported response rates to single-mailing surveys ranging from 30 percent to 60 percent.

Plans use telephone surveys almost as often as mail surveys to collect information on satisfaction, and many of them use computer-assisted interviewing, which reduces cost. Telephone response rates can be higher than response rates to single-mailing surveys, achieved through repeat calls to those not answering the first time. The lowest response rate to a plan's telephone survey we identified was 60 percent, and some of the external surveys had telephone response rates of 70 percent or more. However, these estimates frequently reflect replacement of nonrespondents with newly drawn sample members in order to reach a target sample size. Hence, response rates for telephone surveys cannot readily be compared directly with those for mail surveys, in which such techniques are not used.
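
The arithmetic behind this caution is easy to make concrete. The sketch below uses hypothetical counts (not drawn from any plan we reviewed) to contrast a rate computed against the initial sample, as a plan replacing nonrespondents might report it, with a rate computed against every sampled member ever attempted:

```python
# Hypothetical illustration of why telephone response rates achieved by
# replacing nonrespondents are not comparable with single-mailing rates.
completes = 300           # target number of completed interviews
initial_sample = 400      # members drawn at the outset
replacements_drawn = 200  # additional members drawn to replace nonrespondents

reported_rate = completes / initial_sample                             # 0.75
all_attempts_rate = completes / (initial_sample + replacements_drawn)  # 0.50
print(f"{reported_rate:.0%} vs. {all_attempts_rate:.0%}")
```

Counting every member ever attempted cuts the apparent response rate from 75 percent to 50 percent in this example, which is why rates computed under replacement cannot be set beside mail-survey rates directly.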

Selection of Respondents

Respondents are typically plan members, though sometimes they are spouses of plan members. They are usually asked about their own health care, but in some instances they are asked to respond in a general way, which implies they are answering for the family, or they are asked to respond specifically about their children's care.

Cultural and Ethnic Diversity

We were unable to identify from the materials we collected how surveys account for cultural and ethnic diversity of members. This diversity includes non-English speakers, the possibility of low levels of literacy (particularly for mail surveys), and any cultural differences in response sets that might bias the results. One plan's approach to language on a mail survey is to express each item on the same instrument in both English and Spanish. This issue is important, particularly as managed-care penetration grows among low-income populations, some of whom speak little English.

Sample Selection

We have very little information about sampling methods for the plan-based surveys, although most plans report using random samples. Having drawn a random sample, however, some plans appear to use quota sampling to collect a specified number of responses, and others describe fielding procedures that suggest attempts to complete all of the sample initially drawn. Packer-Tursman (1994) describes the increasingly targeted sampling methods being used by Kaiser Permanente. Satisfaction data across plans that use different sampling and fielding procedures will not be comparable. In addition, the quality and utility of the data obtained by individual plans obviously depend on whether the methods minimize potential bias and provide for generalizable estimates.

Type of Measurement

Existing surveys have developed different types of measures. Some surveys emphasize ratings over reports; that is, consumers are asked to rate features of care and service rather than to report on actual events as they experience them. Ratings are more common, but interest in reports has grown because they are viewed by some as providing both the basis for more “objective” or “normative” performance standards and as potential substitutes for or complements to other sources of direct quality measurement.

Surveys differ also in the form of the scales they employ. Historically, it was common to ask respondents to rate their care on some form of satisfied-dissatisfied or agree-disagree scale. Based on research on survey design (Ware and Hays, 1988), some now use a four-point scale running from “excellent” to “poor,” which makes it easier to compare ratings across different features of care. It is also common to add a fifth category, “very good,” making this a five-point scale. Although such an approach may superficially appear unbalanced, this five-point scale discriminates better among the large majority of respondents who typically rate care as either excellent or good.

Finally, surveys differ in their emphasis on use of composite scales constructed from multiple measures rather than on use of individual items. For cross-plan comparisons of complex features of care that involve several dimensions of performance, scales are likely to provide more useful measures and more stable estimates. However, individual items may be more intuitively appealing and more useful for identifying specific aspects of performance that need improvement.
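
To make the construction concrete, the sketch below shows one common way to build a composite scale. It is a minimal illustration, not the GHAA scoring algorithm; the coding and the 0-100 transformation are our assumptions. Each EVGFP response is coded 1 through 5, the respondent's non-missing items are averaged, and the mean is rescaled to 0-100:

```python
from statistics import mean

# EVGFP coding assumed for illustration (excellent high, poor low).
EVGFP = {"excellent": 5, "very good": 4, "good": 3, "fair": 2, "poor": 1}

def composite_score(item_responses):
    """Average a respondent's non-missing EVGFP ratings and rescale to 0-100.

    item_responses: list of EVGFP labels, with None marking skipped items.
    """
    coded = [EVGFP[r] for r in item_responses if r is not None]
    if not coded:
        return None  # respondent skipped every item in the scale
    return (mean(coded) - 1) / 4 * 100  # map the 1-5 mean onto 0-100

# Four hypothetical items from one scale, one of them skipped:
print(composite_score(["excellent", "good", "very good", None]))  # 75.0
```

Averaging over several items smooths item-level noise, which is what makes scale scores more stable for cross-plan comparison, while the underlying item responses remain the better guide to which specific feature of care needs improvement.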

Generating Comparable Plan-Based Measures

Although plans have considerable experience using consumer surveys for internal management needs, the use of consumer survey data for cross-plan comparisons or other external purposes is relatively recent. These new uses raise operational issues that would not otherwise arise.5 These issues are important to address if tools such as report cards are to be practical and relevant.

Developing a Sampling Frame

Health plans typically know their membership (or at least their users, in the case of PPOs and indemnity products), and employers know their employees. However, lists that can be used to generate representative samples for the target population may not be available to other external survey sponsors (such as a community group). Such sponsors must either rely on participating plans to generate enrollment lists or samples voluntarily or use population-based survey techniques. Plans may be hesitant to provide such lists, and they may be precluded from participating because of confidentiality issues. Population-based sampling techniques are potentially feasible when enrollment is high in an area or can be predicted from known factors (e.g., ZIP Code). However, population-based sampling techniques are not generally feasible for developing estimates for a large number of individual plans, many of which may represent only a small share of the population.

Ensuring Consistent Methods and Valid Results

There are two options for developing comparative information from consumers across health plans: collect it centrally or compile plan results individually. Central collection allows for consistency in method across plans. If the central collector is regarded as objective, this option is also likely to generate more credible data. Compiling individual plan reports (e.g., from internal plan surveys) is less burdensome on the external entity and can take advantage of ongoing surveys. However, methods and results may not be comparable, and plans may have incentives to show positive results. A compromise is for a purchaser, in conjunction with its contracted health plans, to provide or agree on a standardized methodology and to develop a mechanism for validating a sample of the data each plan then submits.

Developing Plan, Purchaser, or Employer Data

Individual purchasers (or groups of purchasers) may find plan-based data specific to their enrollees of greatest interest or value. However, only the largest employers are likely to be able to conduct surveys to collect such information. Also, collecting data on each employer group can generate substantial administrative costs. Unfortunately, we know of little research comparing satisfaction across diverse purchasers, particularly those from a similar market segment (e.g., comparing scores across commercial accounts rather than between commercial group accounts and Medicaid).

Market Segmentation and Risk Adjustment

Health plans serve differing market segments; hence, the characteristics of their enrollees vary. Some differences in enrollee characteristics may be correlated with consumer responses to surveys, reflecting both objective differences (medical factors, such as health risk, or social factors, such as compliance) and differences in response tendencies (e.g., the relative importance attributed to different characteristics, or expectations). Opinions differ about whether adjusting consumer responses for risk factors is appropriate; some argue that consumer responses reflect the prevailing market and should not be adjusted. Among others who wish to compare across plans or markets, the issue is how to adjust for risk rather than whether to adjust. Proponents of risk adjustment argue that unless these differences are accounted for in the measures developed from surveys, the results may be misleading and the plan comparisons they provide biased.

Although risk-adjustment methods have been developed for payment purposes, methods appropriate for adjusting consumer satisfaction measures have not been developed; this is an area that requires further work. For those wishing to adjust for risk, the issue can be addressed by reporting measures separately for different segments (such as group versus individual enrollees, or commercial accounts versus Medicaid) or by standardizing the data to represent a common population mix across plans. However, the latter approach may not be feasible if some plans do not serve key segments of the population (in which case, there are no performance data to apply to the standardized population mix). It may also imply that different standards of performance are acceptable across the population. For different purposes, it is important to present both unadjusted and adjusted data. Again, these issues are particularly germane to public purchasers.
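
As an illustration of the standardization option, the sketch below reweights each plan's segment-specific mean satisfaction to a common population mix. The segment names, weights, and scores are hypothetical, and, consistent with the caveat above, the function returns no score for a plan that lacks data on a segment in the standard mix:

```python
# Hypothetical standard population mix used to reweight every plan's results.
STANDARD_MIX = {"commercial_group": 0.70, "individual": 0.10, "medicaid": 0.20}

def standardized_score(segment_means):
    """Directly standardize a plan's mean satisfaction (0-100) by segment.

    Returns None when the plan lacks data for a segment in the standard mix,
    since there is no performance to apply to that stratum.
    """
    if any(seg not in segment_means for seg in STANDARD_MIX):
        return None
    return sum(w * segment_means[seg] for seg, w in STANDARD_MIX.items())

plan_a = {"commercial_group": 82.0, "individual": 78.0, "medicaid": 70.0}
plan_b = {"commercial_group": 84.0}  # serves no individual or Medicaid members

print(standardized_score(plan_a))  # 79.2
print(standardized_score(plan_b))  # None
```

Regression-based adjustment, which models responses as a function of enrollee characteristics, is the main alternative to direct standardization of this kind.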

Disenrollment Bias

The same degree of dissatisfaction may generate different disenrollment behavior across plans depending on the scope of the network. At one extreme, those dissatisfied with care under indemnity coverage retain the same health insurance but switch providers. At the other extreme, those dissatisfied under a tight network-based managed-care plan with no point-of-service option may be much more likely to switch plans. Moreover, some enrollees may disenroll involuntarily because of changes in plans offered by their employer, changes in employer, or other types of loss of eligibility (e.g., among Medicaid beneficiaries). Depending on the net direction of these effects, surveying only current users or long-term members may overstate satisfaction and may lead to biased comparisons across plans and delivery systems with distinctly different designs.

Conclusions and Recommendations

There is a growing interest in plan-based measures of consumer satisfaction with access and quality. Although there is no consensus on survey content or approach, there is a growing body of work and experience that can inform future developments. The content of instruments appears to be better developed than the methods for using them. In addition, work on rating-type approaches is more advanced than work on report-type approaches. Yet, there are enough examples to conclude that it is reasonable to strive for methodologically sound, timely surveys with high response rates. The two key constraints on this effort are likely to be resources and the sophistication of users, particularly given the large number of potential sponsors and estimates desired. Current experience also suggests that item content for consumer surveys needs to be based on an understanding of the varying objectives of the surveys and that no one instrument or survey methodology can meet all needs.

Our review and analysis suggest that research and policy support can considerably strengthen the ability to develop effective plan-based surveys. Our work suggests that both increasing the availability of information on consumer-satisfaction survey methods and furthering the development of these methods are important.

Existing experience with plan-based surveys is decentralized, and communication about what is being done, and how, is ad hoc. Proprietary interests and concerns contribute to this situation because public disclosure could limit marketing opportunities or remove competitive advantages. Yet the content of many survey instruments is in the public domain, there are many ongoing efforts where disclosure would not appear to create disadvantages, and a little effort would make it easier for individuals and organizations to find out how to conduct satisfaction surveys. Useful steps would include publicly available, current compilations of existing survey instruments, documentation of their applications, and guidance to help potential users understand the strengths, weaknesses, and potential applications of alternative surveys, the batteries appropriate for each purpose, and whatever “best practices” exist for specific purposes.

AHCPR has made a useful start in designing a prototype set of survey instruments to monitor consumers' satisfaction and other aspects of care, such as amount of use, access problems, and health outcomes (Lubalin et al., 1995). This design project has developed modules for different aspects of care and is intended for different types of sponsoring organizations. AHCPR plans further development of these modules for specific populations and a long-term evaluation of the usefulness of the results of these surveys to consumers and purchasers of health plans (RFA HS-95-003).

Our review also suggests that there are several areas that need methodological study if plan-based surveys become more common. Three particularly important areas for research are:

  • Development of methods for risk-adjusting plan-based survey results. The sociodemographic mix in managed-care plans varies, often considerably. To the extent these characteristics are correlated with survey responses, they may lead to biased comparisons among health plans. From a public-policy perspective, such biases are of particular concern because they can create incentives that run counter to desirable social goals, such as serving the poor, the chronically ill, and those with special social or medical needs. Research is needed to assess whether risk adjustment makes a difference to consumer responses and, if it does, to extend current risk-adjustment work from medical to social risk factors and to adjusters suitable for survey data. In addition, alternative forms of adjustment and correction need review.

  • Short-form batteries for diverse needs. Many surveys are constrained in the number of items they can include, leading users to develop various “short forms” of items from larger batteries. Often these are developed in an ad hoc manner and not well validated. The use of diverse surveys also reduces the ability to compare across plans. A systematic study comparing the validity of existing approaches and testing alternative new short forms would be a valuable contribution (a minimal illustration of one such check appears after this list). Although such forms exist for visit and hospital services, they are much less developed for general enrollee surveys.

  • Concordance between employer-specific, group enrollment, and plan-wide estimates of satisfaction. Current trends will contribute to a proliferation of surveys for diverse populations. This can enhance consumer information but could add to administrative cost and burden. Yet there is little research to show how well more general measures predict sub-group responses and whether plan-wide measures are just as effective in discriminating among health plans based on performance.
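
As the illustrative sketch promised in the second recommendation (with invented scores; a real validation study would also examine reliability, content coverage, and subgroup performance), one can score the full battery and a candidate short form for the same respondents and see how closely they track:

```python
from statistics import correlation  # Python 3.10+

# Invented data: 0-100 composite scores for six respondents, computed once
# from the full battery and once from a candidate short-form subset of items.
full_battery = [72.0, 85.0, 64.0, 90.0, 58.0, 77.0]
short_form = [70.0, 88.0, 60.0, 92.0, 55.0, 80.0]

# A high correlation suggests the short form loses little of the ranking
# information in the full battery for this (hypothetical) sample.
print(round(correlation(full_battery, short_form), 3))
```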

In summary, consumer surveys are a valuable tool for assessing quality of care and other aspects of health plan performance, but additional work and thoughtful application will enhance their value.

Acknowledgments

This article draws substantially on work originally commissioned by AHCPR for use at a Conference on Consumer Survey Information in a Reformed Health Care System, jointly sponsored by AHCPR and the Robert Wood Johnson Foundation. The full report (Agency for Health Care Policy and Research, forthcoming) is included in the proceedings from that conference. Allyson Ross Davies provided advice on sources of information, reviewed and commented on drafts of the AHCPR work, and assisted in identifying the evolution of survey content. We also benefitted from the advice of Jill Bernstein, Terry Shannon, and Sandy Robinson on the staff. At Mathematica Policy Research, Barbara Foot, Rachel Thompson, and Sabrina Perrault provided research support. Daryl Hall edited the article. Ann Miles, Marjorie Mitchell, and Kathleen Donaldson provided secretarial support.

Footnotes

The research presented in this article is based to a considerable extent on work commissioned by the Agency for Health Care Policy and Research (AHCPR) under Contract Number 282-91-0027. The authors are with Mathematica Policy Research, Inc. (MPR). The opinions expressed are solely those of the authors and do not necessarily reflect those of AHCPR, MPR, or the Health Care Financing Administration (HCFA).

1. This may reflect a lesser extent of penetration by managed care in Medicare and the fact that there are more employers than States.

2. The results were reported in a 1991 Bay Area Consumers' Checkbook, making this survey an example of the first efforts to identify individual HMOs.

3. Gold et al. (1995) also highlighted special issues that apply to low-income and Medicaid populations, including limitations in the Medicaid eligibility files as a sampling frame, biases created by the absence of telephones, and eligibility turnover.

4. The VSQ is included as a model in the appendix of the second edition of the GHAA Consumer Satisfaction Survey (Davies and Ware, 1991).

5. Some of the same operational issues arise, however, when sub-units within plans (e.g., centers, physicians, regions) are compared.

Reprint Requests: Marsha Gold, Sc.D., Mathematica Policy Research Inc., 600 Maryland Avenue, SW., Suite 550, Washington, DC 20024-2512.

References

  1. Agency for Health Care Policy and Research. Conference Summary: Consumer Survey Information in a Reformed Health Care System. Public Health Service; 1995. AHCPR Pub. No. 95-0083. Forthcoming.
  2. Allen HM. Consumer Assessment of Health and Health Care: The Central Iowa Pilot Study. Boston: The Health Institute, New England Medical Center; Jun 1993.
  3. Allen H, Darling H, McNeill D, et al. The Employee Health Care Value Survey: Round One. Boston: The Health Institute, New England Medical Center; Jun 1994.
  4. Bank of America/Bay Area Business Group on Health: Personal communication. Aug 1994.
  5. Berwick DM. Continuous Improvement as an Ideal in Health Care. New England Journal of Medicine. 1989 Jan;320(1):53–56.
  6. Brown RS, Bergeron JW, Clement DG, et al. The Medicare Risk Program for HMOs—Final Summary Report on Findings From the Evaluation. Princeton: Mathematica Policy Research, Inc.; Feb 1993. Prepared for the Health Care Financing Administration.
  7. Cleary P, McNeil BJ. Patient Satisfaction as an Indicator of Quality of Care. Inquiry. 1988 Spring;25:25–36.
  8. Consumers Union. Health Care in Crisis: Are HMOs the Answer? Consumer Reports. 1992 Aug:519–531.
  9. Davies A, Ware JE Jr. Involving Consumers in Quality of Care Assessment. Health Affairs. 1988 Spring:33–48.
  10. Davies A, Ware JE Jr, Brook RH, et al. Consumer Acceptance of Prepaid and Fee-for-Service Medical Care: Results From a Randomized Controlled Trial. HSR: Health Services Research. 1986 Aug;21(3):429–452.
  11. Davies A, Ware JE. GHAA's Consumer Satisfaction Survey and User's Manual. Second Edition. Washington, DC: Group Health Association of America; 1991.
  12. Delmarva Foundation for Medical Care, Inc. External Review Performance Measurement of Medicare HMOs/CMPs. Easton, MD: Aug 1994. Prepared for the Health Care Financing Administration.
  13. Felt S. The First Twenty Months of the Quality Assurance Reform Initiative (QARI) Demonstration for Medicaid Managed Care: Interim Evaluation Report. Washington, DC: Mathematica Policy Research, Inc.; Mar 1995. Prepared for the Health Care Financing Administration.
  14. Francis W. Checkbook's Guide to 1995 Plans for Federal Employees. Washington, DC: Center for the Study of Services; 1994.
  15. Gabel J, Liston D, Jensen G, Marsteller J. The Health Insurance Picture in 1993: Some Rare Good News. Health Affairs. 1994;13(1):327–336.
  16. Gold M, Burnbauer L, Chu K. Half Empty or Half Full: The Capacity of State Data to Support Health Reform. Washington, DC: Mathematica Policy Research, Inc.; Jan 1995.
  17. Gold M, Hurley R, Lake T, et al. Arrangements Between Managed Care Plans and Physicians: Results From a 1994 Survey of Managed Care Plans. Washington, DC: Physician Payment Review Commission; Feb 1995. Selected External Research Series Number 3.
  18. Goldfield N, Pine M, Pine J. Measuring and Managing Health Care Quality: Procedures, Techniques, and Protocols. Gaithersburg, MD: Aspen Publishers; 1991 and 1992.
  19. Group Health Association of America. HMO Industry Profile. Washington, DC: 1993.
  20. Group Health Association of America. HMO Industry Profile. Washington, DC: 1992.
  21. HMO Managers Letter: BCBSA/Gallup Survey: HMO Member Satisfaction Tops 90 Percent for 3rd Straight Year. May 1992a, p. 5.
  22. HMO Managers Letter: Recent Surveys Find Managed Care's Popularity With Employers on the Rise. Jul 1992b, p. 5.
  23. HMO Managers Letter: Towers Perrin Survey Shows HMO Members as Satisfied as Members of Other Health Plans. Apr 1994, p. 4.
  24. Inguanzo JM. Taking a Serious Look at Patient Expectations. Hospitals. 1992 Sep.
  25. James V. Quality Assurance: The Cornerstone of Managed Care. Presented at Understanding Managed Care: An Introductory Program for New Managers in HMOs; Washington, DC: Group Health Association of America; Feb 1994.
  26. Kritchevsky SB, Simmons BP. Continuous Quality Improvement: Concepts and Applications for Physician Care. Journal of the American Medical Association. 1991 Oct;266(13):1817–1823.
  27. Kongstvedt PR. Member Services and Consumer Affairs. In: Kongstvedt PR, ed. The Managed Health Care Handbook. Second Edition. Gaithersburg, MD: Aspen Publishers, Inc.; 1993.
  28. Lubalin J, Schnaier J, Gibbs D, et al. Design of a Survey to Monitor Consumers' Access to Care, Use of Health Services, Health Outcomes, and Patient Satisfaction: Questionnaire and Survey Materials, Draft 2. Research Triangle Park, NC: Research Triangle Institute; Jan 1995. Prepared for the Agency for Health Care Policy and Research.
  29. Marshall GN, Hays RD, Sherbourne CD, Wells KB. The Structure of Patient Satisfaction With Outpatient Medical Care. Psychological Assessment. 1993;5(4):477–483.
  30. Meterko M, Nelson EC, Rubin HR. Patient Judgments of Hospital Quality: A Taxonomy. Medical Care. 1990;28(9 Suppl):S10–S14.
  31. Miller RH, Luft HS. Managed Care Plan Performance Since 1980: A Literature Analysis. Journal of the American Medical Association. 1994 May;271(19):1512–1519.
  32. Morain C. HMOs Try to Measure (and Reward) ‘Doctor Quality’. Medical Economics. 1992 Apr;69(7):206–215.
  33. National Committee for Quality Assurance. Health Plan Employer Data and Information Set, HEDIS 2.0. Washington, DC: 1993.
  34. National Committee for Quality Assurance. Report Card Pilot Project, Technical Report. Washington, DC: 1995.
  35. National Research Corporation. Satisfaction Report Card: National Results. Lincoln, NE: 1994.
  36. Office of the Inspector General. A Review of HMO Quality Assurance Standards Required by Medicaid Agencies. Washington, DC: Department of Health and Human Services; Sep 1992.
  37. Packer-Tursman J. Keeping Members. HMO Magazine. 1994 Mar–Apr;35(2):39–43.
  38. Porell FW, Cocotas C, Perales PJ, et al. Factors Associated With Disenrollment From Medicare HMOs: Findings From a Survey of Disenrollees. Waltham, MA: Brandeis University; Jul 1992.
  39. Press I. The Last Word. Hospitals and Health Networks. 1994a Mar.
  40. Press I: Personal communication. Press Ganey Associates, Inc., Jul 1994b.
  41. Press I, Ganey R, Malone M. Patient Satisfaction: Where Does It Fit in the Quality Picture? Trustee. 1992 Apr.
  42. Research Triangle Institute. Information Needs for Consumer Choice. Research Triangle Park, NC: 1994. Prepared for the Health Care Financing Administration under Contract Number 55-94-0047.
  43. Ribner S, Stewart J. 1993 Novalis National Health Care Survey: Consumer Ratings of Managed Care, A Special Report. Albany: Novalis Corporation; Oct 1993.
  44. Rubin HR, Gandek B, Rogers WH, et al. Patients' Ratings of Outpatient Visits in Different Practice Settings: Results From the Medical Outcomes Study. Journal of the American Medical Association. 1993 Aug;270(7):835–840.
  45. State of Minnesota Joint Labor-Management Committee on Health Plans. Health Plans and Medical Care: What Employees Think. 1993a.
  46. State of Minnesota Joint Labor-Management Committee on Health Plans. 1993 Survey of Employees on Health Plans and Medical Care. 1993b.
  47. Ware JE Jr, Curbow B, Davies AR, Robbins B. Medicaid Satisfaction Surveys Research (1977-1980): A Report of the Prepaid Health Research, Evaluation, and Development Project. Sacramento: California State Department of Health Services; 1981.
  48. Ware JE Jr, Hays RD. Methods for Measuring Patient Satisfaction With Specific Medical Encounters. Medical Care. 1988 Apr;26(4):393–402.
  49. Ware JE Jr, Sherbourne CD. The MOS 36-Item Short-Form Health Survey (SF-36): I. Conceptual Framework and Item Selection. Medical Care. 1992 Jun;30(6):473–483.
  50. Ware JE Jr, Snow KK, Kosinski M, Gandek B. SF-36 Health Survey Manual and Interpretation Guide. Boston: The Health Institute, New England Medical Center; 1993.
  51. Ware JE Jr, Snyder MK. Dimensions of Patient Attitudes Regarding Doctors and Medical Care Services. Medical Care. 1975;13(8):669–682.
  52. Ware JE Jr, Snyder MK, Wright WR, Davies AR. Defining and Measuring Patient Satisfaction With Medical Care. Evaluation and Program Planning. 1983;6:247–263.
  53. Winslow R. Health-Care Report Cards Are Getting Low Grades From Some Focus Groups. Wall Street Journal. May 1994:1, Section B.
  54. Zablocki E. Employer Report Cards. HMO Magazine. 1994 Mar–Apr:26–32.
