Health Care Financing Review. 1995 Summer;16(4):85–105.

Reconciling Practice and Theory: Challenges in Monitoring Medicaid Managed-Care Quality

Marsha Gold, Suzanne Felt

Abstract

The massive shift to managed care in many State Medicaid programs heightens the importance of identifying effective approaches to promote and oversee quality in plans serving Medicaid enrollees. This article reviews operational issues and lessons from the ongoing evaluation of a three-State demonstration of the Health Care Financing Administration's (HCFA) Quality Assurance Reform Initiative (QARI) for Medicaid managed care. The QARI experience to date shows the potential utility of the system while drawing attention to the challenges involved in translating theory to practice. These challenges include data limitations and staffing constraints, diverse levels of sophistication among States and health plans, and the practical limitations of using quality indicators for a population that is often enrolled only on a discontinuous basis. To overcome these challenges, we suggest using realistically long timeframes for system implementation, with intermediate short-term strategies that could treat States and managed-care plans differently depending on their stage of development.

Introduction

The massive shift to managed care in many State Medicaid programs heightens the focus on identifying approaches that can be used effectively in promoting and overseeing quality in plans serving Medicaid enrollees. Interest in quality oversight is high because: (1) the Medicaid population includes a disproportionate number of vulnerable individuals; (2) there are long-standing problems with access and quality of care in inner city and rural areas, where Medicaid beneficiaries are disproportionately located; and (3) many Medicaid initiatives, by necessity or design, build on newly developed managed-care infrastructures that are outside the mainstream of managed care (U.S. General Accounting Office, 1993; Physician Payment Review Commission, 1994; Rowland et al., 1992). Thus, internal systems for quality protection may be poorly developed. Also, there may not be what some perceive as the “protection” gained by the integration of Medicaid with commercial populations. And although most research studies on Medicaid, Medicare, and the general population indicate that quality and access in managed-care plans are typically equal to or better than under fee-for-service (FFS) arrangements, performance can be uneven, and experience shows that serious problems can arise if regulatory oversight is lacking (U.S. General Accounting Office, 1993, 1990; Brown et al., 1993; Gold, 1991).

This article draws on an ongoing evaluation of a recent HCFA-sponsored initiative—the QARI for Medicaid managed care—that aims to address these issues. QARI was developed for prepaid Medicaid managed-care options involving health maintenance organizations (HMOs), health insuring organizations (HIOs), and prepaid health plan (PHP) contracts; it was not developed for primary-care case-management models that are non-risk and FFS. Under the 24-month demonstration described more fully later, three States have been testing the QARI system for almost 2 years. The demonstration began in February 1993 and is supported by the Henry J. Kaiser Family Foundation in partnership with HCFA. It is being managed by the National Academy for State Health Policy (NASHP).

In this article, we identify operational challenges and lessons from the demonstration for the implementation of QARI-type systems. We draw these from our ongoing evaluation that began with the start of the demonstration and has, to date, included two rounds of site visits to each demonstration State, where we conducted interviews with key participants in the program. The evaluation's findings are more completely presented in two interim evaluation reports available from Mathematica Policy Research, Inc. (Felt, 1995; Felt and Gold, 1994). The experience under the demonstration is of growing interest because of the growth of Medicaid managed care and, with it, the enhanced concern for quality oversight. QARI has also gained attention as a potential tool for overseeing quality under the extensive restructuring of State Medicaid plans associated with section 1115 waivers.

Although our experience is restricted to the Medicaid population and to QARI, QARI shares many elements with other current quality initiatives. In particular, as discussed later, QARI incorporates internal quality-assurance (QA) plan standards and emphasizes the use of focused studies to develop population-based indicators that can be used to assess, improve, and monitor changes in health plan performance. Thus, our findings should provide insights relevant to a wider set of approaches, both within Medicaid and outside of it. Specifically, our findings are likely to be of interest to work in both the public and private sectors to develop and test the use of indicators and “report cards” that purchasers and consumers can use to assess the performance of alternative managed-care plans. Such related efforts include:

  • Health Plan Employer Data and Information Set (HEDIS) 2.0 and subsequent revisions, developed by the National Committee for Quality Assurance (NCQA) together with health plan and purchaser representatives to provide a common format for reporting quality, utilization, and other key information on managed-care plans and their commercially enrolled population.

  • The Report Card Pilot Project, also developed by NCQA, which identifies and pilot tests a subset of performance indicators and descriptive items from HEDIS (with a small number of additional items) intended to be more easily understood by purchasers and consumers than the full set of HEDIS indicators.

  • The Medicaid HEDIS effort, for which NCQA has a Packard Foundation grant and is working with HCFA, the American Public Welfare Association, and State and health plan representatives to adapt HEDIS to the Medicaid population.

  • The Delmarva project, a HCFA-sponsored initiative begun in 1993 to select performance measures suitable for Medicare managed care and to develop a strategy for using these indicators.

All these efforts share an interest in using indicators to monitor quality and enhance performance and may ultimately be used in tandem. For example, QARI is a more broad-based framework than HEDIS or Medicaid HEDIS and could readily be adapted to include the HEDIS or Medicaid HEDIS work on indicators. QARI defines specific indicators for immunization and pregnancy only, but Medicaid HEDIS could provide alternative specifications. Additionally, some or all of the additional indicators found in HEDIS or being developed for Medicaid HEDIS may provide States with additional quality-related information consistent with (but not specified by) QARI. Additional quality indicators found in HEDIS include, for example, rates of hospital admission for asthma, rates of pap smears for cervical cancer, and rates of annual retinal eye examinations for enrollees with diabetes (National Committee for Quality Assurance, 1993).

In this article, we first provide an overview of the QARI system, along with a description of the history and current status of the QARI demonstration and evaluation. We then summarize what we have learned so far from the demonstration about each of the main components of QARI (internal QA programs, focused studies, external review) and the State administrative capacity to implement the system. We conclude with a discussion of the key challenges identified through the QARI experience and some of their potential policy implications.

QARI System

QARI has a dual purpose: To improve the consistency of the oversight of Medicaid managed-care quality across States; and to assist States in updating and strengthening their QA systems (U.S. Department of Health and Human Services, 1993). QARI both builds on the evolving industry standards for managed care and strives to improve the consistency between Medicaid and industry standards and guidelines. HCFA's Medicaid Bureau staff initiated QARI by authorizing NASHP to convene a work group of medical directors from managed-care plans in order to evaluate a compilation of existing quality standards and propose a uniform set of guidelines for managed-care plans contracting with the Medicaid program. The system was revised subsequently based on comments from State Medicaid directors, HCFA's managed-care technical advisory group, industry representatives, consumer advocates, and other interested parties. Although the three demonstration States are required to make a good-faith attempt to implement these specifications, they serve only as guidelines for other States, pending assessment of the demonstration experience (U.S. Department of Health and Human Services, 1993).

Figure 1 summarizes the QARI system, which assumes a framework in which national standards are adapted for State-administered and State-supervised QA oversight. Conceptually, the system rests on the premise that a strong internal QA program for managed-care plans is the best front-line quality defense. QARI calls for States to establish specifications for such QA programs by adding to or modifying Federal specifications to address State-specific conditions. States monitor plans' implementation of their QA programs, and an external review organization conducts an independent assessment of quality of care. Medicaid recipients participate in the quality-improvement system at the plan and State levels; QARI aims to strengthen the voice of recipients in the system. QARI also encourages States to coordinate their quality oversight mechanisms to avoid duplication or unnecessary cost.

Figure 1. QARI's Quality-Improvement System for Medicaid Managed Care.


QARI also draws on the growing interest in Continuous Quality Improvement (CQI) as a strategy for encouraging quality care (Berwick, 1989; Kritchevsky and Simmons, 1991). Taking its cue from CQI, QARI attempts to shift the focus of both internal plan quality systems and external oversight of quality from policing “bad apples” toward broad-based cooperation and systemwide improvement. CQI involves measuring and monitoring performance indicators of quality to identify areas for improvement and appropriate actions and to track the results of these interventions. The guidelines call for targeted quality-of-care studies in specified clinical and health services areas of concern so that quality indicators can be compared with goals, clinical practice guidelines, or standards. More specifically, QARI identifies 33 clinical and 6 health services areas of concern for the Medicaid population, giving first priority to childhood immunizations and pregnancy as 2 areas that should receive ongoing monitoring. For these two areas only, QARI defines measures for initial quality indicators that are recommended for monitoring. The specific indicators and methodology specified by QARI are shown in Tables 1 and 2 alongside similar indicators specified in HEDIS.

Table 1. Quality Indicators and Methodologies Recommended by QARI and Similar HEDIS Indicators: Childhood Immunizations.

Indicators

Rate of Overall Immunization Completeness
  QARI: By month 24, full complement of DPT, OPV, MMR, and HBV
  HEDIS: By month 24, full complement of DPT, OPV, and MMR and at least 1 dose of H influenza B

Rate of Immunization Completeness for Each Type of Vaccine
  QARI: DPT, OPV, MMR, HBV, H influenza B (1 in months 13-24)
  HEDIS: DPT, OPV, MMR, H influenza B (1 in months 13-24)

Additional, Related Data Items Tracked
  QARI: Extent to which immunizations were received in-plan versus out-of-plan; documented refusal by parent or guardian; medical contraindications; two attempts to contact parent of need
  HEDIS: None

Population

Payer Group
  QARI: Medicaid enrollees
  HEDIS: Direct pay/group enrolled members (does not include Medicaid)

Age
  QARI: Were or attained 2 years of age during the most recent 12-month reporting period
  HEDIS: Attained 2 years of age during the most recent calendar year

Enrollment
  QARI: At least 6 consecutive months during the 12-month reporting period
  HEDIS: Continuously enrolled from 42 days of age (approximately 22.5 months)

Methodology

Sampling Recommendation
  QARI: At least 100 of the eligible 2-year-old enrollees
  HEDIS: Sample size unspecified; to be computed by formula to be statistically valid

How Immunization Must be Documented
  QARI: Not specified
  HEDIS: Presence of a dated order for immunization, or a provider note indicating the date an immunization was given

NOTES: QARI is Quality Assurance Reform Initiative. HEDIS is Health Plan Employer Data Information Set. DPT is diphtheria-tetanus-pertussis. OPV is polio. MMR is measles-mumps-rubella. HBV is hepatitis B. H influenza B is Haemophilus influenza type B.

Table 2. Quality Indicators and Methodologies Recommended by QARI and Similar HEDIS Indicators: Pregnancy.

Indicators

Prenatal Care Indicators
  QARI: Timing of enrollment with respect to pregnancy (preconception, first, second, and third trimester); weeks of gestation on the date of the first prenatal care visit; number of prenatal care visits from and including the first prenatal care visit to and including the last visit prior to delivery
  HEDIS: Percent of pregnant women for whom prenatal care begins in the first trimester

Pregnancy Outcomes
  QARI: Fetal losses (≥20 weeks) and live births; birth weight for live births by weight category (<500 grams, 500-1499 grams, 1500-2499 grams, ≥2500 grams)
  HEDIS: Low birth weight rate (<2500 grams); very low birth weight rate (<1500 grams)

Population

Payer Group
  QARI: Medicaid enrollees
  HEDIS: Direct pay/group enrolled members (does not include Medicaid)

Population of Women
  QARI: Delivered a live or stillborn fetus of greater than or equal to 20 weeks gestation during the 12-month reporting period
  HEDIS: Had a live birth during the calendar year

Enrollment
  QARI: No continuous enrollment requirement
  HEDIS: Continuously enrolled for 12 months prior to delivery

Methodology

Sampling Recommendation
  QARI: At least 100 of the eligible 2-year-old enrollees; sampling may be used for prenatal care indicator, with sample size computed by formula to be statistically valid
  HEDIS: Sample size computed by formula to be statistically valid

What Counts as a Prenatal Care Visit
  QARI: Not specified
  HEDIS: Any obstetrical visit; obstetrical visits to physicians, nurse practitioners, and midwives

Definition of “First Trimester”
  QARI: Not specified
  HEDIS: 26-44 weeks prior to delivery (or prior to estimated date of confinement, if known)

NOTES: QARI is Quality Assurance Reform Initiative. HEDIS is Health Plan Employer Data Information Set.
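To make the indicator logic summarized in Table 1 concrete, the sketch below illustrates, in Python, one way a QARI-style overall immunization completeness rate could be computed. It is only an illustration of the logic described above: the data structure, field names, and the assumed number of doses per vaccine are hypothetical and are not specified by QARI, which leaves dose schedules and documentation requirements to the study protocol.

    # Illustrative sketch only: a QARI-style overall immunization completeness
    # rate (Table 1). Field names, the in-memory data structure, and the dose
    # counts per vaccine are assumptions for illustration, not QARI definitions.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import Dict, List

    ASSUMED_FULL_COMPLEMENT = {"DPT": 4, "OPV": 3, "MMR": 1, "HBV": 3}

    @dataclass
    class Enrollee:
        birth_date: date
        turned_two_in_period: bool           # was or attained age 2 in the 12-month period
        longest_continuous_months: int       # longest continuous enrollment within the period
        doses: Dict[str, List[date]] = field(default_factory=dict)

    def in_denominator(e: Enrollee) -> bool:
        # QARI population: age 2 during the reporting period and at least
        # 6 consecutive months of enrollment within it.
        return e.turned_two_in_period and e.longest_continuous_months >= 6

    def complete_by_month_24(e: Enrollee) -> bool:
        # Second birthday used as the "by month 24" cutoff (day clamped to avoid
        # invalid dates such as February 29 in a non-leap year).
        cutoff = date(e.birth_date.year + 2, e.birth_date.month, min(e.birth_date.day, 28))
        return all(
            len([d for d in e.doses.get(vaccine, []) if d <= cutoff]) >= needed
            for vaccine, needed in ASSUMED_FULL_COMPLEMENT.items()
        )

    def overall_completeness_rate(enrollees: List[Enrollee]) -> float:
        denominator = [e for e in enrollees if in_denominator(e)]
        if len(denominator) < 100:
            # QARI recommends reviewing at least 100 eligible 2-year-old enrollees.
            print("Warning: fewer than 100 eligible enrollees; estimate may be unstable.")
        numerator = [e for e in denominator if complete_by_month_24(e)]
        return len(numerator) / len(denominator) if denominator else float("nan")

In practice, as discussed later in this article, the underlying dose and enrollment information would often have to come from chart review samples rather than a clean administrative extract, which is a major part of the burden plans and States reported.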

The QARI guidelines for plans' internal QA programs build on those developed by the NCQA (which uses them in its increasingly accepted accreditation program), existing HCFA Medicare standards, and those of the National Association of Managed Care Regulators. The specific areas addressed by the guidelines are shown in Table 3. A key feature of the QARI guidelines for internal QA programs is that health plans perform targeted quality-of-care studies of the type described above and use quality indicators as part of their ongoing QA program.

Table 3. Guidelines for Internal Quality-Assurance (QA) Programs: An Outline of Areas Covered.

Standard
I. Written QA Program Description
Subparts:
Goals and objectives
Scope
Specific activities
Continuous activity
Provider review
Focus on health outcomes
II. Systematic Process of QA and Improvement
Subparts:
Specification of clinical or health services delivery areas to be monitored
Use of quality indicators
Use of clinical care standards/practice guidelines
Analysis of clinical care and related services
Implementation of remedial/corrective actions
Assessment of effectiveness of corrective actions
Evaluation of continuity and effectiveness of the QA Program
III. Accountability to the Governing Body
Subparts:
Oversight of QA Program
Oversight entity
QA Program progress reports
Annual QA Program review
Program modification
IV. Active QA Committee
Subparts:
Regular meetings
Established parameters for operating
Documentation
Accountability
Membership
V. QA Program
VI. Adequate Resources
VII. Provider Participation in QA Program
VIII. Delegation of QA Program Activities
IX. Credentialing and Recredentialing
Subparts:
Written policies and procedures
Oversight by governing body
Credentialing entity
Scope
Process
Recredentialing
Delegation of credentialing activities
Retention of credentialing authority
Reporting requirement
Appeals process
X. Enrollee Rights and Responsibilities
Subparts:
Written policy on enrollee responsibilities
Communications of policies to providers
Communication of policies to enrollees/members
Enrollee/member grievance procedures
Enrollee/member suggestions
Steps to assure accessibility of services
Written information for members
Confidentiality of patient information
Treatment of minors
Assessment of member satisfaction
XI. Standards for Availability and Accessibility
XII. Medical Record Standards
Subparts:
Accessibility and availability of medical records
Recordkeeping
Record review process
XIII. Utilization Review
Subparts:
Written program description
Scope
Pre-authorization and concurrent review requirements
XIV. Continuity of Care System
XV. QA Program Documentation
Subparts:
Scope
Maintenance and availability of documentation
XVI. Coordination of QA Activity with Other Management Activity

QARI guidelines retain regulatory elements, using a combination of direct State oversight (including a second-level grievance procedure to complement the internal QA program) and an annual review of quality by an independent external entity to monitor implementation of the QA program and oversee quality. The external review entities must either validate the targeted quality studies performed by managed-care plans, complement plan efforts by conducting such studies themselves, or both.

The QARI guidelines contrast with the historical Federal requirements for Medicaid QA that contained relatively little detail. Current law requires that managed-care plans under capitation or risk-payment arrangements have a QA program and an enrollee grievance procedure, but it is not specific about what these systems need to include. The State Medicaid agency must have a grievance procedure available to Medicaid recipients as well and must sponsor an annual, independent, external review of the quality of services delivered. Thus, the specificity of the QARI guidelines (spelled out in a 67-page document) has not historically existed at the Federal level.

History and Status of QARI Demonstration

The QARI demonstration has four objectives:

  • To test the QARI system for Medicaid managed-care QA and to determine its effectiveness in monitoring quality of care.

  • To increase State capacity to implement and manage Medicaid managed-care QA reforms.

  • To provide HCFA with information that can be used to refine the QA standards according to the experience of the demonstration States.

  • To provide Congress and others with information on whether QARI can be relied on to protect the quality of care provided to Medicaid recipients in managed care, whether offered by an HMO, HIO, or PHP.

Based on a competitive request for proposals, three States—Minnesota, Ohio, and Washington—were awarded 2-year grants from the Henry J. Kaiser Family Foundation to test the feasibility and effectiveness of QARI. The demonstration began in February 1993, and the 24-month demonstration end-date has since been extended to April 30, 1995. The characteristics of these States and participating plans are summarized and compared with the Nation in Table 4. The States differ from one another in several ways, generating insight into how different kinds of States may fare under QARI. Because the selection criteria included prior experience in Medicaid managed care, the demonstration will not provide a good test of the challenge of simultaneously implementing QARI and managed care for the first time. Probably for the same reason, the demonstration includes States that rely less on HIO/PHP contracts than on contracts with HMOs. However, the selected States vary in the sophistication of their QA systems for Medicaid managed care. Thus, the demonstration should provide a test of what is involved in implementing QARI starting from a very rudimentary quality oversight system. It also will test simultaneous implementation with large, rapid increases in managed care, which is relevant today for many States.

Table 4. Comparison of Demonstration States With the Nation on Key Medicaid and Managed-Care Characteristics.

Item | Minnesota | Ohio | Washington | Nation

Medicaid Characteristics
Percent of Population Enrolled | 8.7 | 11.3 | 9.2 | 9.6
Number of Beneficiaries (in Thousands) | 380 | 1,221 | 448 | 25,255

Managed-Care Characteristics
Percent of Insured Population Enrolled in HMOs (December 1991) | 32.8 | 16.3 | 16.7 | 18.8
Change in Total HMO Enrollment, 1988-92 | 16.6 | 19.7 | 32.9 | 26.7

Medicaid Managed-Care Characteristics
Total Number of Medicaid Managed-Care Enrollees, 1993 | 97,403 | 158,656 | 35,243 | 4,808,951
Percent of Total State Medicaid Population in Managed Care, 1993 | 23 | 11 | 6 |
Number of Plans Participating in Medicaid and QARI Demonstration Model, 1993
  Staff/Group | 1 | 1 | 3 |
  Network/IPA/Unclassified | 5 | 11 | 3 |
  Total (HIOs/PHPs) | 6 (1) | 12 (0) | 6 (1) |

NOTES: Numbers in parentheses are HIOs and PHPs in total number of participating plans. HMO is health maintenance organization. QARI is Quality Assurance Reform Initiative. IPA is individual practice association. HIO is Health Insuring Organization. PHP is Prepaid Health Plan.

Minnesota

Minnesota has concentrated its implementation efforts on designing and implementing four clinically focused studies. The studies use a mixture of study methodologies involving chart reviews conducted by an external review organization (immunization), chart reviews conducted by plans using computerized abstraction software (obstetrics), administrative data (diabetes), and a survey (asthma). During the first 20 months of QARI, the State worked with managed-care plans and NCQA, its external review agent, completing the study designs and collecting most of the data. It also asked plans to complete a self-assessment survey comparing their internal QA programs with QARI standards, began working toward a Medicaid enrollee satisfaction survey to be coordinated with a broader State effort, and developed plans to hold consumer focus groups on diabetes and asthma care. Of the three States, Minnesota's existing QA system entering the demonstration most closely followed the CQI emphasis of QARI. Internal QA program requirements, less detailed than QARI but with similar overall structure, have long been monitored as part of the general HMO quality oversight through the State Department of Health. This function is coordinated with Medicaid oversight to some extent, although there is some duplication of effort. Project staff issues and contractual issues with its external review organization relating to division of responsibilities and communication have slowed and complicated implementation in Minnesota.

Ohio

Ohio had internal QA program requirements similar to QARI at the start of the demonstration and a long history of monitoring plans through on-site reviews and analysis of utilization and quality-related data. Reviews and standards were specific to Medicaid and were not closely coordinated with the State's general HMO quality oversight, which was relatively weak. In the first 20 months of the demonstration, the State developed protocols for immunization, prenatal care, and asthma studies through work groups, including health plans and other State personnel. It also: (1) asked plans to complete a self-assessment survey comparing plan internal QA programs with QARI standards; (2) revised its rules to reflect most of the QARI standards; (3) contracted to create an automated system for analyzing quality from various sources; and (4) drafted a data plan for increasing the consumer's voice in the system, including planning for a statewide client satisfaction survey. Because of the State's contract cycle and administrative delays, external review under QARI was only begun during month 20 of the demonstration. Startup was delayed for about 3 months by the need to recruit staff, with staffing not complete for about 9 months, as well as some turnover later during implementation.

Washington

Washington, which had little history of uniform statewide QA before QARI, had the most to develop of all three States in the demonstration. The State has created minimum internal QA program standards and incorporated them into health plan contracts and has also tested some initial ideas for a process to monitor the standards. It has completed several studies that provide baseline data on health plan performance in the areas of obstetrical care; early and periodic screening, diagnosis, and treatment for children; and emergency care. It has nearly completed studies on immunization, asthma, and obstetrical care. The State also has increased consumers' voice through a revised complaint handling and monitoring process and an ongoing (monthly) client satisfaction survey (analyzed quarterly). To coordinate QARI with other quality initiatives in the State, Washington holds periodic interagency update meetings and has influenced the State's health reform body to move toward a single process under health reform that would come closer to QARI than initially proposed. As in Ohio and Minnesota, implementation in Washington was slowed by staffing delays and turnovers that continue now, in part because of the lack of permanence of QARI staff positions in the State system.

Evaluation

The evaluation of the QARI demonstration focuses on comprehensively assessing the implementation and operation of the QARI system, with the aim of determining its applicability for broader use and what revisions would be desirable. The evaluation began at the start of the demonstration and continues through June 1996. It extends beyond the end of the demonstration to provide additional time to study system effects and determine how States use QARI features once the formal demonstration ends. The implementation component of the evaluation examines the process, content, feasibility, and burden of implementing QARI at both the State and plan levels and, based on State and plan experiences, seeks to determine what revisions to QARI or additional guidance are needed. The second major component examines, to the extent feasible, the system's effect on quality of care. This includes a qualitative assessment of how well the system works to identify and correct problems as well as some quantitative analysis of quality indicators. The latter is heavily constrained by data limitations, by operational features of the demonstration (e.g., the lack of comparison groups), and by the limited timeframe. This article focuses exclusively on information related to the first study objective.

The lessons presented draw on two rounds of site visits (Fall 1993 and Fall 1994) to the demonstration States. The project team interviewed the relevant stakeholders, including demonstration and other State Medicaid agency staff, officials from other agencies or units with related quality oversight responsibilities (typically from the health department and the insurance department), advocacy groups for low-income populations, external review organization staff, and many representatives of health plans. We used a semi-structured interview protocol to ensure information was obtained on the key topics across all States and similar types of interviewees.

To examine in more depth implementation at the health plan level, we selected three diverse managed-care plans in each State for more intensive study. We made an initial and a repeat visit and will make a third visit. These visits involved a series of interviews with key QA and other QARI-related staff on QARI implementation and on the structure and functioning of their internal QA programs. Because of the small number of plans in the demonstration States at the start of the demonstration, the case-study plans represented one-half of the plans in Washington and Minnesota and one-fourth of the plans in Ohio. We purposely selected a cross-section of plans to obtain a mix of plans in terms of their size, data capability, model type (group, staff, network, IPA, or mixed), and emphasis on Medicaid. To obtain some input from all of the health plans in the demonstration, we held a group interview (by conference call or in person) with representatives of the health plans not selected for individual study. This involved discussion of the same topics but on a more general level. In addition, we participated in monthly conference calls with the State QARI staff convened by NASHP. These discussions provided insights into the challenges of, influences on, and processes of the QARI demonstration.

The varied mix of States and plans visited suggests our findings should be of broad-based relevance. However, because of the small number of States and plans involved and the case-study methodology, the lessons we draw cannot be assumed to apply to all States and health plans—there are likely to be additional issues that would surface in other types of States. In interviewing plans about their internal QA programs, we did not attempt to directly assess compliance with each QARI guideline, nor did we conduct any clinical review of quality—a technique that would require duplication of and, in effect, auditing of the already existing external reviews by the State or its peer review organization. Rather, we discussed with plan staff the major components of their QA programs and any changes underway; we sought concrete examples of how their system had functioned in the past year to identify and address problems. We selectively reviewed related documentation and obtained related results from States or plans, such as plans' self-reports of compliance with QARI guidelines and results of State monitoring activities. Thus, our analysis reflects both plan perceptions and our independent judgments on these perceptions, given a variety of sources of input and information.

Lessons Learned

Internal Requirements

Our initial results are encouraging with respect to QARI's requirements for internal QA programs. Focusing on the internal QA program appears consistent both with current regulatory requirements and with industry trends. The fact that QARI's internal standards are perceived by the managed-care industry to reflect NCQA requirements has made them more acceptable, because a growing share of plans appear to have decided to commit the resources needed to bring their systems up to these standards within the next few years. As of May 1995, 149 plans had received NCQA accreditation and 23 had been denied accreditation, with 33 decisions pending and 102 reviews scheduled.

The experience under QARI suggests that the extent of change QARI would require of QA programs will differ among managed-care plans and States. As previously discussed, we are conducting annual on-site interviews with three diverse plans in each State and a group telephone interview with the others as part of our evaluation. Of the total of nine plans visited on site, four appear very close to meeting QARI standards. That is, State QARI staff believe they are generally in compliance, based on a checklist self-review and/or discussions with the plan on their QA program; during our discussions with key QA staff at the plan, we found: (1) no or only very minor apparent inconsistencies with QARI, in terms of the structure of the QA program,1 and (2) examples of good functioning of the QA program in terms of problem identification, assessment, correction, and followup using focused studies and other techniques consistent with QARI.

It appeared that the other plans would need to make moderate to extensive changes to meet QARI standards. Two of the nine plans appear to have programs moderately close to QARI. These plans self-reported meeting many of the QARI standards, but through our discussions with QA staff, we noted (and they agreed) inconsistencies with QARI of a degree greater than the minor inconsistencies previously described. For example, one plan had purchased software for credentialing and recredentialing but was not yet in compliance with much of this standard. Another acknowledged that its QA program did not function well enough to follow up on issues identified through its extensive system of clinical indicators.

The last three plans of the nine we visited self-reported (and our visit confirmed) that much change would be needed to implement QARI. One of these plans is located in each of the three States, and all three have enrollee populations that are nearly 100 percent Medicaid. One of the three had a well-developed QA program in the past but has completely changed its program to accommodate organizational changes and had not implemented the new system at the time of our last visit. In the interim year, there was little in the way of a functional QA program because of the intense QA planning activities. The second of these plans has hired a new medical director and quality manager to focus heavily on quality-improvement issues as it gears up to expand its managed-care population. The third plan is a small, Medicaid-only plan with the most limited resources of any we visited. The additional plans we interviewed in each State as a group generally reflected this distribution as well, varying widely in the amount of change they would need to implement QARI.

Most plans indicated that they were making a variety of improvements to their QA programs, spanning most of the QARI standards, as shown in Table 5. Of the nine plans visited onsite, seven had made significant recent improvements in their internal QA programs. For example, even the well-established network/IPA plans we visited had not had a tradition of site visits to primary-care providers as a part of credentialing; this is a fairly major program to implement. In addition, changes in QA programs are needed as organizational changes occur (e.g., mergers, shift in model type). In four of the seven plans making substantial changes, the improvements were primarily in response to QARI. In two plans, the improvements were attributed primarily to meeting NCQA standards, and in one, changes were prompted by growth in its Medicaid population. In general, internal QA programs appeared most developed in older, well-established plans. Newer plans and plans with a heavy Medicaid focus that have not had strong external oversight generally had less developed systems.

Table 5. Types of Substantial Changes Made to QA Programs This Year in the Nine Health Plans Visited.

QARI Standard1 / Type of Change2

Written QA Program Description
  • Revised written QA Program to meet QARI and NCQA standards
  • Developed new QA Program

Systematic Process of Quality Assessment and Improvement
  • Tracked quality indicators and conducted focused studies for the first time; implemented dry run of new “value incentive,” which should encourage providers and improve immunization rates, patient satisfaction, and medical records documentation
  • Developed extensive software to assist in implementing the QA program
  • With QARI, is implementing focused studies; developed a mental health service plan to follow up on identified problems; began financial reward and followup system to improve immunization
  • Reviewing/evaluating this year's QA activities; conducted outcomes studies
  • Drafted internal “performance targets” for Medicaid on quality indicators; created a monitoring and evaluation team for Medicaid

Accountability to the Governing Body
  • Improved structure of QA reporting to management
  • Formally reported to management on QA issues for the first time

Active QA Committee
  • Established an active QA Committee

Adequate Resources
  • Hired a quality-management coordinator with experience in Total Quality Management; hired an assistant medical director to focus on practice guidelines; hired a staff person to visit physician offices to do site inspections and medical records assessments
  • Increased staff time devoted to QA

Provider Participation in the QA Program
  • Formally reported to physicians on QA activities for the first time

Delegation of QA Program Activities
  • Made some revisions to accommodate a merger, e.g., establishing oversight where functions are delegated
  • Tightened oversight of delegated QA for mental health

Credentialing and Recredentialing
  • Planned a new credentialing process better integrating QA information
  • Hired a staff person to visit physician offices to do site inspections and medical records assessments
  • Conducted onsite reviews of safety and medical records for one-half of plan providers; purchased software for credentialing/recredentialing
  • Expanded onsite reviews of provider offices (more than 700 reviews completed for credentialing); initiated primary source verification and made the program more active, drawing on QA data and engaging in substantive debate
  • Completed credentialing changes (primary source verification)

Enrollee Rights and Responsibilities
  • Revised beneficiary and provider handbooks; revised complaint system
  • Revised member complaint handling process to involve the QA coordinator
  • Revised member rights/responsibilities materials

Medical Record Standards
  • Conducted medical records inspections in provider offices for the first time or on a greatly expanded basis (2 plans)
1 No changes were identified for standards V, XI, and XIII-XVI.

2 Under each QARI standard, each entry represents a separate health plan.

NOTES: QARI is Quality Assurance Reform Initiative. QA is quality assurance. NCQA is National Committee for Quality Assurance.

SOURCE: Felt, S.: Analysis of information collected from 9 health plans visited during the Fall 1994 round of evaluation site visits.

Other issues relate to the consistency between QARI and other quality initiatives such as NCQA, State licensure, and Medicare's Delmarva project. Participating plans expressed a strong interest in a consistent set of requirements as a means to reduce administrative burden. Both small substantive inconsistencies between NCQA and QARI internal QA program requirements and the perceived burden of duplicative reviews were of concern. Consistent with a concern for duplication, many plans expressed the desire that NCQA accreditation allow them to be granted “deemed” status. Using NCQA or similar accreditation as an optional substitute to verify the internal QA program for the Medicaid agency was viewed by plans as an effective means to serve the needs of multiple payers and to reduce the administrative burden of multiple onsite reviews. HCFA and States will face a number of issues as they make decisions about deeming, including (1) whether there is a need for a “validation” survey at a sample of plans or for participation or observation of surveys by State staff, and (2) whether and how accreditation organizations other than NCQA could qualify to provide accreditation that would have deemed status. Even more fundamental is the issue of whether NCQA accreditation should exempt a plan from other QARI requirements, such as those involving any other payer-specific external review. This requires judgment about whether any additional review is warranted in light of the vulnerability of the Medicaid population—a policy that could be debated.

Finally, there is the question of coordination between internal QA program requirements for Medicaid and those for State licensure, as well as the more fundamental issue of what role the Medicaid agency, as a purchaser, should have in quality monitoring, independent of systemwide approaches. Although coordination or complete integration would reduce administrative burden, Medicaid agencies are typically in a different department from the staff charged with general quality oversight for managed care. As a result, bureaucratic barriers to developing consistent approaches may exist.

Areas of Interest/Clinical Indicators

As with QA programs, the concept of studies focused on important clinical areas and other areas of concern also appears to have widespread support. The feasibility of this approach, however, remains an outstanding issue, largely because of data issues that relate both to data availability and to the validity of the measures the data support.

All three demonstration States are using chart review to obtain a large amount of the data needed to compute the indicators for their focused studies because administrative data are limited and lack key clinical data elements. Charts are also limited to the extent key items are not recorded (e.g., immunizations received from non-plan providers). Charts also are less useful for trending or complex analysis and are a more time-consuming, costly, and intrusive data source. Within the context of the QARI demonstration, participating plans appear willing to perform chart reviews, which they also use internally. However, many expressed reservations about the feasibility of this approach as a permanent strategy, particularly if studies are to be designed with sufficient power to generate useful results. Administrative burden was viewed as particularly problematic to the extent that the required studies' content or methods are inconsistent with those desired by other payers and with the plan's internal studies on the same topics. Some effort is underway in the demonstration for plans to compare administrative data (e.g., on immunization) with data from charts. Results so far show big differences between the two sources. These findings are consistent with those recently reported by the Report Card Project (National Committee for Quality Assurance, 1995). These discrepancies lessen the comfort States have with using indicators from administrative data.
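As a purely illustrative sketch of this kind of source comparison (the enrollee identifiers and completeness flags below are hypothetical, not demonstration data), rates can be computed from matched records in each source and enrollee-level agreement examined:

    # Hypothetical example: comparing immunization completeness as recorded in
    # administrative (claims/encounter) data versus chart review for the same
    # enrollees; the values are made up for illustration only.
    admin_complete = {"A01": True, "A02": False, "A03": False, "A04": True}
    chart_complete = {"A01": True, "A02": True, "A03": True, "A04": True}

    matched = sorted(set(admin_complete) & set(chart_complete))
    admin_rate = sum(admin_complete[i] for i in matched) / len(matched)
    chart_rate = sum(chart_complete[i] for i in matched) / len(matched)
    agreement = sum(admin_complete[i] == chart_complete[i] for i in matched) / len(matched)

    print(f"Administrative-data rate: {admin_rate:.0%}")   # 50%
    print(f"Chart-review rate:        {chart_rate:.0%}")   # 100%
    print(f"Enrollee-level agreement: {agreement:.0%}")    # 50%

The point of such a comparison is simply that rate differences and record-level disagreement can be quantified; the magnitude of the discrepancies actually observed in the demonstration is what lessens States' comfort with administrative data.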

All three States were overly optimistic about how long it would take to initiate focused studies and how many areas could be covered simultaneously. To some extent, the delays are a function of the demonstration because protocols had to be developed to operationalize and refine the measures, and these protocols and refinements would presumably be readily available to subsequent users. However, in at least two States, work groups were viewed as important in building consensus for measures and adapting them to the State environment. Thus, extensive planning and group work may be needed even after the measures are refined and tested. Different opinions exist about the respective merits of adopting national standards or adapting them to local interests in order to achieve buy-in to improvements (Field and Lohr, 1992).

The demonstration raised the issue of whether (at least in theory) the same indicators are appropriate for Medicaid and privately insured individuals, an issue also being considered in the Medicaid HEDIS project. Because indicators are viewed as reflecting quality standards, many health plan participants perceived that the same indicators could and should be used for both populations. For example, assessing social risk factors was viewed as important for all pregnant women, not just Medicaid enrollees; and although violence could be considered an issue for disadvantaged populations, it was recognized as a relevant health influence on all groups of enrollees. However, even though the same clinical indicators could be used, separate measures for each population were viewed as potentially desirable because of differences in such factors as social risk and patient compliance that plans cannot easily control. On the final, forthcoming round of site visits, the evaluation team will assess whether plans' views remain consistent with this initial view once specific Medicaid HEDIS indicators are proposed and discussed. Even if a single set of indicators is defined to be applicable across all populations, priorities for both measurement and programmatic intervention could vary with the mix of enrollees by payer because of demographic or risk-factor variation. Demonstration States also desired regulatory flexibility so that they could change what they measured over time to accommodate study results and changing priorities.

Technically, an important constraint on feasibility highlighted by the demonstration (and also being considered in the Medicaid HEDIS project) concerns the validity of an indicator-based approach when eligibility turnover is extensive. Indicators work best for continuously enrolled populations because plans can most readily be held accountable for their care. Because many Medicaid enrollees are only intermittently eligible for the program, and also may account for a small proportion of a given plan's enrollment, measures for populations continuously enrolled for long periods of time typically involve small numbers, biased samples, and unstable estimates. For this reason, QARI uses shorter periods of continuous enrollment than does HEDIS for the commercial population. However, the impact of turnover remains a serious concern to the demonstration States.
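A simple, purely illustrative calculation (the monthly retention figure is an assumption for illustration, not a demonstration finding) shows why the length of the continuous-enrollment requirement matters so much when eligibility turns over:

    # Illustrative only: how a longer continuous-enrollment requirement shrinks
    # the share of an entering cohort that remains in the indicator denominator.
    # The monthly retention probability is an assumed value for illustration.
    monthly_retention = 0.93

    for required_months in (6, 12, 24):
        share_remaining = monthly_retention ** required_months
        print(f"{required_months:>2}-month requirement: "
              f"{share_remaining:.0%} of the cohort still continuously enrolled")

Under this assumption, roughly two-thirds of an entering cohort would still qualify after a 6-month requirement but well under one-fifth after 24 months, which is the logic behind QARI's shorter enrollment windows: they preserve larger and less selected denominators for plans serving a high-turnover population.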

External Review Requirements

The external review component of QARI may be the one on which consensus is most difficult to achieve. For example, in public comment on the draft guidelines for QARI, advocates for beneficiaries wanted more oversight, but Group Health Association of America staff noted concern over the disparity between this and oversight requirements for other payers. Each of the three States is implementing this component differently and bringing its distinct philosophies and emphases to bear.

As required by current Federal Medicaid law and QARI, each State is using review agents. Minnesota stresses technical assistance and plan support, with the State working closely with the external review agent and set of plans. Washington is also oriented toward cooperation, but because it has a much shorter history of cross-plan work and State staff activity, the external review agent plays a more dominant role in specifying the data to be reviewed, for example. Ohio's pre-QARI system was highly regulatory. Its external review included very detailed standards and extensive corrective action plans. For example, one plan commented that, despite a score of 98.5 out of a possible 100 on the review, it still had to write a lengthy corrective action plan for several items in which it was deficient, a burden the plan perceived as inappropriate. Although Ohio has not yet implemented this QARI component and some technical assistance is planned, the emphasis on monitoring and corrective action plans is likely to be maintained.

State Administrative Capacity

State administrative capacity has constrained the speed and scope of QARI implementation, although the amount of resources required of a State does not appear unreasonable in light of the objective. Although staffing needs may change once the system is fully implemented, States have been using from 1.5 to 4 full-time equivalents internally for implementation, which has to some extent involved a reallocation of State resources. Washington, which staffed in the middle of this range, perceived that it could have used more resources, given the absence of a previous base on which to build. None of the States was able to fully staff up immediately, and staffing issues in all of the States have slowed the demonstration considerably. The external review component has not thus far affected States' costs because an external review requirement already existed. The two States that have already adopted QARI requirements for external review said they were able to implement this component within the budget they had previously used for this contract. Washington faces an increasing challenge to its external review resources, however, because the number of plans requiring oversight is increasing dramatically (from 6 plans last year to 16 plans this year) with rapid growth in Medicaid managed care. Even though State budgetary and personnel constraints could limit the ability to secure necessary QARI resources when demonstration support is not available, the required resources do not appear to be particularly great. However, this appears likely to change if Medicaid enrollment grows substantially and, with it, the burden of external review and oversight.

Bureaucratic and other barriers appear to limit the ability of QARI to become integrated with more general systemwide quality monitoring. The Medicaid agency typically is distinct from the State agencies charged with general oversight of managed care and may have little power or influence over them. For example, in Ohio, the Medicaid agency and State health department both conduct on-site reviews of HMOs. The health department's triannual review is based on far less stringent standards than the Medicaid agency's annual review and has often been conducted by personnel with no clinical training. For Medicaid-serving HMOs, the health department review completely or nearly completely duplicates the review done that year by the Medicaid agency. Yet, despite discussions between the Medicaid agency and health department on whether the health department could deem the Medicaid review sufficient for its purposes, the health department has no plans to change its review policy for the sake of coordinated reviews. Health plans conclude the issue is one of “turf” and continue to tolerate the duplicate reviews in order to remain licensed in the State.

Key Challenges in Quality Oversight

Based on the QARI experience, we identify and discuss here six key challenges in quality oversight for Medicaid managed care and their policy implications.

Setting Realistic Timeframes

Early QARI experience shows that considerable lead time is needed to develop quality requirements that go beyond what is now in place. Staffing delays, contractual and regulatory timelines, and process requirements associated with communication and achieving buy-in among all participants lengthen the time that States will need to implement new quality oversight requirements. The demonstration experience suggests that new plans or plans with little QA now will require several years or more to fully develop internal QA programs. Most plans will probably fall short on at least some requirements, although the time needed to address them may become shorter given current efforts to address quality issues. The experience in the three demonstration States suggests that policymakers should not assume that licensed HMOs have fully developed internal QA programs. New and smaller plans, especially those not serving commercial populations, may need special oversight, as their timeframe for implementation could be longer and their capacity to undertake such changes more limited. Policymakers also should not assume that all or even most health plans will be able to rapidly generate a large number of quality indicators.

Being Forthright About Data Limitations

Administrative data, at least in the short to mid-term, will constrain the feasibility of, and increase the administrative burden associated with, implementing QARI or almost any quality oversight system that reflects current views on best practice. Current administrative records in many managed-care plans do not adequately support the development of important clinical indicators, yet chart reviews are perceived as administratively burdensome and flawed. Data inadequacy is slowing plans' ability to generate clinical indicators such as those in QARI or HEDIS, and this limits the number that can reasonably be requested at any one time. These factors reduce the utility of indicator-based approaches at present. Within Medicaid, additional and critical limitations arise from the lack of continuous enrollment that results from eligibility turnover. Turnover means that data are lacking for periods of time and that it is hard to hold plans accountable when they are only partially responsible for care. Other issues arise for Medicaid because, even under managed care, responsibility remains split, and in the demonstration areas not very well coordinated, between health plans and traditional public health programs.

Creating Flexible Approaches That Account for Diversity

QARI illustrates the diversity in existing regulatory experience with Medicaid managed care across the States, even though the demonstration, by design, included only States with existing experience in Medicaid managed care. The experience in Washington suggests that QARI, as currently written, would provide a relatively demanding “floor” for States with little prior experience with managed care and quality oversight, depending on the timeframe provided for implementation. Yet lesser standards could raise issues of appropriate quality and oversight in Medicaid managed care. In today's environment of rapid implementation of Medicaid managed care, particularly in States with limited prior managed-care experience, the challenge of establishing an effective quality oversight system is particularly crucial.

Similar issues arise from the differences among health plans participating in Medicaid managed care. The QARI experience suggests that Medicaid oversight needs are greatest for less established plans and for those serving solely the Medicaid population, which lack the commercial enrollment that generates outside pressure to enhance systems in order to gain accreditation. Yet these are also the plans that may have the greatest difficulty in meeting QARI standards, both because more new elements need to be developed and because, in some cases, their resources are more constrained.

The preceding suggests that equitable and effective approaches may involve treating States and managed-care plans differently depending on their stage of development. The fact that a State has more limited experience in quality oversight of managed care in Medicaid or more generally perhaps should be considered in granting waivers, approving timeframes, attaching conditions to performance, or targeting technical assistance. Also, States might be provided authority to distinguish among plans based on objective measures of the existing state of their QA system. NCQA or similarly accredited plans, for example, could be exempted from some requirements and could be subject to less Medicaid-specific oversight.

Quality Approaches That Can Evolve

The preceding three challenges imply that QARI as a system will not be fully operational in most States and many health plans for some time. An obvious issue, therefore, is how to respond in the near term in establishing requirements for health plans participating in Medicaid initiatives. This issue is of special interest because some plans with less developed systems may be important providers for the poor and because some States, under severe fiscal pressure, are eager to move relatively rapidly in implementing managed-care initiatives.

Our evaluation suggests that intermediate-level approaches could be important in providing short-term protection as more sophisticated systems evolve. This could be accomplished in diverse ways, including targeted implementation of managed-care initiatives over time, restrictions on the amount of financial risk plans can assume until they meet certain requirements, and targeted monitoring and onsite review.

Currently, these concerns are addressed in part under Medicaid through the 75/25 requirement, which, with some exceptions (such as for federally qualified health centers), obliges participating health plans to maintain a 25-percent commercial enrollment on the theory that this will generate pressure to maintain adequate quality. However, the 75/25 requirement has been criticized as a poor proxy for quality and, in any case, is becoming less of a central issue as States move to managed care under section 1115 waivers that allow this requirement to be waived.

In responding to these issues, Washington has, for example, allowed 3 years for full implementation of the QARI plan standards and established a pared-down set of requirements for plan contracts as a minimum “floor” to be met prior to contracting. The State intends to encourage plans to move from the minimum level to more fully developed systems through its 3-year requirement, State monitoring, technical assistance, and competitive advantages for plans with more fully developed QA programs.

Addressing Inconsistency Issues Across Payers

A strong theme from health plans we visited involved reducing duplicative efforts and inconsistencies in quality oversight requirements across payers to reduce administrative burden on plans. Examples included on-site reviews by State licensing bodies, Medicaid, Medicare, and others; payer-specific requirements that diminished the ability to establish plan-wide consistent policies across payer groups; and data requirements or inconsistent definitions of clinical indicators that added to the cost of administrative systems. Tables 1 and 2, for example, showed the many slight differences in specification between QARI and HEDIS indicators for childhood immunization and pregnancy. These inconsistencies occur within as well as across States. For example, Medicaid agencies may have little communication with insurance departments, and there could be bureaucratic barriers in coordinating with the health department, as we found within the demonstration States. Yet in some States, technical expertise or resources may be insufficient to support two independent efforts.

These issues are important to weigh in considering the use of QARI within Medicaid managed care. Our experience provides both positive and negative insights into the likelihood and desirability of developing a more coordinated systemwide approach to oversight of managed care. The fact that QARI's internal QA program requirements were developed to be consistent with those of NCQA and others (and, in fact, are viewed that way by health plans) should make it easier to consider approaches that build on NCQA or equivalent accreditation where applicable. But our evaluation has also identified some barriers to more coordinated systemwide efforts.

One such barrier involves determining a set of clinical indicators (such as currently exists in QARI or HEDIS, or is under development in Medicaid HEDIS) that can apply equally to Medicaid and commercial populations. As previously discussed, our interviews suggest a common belief among managed-care plans that the same indicators are relevant to all enrollees, even though the resulting measures, and plans' ability to influence them, could vary. The Medicaid HEDIS project in part follows this logic, seeking to determine which, if any, of the HEDIS indicators developed for the commercial population are also appropriate for monitoring quality of care for the Medicaid population. However, our understanding is that the project partially departs from this logic in that it also seeks to define any additional indicators needed to monitor the Medicaid population. Although the areas of concern addressed in HEDIS overlap substantially with the areas of concern for the Medicaid population listed in QARI, the HEDIS indicators have not been tested on, or even considered specifically for, the Medicaid population. The Medicaid HEDIS group may therefore find that some of the specific indicators do not work well for this population, or it may conclude that additional indicators are desirable given this population's vulnerability. Although such a departure from HEDIS may prove warranted, it will leave a separate group, the one refining the HEDIS indicators for commercial populations, to decide whether and how to incorporate any such additional indicators into the next version of HEDIS. In short, the current efforts to develop indicators for commercial, Medicaid, and Medicare populations aim for some consistency, but consistency is not assured because each group has its own development process and focus.

A second, and more intractable, barrier to coordination relates to the lack of continuous enrollment in managed care within the Medicaid population, largely because of eligibility turnover. To the extent that this reflects a structural reality unlikely to change soon, it limits the ability both to use focused studies and clinical indicators in quality oversight of Medicaid managed care and to define the denominator population of interest (e.g., all those continuously enrolled for 24 months) in a way consistent with the measures used by other payers. However, rather than suggesting that the clinical indicator approach is flawed, this may simply highlight more fundamental conflicts, such as the tension between managed-care objectives and eligibility turnover.
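A minimal sketch of the denominator problem, using invented enrollment spells and a 24-month continuous-enrollment requirement of the kind mentioned above: members whose Medicaid eligibility lapses and resumes never accumulate the required continuous enrollment, so they drop out of the measurable population.

```python
from datetime import date

# Hypothetical Medicaid enrollment spells (start, end) per member, reflecting
# the on-and-off eligibility that is common in the program.
spells = {
    "member_a": [(date(1993, 1, 1), date(1994, 12, 31))],   # 24 continuous months
    "member_b": [(date(1993, 1, 1), date(1993, 8, 31)),
                 (date(1994, 2, 1), date(1994, 12, 31))],    # eligibility lapses, then resumes
    "member_c": [(date(1994, 3, 1), date(1994, 12, 31))],    # enrolled too recently
}

def months_of_longest_spell(member_spells):
    """Length, in whole months, of the member's longest single enrollment spell."""
    longest = 0
    for start, end in member_spells:
        months = (end.year - start.year) * 12 + (end.month - start.month) + 1
        longest = max(longest, months)
    return longest

required_months = 24
denominator = [m for m, s in spells.items()
               if months_of_longest_spell(s) >= required_months]
print(denominator)  # only member_a qualifies; turnover shrinks the measurable population
```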

A third barrier relates to the plans participating in Medicaid managed care, some of which do not serve other populations or may not be subject to the same licensure standards. Coordinated, integrated approaches are obviously difficult to achieve when these conditions apply. On the other hand, these distinctions arise from longstanding barriers to access and supply in inner-city and low-income areas and may not change readily.

Disparity in Oversight and Regulation Attitudes

Both Medicaid and Medicare historically have imposed stronger regulatory requirements on their participating managed-care plans than private purchasers have, though this may be changing as private purchasers request more detailed information and increasingly ask about plans' quality systems and data. Private purchasers historically relied on the general regulatory protection of State licensure or Federal HMO qualification and, more recently, on voluntary accreditation through NCQA or similar entities. In contrast, QARI includes external oversight within its system, consistent with the historical and legal requirements of the Medicaid program. In addition, some policymakers view oversight as critical given the vulnerability of the Medicaid population, the historical problems of access, and the incomplete and uneven development of quality systems in participating plans. However, such perspectives may be increasingly challenged in today's environment, which emphasizes State discretion and the elimination of regulatory requirements.

In conclusion, the QARI demonstration is providing an invaluable opportunity to study the feasibility of various approaches to quality oversight. The challenge for government and the private sector is to draw on these experiences to create a truly workable and effective system that performs in practice as well as in theory.

Acknowledgments

Dennis Beatrice of the Henry J. Kaiser Family Foundation developed and oversaw this project. Marsha Lillie-Blanton currently performs this role. Barbara Kehrer, previously with the Kaiser Family Foundation, identified important questions for the evaluation. NASHP directs the demonstration, which is funded by the Kaiser Family Foundation and cosponsored by HCFA. HCFA, working with HMO medical directors, NASHP, and others, developed the system that is the basis of the demonstration. Trish Riley and Maureen Booth of NASHP, Greg Scott of HCFA, Deborah Bachrach and Patricia MacTaggart of the Minnesota Department of Human Resources, Jennifer Lopez of the Ohio Department of Human Services, and Casey Zimmer and Michael Cooper of the Washington Department of Social and Health Services provided insights useful for the manuscript. We are appreciative of the time and insights provided by these individuals as well as others in the State agencies, external review organizations, and managed-care plans in these three States. Joel Schectman, M.D., of George Washington University Medical School served as medical consultant to the project and provided insights and suggestions. The article has also benefited from suggestions and advice by Robert Hurley of the Medical College of Virginia, Embry Howell of Mathematica Policy Research, Inc., and David Colby of the Physician Payment Review Commission.

Footnotes

The research presented in this article was supported by the Henry J. Kaiser Family Foundation under Grant Number 92-1201. The authors are with Mathematica Policy Research, Inc. The opinions expressed are those of the authors and do not necessarily reflect those of the Henry J. Kaiser Family Foundation, Mathematica Policy Research, Inc., or HCFA.

1. The following is an example of a minor inconsistency: at present, a plan has not taken steps to ensure that its patients' medical records document whether or not the individual has executed an advance directive (a small component of Standard XII in QARI, which addresses documentation of a living will or durable power of attorney for health care). However, the plan is preparing to implement this over the next few months.

Reprint Requests: Marsha Gold, Sc.D., Mathematica Policy Research, Inc., 600 Maryland Avenue, SW., Suite 550, Washington, DC 20024-2512.

References

  1. Berwick D. Continuous Improvement as an Ideal in Health Care. New England Journal of Medicine. 1989 Jan;320:53–56.
  2. Brown R, Bergeron J, Clement DG, et al. The Medicare Risk Program for HMOs—Final Summary Report on Findings from the Evaluation. Prepared for the Health Care Financing Administration. Princeton, NJ: Mathematica Policy Research, Inc.; Feb. 1993.
  3. Felt S. The First Twenty Months of the Quality Assurance Reform Initiative (QARI) Demonstrations for Medicaid Managed Care: Interim Evaluation Report. Washington, DC: Mathematica Policy Research, Inc.; Mar. 1995.
  4. Felt S, Gold M. First Annual Evaluation Report on the Demonstration of a Quality Improvement System for Medicaid Managed Care (QARI). Washington, DC: Mathematica Policy Research, Inc.; Apr. 1994.
  5. Felt S, Gold M. First Report on the QARI Demonstration: A Profile of the QARI Initiative and State Demonstrations, and Discussion of Issues for the Demonstration and Evaluation. Washington, DC: Mathematica Policy Research, Inc.; 1993.
  6. Field MJ, Lohr K, editors. Guidelines for Clinical Practice: From Development to Use. Washington, DC: Institute of Medicine, National Academy Press; 1992.
  7. Gold M. Health Maintenance Organizations: Structure, Performance and Current Issues. Journal of Occupational Medicine, Special Issue on Assuring Value in Health Care. 1991 Mar;33:288–296.
  8. Kritchevsky SB, Simmons BP. Continuous Quality Improvement: Concepts and Applications for Physician Care. Journal of the American Medical Association. 1991 Oct;266:1817–1823.
  9. National Committee for Quality Assurance. Report Card Pilot Project Technical Report. Washington, DC; 1995.
  10. National Committee for Quality Assurance. Health Plan Employer Data and Information Set and Users Manual, Version 2.0. Washington, DC; 1993.
  11. Physician Payment Review Commission. Annual Report to Congress, 1994. Washington, DC; Mar. 1994.
  12. Rowland D, Feder J, Lyons B, Salganicoff A. Medicaid at the Cross Roads. Washington, DC: The Kaiser Commission on the Future of Medicaid; Nov. 1992.
  13. U.S. Department of Health and Human Services. A Health Care Quality Improvement System for Medicaid Managed Care: A Guide for States. Washington, DC; Jul. 1993.
  14. U.S. General Accounting Office. Medicaid: Oversight of Health Maintenance Organizations in the Chicago Area. Pub. No. GAO/HRD-90-81. Washington, DC; Aug. 27, 1990.
  15. U.S. General Accounting Office. States Turn to Managed Care to Improve Access and Control Costs. Pub. No. GAO/HRD-93-46. Washington, DC; Mar. 1993.
