Health Care Financing Review. 1995 Summer;16(4):39–54.

Measuring Quality of Care Under Medicare and Medicaid

Stephen F Jencks
PMCID: PMC4193519  PMID: 10151894

Abstract

The Health Care Financing Administration's (HCFA) approach to measuring quality of care uses an accepted definition of quality, explicit domains of measurement, and a formal validation procedure that includes face validity, construct validity, reliability, clinical validation, and tests for usefulness. The indicators of quality for Medicare and Medicaid patients span the range of service types, medical conditions, and payment systems and rest on a variety of data systems. Some have already been incorporated into operational systems while others are scheduled for incorporation over the next 3 years.

Introduction

Measuring quality of care is the essential foundation for improving care, and improving the care provided to Medicare and Medicaid beneficiaries is the central goal of HCFA's Health Care Quality Improvement Program (HCQIP) (Gagel, 1995). This article describes the foundations of HCFA's Quality Indicator System (HQIS), which comprises measurement tools and supporting data systems. I start with a brief overview, describe what is meant by a valid indicator, review some major controversies in design of the system, and report HCFA's progress in developing indicators.

HCFA's basic strategy is to create a system of quality indicators (QIs) that supports improvement across the full range of Medicare services and most Medicaid services by tailoring approaches to types of services and settings. The range comprises Medicare managed care and fee for service; acute, chronic, and preventive services; hospitals, nursing homes, ambulatory settings, home health agencies (HHAs), and dialysis centers; a variety of diseases and procedures; and Medicaid managed care and nursing home care.

HCFA's measurement strategy incorporates the Institute of Medicine's definition of quality of care: “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” (Lohr, 1990). The HQIS QIs, in turn, measure access to care, desired care outcomes, or satisfaction, or they measure processes of care that have been shown to strongly predict access, outcomes, or satisfaction.

The degree to which consumer choices are informed is also, perhaps, part of quality because the health outcomes must be desired. In addition, a growing body of research shows that there is neither scientific evidence nor substantial professional consensus to guide many treatment decisions, which makes the argument for informed choice much more compelling. Finally, some consider efficiency a part of quality because the correlation between efficiency and quality is becoming clearer as we apply industrial models of process improvement to health care and because, in an era of increasingly constrained budgets, our ability to deliver quality care is limited by our efficiency. Nevertheless, neither informed choice nor efficiency is part of our current measurement plan.

Uses of the HQIS

The HQIS has four major purposes:

  • Supporting Improvement Projects—Local and national projects to improve quality of care (see Technical Note at the end of this article) are the keystone of the HCQIP. These projects are carried out through partnerships between providers and two kinds of HCFA contractors: peer review organizations (PROs) (State-level contractors who carry out Medicare quality-assurance and improvement activities for HCFA) and end stage renal disease (ESRD) networks (regional contractors who carry out quality-assurance and improvement activities for the Medicare ESRD program). A project has four phases: developing a measure of quality, measuring an opportunity for improvement, achieving improvement, and measuring success. Clearly, the HQIS can play a critical role. One of the most important impacts of QIs to date has been to establish that major opportunities for improving care are prevalent (Ellerbeck, 1995; Weiner, 1995), supporting the argument that emphasis on improving the mainstream of care promises greater benefits for HCFA's beneficiaries than had been imagined in the past.

  • Supporting Surveys—State agencies under contract to HCFA survey nursing homes, HHAs, hospitals, and other institutions to determine whether they meet quality standards. The HQIS can identify the institutions most in need of survey by helping surveyors to focus on those parts of institutions that are most likely to be in violation of standards.

  • Measuring Trends and Variation—The HQIS provides ongoing information on time trends and geographic and demographic variations in quality of care. This information can help in evaluating the HCQIP as well as changes in payment and coverage policy. This goal will be attained primarily through data generated for improvement projects and surveys.

  • Public Reporting—Data collected for administrative reasons are routinely released for research, and aggregated administrative data that are not personally identifiable may be released for the use of consumers and providers; from time to time HCFA has published specific systematic information on hospitals and nursing homes. HCFA does not plan to publish provider-specific information that is collected primarily for quality-improvement activities, because law limits release of information collected as part of a quality study and because, in the interests of efficiency, samples are too small to yield precise information about individual providers. On the other hand, systematic data submitted by health plans, nursing homes, or HHAs as a condition of participation in the program might be released if they proved to be valid indicators of quality.

Determining Validity

The validity of QIs is of critical concern to all users—from the public to individual physicians and administrators. HCFA has developed a five-stage validation procedure (Table 1) intended to give users clear evidence of the level of confidence they can place in information related to a specific indicator or group of indicators.

Table 1. Indicator Validation According to Intended Use.

Validation | Quality Improvement | Survey Support | Report Cards | Tracking Trends
Construct | Review of scientific evidence (all intended uses)
Face | Consultation with professional and industry groups (all intended uses)
Reliability | Reliability among abstractors | Moderate comparability and precision among institutions | High comparability and precision among institutions | Independence of trends in record-keeping and coding
Clinical | Congruence with medical record findings | Congruence with survey findings | Same as for quality improvement and survey support | Sensitivity to policy changes
Usefulness | Usefulness in quality-improvement projects | Improves impact of surveys on quality of care | Usefulness to consumers in making decisions | Usefulness to policymakers

SOURCE: Jencks, S.F., Health Care Financing Administration, 1995.

Construct Validity

An indicator must rest on a sound scientific base. The exact requirements vary with the domain of measurement:

  • An indicator related to access should measure factors that research has shown to strongly influence at least whether needed care is received and, ideally, whether outcomes are changed. An access indicator that really measures only convenience is better considered a measure of satisfaction.

  • An indicator related to processes should measure processes that science has shown are strongly linked to better outcomes, not just processes that are widely used or popular.

  • An indicator related to outcomes should measure outcomes that, after appropriate adjustment for the risk carried by the patients treated, reflect quality of care rather than unmeasured patient characteristics.

  • An indicator related to satisfaction should be reliable, sensitive to differences among institutions, and independent of any patient factors (such as social class) that are difficult to measure.

  • An indicator related to informed consumer choices should reflect information that patients actually have at the time of decision and that patients have been shown to use in making decisions.

Many of these science bases are evolving, some rapidly, so maintaining construct validity can be as challenging as establishing it.

Face Validity

Indicators relating to process and outcome must be supported by consensus of respected clinical leaders. Similarly, indicators relating to access and satisfaction must be supported by consensus of respected consumer representatives and advocates. Without such consensus, controversy over the indicators is likely to impede quality-improvement efforts, and indicators may promote change that amounts to tampering rather than improvement. When practical, process-of-care indicators should rest on science-based, professionally developed clinical practice guidelines. Good guidelines tend to document clinical consensus as well as to provide an adequate review of the scientific literature, but indicator design must often rest on less formal consensus and review of the science. The HQIS indicators are very selective compared with guidelines; they operationalize only the elements of guidelines for which the evidence and consensus are most convincing. For access, outcomes, and satisfaction, guidelines are rarely applicable.

Reliability and Data Availability

An indicator for which data are unreliable, erroneous, or unavailable is worthless. Data may not be recorded in any accessible place (for example, baseline functional status) or may depend on tests that are not regularly performed (for example, left ventricular ejection fraction). Unreliable data may also compromise indicators, either because the data are not accurately recorded (a problem with coding of diagnosis on billing records) or because abstracting the data requires judgment calls that abstractors cannot make reliably. Measuring reliability and availability usually requires field testing, which can be complex if, for example, the validity of data in medical records is questionable.
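The article does not prescribe a particular reliability statistic. As one illustration, a chance-corrected agreement measure such as Cohen's kappa is a conventional way to quantify reliability among abstractors; the following is a minimal sketch with hypothetical abstractor codings, not HCFA's actual procedure.

```python
# Minimal sketch: Cohen's kappa as one conventional measure of agreement
# between two medical-record abstractors on a yes/no data element.
# The abstractor codings below are hypothetical.

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters on categorical items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    # Agreement expected if the two abstractors coded independently.
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two abstractors coding "contraindication present?" in 10 charts.
abstractor_1 = ["yes", "no", "no", "yes", "no", "no", "yes", "no", "no", "no"]
abstractor_2 = ["yes", "no", "yes", "yes", "no", "no", "no", "no", "no", "no"]
print(f"kappa = {cohens_kappa(abstractor_1, abstractor_2):.2f}")  # kappa = 0.52
```

Kappa well below 1.0 on a field test would suggest the data element requires judgment calls that abstractors cannot make reliably.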

Clinical Validation

For process-of-care indicators, the best test of indicator construction is to have knowledgeable clinicians review the medical records and determine whether their assessment of the particular process of care matches the indicator. This kind of clinical validation does not need to repeat the research on which the indicator was based. Clinical validation of outcomes indicators, however, requires determining whether the adverse outcomes result from care that can be improved. Validating outcome indicators is difficult because clinician judgments are much more subjective and unreliable (Goldman, 1992; Rubin, 1992) when directed to overall quality rather than to specific processes of care.

Applicability

The final and critical test of an indicator is whether it succeeds in one of its purposes: supporting successful quality-improvement projects, supporting survey activities, helping program managers make better policy, or providing information that really helps consumers make decisions. Thus, the user is the customer who must be satisfied with the indicator. The test of usefulness requires monitoring implementation of indicators, generally in pilot efforts; in turn, users must see the cost—including the burden of obtaining the data—as reasonable.

Measurement Challenges

As it builds the HQIS, HCFA faces both formidable technical obstacles and significant controversies regarding process versus outcome, openness versus confidentiality, and the burdens of data collection.

Technical Obstacles

The availability and the science base for indicators vary widely across measurement domains as well as across clinical conditions and settings.

Access to Care

Although there is extensive research on the relationship between access to care and outcomes of care, it relates primarily to insurance coverage and community measures. There is little research validating the relation of access measures such as travel distance or waiting time to specific outcomes for individual institutions.

Appropriate Processes of Care

Much biomedical research has been devoted to showing that certain treatments and processes of care lead to better outcomes, and this research can be translated into explicit indicators of quality. There are, however, significant technical problems in dealing with situations, frequent among Medicare patients, in which the treatment is contraindicated. The indicators in the HQIS therefore provide only a statistical picture of quality and are not standards of care for individual cases. In general, better performance indicates better quality up to a fairly high level, but 100 percent is often inappropriate. For example, the percent of women between 65 and 75 years of age who receive mammograms at least biennially is an indicator of quality, but a mammogram is not indicated for a woman who has had bilateral mastectomy or who is dying. Thus, “perfect” performance would suggest mindless medicine, but 80 percent would almost always indicate better quality of care than 40 percent.
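As a minimal sketch of how such an indicator might be computed, the following hypothetical example applies the age band and clinical exclusions before taking the rate; the field names and exclusion logic are illustrative, not HCFA's specification.

```python
# Minimal sketch of the mammography indicator as described: the denominator
# excludes women for whom the test is not indicated, so "good" performance
# is high but deliberately short of 100 percent. All fields are illustrative.

from dataclasses import dataclass

@dataclass
class Beneficiary:
    age: int
    had_mammogram_past_2_years: bool
    excluded: bool  # e.g., bilateral mastectomy, terminal illness

def mammography_rate(population):
    eligible = [b for b in population if 65 <= b.age <= 75 and not b.excluded]
    if not eligible:
        return None  # indicator undefined for an empty denominator
    return sum(b.had_mammogram_past_2_years for b in eligible) / len(eligible)

population = [
    Beneficiary(70, True, False),
    Beneficiary(68, False, False),
    Beneficiary(72, False, True),   # bilateral mastectomy: not counted
    Beneficiary(80, True, False),   # outside the 65-75 age band
    Beneficiary(66, True, False),
]
print(f"biennial mammography rate: {mammography_rate(population):.0%}")  # 67%
```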

Outcomes of Care

The fundamental theorem for using outcomes as QIs states that, after adjusting for differences in risk among the patients that institutions treat, differences in outcomes between institutions reflect differences in quality of care. Naturally, the most discussed problem in comparing outcomes of care among providers is adjusting for differences in risk among patient populations; for example, a hospital may have lower mortality rates for a surgical procedure because it operates on healthier patients. Risk adjustment requires substantial amounts of data, and those data almost always require abstraction from medical records (when they are available at all). Only a few risk adjustors (such as those for coronary artery bypass surgery) have been adequately studied. In addition, important outcomes such as the patient's functional status are often not available at all.
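The article does not specify a risk-adjustment method. One common approach consistent with this fundamental theorem is indirect standardization: compare a hospital's observed deaths with the deaths expected if each of its risk strata experienced the reference (for example, national) mortality rate. A minimal sketch, with invented rates and counts:

```python
# Minimal sketch of indirect standardization, one common form of the risk
# adjustment discussed here. All numbers are invented for illustration.

# Reference mortality rate by risk stratum.
reference_rates = {"low": 0.02, "medium": 0.08, "high": 0.25}

# One hospital's caseload: patients treated and deaths in each stratum.
hospital = {
    "low":    {"patients": 200, "deaths": 3},
    "medium": {"patients": 100, "deaths": 10},
    "high":   {"patients": 50,  "deaths": 11},
}

observed = sum(s["deaths"] for s in hospital.values())
expected = sum(
    s["patients"] * reference_rates[stratum] for stratum, s in hospital.items()
)

# An O/E ratio near 1.0 suggests outcomes in line with the case mix;
# a ratio well above 1.0 flags worse-than-expected outcomes for review.
print(f"observed={observed}, expected={expected:.1f}, O/E={observed / expected:.2f}")
```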

Satisfaction

There are widely used satisfaction measures for both managed care and fee for service that have been applied across a wide range of settings, and providers are generally interested in the results because of their competitive implications in the marketplace. Good comparative data bases, however, are just beginning to develop. There is controversy about adjusting the results for differences in patient populations, especially differences in how tolerant different populations are of bureaucratic problems and delays in the health care system.

Processes of Care Versus Outcomes of Care

Heated controversy surrounds whether to construct indicators around the processes of care that were carried out or around the outcomes that resulted. This argument grows in part out of a long history of quality-assurance systems based on processes whose direct relationship to good outcomes is unproven, such as the adequacy of physical examination or the frequency of medical record entries. Credible process indicators address specific elements of care whose linkage to good outcomes has been demonstrated, such as immunization, adequate control of diabetes, and use of specific drugs in patients with a heart attack. A second source of controversy is ambiguity of definition: for example, whether a patient's immunization status is a process or an outcome. But there are other issues, summarized in Table 2, that support using processes of care in some situations and outcomes in others.

Table 2. Processes Versus Outcomes.

Criterion | Process (and Access) | Outcomes (and Satisfaction)
Content Validity | Needs proof that the process causes good outcomes | Needs proof that risk-adjustment method is adequate
Face Validity | Moderate to high | High, especially with the public
Timeliness | Immediate results | Results may be long delayed
Additional Requirements | Proof that processes matter; definition of exceptions | Adjustment for differences in patient risk, other care
Clinical Validation | Relatively easy | Often extremely difficult
Usefulness for Action | Needed actions fairly clear, although change may be difficult | Needed actions may require extensive knowledge and analysis
Sample Size | May be smaller | Tends to be larger if outcome is infrequent (e.g., death, epidemics)
Data Needs | Data usually accessible but often require medical record abstraction | Baseline and followup data for functional outcomes rarely available; risk adjusters often hard to get

SOURCE: Jencks, S.F., Health Care Financing Administration, 1995.

  • Proving that a process of care actually affects outcomes may appear just as difficult as proving the adequacy of adjustment for differences in patient risk among institutions. In practice, proof that a process is effective has generally been developed in prior research by others, while the risk adjustment usually must be developed by those developing the indicator.

  • Process-of-care indicators must include the major clinical exceptions for which a process is not necessarily appropriate; these often require extensive data collection and may be difficult to document. We rarely need to account for circumstances in which outcomes such as survival and improved functional status are not desired.

  • The face validity of the two types of indicators depends very much on the audience; physicians tend to be very sensitive to differences among patients and the adequacy of risk adjustment and thus skeptical of outcomes comparisons. Outcomes have higher face validity when we lack detailed evidence on what treatments are effective or when the critical treatments are not well documented in records.

  • Processes of care can be observed at once, while many outcomes take months or years; those that take a long time may be influenced by unknown prior or subsequent care. In addition, some adverse outcomes such as measles epidemics and nursing home fires are rare but so cataclysmic that focusing on processes of patient protection is essential.

  • As already noted, the clinical validation of process measures is easier than the validation of outcome measures.

  • Process measures are more easily used for quality improvement because they clearly indicate which processes of care need improvement. Outcomes indicators typically require extensive and deep knowledge of clinical issues and quality-improvement methods to identify which processes need improvement.

  • In general, we can draw stronger inferences from smaller samples using a QI for usual processes of care than when dealing with uncommon outcomes such as death or serious complications (see the sketch after this list).

  • Most process indicators require abstracting data from the medical record, but it is sometimes possible to determine whether appropriate processes have been followed using billing records; clinicians are appropriately skeptical that outcomes indicators can be sufficiently adjusted for differences in patient risk using data from billing records.
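To make the sample-size point concrete, here is a minimal sketch, with invented rates, of the standard error of a measured rate, sqrt(p(1-p)/n): for the same number of abstracted charts, a common process rate is estimated far more precisely relative to its size than a rare outcome rate.

```python
# Minimal sketch behind the sample-size point above: the standard error of
# a rate shrinks the useful sample much faster for a common process measure
# than for a rare outcome. The rates and chart count are invented.

from math import sqrt

def standard_error(p, n):
    """Standard error of an observed rate p measured on n cases."""
    return sqrt(p * (1 - p) / n)

n = 200  # charts abstracted

process_rate = 0.60   # e.g., eligible patients receiving a recommended drug
outcome_rate = 0.02   # e.g., 30-day mortality

for label, p in [("process (60%)", process_rate), ("outcome (2%)", outcome_rate)]:
    se = standard_error(p, n)
    # Relative uncertainty: how large the noise is compared with the rate.
    print(f"{label}: SE = {se:.3f}, SE/p = {se / p:.0%}")
```

At 200 charts, the relative uncertainty of the 60-percent process rate is a few percent, while that of the 2-percent outcome rate is roughly half the rate itself.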

In rough summary, process indicators are more suitable when science clearly identifies critical elements of care that are well documented (for example, heart attacks and diabetes) and when outcomes are delayed. Outcome indicators are more suitable when clinical experience shows that outcomes can largely be controlled but the science base for individual elements of care is weak and documentation is poor or difficult to summarize (for example, nursing homes and home health).

Improvement Efforts Versus Public Release

There are inherent tensions between using QIs for quality-improvement efforts and using them for public release.

Defensiveness

Public release of comparative QI data inevitably puts hospitals, health plans, physicians, and whoever else is compared into a defensive mode. Although it is theoretically possible to engage in honest internal efforts to identify ways to improve while externally denying the validity of the data, publication does not help get participants into the cooperative, unfearful mood in which improvement is best achieved.

Standards of Proof

Data for public release almost always require higher standards of rigor and proof than data for internal quality improvement.

  • Managers of health facilities are accustomed to making decisions on far lower levels of certainty than are appropriate before publishing potentially damaging information. For example, any manager who awaited a p < .05 level of confidence before investigating a possible outbreak of nosocomial infection would be appropriately censured.

  • An indicator that cannot compare institutions accurately can nevertheless be sufficiently reliable to help an institution track its performance. To be useful for supporting quality improvements, an indicator needs only to increase when quality increases and decrease when quality decreases. Far greater precision is necessary to support public comparisons among hospitals, plans, or physicians.

  • In general, we are interested in moving toward benchmark performance. Comparability across institutions is necessary to identify benchmarks; nevertheless, risk adjustment almost never moves an institution from far below average to benchmark performance (Krakauer et al., 1992; Hannan et al., 1990). That is, most hospitals can improve and need information that tells them how, while published reports are typically aimed at determining which hospitals most need to improve.

Presentation

Indicator information is best presented in rather different ways for quality improvement and for public release. In particular, data presented in terms of compliance with a standard may be suitable for “report cards” or monitoring but counterproductive when presented to an institution. For example, if the concern is the time from a heart attack patient's arriving in the emergency room to receiving a thrombolytic drug, a typical “report card” publication might give the percent of patients receiving a thrombolytic within 1 hour. However, despite clear evidence that prompt therapy averts complications and death, the commonly cited goal of 1 hour has little specific justification and is often considered too long (National Heart Attack Alert Program, 1994). Reporting the QI in terms of a 1-hour standard tends to substitute debate over the interval for the much more important question of whether and how performance can be improved. By contrast, a histogram showing the distribution of times (Figure 1) makes clear that the system is delaying care for almost everybody and requires re-engineering. Further, the histogram shows that far shorter times are regularly achieved for a minority of patients, suggesting the existence of benchmark practices that can be copied.
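As a minimal sketch of the two presentations, the following hypothetical example reports the same door-to-needle times first against a 1-hour standard and then as a simple text histogram in the spirit of Figure 1; the minute values are invented.

```python
# Minimal sketch of the presentation point: identical door-to-needle times
# shown as a pass/fail percent against a 1-hour standard versus as a
# distribution that exposes where the delays sit. Data are invented.

minutes_to_thrombolytic = [22, 35, 48, 52, 55, 63, 68, 74, 81, 95, 110, 140]

within_standard = sum(m <= 60 for m in minutes_to_thrombolytic)
print(f"within 1 hour: {within_standard / len(minutes_to_thrombolytic):.0%}\n")

# Text histogram in 30-minute bins: the whole shape, not one cut point.
for lo in range(0, 180, 30):
    count = sum(lo <= m < lo + 30 for m in minutes_to_thrombolytic)
    print(f"{lo:3d}-{lo + 29:3d} min | {'#' * count}")
```

The single percent invites debate over the 60-minute cut point; the histogram shows both the systemic delay and the short times already achieved for some patients.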

Figure 1. Time From Emergency Room Arrival to Thrombolytic Therapy.


Burden of Data Collection

One of the most urgent QI development issues is who will collect the data. This issue involves considerations of data integrity, cost, burden, control, and philosophy. Although there are many possible sources for data, there is near consensus that quality-assessment data should ideally be collected as part of routine operations of an institution, not by an external monitor. There are two key reasons:

  • Quality management should be part of operations; separating quality data from operational data inherently divorces quality management from process management. Quality measures should be in front of management every hour of every day.

  • Operational data will be used by the facility and will be validated through that use. Data not used by the people who generate them tend to be unreliable, and validating them is costly.

Clearly, we would like a comprehensive computerized clinical information system in which data are entered only once and can be retrieved for many purposes. Practically, however, such systems are still rare, few data suited to QIs are collected automatically, and the changes needed to collect them are not easy to implement or cheap. In addition, the match between current QIs and the internal data needed for management is still crude, and investing heavily in collecting data for current QIs is made more risky by the likelihood that requirements will evolve in the next few years. HCFA has, therefore, developed a variety of solutions in collecting indicator data that fit the varied situations of different kinds of institutions; these approaches are described later. They depend on four major sources: (1) billing or administrative data, (2) data collected by an institution as part of its operations, (3) data abstracted from medical records as part of quality efforts by either the institution or HCFA, and (4) data collected from patients in surveys.

HCFA's Indicator Development Strategy

HCFA's priority is to develop indicators for conditions that are frequent among its beneficiaries, a source of substantial morbidity or mortality, and for which the likelihood of producing valid indicators is high. Indicator development in different areas proceeds with a fair degree of independence so that priorities for inpatient care, for example, do not compete with priorities for dialysis programs. The strategy reflects the different uses for indicators in different sectors (Table 3).

Table 3. Overview of the HQIS, by Site and Type of Service.

Source of Care | First Pilot | Initial Purpose | Process Versus Outcome | Examples | Data Source
Hospital | 1993 | Projects | Process | Care of heart attack, pneumonia | CDACs, claims
Office Fee for Service | 1995 | Projects | Process | Care of diabetes | Claims, PRO abstraction
Home Health | 1995 | Targeting surveys | Primarily outcomes | Functional status | Agency
Long-Term Care | 1995 | Targeting surveys | Primarily outcomes | Functional status, pressure sores | Institution
Dialysis Centers | 1994 | Projects | Both | Anemia, adequacy of dialysis | Centers, ESRD network abstraction
Medicare Managed Care | 1995 | Projects | Process | Care of diabetes, preventive services | Managed-care plan; records
Medicare/Medicaid Managed Care | 1995-96 | Public reporting | Both | Immunization, prenatal care | Plan reports

NOTES: HQIS is Health Care Financing Administration's Quality Indicator System. CDAC is clinical data abstraction center. PRO is peer review organization. ESRD is end stage renal disease.

SOURCE: Jencks, S.F., Health Care Financing Administration, 1995.

Inpatient Care and Ambulatory Surgery

The primary use of QIs for inpatient care is to support quality-improvement collaborations between PROs and hospitals.

Examples:

  • Percent of patients who do not have a contraindication to anticoagulants who are discharged on anticoagulants after a transient ischemic attack.

  • Average time from arrival at the hospital to administration of antibiotics for patients admitted with pneumonia or urinary tract infection.

Development

The indicators primarily measure process of care with very limited use of outcomes; at this time, there are no access or satisfaction indicators. A typical module for a condition will have only a few indicators; the module for acute myocardial infarction (Ellerbeck, 1995) (see the Technical Note) rests on very extensive research on treatment of acute myocardial infarction and is unusual in having 11 indicators. Indicators may be developed by HCFA (as are the modules for heart attack and the diabetes unit discussed later), by PROs and hospitals in collaboration with HCFA, or by individual PROs and local hospitals and physicians. In each case, professional societies and specialty leaders are involved early in the process.

Data Sources

The primary source of data for the inpatient and ambulatory surgery indicators is abstraction from medical records, either by national clinical data abstraction centers or by PROs. Medicare claims identify hospitalizations for particular conditions and sometimes show what care was provided.

Implementation and Progress

We expect that there will be QIs for about 30 percent of Medicare discharges by 1996. By the end of 1996, we expect to have indicators for about 60 percent of Medicare discharges (Table 4). HCFA will apply national indicators to a surveillance sample of 1-2 percent of national discharges. One condition—acute myocardial infarction—is in national use. Two conditions—pneumonia and urinary tract infection—are in national pilot, and seven will be in pilot by the end of 1995. More than 100 indicators have been developed as part of local projects.

Table 4. Hospital Inpatient Indicator Development, by Year and Quarter.
(The table appears as a graphic in the original.)

Medicare Chronic Disease Care

Management of chronic diseases such as diabetes and hypertension is critical to Medicare's beneficiaries. HCFA is taking a very similar approach in fee-for-service and managed-care settings. The indicators will be identical (except for a few indicators of access that work only in managed-care settings) and will primarily support quality-improvement collaborations—between PROs and physicians in one case and between PROs and health plans in the other.

Examples:

  • Histogram of frequencies of diabetic eye exams for patients within a health plan.

  • Histogram of frequencies and results of glycosylated hemoglobin for patients of a provider.

Development

The chronic disease indicators address processes of care. They were developed under a contract between HCFA, the Delmarva Foundation for Medical Care Inc., and the Harvard School of Public Health. The contractor reviewed all current health plan indicators, convened a panel of managed-care experts to make recommendations, and then convened a panel of subject matter experts to refine clinical content (Delmarva Foundation for Medical Care, Inc., 1994). The indicators contrast with the Health Plan Employer Data Information Set (HEDIS) (National Committee for Quality Assurance, 1994) in having far more depth for an individual condition (diabetes will be the first topic) while being limited to the care of one condition. The diabetes unit addresses 11 clinical issues and has 26 indicators. HCFA is also exploring the creation of a Medicare version of HEDIS to provide broader-based complementary information about overall health plan performance. The definitions of the HCFA indicators have been coordinated with those used in HEDIS to minimize burden on health plans.

Data Sources

In fee for service, the strategy uses a combination of claims data and data abstracted from medical records by PROs to identify patients and to assess their needs and what services are provided. In managed care, we expect that data for these indicators will generally be collected by health plans. In both settings, record abstraction and analysis of available administrative data will be used to validate one another.

Implementation and Progress

A multistate pilot for diabetes in both fee for service and managed care will go into the field in late summer 1995.

Preventive Services

The primary purpose of preventive services indicators is to support and stimulate collaborative projects between PROs, physicians, and hospitals to improve delivery of preventive services.

Examples:

  • Percent of women 65-75 years of age having a mammogram during the last 2 years.

  • Percent of beneficiaries having a current influenza vaccination.

Development

Efforts to date have focused on mammography and flu vaccine. Professional societies and HCFA have collaborated, using National Institutes of Health (NIH) consensus statements and Healthy People 2000 (U.S. Public Health Service, 1994) as a foundation. Projects are developed both nationally, particularly as part of HCFA's Consumer Information Strategy (Vladeck, 1994), and by individual PROs.

Data Sources

Preventive services covered by Medicare are generally reflected in claims that can be analyzed to identify patterns of care. The two major limitations are flu vaccine, which is often provided by clinics that do not bill Medicare, and care for patients enrolled in health plans, for whom HCFA does not pay claims.

Implementation and Progress

The Preventive Health Care pilot, active in three States, focuses on mammography for women 65-75 years of age. The Department of Health and Human Services conducted a national flu immunization campaign in 1994, in which HCFA indicators were used both for national information and to support PRO projects; that campaign will be repeated in 1995. Several individual PROs have developed projects to improve flu immunization that used local indicators; for example, the Minnesota PRO worked with hospitals to assure that patients admitted during the fall are evaluated for flu immunization.

Nursing Homes

Although Medicare pays for relatively little nursing home care, more than one-quarter of Medicaid payments goes to nursing homes (Health Care Financing Administration, 1995). Initially, HCFA will use the indicators to guide the frequency of surveys of different nursing homes and to target the surveys to the areas of greatest concern in each facility. In the longer run, however, the indicators will make a greater contribution to quality by allowing nursing homes to monitor and improve patient care and allowing HCFA to give them objective data on their relative performance. The indicators are predominantly directed toward outcomes but with a significant number of processes of care.

Examples:

  • Prevalence of pressure ulcers.

  • Percent of patients with capability for activities of daily living declining over 3 months.

  • Prevalence of antipsychotic drug use in patients with no diagnosis of psychosis or other indication of need.

Development

HCFA's Health Standards and Quality Bureau and Office of Research and Demonstrations are collaborating to develop the indicators. The key work is being done by Zimmerman and colleagues (1995), under contract to HCFA and in close collaboration with the industry and subject matter experts. The indicators focus primarily on outcomes for three reasons:

  • To the extent that the indicators become a substitute for some surveys, it is important that they support the refocus of surveys from structure and process to measurable outcomes of care. Likewise, the focus on outcomes supports the public credibility of the transition.

  • The outcomes measured in the system are immediately observable by the surveyors, as is much of the needed baseline data.

  • There is little science that would allow us to focus on critical processes of care that we could confidently associate with better outcomes. When such science exists (for example, in prevention of pressure ulcers), important aspects of critical treatments are poorly documented in most medical records.

Data Sources

Nursing homes are required to collect regularly a Minimum Data Set (Health Care Financing Administration, 1995) describing their patients' functional status and treatment and to report that data set to State governments. States and HCFA are in the process of automating this data set for use in QI systems. A number of contractors are developing management information systems for nursing homes based on these data; to the extent that nursing homes adopt and use these systems in daily operations, data quality will probably rise.

Implementation and Progress

The indicators are now being used in a five-State demonstration of nursing home payments based on case mix. During the next 5 years, as the data system becomes national, national application of the indicators will be phased in.

Initially, HCFA will use these indicators to identify nursing homes where surveys of compliance with Medicare conditions of participation are most likely to lead to significant improvements in care. The vision is then to reduce the frequency of full surveys for institutions with good performance on the indicators and increase the frequency of surveys for those with poor performance. In addition, the indicators will permit surveyors to focus survey activities on areas where improvement is most achievable.

HHAs

HCFA is developing QIs for home health care that reflect changes in functional and health status. Initially, HCFA will use the indicators to guide the frequency of surveys in different agencies and to target the surveys to the areas of greatest concern in each agency. In the longer run, however, the indicators will make a greater contribution to quality by allowing agencies to monitor and improve patient care and allowing HCFA to give agencies objective data on their relative performance.

Examples:

  • Percent of patients showing improvement in ambulation.

  • Percent of patients readmitted to an acute-care hospital.

Development

The logic of using outcome indicators in home health is essentially identical to the logic for nursing home indicators. HCFA's Health Standards and Quality Bureau and Office of Research and Demonstrations are also collaborating on the home health indicators, again in close collaboration with industry and subject matter experts.

Data Sources

HCFA has developed a data set that, when refined, will be specified in regulation and collected regularly by HHAs as part of their routine operations. Functional and health status measurements would be embedded in the comprehensive assessment tool required for each patient and would thus be an integral part of care planning.

Implementation and Progress

The HHA indicators were developed by Shaughnessy and colleagues (Shaughnessy, 1994). Reliability and validity studies are complete; data collection for a pilot of operational feasibility, including risk adjustment methods, will begin in summer 1995, with larger scale collection in 1996.

Dialysis Centers

The development of indicators in dialysis is highly focused on a few processes and physiologic outcomes—adequacy of dialysis, anemia, nutrition, and control of hypertension. These “core” indicators are specifically linked to a proposal to improve performance on each through a concerted national intervention effort.

Examples:

  • Histogram of urea reduction ratios (a measure of adequacy of dialysis) among patients of a dialysis center.

  • Histogram of hematocrits among patients of a dialysis center.

Development

These indicators have been developed by a team of professional societies, patient representatives, HCFA, and ESRD network personnel. They rest on consensus statements and guidelines developed by NIH and the Renal Physicians' Association.

Data Sources

The indicators are currently abstracted from dialysis center records by the ESRD networks and consolidated by HCFA. If extended to all patients, they would likely be abstracted by the dialysis centers as part of routine management.

Implementation and Progress

Indicators have been developed and tested on a nationally representative sample of 6,100 patients. The results have been returned to the ESRD networks to serve as the basis for developing improved collaborations between networks and dialysis centers.

Medicaid Managed Care

Managed care is the strategically critical area for developing Medicaid QIs because Medicaid is shifting very rapidly from fee for service to managed care; at the beginning of 1994, 23 percent of Medicaid enrollees were in managed care (Health Care Financing Administration, 1995). HCFA's Medicaid Managed Care Quality Assurance Reform Initiative (QARI) (Health Care Financing Administration, 1993) creates a structure for managed-care quality indicators. QARI began by providing examples to States (National Committee for Quality Assurance, 1994). HCFA is now participating in development of a version of HEDIS for Medicaid. This indicator set will have single indicators for a variety of conditions.

Examples:

  • Percentage of child enrollees with full immunization.

  • Rate of hospitalizations or emergency room visits for patients with asthma.

  • Rate of followup after admission for depression.

Development

HCFA, the National Committee for Quality Assurance (the developer of HEDIS), and other stakeholders are developing a version of HEDIS appropriate to Medicaid enrollees in managed care with support from the Packard Foundation.

Data Sources

Indicator collection will be determined by the States that manage Medicaid. HCFA recommends an auditing process external to the managed-care plans.

Implementation and Progress

Medicaid HEDIS indicators are in an advanced stage of development; they will probably be piloted in the next year and implemented as part of QARI.

Concluding Note

The driving force behind HCFA's commitment to measuring quality of health care is the conviction that measurement is the foundation of quality improvement. The HCFA QI system is a diverse system adapted to the needs of different health care sectors but united by a common vision of measurement and validation. Ten years ago, the measurability of clinical quality was very much in debate; today, the measurement debate has largely been resolved by accomplishments, and the challenge is to implement measures and prove that systematic measurements can support national improvement.

Technical Note

Indicators For Improving Care of Myocardial Infarction

The Cooperative Cardiovascular Project (CCP) is a national effort by HCFA to improve care for Medicare patients hospitalized for heart attack (Ellerbeck, 1995). The QIs for the CCP were developed from the clinical guideline for heart attack that was published by the American College of Cardiology and the American Heart Association in 1991. Initially, HCFA extracted more than 20 indicators from the guideline and then reviewed them with representatives of the groups that had created the guideline. With the assistance of those groups, the guidelines were updated (for example, evidence on use of thrombolytic agents and on the effectiveness of counseling for smoking cessation had changed since the guidelines were written). HCFA then developed a pilot involving PROs and hospitals in four States that was directed at validating the indicators. The PROs abstracted about 500 data elements from each of 17,000 medical records and calculated performance for each hospital on each of the draft indicators. The pilot PROs assessed the reliability of their data collection and performed clinical validation of the indicators. They found substantial opportunities for improvement: Even among “ideal” candidates for several life-saving treatments, 70 percent of patients received thrombolytic drugs, 45 percent received beta blockers at discharge, and 77-83 percent received aspirin; the median time from arriving in the emergency room to starting thrombolytic drugs was more than 1 hour as against recommendations that all patients who receive drugs get them within half an hour to an hour. PROs then presented these, as well as hospital-specific results, to representatives of every hospital in their State and asked the hospitals to explore the data systematically and report on specific areas in which they would commit to achieving improvements.

Hospitals reported that the data were useful, and more than one-half of hospitals in Alabama (data from other States were recorded in other ways) committed to achieving improvement. The four PROs are now returning to the hospitals to assess progress and to promote improvement in areas where hospitals did not originally deem it a priority.

On the basis of this pilot, HCFA reduced the number of indicators to 12 and the number of data elements to about 200. National clinical data abstraction centers are now collecting data from medical records for the rest of the Nation, and all PROs will be collaborating with hospitals to improve care for heart attack victims by the end of 1995.

The CCP is more complex and entails more data collection than any other set of acute-care indicators contemplated for the near future (but is comparable to the diabetes unit for chronic disease care). This reflects both the extraordinary richness of research evidence and the amount of clinical consensus on the management of heart attack. The steps taken illustrate, however, the process of indicator development and validation that HCFA uses in more streamlined form for other QIs.

Acknowledgments

The work reported in this article has been carried out by many members of the Offices of Quality Improvement Programs and of Survey and Certification in the Health Standards and Quality Bureau as well as in the Office of Managed Care, HCFA. In addition, Harvey Brook, Joseph Chin, Mavis Conolly, Edward Ellerbeck, Barbara Fleming, Helene Fredeking, Pam Frederick, Barbara Gagel, Barbara Greenberg, Michael McMullan, Wayne Smith, and Cynthia Wark provided helpful critiques of this paper.

Footnotes

Stephen F. Jencks is with the Health Standards and Quality Bureau, HCFA. The opinions expressed are those of the author and do not necessarily reflect HCFA's views or policy positions.

Reprint Requests: Stephen F. Jencks, M.D., Health Standards and Quality Bureau, Health Care Financing Administration, Mail Stop S-2-11-07, 7500 Security Boulevard, Baltimore, Maryland 21244-1850.

References

  1. Delmarva Foundation for Medical Care. External Review Performance Measurement of Medicare Health Maintenance Organizations/Competitive Medical Plans: Final Report. Health Care Financing Administration Contract Number 500-93-0021. Salisbury, MD.: Delmarva Foundation; August 1994.
  2. Ellerbeck EF, Jencks SF, Radford MJ, et al. Treatment of Medicare Patients With Acute Myocardial Infarction: Report on a Four-State Pilot of the Cooperative Cardiovascular Project. Journal of the American Medical Association. 1995;273:1509–1514.
  3. Gagel B. Health Care Quality Improvement Program: A New Approach. Health Care Financing Review. 1995 Summer;16(4):15–23.
  4. Goldman RL. The Reliability of Peer Assessments of Quality of Care. Journal of the American Medical Association. 1992;267:958–960.
  5. Hannan EL, Kilburn H, O'Donnell JF, et al. Adult Open Heart Surgery in New York State: An Analysis of Risk Factors and Hospital Mortality Rates. Journal of the American Medical Association. 1990;264:2768–2774.
  6. Health Care Financing Administration. A Health Care Quality Improvement System for Medicaid Managed Care. Washington, DC.: 1993.
  7. Health Care Financing Administration. State Operations Manual: Provider Certification. Appendix R: Resident Assessment Instrument for Long-Term Care Facilities (MDS 2.0). Baltimore, MD.: 1995.
  8. Krakauer H, Bailey RC, Skellan KJ, et al. Evaluation of the Model Used by the Health Care Financing Administration for the Analysis of Mortality Following Hospitalization. Health Services Research. 1992.
  9. Lohr K, ed. Medicare: A Strategy for Quality Assurance. Washington, DC.: Institute of Medicine, National Academy Press; 1990.
  10. National Committee for Quality Assurance. Health Maintenance Organization Employer Data Information Set 2.5: Updated Specification for Health Maintenance Organization Employer Data Information Set 2.0. Washington, DC.: 1995.
  11. National Committee for Quality Assurance. Health Care Quality Improvement Studies in Managed-Care Settings. Washington, DC.: 1994.
  12. National Heart Attack Alert Program Coordinating Committee, 60 Minutes to Treatment Workgroup. Emergency Department: Rapid Identification and Treatment of Patients With Acute Myocardial Infarction. Bethesda, MD.: National Institutes of Health; 1994.
  13. Rubin HR, Rogers WH, Kahn KL, et al. Watching the Doctor-Watchers: How Well Do Peer Review Organization Methods Detect Hospital Care Quality Problems? Journal of the American Medical Association. 1992;267:2349–2354.
  14. Shaughnessy PW, Crisler KS, Schlenker RE, et al. Measuring and Assuring the Quality of Home Health Care. Health Care Financing Review. 1994 Fall;16(1):35–67.
  15. U.S. Public Health Service. Healthy People 2000: National Health Promotion and Disease Prevention Objectives: Midcourse Revisions (Draft for Public Comment and Review). Washington, DC.: Department of Health and Human Services; September 1994.
  16. Vladeck BC. From the Health Care Financing Administration: The Consumer Information Strategy. Journal of the American Medical Association. 1994;272:196.
  17. Weiner JP, Parente ST, Barnick DW, et al. Variation in Office-Based Quality: A Claims-Based Profile of Care Provided to Medicare Patients with Diabetes. Journal of the American Medical Association. 1995;273:1503–1508.
  18. Zimmerman DR, Arling G, Collins T, et al. Development and Testing of Nursing Home Quality Indicators. Health Care Financing Review. 1995 Summer;16(4):107–128.
