Abstract
Since 1981, States have been experimenting with Medicaid managed care programs to improve access and continuity of care and to contain costs by reducing inappropriate and unnecessary utilization. To determine the impact of primary care case management (PCCM) on utilization, the authors examine data from the Kentucky Patient Access and Care program (KenPAC). Using monthly utilization data from 1984 to 1989 and an interrupted time-series research design, the authors find that PCCM reduces the use of independent laboratory, physician, emergency department, and outpatient hospital services. PCCM does not appear to affect utilization of inpatient hospital services or prescription drugs.
Introduction
Federal and State expenditures for Medicaid have been escalating since the late 1970s (Grannemann and Pauly, 1983; Holahan and Cohen, 1986; Davidson, Cromwell, and Schurman, 1986; Chang and Holahan, 1989). For example, the average annual rate of growth in these expenditures from 1975 to 1985 was 12 percent (Congressional Research Service, 1988). At the same time, Medicaid continues to have problems with access, continuity, and appropriateness of care (Freund and Neuschler, 1986; Freund, 1984; Davidson, 1982; Hurley, 1986). The underlying problem of Medicaid reflects the problem of the health care system at large: increasing costs coupled with unmet needs and unevenness of care. Managed care (i.e., controlling access and coordinating care) has been proposed by both Federal and State Medicaid policymakers as a way of reconciling improved access with cost containment.
Escalating Medicaid expenditures led to the passage of the Omnibus Budget Reconciliation Act of 1981 (specifically, sections 1915[b] and [c]), which granted States considerable latitude to experiment with alternative payment and delivery systems. Consequently, a number of States began experimenting with Medicaid managed care. From 1981 to 1987, the number of Medicaid managed care programs increased from 54 to 177, and the number of Medicaid recipients in these programs increased from 282,000 to 1.6 million. The Health Care Financing Administration (HCFA) estimates 1991 enrollment at 2.5 million—10 percent of all Medicaid recipients. More recently, however, growth in managed care initiatives, particularly risk-based plans, has slowed. Obstacles to implementing risk-based managed care programs have moved States toward PCCM, coupled with traditional fee-for-service (FFS) Medicaid.
States have implemented Medicaid managed care programs with the goals of increasing access to care, improving continuity of care, containing costs through the reduction of unnecessary and inappropriate services, and (to a lesser extent) improving provider participation in Medicaid. At the Federal level, executive budgets have proposed financial and regulatory incentives to encourage States to enroll Medicaid beneficiaries in managed care. However, increased utilization coming from improved access to care, in addition to the administrative costs of managed care programs, can result in more, rather than less, Medicaid spending. Cost-containment occurs only if unnecessary and inappropriate service utilization is reduced.
In this article, we examine the impact of managed care on the use of medical services by Medicaid-eligible persons. (We do not address program cost effectiveness directly. Program cost effectiveness requires that savings from reduced utilization in some services offset costs from increased utilization in other services and program administration.) Although Medicaid managed care programs vary along a number of dimensions, fundamental to each is the role of a case manager with the responsibility to coordinate and control care (Freund, 1987). The case manager can be an individual physician or an institution such as a health maintenance organization (HMO) or clinic. Virtually all case management plans require the beneficiary to choose a primary care provider; these plans also attempt to modify beneficiary utilization patterns through restricted access and coordinated service delivery. Some plans attempt to alter provider behavior through financial incentives and utilization review.
Thus, there are many models of managed care. The specific model we examine is the coupling of PCCM with traditional FFS Medicaid. Case management refers to the assignment of the patient to a physician (or institution) who provides primary care and must authorize additional services. The PCCM/FFS model is a good model to study because it is the smallest departure from the traditional Medicaid program and thus defines one end of a spectrum of Medicaid managed care models undertaken by the States. Furthermore, the gatekeeping function of the case manager is a fundamental utilization control mechanism used in most models of managed care.
Literature Review
Medicaid managed care began to receive attention in the health policy literature when States first began experimenting with it. Most often this literature reports problems encountered with and lessons learned from specific case studies of managed care programs (Spitz and Abramson, 1987; Anderson and Fox, 1987; Temkin-Greener, 1986; Aved, 1987; Rowland and Lyons, 1987). These studies indicate a number of obstacles to Medicaid managed care programs, particularly capitated (risk-based) plans (e.g., high membership turnover resulting from loss of eligibility; regulatory requirements governing the ratio of privately insured to Medicaid-insured enrollees). The literature also provides overviews and typologies of managed care innovations (Hurley, 1986; Freund, 1987; Freund and Neuschler, 1986; Hurley and Freund, 1988).
Some quantitative research building on the experiences of the States has appeared in the literature. Three recent examples are Long and Settle (1988), Hurley, Freund, and Taylor (1989), and Freund et al. (1989). Long and Settle studied Utah's PCCM program, which required Medicaid clients to enroll but had no direct provider incentives for case managers to control utilization. The utilization of primary care physician services was expected to increase, while all other services (emergency department, hospital outpatient department, specialists, and prescription drugs) were expected to decrease. Because enrollment was phased in, Long and Settle designed their research as a 1-month, cross-sectional comparison of “case-managed” beneficiaries and beneficiaries eligible for case management but not yet enrolled. (Because the enrollment process was not entirely random, Long and Settle also used multivariate methods to control for other differences between the two groups.) These authors concluded that use of primary care physician services, specialist services, and drugs increased, while hospital outpatient services declined. (Long and Settle's data did not distinguish between emergency department and routine outpatient hospital services.) Overall access was increased, but the program did not contain costs.
Freund et al. report the results of evaluating Medicaid managed care demonstrations in six States. These demonstration programs represented both capitated programs and PCCM. (These programs varied along a number of other dimensions as well: enrollment [mandatory versus voluntary]; eligible populations [Aid to Families with Dependent Children (AFDC)-only versus all categorically eligible]; and organizational structure [risk-assuming intermediary versus direct State contracting].) The impact of these programs on utilization was usually estimated by comparing service use by program enrollees with that of non-enrollees in comparison sites 1 year before program implementation and during the first year of program implementation. For example, managed care enrollees in Monterey County were compared with non-managed care enrollees in Ventura County. Controls were included for individual level characteristics expected to affect use.
A number of programs showed significant impacts on utilization, most notably emergency room use. (Using a subset of these sites, Hurley, Freund, and Taylor [1989] examined the impact of PCCM on emergency department use. The authors concluded that case management lowers emergency department utilization and, perhaps more interestingly, does so regardless of the financial incentives for case managers.) Programs also appeared to reduce the use of physician and ancillary services. Less clear impacts were found for inpatient and specialist services.
These three studies are essentially cross-sectional in design, although Hurley, Freund, and Taylor and Freund et al. do have a pre/post comparison.1 The cross-sectional nature of these designs raises two concerns. First, such designs are sensitive to the time period chosen for examination; for example, these studies all appear to include the period of mass enrollment in the managed care program. (Long and Settle's analysis is confined to a comparison of enrolled and not-yet-enrolled beneficiaries during 1 month in the midst of the mass enrollment period.) During the period of mass enrollment, beneficiaries must choose primary care providers, which can result in reduced utilization because of disruptions in traditional patterns of care.2 Including the enrollment period as part of the post period may result in finding larger reductions in utilization that are not sustained after full implementation. The second concern has to do with the length of the post period. The post periods of the Hurley, Freund, and Taylor and Freund et al. studies are 1 year, which may be too short to obtain a complete picture of program effects. Freund et al. point out that data beyond the implementation year may reflect “learning curve” effects on the part of beneficiaries and providers and the “steady state” of the program's utilization experience.
In this article, we analyze the impact of Kentucky's Medicaid PCCM program, using a longitudinal data base and an interrupted time-series research design. Our research adds to previous literature in several ways. First, a longitudinal analysis of the eligible population in a statewide program before and after implementation of a PCCM provides a new perspective on the impact of managed care initiatives and avoids some of the limitations of cross-sectional designs. Second, a longitudinal analysis allows us to examine the impacts of a mature program. Third, we isolate the enrollment period during which the effects of PCCM per se cannot be distinguished from the disruptive effects of implementing a new program. Fourth, our research design and methods are easily replicable in most program settings.
Kentucky Patient Access and Care Program
The State of Kentucky was originally granted a waiver by HCFA to run the KenPAC program for a 2-year period beginning in January of 1986. KenPAC is currently operating under a renewal of that original waiver. KenPAC is a PCCM/FFS program for approximately 200,000 recipients of AFDC cash grants and related groups. The program is designed to reduce costs by reducing inappropriate and unnecessary use of services and to ensure access to primary care and continuity of care. KenPAC operates statewide, including rural areas, with the exception of 12 counties that had insufficient physician participation.
Prior to KenPAC, Kentucky had experimented with another managed care initiative known as Citicare, a pilot program in Jefferson County (Louisville). Citicare was implemented in State fiscal year (FY) 1984 to improve access for Medicaid recipients and to reduce inappropriate use of emergency room care. Kentucky contracted with a private health insuring organization to provide case management services to AFDC recipients under a capitated payment system. Physicians were allotted a prepayment of $44 per month for each enrollee. As a result of political resistance and a revenue shortfall, the program was allowed to expire at the end of FY 1984.
The State has documented to the satisfaction of HCFA that the KenPAC program is cost effective and has not substantially impaired the quality of or access to services for Medicaid recipients. An independent evaluation (Roeder, 1987) concluded that the program was cost effective and improved access to services; further, high levels of recipient satisfaction were noted. Cost effectiveness was determined, in part, by measuring utilization reductions. Utilization reductions were determined by forecasting expected monthly service units without KenPAC and comparing them with actual service units under KenPAC. To measure expected utilization, Roeder used an exponential smoothing technique to forecast monthly services before KenPAC into the period of the KenPAC program.
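Roeder's forecasting step can be illustrated with a short sketch. This is a hypothetical illustration, not the original analysis: we assume simple (single) exponential smoothing with an arbitrary smoothing constant, and the service-unit figures are invented.

```python
# Hypothetical sketch of the forecasting step: smooth the pre-KenPAC
# monthly service units, use the smoothed level as the forecast of
# expected utilization without KenPAC, and compare with actual
# utilization under KenPAC. The alpha value and data are assumptions.

def smoothed_level(history, alpha=0.3):
    """Return the exponentially smoothed level of a monthly series;
    this level serves as the forecast for the next month."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

pre_kenpac = [100, 104, 98, 102, 101, 99]   # invented service units
forecast = smoothed_level(pre_kenpac)

actual = 92                                  # invented KenPAC-period month
estimated_reduction = forecast - actual      # positive => utilization fell
```

As the text notes, a forecast of this kind supports a cost-effectiveness comparison but provides no direct statistical test of the pre/post difference.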
Despite this evaluation work, there are three reasons why the impact of KenPAC on utilization should be examined. First, the methods used to estimate the utilization impacts are sensitive to assumptions made by the researcher.3 Second, following from the first point, the methods used provide no direct statistical test of the difference between pre- and post-period utilization. It is important to note that the purpose of Roeder's analysis was to determine cost effectiveness, not to test hypotheses, and in that regard the methods employed were not inappropriate. Third, we have additional data allowing us to examine the impacts of a more mature program.
Under KenPAC, Medicaid participants are required to select a physician to provide primary care services and to authorize in advance all other services (except bona fide emergency services). Enrollment is mandatory for the AFDC population. Medicaid recipients who do not choose a primary care physician are assigned to one. The State contracts directly with case managers. Some clinics and specialists are allowed to act as case managers, but the majority of case managers are office-based primary care physicians. Case managers receive a monthly fee of $3 for each beneficiary under their case management. Case managers are required to provide managed care and arrange virtually all other services. There are no risk-based financial incentives. All medical services provided by the case manager and other providers, including drugs, are reimbursed on a regular Medicaid FFS basis.
There are two reasons why KenPAC should reduce overall utilization. The first reason is the gatekeeping function of the case manager: Care is obtained either directly from the case manager or through the case manager (i.e., by referral and prior authorization). Providers other than the case manager who render unauthorized care risk not being reimbursed. The second reason is the combined effect of utilization review and financial benefits on case managers. All case managers are provided monthly information concerning their utilization patterns and are encouraged to provide care consistent with that of their colleagues. Outlier physicians are subject to further review and, ultimately, the State can remove a case manager from the program. The case manager receives added income from the monthly fee and a guaranteed market share (i.e., the panel of beneficiaries) and presumably values this income.
Methods
Data were obtained from the State of Kentucky's Medicaid Management Information System (MMIS). Five services were chosen for examination because of their contribution to total Medicaid spending and their generalizability to other States. The five services, which account for about 95 percent of acute care spending for the AFDC population in Kentucky, are physician, laboratory, inpatient and outpatient hospital services, and prescription drugs. Note that physician services refers to services provided by both primary care physicians and specialists and encompasses all service settings (e.g., office, hospital). Laboratory services refers to tests performed by independent laboratories (i.e., for patients referred by an office-based physician). If a patient receives laboratory services in another setting, such as a hospital outpatient department, those services are counted in that setting. Outpatient hospital services refers to routine services and emergency department services (although we analyze these services separately using a slightly different period of data).
The measure of utilization is average units of service per enrollee in each month. Because the utilization measure uses units of service rather than other measures, such as expenditures, it controls for changes in medical care prices over time. Because the measure is expressed per enrollee, it controls for changes in enrollment. The units of service for the inpatient services and prescription drugs are hospital days and number of prescriptions, respectively. For all other services, the unit of service is the actual service or procedure. If one thinks in terms of the Current Procedural Terminology, Fourth Edition (CPT-4) coding scheme, “service or procedure” can refer to a surgical procedure, an office visit, a radiology procedure, a laboratory test, or other medical service (e.g., removal of cast). For example, in the laboratory, a routine urinalysis and a blood count constitute two different tests and thus count as two service units.
The strength of our utilization measure is that it allows us to measure changes in service volume. For example, before case management was implemented, one might have a visit to the hospital outpatient department during which two services were provided. After the implementation of case management, that same visit might involve only one service. Analysis at the visit level would show no change in utilization, but analysis at the service level would show a reduction in utilization. On the other hand, the weakness of this measure is that we cannot ascertain whether reductions in utilization result from fewer visits, fewer services per visit, or both.
The period of observation is 60 months (5 complete years) from July 1984 through June 1989. To determine the impact of the PCCM program, we used an interrupted time-series research design. This design is a widely accepted framework for assessing the impact of an event in time on the behavior of a variable (Campbell and Stanley, 1963; Draper and Smith, 1966; Cook and Campbell, 1979; McDowall et al., 1980; Lewis-Beck, 1986). This method allows us to compare utilization during a later time period with utilization during an earlier time period, while controlling for secular trends in utilization. That is, a time variable measures the trend in utilization and, as such, serves as a proxy for all influences on utilization. The interruption variables (measuring the implementation of the program) measure changes in the utilization trend.
For the purpose of this analysis, we were interested in identifying three distinct time periods (i.e., two interruptions). The pre-KenPAC period is 20 months long, covering July 1984 to February 1986. During this period no effects from the PCCM program are expected; it serves as our baseline for measuring subsequent changes. The second period is the enrollment period, running 6 months from March 1986 to August 1986. Mass enrollment in the KenPAC program was tied to routine 6-month case reviews for AFDC benefits. During the enrollment period, PCCM effects are confounded with the disruptive effects of mass enrollment: the enrollee is required to choose a case manager, which may delay care while contact is re-established between provider and patient (particularly for beneficiaries who are assigned, rather than choose, a case manager), and providers may be uncertain about how services are to be provided, reported, and billed. The third period is the post-implementation period (hereafter referred to as the "post-KenPAC" period) and runs from September 1986 to June 1989 (34 months). During this period changes in utilization should be attributable to PCCM. We expect the incentives inherent in PCCM to result in reductions across all five services examined.
Conventional econometric methods are used to estimate changes in utilization across the three time periods (Gujarati, 1988; Lewis-Beck, 1986; Draper and Smith, 1966). For reasons noted, we cannot reach clear conclusions regarding utilization during the enrollment period because the effects of PCCM and mass enrollment are entangled. Thus, we are interested in comparing only the post-KenPAC period with the pre-KenPAC period. Binary variables representing the three time periods yield differential parameter estimates for the enrollment period and post-KenPAC period relative to the pre-KenPAC period. Beyond controlling for time and program effects, we also include a binary variable to capture seasonal variations in utilization. There are separate regression equations for each of the five services. Each equation takes the following form:

Yt = b0 + b1(TIME) + b2(ENROLL) + b3(POST) + b4(SEASON) + et
where:
Yt = service units per enrollee;
TIME = time variable (coded 0, 1, 2, 3…N);
ENROLL = binary variable measuring the difference in utilization during enrollment relative to pre-KenPAC (coded 1 if enrollment period; 0 otherwise);
POST = binary variable measuring the difference in utilization during post-KenPAC relative to pre-KenPAC (coded 1 if post-period; 0 otherwise);
SEASON = binary variable measuring the difference in utilization during winter (coded 1 if December, January, or February; 0 otherwise).
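The right-hand-side variables can be built mechanically from the month index. The sketch below is our reading of the coding scheme described above (month 0 is July 1984; period lengths are taken from the text), not the authors' code.

```python
# Construct the interrupted time-series regressors for the 60 monthly
# observations, July 1984 (t = 0) through June 1989 (t = 59).
# Period boundaries follow the text: 20 pre-KenPAC months, a 6-month
# enrollment period, and 34 post-KenPAC months.

PRE_LEN = 20      # July 1984 - February 1986
ENROLL_LEN = 6    # March 1986 - August 1986

def design_row(t):
    """Return (TIME, ENROLL, POST, SEASON) for month index t."""
    enroll = 1 if PRE_LEN <= t < PRE_LEN + ENROLL_LEN else 0
    post = 1 if t >= PRE_LEN + ENROLL_LEN else 0
    calendar_month = (6 + t) % 12 + 1        # t = 0 is July (month 7)
    season = 1 if calendar_month in (12, 1, 2) else 0
    return (t, enroll, post, season)

rows = [design_row(t) for t in range(60)]
```

Any standard least-squares routine applied to these regressors yields the differential intercepts reported in Table 2.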
This interrupted time-series design allows us to measure deviations from the general trend in utilization. The credibility of this approach rests on the assumption that other plausible explanations for post-implementation changes can be ruled out. The inclusion of the TIME variable and the SEASON variable mitigates many of these concerns. The TIME variable captures any systematic, underlying forces driving utilization changes during this 5-year period. For example, if the population were aging in such a fashion as to increase utilization, the TIME variable would capture this effect. The SEASON variable captures the short-run fluctuations in utilization that one might expect in a monthly data series.
Our design has a number of strengths. The enrollment period, in which program impacts are entangled with the mechanics of choosing a case manager, is isolated, allowing us to consider the true pre- and post-implementation periods. Our 34-month post period allows us to examine the impact of a program well into maturity. Distinct from previous literature that has generally used visits or days, our measure of utilization allows us to examine changes in the volume of services per enrollee. Finally, our methods can be easily replicated by Medicaid program managers using data readily available to them.
Nonetheless, there are certain weaknesses in our research design. Ideally, some control for differences in individual-level characteristics (i.e., age, gender, urban or rural location) would be included. The level of aggregation in our data does not provide information at the claim level to make these adjustments. However, as previously noted, the inclusion of the TIME variable mitigates this concern somewhat by controlling for underlying forces driving utilization over time. In certain service settings, our measure of utilization does not allow us to determine whether reductions are attributable to fewer visits or fewer services per visit. Finally, as with most previous studies, our research is based on data from one State and may be limited in its generalizability.
Results
Table 1 reports the means for the utilization variables overall and in the three time periods. Table 2 reports the results of our regression analysis. (Regression diagnostics are discussed in the Technical Note.) With the exception of inpatient services, the post-KenPAC differential intercepts are negative, suggesting reductions in utilization. However, utilization reductions are statistically significant only for laboratory and outpatient hospital services. Our data do not allow us to ascertain whether the reduction in the outpatient hospital setting is the result of fewer visits, fewer services per visit, or both.
Table 1. Mean and Standard Deviation Monthly Utilization per 10,000 KenPAC Enrollees: Kentucky, July 1984-June 1989.
| Service | Pre-KenPAC¹ | Enrollment² | Post-KenPAC³ | All Periods |
|---|---|---|---|---|
| Inpatient Hospital | 817 (111) | 692 (122) | 661 (119) | 716 (136) |
| Laboratory | 226 (64) | 240 (79) | 302 (95) | 270 (91) |
| Outpatient Hospital | 3,579 (664) | 3,408 (626) | 4,053 (858) | 3,815 (805) |
| Prescription Drugs | 4,988 (664) | 4,852 (477) | 5,271 (776) | 5,135 (724) |
| Physician | 11,761 (2,133) | 12,958 (1,686) | 11,940 (1,827) | 11,982 (1,920) |

¹July 1984-February 1986.
²March 1986-August 1986.
³September 1986-June 1989.
NOTES: KenPAC is Kentucky Patient Access and Care Program. Standard deviations are in parentheses.
SOURCES: Kentucky Medicaid Management Information System; Miller, M., The Urban Institute, Washington, DC, and Gengler, D., Office of the Governor, Helena, Montana, 1993.
Table 2. Time-Series Regression Results of Impact on Service Utilization Before and After Case Management¹.

| Service | Pre-KenPAC Intercept (t-value) | Post-KenPAC Differential Intercept (t-value) | Adjusted R² | Durbin-Watson |
|---|---|---|---|---|
| Inpatient Hospital | *886.20 (29.68) | 15.90 (0.25) | 0.39 | 2.49 |
| Laboratory | *172.50 (9.06) | *−130.73 (−3.25) | 0.45 | 2.41 |
| Outpatient Hospital² | *3,327.57 (24.38) | *−1,349.05 (−3.57) | 0.30 | 2.63 |
| Prescription Drugs | *4,687.76 (24.33) | −231.18 (−0.57) | 0.11 | 2.49 |
| Physician | *11,486.66 (21.06) | −1,042.56 (−0.91) | 0.00 | 2.22 |

*Significant at the 99-percent confidence level using a two-tailed test.
¹Complete model includes a time-counter variable, a seasonal binary variable, and two binary variables to measure the differences in the level of utilization during the enrollment and post-KenPAC periods. All parameter estimates are expressed as units per 10,000 enrollees.
²Model corrected for autocorrelation (see Technical Note).
NOTE: KenPAC is Kentucky Patient Access and Care Program.
SOURCES: Kentucky Medicaid Management Information System; Miller, M., The Urban Institute, Washington, DC, and Gengler, D., Office of the Governor, Helena, Montana, 1993.
The results in Table 2 suggest that further analysis should be undertaken. For physician services, the adjusted R-square value is extremely low, suggesting that the simple, differential intercept model we use may be inadequate. Models of this type can measure differences in the level of utilization (i.e., the intercept) and/or the rate of change in utilization (i.e., the slope). Thus, there are four possible outcomes between time periods: no difference; a difference in the level (intercept) of use but not in the rate (slope) of use; a difference in the rate of use but not in the level of use; and differences in both the level and rate of use. We began with the simplest model: measuring differences in the level of use. However, the physician service results suggest that a more elaborate model allowing both the level and rate of utilization to change may provide a clearer picture.
For each service, models testing for differences in both the level and rate were estimated. In this instance the model takes the form:

Yt = b0 + b1(TIME) + b2(ENROLL) + b3(POST) + b4(SEASON) + b5(ENROLLSLP) + b6(POSTSLP) + et
where all variables are as before and:
ENROLLSLP = variable measuring the difference in the rate of change of utilization during enrollment relative to pre-KenPAC (coded 1, 2, 3…N if enrollment period; 0 otherwise);
POSTSLP = variable measuring the difference in the rate of change of utilization during post-KenPAC relative to pre-KenPAC (coded 1, 2, 3…N if post-KenPAC period; 0 otherwise).
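The slope-shift variables restart a counter at 1 in the first month of each period, which is what lets the post-period trend differ from the pre-period trend. A sketch of this coding (our reading of the scheme described above, not the authors' code):

```python
# Construct ENROLLSLP and POSTSLP for month index t (t = 0 is
# July 1984). Period lengths are taken from the text: 20 pre-KenPAC
# months followed by a 6-month enrollment period.

PRE_LEN, ENROLL_LEN = 20, 6

def slope_row(t):
    """Return (ENROLLSLP, POSTSLP) for month index t."""
    enrollslp = t - PRE_LEN + 1 if PRE_LEN <= t < PRE_LEN + ENROLL_LEN else 0
    postslp = t - (PRE_LEN + ENROLL_LEN) + 1 if t >= PRE_LEN + ENROLL_LEN else 0
    return (enrollslp, postslp)
```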
With the exception of physician services, the regression results were unaffected by changing the model. That is, the adjusted R-square values did not increase significantly, and no additional significant parameters were found. However, in the physician service equation, the R-square value increased and both parameters (i.e., the differential intercept and slope) for the post-KenPAC period were negative and statistically significant (Table 3). This suggests that PCCM reduced the level of physician services and that the rate of utilization was still falling during the post-KenPAC period.
Table 3. Time-Series Regression Results (Elaborated Model¹) of Impact on Physician Services Utilization Before and After Case Management.

| Service | Pre-KenPAC Intercept (t-value) | Pre-KenPAC Slope (t-value) | Post-KenPAC Differential Intercept (t-value) | Post-KenPAC Differential Slope (t-value) | Adjusted R² |
|---|---|---|---|---|---|
| Physician | *9,985.43 (17.32) | *205.53 (3.89) | *−4,212.22 (−3.25) | *−198.45 (−3.45) | 0.12 |

*Significant at the 99-percent confidence level using a two-tailed test.
¹Complete model includes a time-counter variable, a seasonal binary variable, two binary variables to measure differences in the level of utilization during the enrollment and post-KenPAC periods, and two variables to measure differences in the rate of change of utilization during the enrollment and post-KenPAC periods. All parameter estimates are expressed as units per 10,000 enrollees. Model corrected for autocorrelation (see Technical Note).
NOTE: KenPAC is Kentucky Patient Access and Care Program.
SOURCES: Kentucky Medicaid Management Information System; Miller, M., The Urban Institute, Washington, DC, and Gengler, D., Office of the Governor, Helena, Montana, 1993.
One final elaboration of the analysis was undertaken. As mentioned earlier, outpatient hospital services include both routine services and emergency department services. The literature is replete with anecdotal and empirical evidence that emergency departments are often used for non-emergency care, particularly by the Medicaid population (Davidson, 1978, 1982; Freund, 1984; Freund and Neuschler, 1986; Lavenhar, Ratner, and Weinerman, 1968; Ullman, Block, and Stratman, 1975; Scherzer, Druckman, and Alpert, 1980; Kelman and Lane, 1976). The emergency department represents a conspicuous opportunity to reduce inappropriate utilization. It is important for State policymakers to understand more completely the observed reduction in outpatient hospital use: Is it emergency department services, routine services, or both?
Unfortunately, our 60-month data base does not report routine outpatient department and outpatient emergency department services separately. However, an identical data base that disaggregates routine outpatient department and emergency department services was available from the State, except that it covered a shorter time period (27 months, running from January 1985 through March 1987). Using these data, the pre-KenPAC period runs from January 1985 through February 1986 and the post-KenPAC period runs from September 1986 to March 1987. The enrollment period is unchanged (March 1986 to August 1986). Using the same methods previously outlined, we estimate separate models for routine outpatient department services and emergency department services.
Table 4 reports the regression results for routine outpatient department and emergency department services using the 27-month data series. Even though the pre- and post-KenPAC periods are considerably shorter, the results are compelling. The adjusted R-square values for both models are high, and the post-KenPAC parameters are negative and highly significant. These results indicate that PCCM reduced utilization in both the emergency department and the outpatient department.
Table 4. Time-Series Regression Results of Impact on Emergency Department and Outpatient Department Services¹ Utilization Before and After Case Management.

| Service | Pre-KenPAC Intercept (t-value) | Post-KenPAC Differential Intercept (t-value) | Adjusted R² |
|---|---|---|---|
| Emergency Department | *639.12 (24.79) | *−199.87 (−3.49) | 0.66 |
| Outpatient Clinic | *2,616.20 (32.38) | *−555.38 (−2.98) | 0.47 |

*Significant at the 99-percent confidence level using a two-tailed test.
¹Analysis based on 27 months of service claims data. Complete model includes a time-counter variable, a seasonal binary variable, and two binary variables measuring differences in the level of utilization during the enrollment and post-KenPAC periods. All parameter estimates are expressed as units per 10,000 enrollees.
NOTE: KenPAC is Kentucky Patient Access and Care Program.
SOURCES: Kentucky Medicaid Management Information System; Miller, M., The Urban Institute, Washington, DC, and Gengler, D., Office of the Governor, Helena, Montana, 1993.
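The model specification described in the table footnote (time counter, seasonal dummy, and level-shift dummies for the enrollment and post-KenPAC periods) can be sketched with simulated data. Everything below (variable names, the seasonal definition, coefficient values, the noise level) is illustrative and not drawn from the actual claims data.

```python
import numpy as np

# Illustrative sketch of the interrupted time-series model described in the
# table footnote: utilization per 10,000 enrollees regressed on a time
# counter, a seasonal dummy, and level-shift dummies for the enrollment and
# post-KenPAC periods. The series below is simulated, not the study's data.
rng = np.random.default_rng(0)

n_months = 27                                   # Jan 1985 - Mar 1987
t = np.arange(1, n_months + 1)                  # time-counter variable
# Seasonal binary (assumed here to flag winter months, for illustration only)
winter = np.isin((t - 1) % 12, [0, 1, 11]).astype(float)
enroll = ((t >= 15) & (t <= 20)).astype(float)  # Mar-Aug 1986 enrollment period
post = (t >= 21).astype(float)                  # Sep 1986 onward

# Simulated emergency department series: pre-period level near 639,
# post-period level shift near -200 (values chosen to echo Table 4).
y = 639 + 20 * winter - 50 * enroll - 200 * post + rng.normal(0, 15, n_months)

X = np.column_stack([np.ones(n_months), t, winter, enroll, post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"pre-period intercept: {beta[0]:.1f}")
print(f"post-period differential intercept: {beta[4]:.1f}")
```

With only 27 observations the estimates are noisy, but the post-period differential intercept recovers the simulated level shift, which is the quantity reported in Table 4.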
Discussion
The KenPAC PCCM program had no impact on inpatient hospital and prescription drug utilization. We attribute the lack of an impact on inpatient utilization to State policies and national trends already limiting inpatient hospital utilization. During the entire period of observation, inpatient utilization for this population was declining. The Kentucky Medicaid program imposes limitations on pre-operative days, weekend admissions, procedures that can be provided in an ambulatory setting, and optional procedures in the inpatient setting. These limitations would tend to dampen any additional effect from PCCM.
General trends affecting inpatient hospital utilization would also tend to dampen PCCM effects. Advances in technology have allowed many inpatient procedures to be moved to the ambulatory setting. Nationally, the number of ambulatory surgery centers (ASCs) increased dramatically from 1983 to 1987, from 240 to 780. In the Medicaid population under study here, the number of ASC procedures increased from 2 per 10,000 enrollees in July 1984 to 96 per 10,000 enrollees in June 1989. Also, the implementation of Medicare's prospective payment system (PPS) has made hospitals more sensitive to lengths of stay.
If the Kentucky experience is generalizable to other States, policymakers should not look for significant savings in the inpatient setting. This conclusion is contrary to findings in studies of managed care programs for the privately insured (Manning et al., 1984). However, as noted, Freund et al. (1989) found that major reductions in inpatient utilization do not occur in Medicaid managed care programs.
We had expected that KenPAC would lead to lower drug use for two reasons. First, under KenPAC, Medicaid recipients establish stable relationships with a single primary care physician, which might reduce excessive use resulting from “doctor shopping.” Second, the State's utilization review and program emphasis on reducing utilization should influence physician behavior. The empirical results suggest either that pre-KenPAC prescription drug utilization was at an appropriate level or that utilization review was ineffective in changing physicians' tendency to prescribe drugs. Other studies do not clarify this outcome. Long and Settle (1988) found that prescription drug utilization increased under PCCM. Studying the Medicaid physician capitation program in Kentucky that predated KenPAC, Bonham and Barber (1987) found no effect on drug use but noted that drugs were not directly subject to capitation. The impact of PCCM on prescription drug use requires further examination.
PCCM appears to have reduced utilization of independent laboratories, emergency departments, hospital outpatient departments, and physician services. This evidence, coupled with case study evidence in the literature, suggests that the reduction in laboratory services results from both an ongoing relationship between physician and patient and fewer specialist referrals. Establishing a patient with one primary care physician reduces the number of different providers seen and, consequently, may reduce the number of diagnostic tests a patient receives.
The reduction in emergency department use is consistent with findings in other research. Given this population's historical reliance on emergency departments, the setting represents an obvious opportunity for utilization reduction. The impact on emergency department use may have been influenced by two other factors. As previously noted, the managed care program that predated KenPAC (Citicare) concentrated on reducing emergency department utilization. In addition, KenPAC program staff reported making more aggressive efforts to reduce emergency department use: staff met with hospital personnel to explain program objectives and to relay the reimbursement consequences of providing unauthorized, nonemergency care in the emergency department.
KenPAC's more aggressive efforts to contain emergency department use may have had a spill-over effect on the outpatient department. That is, hospital staff may have become more careful about getting the case manager's approval for both emergency department and outpatient department utilization. The reduction in independent laboratory services suggests that PCCM reduces ancillary service utilization. Reduced numbers of ancillary procedures may also account for some of the utilization reduction seen in the hospital outpatient setting.
PCCM could be expected to increase the number of visits to physicians as other institutional sources of care are abandoned in favor of the office-based physician setting. Long and Settle (1988) found that primary care physician services increase. Hurley, Freund, and Taylor (1989) found that primary care visits do not increase but that services become more concentrated with the case manager. Freund et al. (1989) found reductions in the percentage of enrollees with physician visits, as well as in the number of visits, in some programs but not in others. We find a reduction in both the level and the rate of change of physician utilization. These findings might be expected given our measure of utilization: services per enrollee. Even if the number of visits to case managers increases, the volume of services per enrollee can decline as utilization review makes case managers more deliberate about the care they provide and authorize (e.g., their use of specialists). Finally, eliminating doctor shopping should reduce both visits and service volume.
From the Kentucky experience, we conclude that a PCCM/FFS program with aggressive utilization review, particularly in the emergency department setting, can significantly reduce utilization. Policymakers can expect utilization to decline in independent laboratory, emergency department, outpatient department, and physician services. Although other studies have found increases in physician services, we would argue that the physician service findings depend on the utilization measure (visits versus services per enrollee). We cannot directly assess Medicaid cost effectiveness without estimates of program administration costs, which offset utilization savings. However, reductions in these four services, which account for roughly one-half of acute care spending for the Aid to Families with Dependent Children (AFDC) population, without increases in other services, suggest that PCCM/FFS programs can be cost effective.
Technical Note
There are two estimation problems that models of this kind can encounter: multicollinearity and autocorrelation. Both are problems of efficiency, rather than bias, in the parameter estimates.
Multicollinearity was diagnosed using a condition index (Belsley, Kuh, and Welsch, 1980). Condition index values between 10 and 30 indicate moderate to strong multicollinearity, and values above 30 indicate severe multicollinearity. Because the condition index never exceeded 9.44 in our models, we judged the degree of multicollinearity to be minor. Even if multicollinearity were present, its effect would be to inflate the standard errors. Inflated standard errors reduce t-values, which tends to render results statistically insignificant (i.e., Type II errors). All statistically significant results in this article hold at the 99-percent confidence level despite this potential bias toward insignificance.
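The condition-index diagnostic described above can be sketched as follows; the function and example matrices are ours, offered only to illustrate the computation (columns scaled to unit length, then the ratio of largest to smallest singular value).

```python
import numpy as np

# A minimal sketch of the condition-index diagnostic: scale each column of
# the design matrix to unit length, then take the ratio of the largest to
# the smallest singular value. Values between roughly 10 and 30 suggest
# moderate to strong collinearity; above 30, severe.
def condition_index(X):
    Xs = X / np.linalg.norm(X, axis=0)        # unit-length columns
    s = np.linalg.svd(Xs, compute_uv=False)   # singular values, descending
    return s.max() / s.min()

rng = np.random.default_rng(1)

# A nearly orthogonal design yields a small condition index...
X_ok = rng.normal(size=(60, 4))
# ...while adding a column that nearly duplicates another inflates it.
X_bad = np.column_stack([X_ok, X_ok[:, 0] + rng.normal(0, 1e-3, 60)])

print(f"well-conditioned: {condition_index(X_ok):.2f}")
print(f"near-collinear:   {condition_index(X_bad):.2f}")
```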
Time-series models can also encounter autocorrelation. The effect of autocorrelation on the estimated standard errors depends on its direction. Durbin-Watson tests were run on all models. Only two models (outpatient services in Table 2 [DW = 2.63] and physician services in Table 3 [DW = 2.64]) clearly exhibited autocorrelation. In both instances the autocorrelation was negative, which tends to inflate standard errors. These models were re-estimated by deriving the value of rho from the Durbin-Watson statistic and using it in a generalized difference equation (Gujarati, 1988).
This correction reduced the standard errors, making the post-KenPAC parameter estimates significant at the 99-percent rather than the 95-percent confidence level.
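A minimal sketch of that correction, using the standard approximation rho ≈ 1 − DW/2 and a made-up series (the function and data are ours, for illustration only):

```python
import numpy as np

# Sketch of the autocorrelation correction: estimate rho from the
# Durbin-Watson statistic (rho is approximately 1 - DW/2) and re-estimate
# the model on generalized (quasi-)differences.
def quasi_difference(y, X, dw):
    rho = 1.0 - dw / 2.0             # DW = 2.63 gives rho of about -0.315
    y_star = y[1:] - rho * y[:-1]    # quasi-differenced dependent variable
    X_star = X[1:] - rho * X[:-1]    # note: the constant column becomes (1 - rho)
    return y_star, X_star, rho

# Made-up six-observation series, not the study's data
y = np.array([10.0, 12.0, 9.0, 13.0, 8.0, 14.0])
X = np.column_stack([np.ones(6), np.arange(6.0)])

y_star, X_star, rho = quasi_difference(y, X, dw=2.63)
beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
print(f"{rho:.3f}")  # negative rho: negative autocorrelation
```

Because rho is negative here, quasi-differencing adds (rather than subtracts) a fraction of the lagged observation, which is why the correction shrinks the inflated standard errors.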
Acknowledgments
We would like to acknowledge the excellent research assistance of Maria Perozek and to thank Janie Miller and Mark Birdwhistell of the Kentucky Department for Medicaid Services for their patient and thorough explanations of the KenPAC program.
Footnotes
Mark E. Miller is with The Urban Institute and Daniel J. Gengler is with the Office of the Governor of Montana. The statements and opinions expressed are the authors' own and do not reflect the opinions or policies of The Urban Institute, the Office of the Governor of Montana, or the Health Care Financing Administration.
Two other points are worth noting. Both Hurley, Freund, and Taylor (1989) and Freund et al. (1989) depend on non-enrollee comparison groups drawn from different sites. Non-equivalent comparison groups raise the possibility of comparing two systematically different populations, although we hasten to point out that both studies employ controls for population differences expected to affect use. Nonetheless, differences in supply or market characteristics (e.g., beds per capita, physicians per capita) may still influence utilization.
A counterargument is that requiring Medicaid enrollees to choose a primary provider establishes new patient-provider relationships, which, in turn, produce a short-run increase in diagnostic services. Holahan, Bell, and Adler (1987) find only weak support for increased use of diagnostic services during program enrollment and startup. However, our analysis found statistically significant reductions in outpatient and laboratory utilization during the enrollment period; no significant changes in inpatient, physician, or drug utilization were found during that period.
Exponential smoothing techniques are sensitive to two fundamental assumptions: the time trend (constant, linear, or quadratic), which models the long-term underlying trend of the time series, and the smoothing weight, which determines how short-term fluctuations are estimated. Obviously, the assumption made regarding the underlying trend is important. However, even relatively minor changes in the smoothing parameter can produce marked differences in expected utilization (and estimated cost effectiveness). The time trend and smoothing assumptions are not explicit in the report, and evaluative statistics (e.g., relative mean standard percentage error) assessing the accuracy of the models are not provided.
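The sensitivity described above can be seen in a small sketch of simple exponential smoothing. The series and smoothing weights below are made up for illustration and bear no relation to the report's data or its (unstated) model assumptions.

```python
# Hypothetical illustration of how the smoothing weight alpha drives a
# simple (constant-trend) exponential smoothing forecast: modest changes
# in alpha shift the expected-utilization forecast noticeably.
def ses_forecast(series, alpha):
    level = series[0]
    for x in series[1:]:
        # Each update pulls the level toward the latest observation
        level = alpha * x + (1 - alpha) * level
    return level  # one-step-ahead forecast

utilization = [100, 120, 90, 130, 85, 140]  # made-up monthly series
for alpha in (0.2, 0.3, 0.5):
    print(alpha, round(ses_forecast(utilization, alpha), 1))
```

On this volatile series the forecast ranges over roughly ten units as alpha moves from 0.2 to 0.5, which is why unreported smoothing assumptions make an evaluation hard to assess.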
Reprint requests: Mark E. Miller, Ph.D., Senior Research Associate, The Urban Institute, Health Policy Center, 2100 M Street, NW, Washington, DC 20037.
References
- Anderson MD, Fox PD. Lessons Learned From Medicaid Managed Care Approaches. Health Affairs. 1987 Spring;6:71–86. doi: 10.1377/hlthaff.6.1.71.
- Aved BM. The Monterey County Health Initiative. Medical Care. 1987 Jan;25:35–45. doi: 10.1097/00005650-198701000-00005.
- Belsley DA, Kuh E, Welsch RE. Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley and Sons; 1980.
- Bonham GS, Barber GM. Use of Health Care Before and During Citicare. Medical Care. 1987 Feb;25:111–119. doi: 10.1097/00005650-198702000-00004.
- Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin Company; 1963.
- Chang D, Holahan J. Medicaid Spending in the 1980s: The Access-Cost Containment Trade-Off Revisited. Washington, DC: The Urban Institute; 1989. Working Paper 3836-01.
- Congressional Research Service. Medicaid Source Book: Background Data and Analysis. Washington, DC: U.S. Government Printing Office; 1988.
- Cook TD, Campbell DT. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Boston: Houghton Mifflin Company; 1979.
- Davidson S. Understanding the Growth in Emergency Department Utilization. Medical Care. 1978;16:122. doi: 10.1097/00005650-197802000-00004.
- Davidson S. Physician Participation in Medicaid. Journal of Health Politics, Policy, and Law. 1982 Winter;7:703–717. doi: 10.1215/03616878-6-4-703.
- Davidson S, Cromwell J, Schurman R. Medicaid Myths: Trends in Medicaid Expenditures and the Prospects for Reform. Journal of Health Politics, Policy, and Law. 1986;10:699–728. doi: 10.1215/03616878-10-4-699.
- Draper NR, Smith H. Applied Regression Analysis. New York: John Wiley; 1966.
- Freund DA. Medicaid Reform: Four Studies of Case Management. Washington, DC: American Enterprise Institute; 1984.
- Freund DA. Competitive Health Plans and Alternative Payment Arrangements for Physicians in the United States: Public Sector Examples. Health Policy. 1987;7:163–173. doi: 10.1016/0168-8510(87)90029-7.
- Freund DA, Neuschler E. Overview of Medicaid Capitation and Case-Management Initiatives. Health Care Financing Review. 1986 Annual Supplement:21–30.
- Freund DA, Rossiter LF, Fox PD, et al. Evaluation of the Medicaid Competition Demonstrations. Health Care Financing Review. 1989 Winter;11:81–97.
- Grannemann T, Pauly M. Controlling Medicaid Costs: Federalism, Competition, and Choice. Washington, DC: American Enterprise Institute; 1983.
- Gujarati DN. Basic Econometrics. New York: McGraw-Hill Book Company; 1988.
- Holahan J, Bell J, Adler GS. Medicaid Program Evaluation: Final Report. Washington, DC: U.S. Department of Health and Human Services; 1987.
- Holahan J, Cohen J. Medicaid: The Trade-Off Between Cost Containment and Access to Care. Washington, DC: The Urban Institute Press; 1986.
- Hurley RE. Status of Medicaid Competition Demonstrations. Health Care Financing Review. 1986 Winter;8:65–75.
- Hurley RE, Freund DA. A Typology of Medicaid Managed Care. Medical Care. 1988 Aug;26:764–774. doi: 10.1097/00005650-198808000-00003.
- Hurley RE, Freund DA, Taylor DE. Emergency Room Use and Primary Care Case Management: Evidence from Four Medicaid Demonstration Programs. American Journal of Public Health. 1989 Jul;79:843–846. doi: 10.2105/ajph.79.7.843.
- Kelman H, Lane D. Use of the Hospital Emergency Room in Relation to Use of Private Physicians. American Journal of Public Health. 1976;66:891. doi: 10.2105/ajph.66.9.891.
- Lavenhar M, Ratner R, Weinerman E. Social Class and Medical Care: Indices of Non-Urgency on Use of Hospital Emergency Services. Medical Care. 1968;6:368.
- Lewis-Beck MS. Interrupted Time Series. In: Barry WD, Lewis-Beck MS, editors. New Tools for Social Scientists. Beverly Hills, CA: Sage Publications; 1986.
- Long SH, Settle RF. An Evaluation of Utah's Primary Care Case Management Program for Medicaid Recipients. Medical Care. 1988 Nov;26:1021–1032. doi: 10.1097/00005650-198811000-00001.
- Manning WG, Leibowitz A, Goldberg GA, et al. A Controlled Trial of the Effect of a Prepaid Group Practice on Use of Services. New England Journal of Medicine. 1984;310:1505–1510. doi: 10.1056/NEJM198406073102305.
- McDowall D, McCleary R, Meidinger EE, Hay RA Jr. Interrupted Time Series Analysis. Beverly Hills, CA: Sage Publications; 1980.
- Roeder PW. The Kentucky Patient Access and Care Program Medicaid Waiver Program Evaluation. Lexington, KY: University of Kentucky, James W. Martin School of Public Administration; 1987. Conducted for the Commonwealth of Kentucky Cabinet of Human Resources, Department of Medicaid Services Contract MS 86-87-6015.
- Rowland D, Lyons B. Mandatory HMO Care for Milwaukee's Poor. Health Affairs. 1987 Spring;6:87–100. doi: 10.1377/hlthaff.6.1.87.
- Scherzer L, Druckman R, Alpert J. Case-Seeking Patterns of Families Using a Municipal Hospital Emergency Room. Medical Care. 1980;18:289. doi: 10.1097/00005650-198003000-00004.
- Spitz B, Abramson J. Competition, Capitation, and Case Management: Barriers to Strategic Reform. The Milbank Quarterly. 1987;65(3):349–370.
- Temkin-Greener H. Medicaid Families Under Managed Care: Anticipated Behavior. Medical Care. 1986 Aug;24:721–732. doi: 10.1097/00005650-198608000-00007.
- Ullman R, Block J, Stratman W. An Emergency Room's Patients: Their Characteristics and Utilization of Hospital Services. Medical Care. 1975;13:1011. doi: 10.1097/00005650-197512000-00003.
