Health Services Research. 2005 Jun;40(3):647–668. doi: 10.1111/j.1475-6773.2005.00378.x

Impacts of Managed Care Patient Protection Laws on Health Services Utilization and Patient Satisfaction with Care

Frank A Sloan, John R Rattliff, Mark A Hall
PMCID: PMC1361161  PMID: 15960684

Abstract

Objective

To assess effects of patient protection laws implemented by the vast majority of states during the 1990s on the public's satisfaction and trust relating to health care, and on key utilization measures.

Data Sources/Study Setting

Measures of individuals' health care utilization and satisfaction, and control variables, came from three waves of the Community Tracking Study (CTS) Household Surveys conducted in 1996–1997, 1998–1999, and 2000–2001. The CTS was conducted in 60 randomly selected communities throughout the U.S. In addition, a supplemental national sample of households was included, resulting in a combined sample with cases from 48 states and the District of Columbia. After applying exclusion restrictions, the analysis sample was 49,668 adults.

Study Design

Using a fixed-effects methodology, we assessed the influence of patient protection laws on satisfaction with care and utilization of services for the entire sample and for subsamples of persons in poor health, with low income, and who were enrolled in HMOs.

Data Collection/Extraction Methods

One of the authors (Hall) compiled relevant laws in all U.S. states through 2001 from primary legal sources, checking for accuracy by conducting independent research on statutory changes and by asking three to five regulators in each state to verify that the information was correct.

Principal Findings

Overall, patient protection laws had little or no effect on trust, satisfaction with care, or utilization. Significance was found postenactment of a state patient protection law only for emergency room visits in the general sample, and only for physician trust in the low-income sample. Because of the number of possible associations examined, occasional findings of significance could occur by chance.

Conclusions

Enactment of managed care patient protection laws did not generally increase utilization of health services or improve patient satisfaction with care.

Keywords: Managed care, HMO, patient bill of rights, satisfaction with care, health services utilization


During the 1990s, both patients and health care providers voiced increasing dissatisfaction with managed care. Such dissatisfaction has been documented empirically (Blendon et al. 1998; Lake 1999/2000; Kemper et al. 1999/2000; Dudley and Luft 2001; Mechanic 2001; Kemper et al. 2002), but more importantly, became a general consensus and provided an impetus for regulation. Since 1995, most states have enacted some form of managed care patient protection legislation (Marsteller and Bovbjerg 1999; Noble and Brennan 1999; Sloan and Hall 2002).

Patient protection laws include (in various combinations): (1) liability provisions (right to bring a tort suit against a health plan), (2) provisions governing the process and standards for making and reviewing coverage decisions (criteria for medical necessity, “prudent layperson” standard for emergency care, and external review), (3) provisions affecting choice or access to providers (e.g., point-of-service options, direct access to specialists, and due process for providers terminated from a plan), (4) financial incentive and disclosure requirements (e.g., limiting physician incentives, banning “gag clauses,” and disclosing how plans reward physicians for cost savings); and (5) specific coverage mandates (e.g., for minimum maternity stays). Similar patient protection proposals have been considered by Congress.

There is widespread public debate about the need for these laws (Miller 1997; Altman et al. 1999; Symposium 1999; Agrawal and Billi 2001). Proponents insist they are needed to protect vulnerable patients and consumers from market-dominated forces that do not serve consumers' best interests, resulting in denial of needed care, inferior service, or profiteering (Rodwin 1996a, 1996b; Families USA 1997). Opponents insist these protections are unnecessary because the alleged abuses are not widespread, or because the industry is correcting problems on its own in response to market pressures. Others, from a more neutral perspective, have observed that, in theory, these laws may help correct certain market defects, but may also result in less competitive or efficient markets (Korobkin 1999; Encinosa 2001; Sloan and Hall 2002).

This national debate has been hampered by a lack of empirical evidence, especially on managed care patient protection laws taken as a package. Specific laws, such as those placing lower limits on length of stay for obstetrical delivery, have been analyzed, showing some increase in length of stay and charges attributable to the statutory change (Dato et al. 1996; Udom and Betley 1998; Raube and Merrell 1999; Liu et al. 2004). One reason the effects of these packages of laws have not been assessed previously is that they have been enacted only recently. Patient protection laws vary appreciably among states, and they were enacted at different times since 1995. This intertemporal and cross-sectional variation creates a natural experiment suitable for empirical analysis. This study reports the effects of patient protection laws on the public's satisfaction and trust relating to health care, and on key utilization measures, as measured in three rounds of the Community Tracking Study.

Methods

The Community Tracking Study (CTS)

Measures of individuals' health care utilization and satisfaction, and control variables, come from three waves of the CTS Household Surveys. These surveys were conducted in 60 randomly selected communities, including both metropolitan and nonmetropolitan areas, well-distributed throughout the U.S., with a good representation of demographic and market conditions. In addition to the main community sample, a supplemental national sample of households was also included, resulting in a combined sample with cases from 48 states and the District of Columbia. Three rounds were conducted, in 1996–1997, 1998–1999, and 2000–2001, with each including approximately 60,000 individuals in 32,000 families.

The Household Survey instrument, which has maintained the same core content over all three rounds, included questions on health insurance, health services use, access to care, satisfaction with care, physician trust, and health status. A family informant provided information on health insurance and health services utilization for all family members as well as family income and demographic information. Each adult was asked about health status, access to care, last physician visit, satisfaction with care, and trust in the person's physician.

We pooled observations from the three rounds to create a data set of 147,977 adults (persons aged 18+). The state laws apply to privately insured persons who do not obtain such insurance from self-insured employment-based (ERISA) plans. The CTS Household Survey did not ascertain whether the person obtained employment-based insurance through an ERISA or non-ERISA plan; thus, we had to approximate this. Since self-insured plans are more prevalent in large enterprises, we excluded families with private coverage from employment at an establishment with 1,000 or more employees. Also, because these laws do not consistently apply to health plans sponsored by government employees, government workers and their families were also excluded. Applying these restrictions resulted in an analysis sample of 49,668 adults.

Legal Variables

This study avoided shortcomings in use of legal variables from secondary sources by independently compiling relevant laws from primary legal sources (statutes, regulations), and by surveying state regulators about various enforcement activities such as fines, investigations, advisory bulletins, and increased agency staffing.1 State managed care patient protection laws enacted through December 31, 2001 were identified first by researching existing compilations, primarily from the National Conference of State Legislatures, National Association of Insurance Commissioners, and Blue Cross/Blue Shield Association. Then, the original statutory or regulatory source for each law was obtained and reviewed to confirm or revise the classification. Independent legal research was done to fill in gaps in these compilations and to determine whether particular laws had been struck down by courts. Also, three to five regulators in each state with relevant authority over these laws were sent a questionnaire in late 2001 to confirm and supplement this information with any missing information. At least one such survey was completed in each state.

Based on this information, 11 specific legal provisions (shown in Table 1) were coded as either absent entirely or present at a specific effective date. Also, we recorded an effective date for each state based on when it first adopted a bundle of these provisions in a law that was called either a “patient bill of rights” or a “managed care patient protection act.” States with these acts had at least four of the specific legal provisions we studied. We then constructed a time series of the legal provisions, using July to June “fiscal years” (hereafter “years”), since these periods correspond most closely to CTS interview dates. Summary data describing the implementation patterns of the various legal provisions are shown in Table 1. Any willing provider (AWP) laws were excluded from the package since these laws were almost all implemented before this study's observational period.

Table 1.

State Implementation of Patient Protection Laws

Provision Effective as of July 1

<1995 1995 1996 1997 1998 1999 2000 2001 2002 2003 never
Patient protection act 0 2 5 9 16 9 4 2 0 1 3
Individual provisions
 “Gag-clause” ban 1 0 3 22* 18 3 1 1 0 1 1
 Restrict financial incentives 0 0 3 4 12 2 3 4 0 1 22
 Require point of service 2 1 2 3 4 6 3 1 0 0 29
 Direct access to ob/gyn 2 5 7 8 10 3 4 2 0 0 10
 Direct access, other specialists 1 0 0 2 8 11 8 3 1 0 17
 Continuity with provider 1 1 2 6 8 6 4 6 2 1 14
 “Prudent layperson” standard 2 2 3 8 14 8 5 3 2 0 4
 Minimum stay for deliveries 0 1 9 15 26 0 0 0 0 0 0
 External review process 5 1 1 2 9 5 12 4 1 2 9
 Define “medical necessity” 2 1 0 1 6 2 4 5 4 0 26
 Liability of MCOs 0 0 0 0 1 0 1 5 2 1 41

Entries represent the number of states in which the provision first went into effect during the period shown. Data shown are for all 50 states plus the District of Columbia. For the analyses, we omitted DC, Alaska, and Hawaii.

* Maine repealed its gag-clause ban on June 1, 1999.

Nebraska repealed the gag-clause ban and the financial incentives restriction on July 15, 1998.

Tennessee repealed its direct-access law on July 1, 2001.

To assess the time path of the laws' effects, we created three mutually exclusive variables for each type of law in each state indicating: (a) years prior to the effective date (the omitted reference group), (b) the year of the effective date, or (c) years after the year in which the law became effective. Level of enforcement was measured as the number of the following five enforcement activities reported in each state in our survey of regulators (Sloan and Hall 2002): (1) enforcement directives or bulletins specific to these laws; (2) creation of additional staff positions to enforce these laws; (3) investigations of compliance with these laws; (4) fines issued for violation of these laws; and (5) any fines against health insurers over $10,000 over the past 10 years for violation of any law.
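As a concrete illustration of this coding step, the short Python sketch below uses entirely hypothetical states, effective dates, and column names to show how a July–June fiscal-year effective date could be converted into the year-of-effect and postenactment indicators, and how the five regulator-reported enforcement activities could be summed into a 0–5 enforcement level. It is a minimal sketch of the general approach, not the authors' actual code.

```python
import pandas as pd

# Hypothetical state-by-fiscal-year panel; 'effective_fy' is the July-June
# fiscal year in which a state's patient protection act took effect
# (None if the state never enacted one).
panel = pd.DataFrame({
    "state": ["TX"] * 3 + ["NC"] * 3,
    "fiscal_year": [1997, 1999, 2001] * 2,
    "effective_fy": [1998] * 3 + [None] * 3,
})

# Two of the three mutually exclusive timing indicators; years before the
# effective date (and states that never enacted) form the omitted reference group.
panel["law_year_of"] = (panel["fiscal_year"] == panel["effective_fy"]).astype(int)
panel["law_post"] = (panel["fiscal_year"] > panel["effective_fy"]).astype(int)

# Hypothetical regulator-survey responses: five yes/no enforcement activities
# summed into a 0-5 enforcement level for each state.
enforcement = pd.DataFrame({
    "state": ["TX", "NC"],
    "bulletins": [1, 0],
    "new_staff": [1, 0],
    "investigations": [1, 1],
    "fines_these_laws": [0, 0],
    "large_fines_any_law": [1, 0],
})
enforcement["enforcement_level"] = enforcement.drop(columns="state").sum(axis=1)

panel = panel.merge(enforcement[["state", "enforcement_level"]], on="state")
print(panel)
```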

We tested two alternative specifications. In one, we included individual laws as explanatory variables, matching these to theoretically related outcome variables, but we found no consistent patterns and few results were statistically significant at conventional levels. This may in part have been because of substantial multicollinearity among the individual laws. In a second specification, we included variables for enactment of patient bills of rights as a package and for enforcement levels. Such packages contained several specific provisions, which to some extent varied among states. Some states implemented a package in one year, which was either preceded or followed by implementation of individual laws pertinent to one specific aspect of the care delivery process. The second specification did not recognize implementation of individual laws. Only results from the second approach are shown in the tables.

The legal variables were matched with each person's state of residence at the time of the CTS interview. The interview years roughly correspond to fiscal rather than calendar years. We took this into account in matching laws to individual respondents. We included state fixed effects to avoid confounding because of failure to account for time-invariant state characteristics that may be correlated with state propensity to implement patient protection laws. Year fixed effects measured time-varying factors that would have otherwise been omitted.
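The matching of laws to respondents can be sketched the same way. In the hypothetical example below, each interview date is mapped to the July–June fiscal year in which it falls (labelled here by the calendar year in which that fiscal year ends) and then merged, by state and fiscal year, with a state-by-year table of legal variables such as the one constructed above; all identifiers, dates, and values are invented for illustration.

```python
import pandas as pd

# Hypothetical respondent records; interview dates come from the CTS rounds.
persons = pd.DataFrame({
    "person_id": [1, 2, 3],
    "state": ["TX", "TX", "NC"],
    "interview_date": pd.to_datetime(["1996-10-15", "1999-02-01", "2001-03-20"]),
})

# Label each interview by the July-June fiscal year in which it falls,
# using the calendar year in which that fiscal year ends.
persons["fiscal_year"] = persons["interview_date"].dt.year + (
    persons["interview_date"].dt.month >= 7
).astype(int)

# Hypothetical state-by-fiscal-year table of legal indicators (see the
# previous sketch); a left merge attaches the laws in force in the
# respondent's state at the time of the interview.
laws = pd.DataFrame({
    "state": ["TX", "TX", "TX", "NC", "NC", "NC"],
    "fiscal_year": [1997, 1999, 2001, 1997, 1999, 2001],
    "law_year_of": [0, 0, 0, 0, 0, 0],
    "law_post": [0, 1, 1, 0, 0, 0],
})
matched = persons.merge(laws, on=["state", "fiscal_year"], how="left")
print(matched)
```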

We tested the hypotheses that patient protection laws have: (1) increased utilization of hospitals, physicians, and specialist care in particular, and (2) increased patients' satisfaction with their choice of physicians, trust in their physicians, and satisfaction with the care they have received.

Health Care Utilization

Six variables measured health care utilization, all based on respondents' self-reports. Four of these dependent variables referred to use in the 12 months before the interview: number of overnight hospital stays (range: 0–20; 91 percent 0), emergency room visits (range: 0–5; 81 percent 0), outpatient surgical procedures (range: 0–5; 89 percent 0), and office visits, including visits to physicians, nurse midwives, nurse practitioners, and physician assistants (range: 0–20; 20 percent 0, 3 percent 20).

Two other variables measured utilization of specialist care: a binary variable indicating whether or not the person reported having had a mental health visit during the past year (range: 0–1; 93 percent 0); and a binary variable indicating whether or not the most recent visit was to a specialist (range: 0–1; 64 percent 0).

Satisfaction and Trust

We used four measures of patient satisfaction and trust. One measure referred to overall satisfaction with the respondent's last medical visit, and two other satisfaction measures referred to choice of primary care physician and choice of specialist, respectively. Each measure was based on responses on a 5-point scale ranging from “very satisfied” at 5 to “very dissatisfied” at 1. The specialist choice measure was obtained only from respondents who reported seeing or needing a specialist during the year before the interview.

Survey respondents who reported having a usual physician or having had at least one physician visit in the prior year were instructed to think about the doctor they “usually see when you are sick or need advice about your health” and to respond to several statements using a 5-point scale from strongly agree to strongly disagree. We chose three statements for which responses reflect patients' trust in their physician: (1) “I trust my doctor to put my medical needs above all other considerations when treating my medical problems,” (2) “I think my doctor is strongly influenced by health insurance company rules when making decisions about my medical care,” and (3) “I think my doctor may not refer me to a specialist when needed.” Responses were coded so that higher values indicate greater trust. We then used factor analysis to create an index of each respondent's trust in his or her usual physician. Loadings on the first factor were positive (0.48, 0.50, and 0.55).
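A minimal sketch of this construction is shown below, assuming that the two negatively worded items are reverse-coded before a single common factor is extracted. The item names and simulated responses are hypothetical, and scikit-learn's FactorAnalysis stands in for whatever factor-analytic routine was actually used.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical 5-point responses (1 = strongly disagree ... 5 = strongly agree)
# to the three trust statements.
rng = np.random.default_rng(0)
items = pd.DataFrame(
    rng.integers(1, 6, size=(500, 3)),
    columns=["trust_needs_first", "insurer_influence", "withholds_referral"],
)

# Statements (2) and (3) are negatively worded, so reverse-code them so that
# higher values on every item indicate greater trust.
items["insurer_influence"] = 6 - items["insurer_influence"]
items["withholds_referral"] = 6 - items["withholds_referral"]

# Standardize the items and extract a single common factor; each respondent's
# factor score serves as the trust-in-physician index.
z = (items - items.mean()) / items.std()
fa = FactorAnalysis(n_components=1, random_state=0)
trust_index = fa.fit_transform(z).ravel()
loadings = fa.components_.ravel()  # loadings of the three items on the factor
print(loadings)
```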

Persons who had a physician visit in the past year and reported either a checkup or care for an illness were asked to rate the care they received during the visit. The question stem was “How would you rate:” followed by (1) “the thoroughness and carefulness of the examination and treatment you received,” (2) “how well your doctor listened to you,” and (3) “how well the doctor explained things in a way you could understand.” Responses were on a 5-point scale ranging from “poor” (1) to “excellent” (5). Responses to the three questions were highly correlated; thus, we again used factor analysis to obtain an index for “rating of care.” Again, loadings on the first factor were positive (0.83, 0.89), suggesting that we captured a common “satisfaction with care” factor.

Control Variables

We controlled for demographic characteristics: race/ethnicity (black, other nonwhite, Hispanic, and Spanish-language interview), gender, age, educational level, family income, and language spoken in the household. We included continuous measures of age and its square.

To measure the influence of the person's health on use of services and satisfaction/trust, we included the SF-12 Physical Component Summary and Mental Component Summary scores (Patrick and Erickson 1993), computed with the Health Institute's scoring algorithms (Center for Studying Health System Change 2003). Both variables were continuous and ranged from approximately 10 to approximately 70, with higher values indicating better health.

Estimation

The utilization, satisfaction, and trust outcomes were modeled as

$$Y_{jit} = \alpha + \beta_1 \mathrm{STATE}_i + \beta_2 \mathrm{ROUND}_t + \beta_3 \mathrm{LEGAL}_{it} + \beta_4 \mathrm{CONTROLS}_{jit} + e_{jit}$$

where STATE represented binary variables for each state and ROUND represented binary variables for each survey round. Outcome Y for person j in state i at time t is a function of a time-invariant state variable, a time-varying indicator for the interview round, state implementation of managed care patient protection laws (LEGAL), and person-specific control variables.
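A minimal sketch of this specification, using simulated data and placeholder variable names, is given below. State and round fixed effects enter as sets of indicator variables, and the person-level survey weight is applied through weighted least squares; this only illustrates the model's structure and is not a reproduction of the published estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated person-level analysis file with placeholder names; 'weight' is a
# survey weight, and the legal and control variables mirror those in the text.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "office_visits": rng.poisson(3, n),
    "state": rng.choice(["TX", "NC", "OH", "CA"], n),
    "round": rng.choice([1, 2, 3], n),
    "law_year_of": rng.integers(0, 2, n),
    "law_post": rng.integers(0, 2, n),
    "enforcement_level": rng.integers(0, 6, n),
    "sf12_physical": rng.normal(50, 10, n),
    "sf12_mental": rng.normal(50, 10, n),
    "female": rng.integers(0, 2, n),
    "age": rng.integers(18, 65, n),
    "weight": rng.uniform(0.5, 2.0, n),
})

# C(state) and C(round) implement the state and round fixed effects; age
# enters with a quadratic term, as in the paper's control set.
formula = ("office_visits ~ C(state) + C(round) + law_year_of + law_post + "
           "enforcement_level + sf12_physical + sf12_mental + female + "
           "age + I(age**2)")
fit = smf.wls(formula, data=df, weights=df["weight"]).fit()
print(fit.params.filter(like="law"))
```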

We used SUDAAN (Research Triangle Institute 2001), which estimates variances that reflect the complex weight and design effects of the CTS (Schaefer et al. 2003). The weighted least squares regressions combined cross sections of observations from all three CTS rounds. The CTS is not a panel survey, thus precluding use of individual fixed effects. We tested the joint significance of enactment and enforcement using a Wald F-test.
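SUDAAN's design-based variance estimation cannot be reproduced in a few lines, but the joint hypothesis test itself can be illustrated. The sketch below, on simulated data, fits a weighted regression with a cluster-robust covariance (clustering on state here is only a rough stand-in for the CTS design) and then applies a Wald test that the postenactment and enforcement coefficients are jointly zero.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the analysis file.
rng = np.random.default_rng(2)
n = 1500
df = pd.DataFrame({
    "y": rng.normal(size=n),
    "law_post": rng.integers(0, 2, n),
    "enforcement_level": rng.integers(0, 6, n),
    "state": rng.choice(["TX", "NC", "OH", "CA", "GA", "PA"], n),
    "weight": rng.uniform(0.5, 2.0, n),
})

# Weighted regression with a cluster-robust covariance matrix (an
# approximation; the published analysis used SUDAAN's design-based variances).
fit = smf.wls("y ~ C(state) + law_post + enforcement_level",
              data=df, weights=df["weight"]).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]})

# H0: the postenactment and enforcement coefficients are jointly zero.
print(fit.wald_test("law_post = 0, enforcement_level = 0", scalar=True))
```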

Certain subgroups of the population are particularly vulnerable and therefore may be especially affected by the patient protection laws. Thus, we conducted separate analyses of persons in poor health, persons with low income, and persons enrolled in HMOs. Poor health was defined by an SF-12 score in about the lowest third of our analysis sample (SF-12<51). Similarly, low-income families were those in the lowest third of our sample (family income<$33,000). Finally, we analyzed persons enrolled in an HMO at the time of the interview (about one-half of our sample).
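The subgroup definitions translate directly into sample restrictions. The sketch below applies the cut-offs named in the text (SF-12 score below 51, family income below $33,000, current HMO enrollment) to a simulated person-level file; which SF-12 component defines poor health is not specified here, so the physical score is used only as a placeholder, and all column names and data are hypothetical. Each subset would then be run through the same regression specification as the full sample.

```python
import numpy as np
import pandas as pd

# Simulated person-level file with placeholder column names.
rng = np.random.default_rng(4)
n = 1000
people = pd.DataFrame({
    "sf12_physical": rng.normal(50, 10, n),
    "family_income": rng.gamma(2.0, 25_000, n),
    "in_hmo": rng.integers(0, 2, n),
})

# Cut-offs taken from the text: roughly the lowest third of the analysis
# sample on health and income, plus current HMO enrollees.
subsamples = {
    "poor health": people[people["sf12_physical"] < 51],
    "low income": people[people["family_income"] < 33_000],
    "HMO enrollees": people[people["in_hmo"] == 1],
}
for name, subset in subsamples.items():
    print(name, len(subset))
```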

Results

Health Services Utilization

Patient protection laws generally had no statistically significant effects on utilization of health services (Table 2). Exceptions were hospital stays and emergency room visits. For hospital use, there was a positive effect, statistically significant at better than the 0.05 level, for the year in which the law took effect; for the postenactment period and for enforcement, there were no effects. For emergency room visits, the joint F-test was statistically significant at better than the 0.03 level, but the net effect of the laws on emergency room utilization was practically nil. Overall, signs on the legal parameter estimates were mixed, but parameter estimates had large associated standard errors.

Table 2.

Health Care Utilization

Explanatory Variables Hospital Stays ER Visits Office Visits Outpatient Surgeries Mental Health Visit Saw Specialist (Last Visit)
Year PP law took effect 0.018* −0.012 0.001 −0.018 0.008 −0.009
(0.009) (0.013) (0.079) (0.011) (0.006) (0.010)
PP law postenactment 0.000 0.045* 0.062 −0.017 0.004 0.021
(0.015) (0.021) (0.126) (0.013) (0.007) (0.019)
State agency enforcement level 0.003 −0.013** 0.010 0.000 −0.001 −0.007
(0.003) (0.005) (0.039) (0.004) (0.002) (0.005)
Round 2 −0.012 0.004 −0.103 0.003 −0.007 −0.001
(0.012) (0.012) (0.066) (0.006) (0.004) (0.009)
Round 3 −0.029* 0.006 −0.061 0.019** −0.005 0.002
(0.012) (0.012) (0.086) (0.007) (0.005) (0.009)
Physical health (SF-12) −0.011** −0.018** −0.173** −0.006** −0.002** −0.006**
(0.001) (0.001) (0.004) (0.000) (0.000) (0.000)
Mental health (SF-12) −0.004** −0.008** −0.070** −0.001** −0.006** −0.001
(0.000) (0.001) (0.004) (0.000) (0.000) (0.000)
Female 0.053** 0.010 1.573** 0.008* 0.016** 0.156**
(0.006) (0.007) (0.040) (0.004) (0.003) (0.006)
Age −0.004** −0.007** −0.073** 0.001 0.003** 0.002
(0.001) (0.002) (0.010) (0.001) (0.001) (0.001)
Age-squared (1/100) 0.005** 0.002 0.086** 0.001 −0.004** −0.004*
(0.002) (0.002) (0.012) (0.001) (0.001) (0.002)
Education level 0.001 −0.008** 0.104** 0.003** 0.006** 0.012**
(0.002) (0.002) (0.008) (0.001) (0.001) (0.001)
Family income ($10,000s) 0.001 −0.008** 0.034** 0.001 0.000 0.010**
(0.001) (0.001) (0.007) (0.001) (0.001) (0.001)
Black 0.022 0.132** −0.246** −0.041** −0.024** −0.048**
(0.024) (0.019) (0.076) (0.010) (0.005) (0.017)
Hispanic 0.006 0.043 −0.314** −0.026* −0.018** −0.010
(0.010) (0.022) (0.100) (0.010) (0.007) (0.016)
Other race −0.026* −0.033 −0.742** −0.048** −0.036** −0.035*
(0.010) (0.018) (0.105) (0.011) (0.006) (0.016)
Spanish-speaking 0.011 −0.118** −0.706** −0.037** −0.014 0.011
(0.033) (0.045) (0.155) (0.014) (0.009) (0.040)
Constant 0.972** 2.014** 14.685** 0.420** 0.322** 0.405**
(0.064) (0.072) (0.425) (0.032) (0.023) (0.049)
Observations 49,668 49,668 49,668 49,668 49,668 36,733
R2 0.05 0.08 0.19 0.02 0.06 0.05
Joint significance test, postlaw & enforcement (Wald F) 0.50 3.70* 0.45 1.44 0.14 1.10

Standard errors in parentheses.

* Significant at 5%; ** significant at 1%.

The vast majority of control variables had plausible and statistically significant effects on utilization. Persons in better physical or mental health had lower rates of utilization, while older persons had higher rates. More-educated and higher-income persons also had higher rates of utilization, except for emergency room visits. Blacks and Hispanics had higher rates of hospital and emergency room use, but lower rates of office visits, outpatient surgeries, and visits to mental health providers and other specialists, holding other factors constant. Females had consistently higher levels of utilization than did males.

Patient Trust and Satisfaction with Care

None of the patient protection legal variables had statistically significant effects on patient trust or satisfaction with care (Table 3). For the postenactment period, statistical significance was gauged by a Wald test. By itself, the level of state agency enforcement had a negative impact on patient satisfaction with choice of specialist. But when tested jointly with the binary variable for postenactment, which had a larger but insignificant positive effect, the joint effect was not significant. Most of the other parameter estimates for the laws were positive, suggesting improved patient satisfaction and trust. But these parameter estimates had large associated standard errors.

Table 3.

Patient Attitudes

Explanatory Variables Trust Doctor Satisfied (Last Visit) Satisfied (Doctor Choice) Satisfied (Specialist Choice)
Year PP law took effect 0.023 0.020 0.015 −0.036
(0.014) (0.022) (0.022) (0.034)
PP law postenactment 0.005 0.010 0.029 0.070
(0.022) (0.031) (0.030) (0.050)
State agency enforcement level 0.007 0.009 0.003 −0.032*
(0.006) (0.009) (0.008) (0.015)
Round 2 −0.029* −0.009 −0.010 0.007
(0.012) (0.016) (0.017) (0.027)
Round 3 −0.007 −0.033 −0.009 −0.009
(0.016) (0.020) (0.019) (0.032)
Physical health (SF-12) 0.005** 0.007** 0.006** 0.007**
(0.001) (0.001) (0.001) (0.001)
Mental health (SF-12) 0.009** 0.013** 0.014** 0.014**
(0.000) (0.001) (0.001) (0.001)
Female 0.123** 0.190** 0.051** 0.106**
(0.009) (0.011) (0.011) (0.022)
Age −0.014** −0.004 −0.018** −0.015**
(0.002) (0.003) (0.003) (0.005)
Age-squared (1/100) 0.022** 0.013** 0.027** 0.024**
(0.002) (0.004) (0.004) (0.006)
Education level 0.010** 0.010** −0.007* −0.006
(0.002) (0.003) (0.003) (0.004)
Family income ($10,000s) 0.005** 0.012** 0.003 0.006*
(0.001) (0.002) (0.002) (0.003)
Black 0.139** −0.144** −0.028 −0.028
(0.016) (0.025) (0.024) (0.034)
Hispanic −0.086** −0.120** −0.061* 0.034
(0.018) (0.030) (0.028) (0.040)
Other race −0.165** −0.222** −0.060 −0.099
(0.025) (0.036) (0.038) (0.057)
Spanish-speaking −0.168** −0.332** −0.121* −0.184*
(0.040) (0.055) (0.053) (0.091)
Constant −0.734** −1.342** 3.740** 3.708**
(0.060) (0.089) (0.106) (0.127)
Observations 40,672 36,853 46,451 18,990
R2 0.07 0.07 0.04 0.05
Joint significance test, postlaw  & enforcement (Wald F) 1.15 1.06 1.29 2.30

Standard errors in parentheses.

* Significant at 5%; ** significant at 1%.

In contrast to the patient protection laws, the vast majority of other explanatory variables were statistically significant at conventional levels. Persons in good physical and mental health, females, and higher-income persons tended to report higher levels of satisfaction. Blacks, Hispanics, and persons interviewed in Spanish tended to be less satisfied. Patterns by education were mixed. More-educated persons tended to be more satisfied with their last visit and had greater trust in their doctors, but they were less satisfied with their choice of physician. Older persons tended to be more satisfied with their care.

In the analysis of subgroups (Table 4), patient protection laws postenactment and enforcement level had a jointly significant positive effect on trust of physician among low-income persons. At the observational means, low-income persons had a higher level of trust after patient-protection laws were implemented. Coefficients on the other patient-protection law variables were not statistically significant for any of the subgroups; nor did year-of-interview have significant effects. Most other independent variables were statistically significant. Signs on the parameter estimates for the control variables were consistent across the three subgroups and corresponded to the total-sample results in Table 3.

Table 4.

Trust in Physician, by Subsample (Dependent Variable: Trust Doctor)

Explanatory Variables Poor Health Low Income HMO Only
Year PP law took effect −0.004 0.032 0.017
(0.026) (0.025) (0.024)
PP law postenactment −0.043 −0.039 0.018
(0.038) (0.039) (0.038)
State agency enforcement level 0.020 0.025** 0.013
(0.011) (0.009) (0.011)
Round 2 −0.020 −0.001 −0.032
(0.018) (0.020) (0.018)
Round 3 −0.004 −0.003 0.008
(0.025) (0.025) (0.022)
Physical health (SF-12) 0.001 0.005** 0.005**
(0.001) (0.001) (0.001)
Mental health (SF-12) 0.009** 0.008** 0.010**
(0.001) (0.001) (0.001)
Female 0.139** 0.128** 0.111**
(0.015) (0.013) (0.012)
Age −0.012** −0.021** −0.015**
(0.003) (0.003) (0.003)
Age-squared (1/100) 0.020** 0.030** 0.022**
(0.004) (0.004) (0.004)
Education level 0.009** 0.018** 0.002
(0.003) (0.003) (0.003)
Family income ($10,000s) 0.005 0.022* 0.002
(0.003) (0.011) (0.002)
Black −0.149** −0.152** −0.107**
(0.023) (0.022) (0.020)
Hispanic −0.076* −0.164** −0.060*
(0.031) (0.030) (0.024)
Other race −0.196** −0.150** −0.147**
(0.038) (0.037) (0.030)
Spanish-speaking −0.187** −0.131** −0.159**
(0.047) (0.046) (0.050)
Constant −0.601** −0.716** −0.747**
(0.100) (0.086) (0.091)
Observations 13,972 12,963 20,136
R2 0.07 0.08 0.06
Joint significance test, postlaw & enforcement (Wald F) 1.67 4.30* 2.23

Standard errors in parentheses.

* Significant at 5%; ** significant at 1%.

Discussion

To date, no other study has assessed impacts of patient protection laws using time-series data that cover the entire period of relevant enactments. Other research has used the CTS, but these analyses were restricted to the first one or two waves of the study (Doescher et al. 2000; Reschovsky et al. 2000; Shi et al. 2002, 2003). Also, this study took steps to validate and specify the content and effective dates of legal enactments, as well as the relevant enforcement efforts by state agencies.

Overall, patient protection laws had little or no effect on trust, satisfaction with care, or utilization. For the main bundle of laws in each state, significance was found postenactment only for emergency room visits in the general sample, and only for physician trust in the low-income sample. We also found that patient protection laws increased hospital use initially, but this effect disappeared in the postenactment period. Because of the number of possible associations we examined, these occasional findings of significance could be due mainly to chance.

The increase in emergency visits is consistent with other reports that the prudent layperson rule has had a notable impact on health plans' coverage policies (Hall 2004a, b). However, this possibility was not confirmed when we separately analyzed the prudent layperson laws, apart from, and controlling for, the overall bundle of laws (results not shown). Thus, mechanisms to explain this finding are unclear.

In addition to the analyses presented in the tables, we also examined impacts of each of the 11 particular legal provisions on related outcome variables (e.g., the impact of direct access laws on specialist visits). Most results were not significant, but there was a scattering of significant findings that again might have been because of chance. Moreover, those findings had both negative and positive signs, implying that patient protection laws, if they had any effect, might actually have worsened satisfaction or increased utilization in some instances. These weakly mixed and counteracting effects of individual laws might explain why the bundles of laws have few overall effects.

Liu et al. (2004) examined effects of “drive-through delivery” laws on postpartum length of stay and hospital charges. They found the laws increased both stays and charges, but by a small amount and less than reported by previous case studies (Dato et al. 1996; Miller et al. 1997; Udom and Betley 1998; Raube and Merrell 1999; Volavka 1999). A small effect on a very narrowly defined measure of utilization would not be apparent in the more general measures we included in our analysis.

Our results do not imply that managed care patient protection laws have no effect on patients. Rather, the effects were not sufficiently widespread to be picked up in analysis of the general measures available in the CTS. The CTS has the advantage of providing national data, collected in a fairly consistent format over time, for the period in which the laws were implemented, permitting before-and-after comparisons. Particularly judged ex post, however, the CTS is not well suited for analyzing effects of individual patient protection laws.

Results for effects of specific laws are interesting, but the larger effects are even more so. The arguments in the political arena were that state and federal governments needed to act to preserve patient choice of provider and to allow physicians to exercise their clinical judgment on behalf of patients, free of interference from health plans. Our results suggest that the states' legislative initiatives did not improve patient satisfaction with care or increase use of care. Of course, these legislative initiatives could have caused managed care organizations to change their operating procedures even before laws were enacted in states in which they did business. Our analysis could not capture this spillover effect.

We performed several robustness tests to gauge the sensitivity of our findings to changes in specification and sample. First, in our main analysis, we excluded families with private coverage from employment at an establishment with 1,000 or more employees as well as government employees in order to exclude portions of the population that often are not subject to these laws. Since physicians may treat all patients similarly, irrespective of their insurance status, it is possible that the patient protection laws affected all patients in the state, rather than just those specifically covered by the laws. Therefore, we reestimated the equations, eliminating the sample exclusion. Overall, the results were very similar to the ones we report.

Second, laws other than patient protection laws may have affected patient satisfaction and utilization in ways that offset or cloud the effects of patient protection laws. To explore whether this was the case, we included additional explanatory variables for caps on payments for noneconomic damages in medical malpractice lawsuits, and for insurance benefit mandates. Since we used state fixed effects and our analysis was limited to 1996–2001, the only relevant changes were those that occurred during that time span. We included two variables for noneconomic damage caps, one for the states that implemented a cap during the observational period, and the other for states that dropped caps. Only three states adopted caps or dropped caps during these years (ATRA 2004).

Mandated benefits have increased greatly in the states over the past 25 years (Jensen and Morrisey 1999). We added explanatory variables for five benefit mandates. These mandates were selected on the basis of having changed during the observational period as well as prior research indicating which mandates had substantial effects on HMO and indemnity premiums (Henderson et al. 2003). Only a few statistically significant effects were found. Of the 120 new coefficients estimated, only six were significant (p<0.05). Most of these effects, several of which were nonintuitive, were probably chance occurrences. More importantly, adding these variables had no material impact on the coefficients of greatest interest, those for the patient protection laws.

Third, we experimented with alternative lag structures, since the effects of these laws may not appear for several years after enactment. We replaced the postimplementation binary variable with a variable measuring the number of years since the law was implemented, set to zero before implementation. The results were mixed, in that one law that previously had a statistically significant effect was no longer significant, while another law that was previously insignificant became significant. None of the joint significance tests with the new specification rejected the null hypothesis of no relationship. In addition, we ran the regressions using a set of seven mutually exclusive binary variables to estimate effects of the legal changes on a year-by-year basis (e.g., 1 year before; year of; 1 year after; etc.).
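The alternative lag structure amounts to replacing the postenactment indicator with a count of years since implementation. A hypothetical sketch of that construction follows; the states, dates, and column names are invented.

```python
import pandas as pd

# Hypothetical state-by-fiscal-year panel with the law's effective fiscal year.
panel = pd.DataFrame({
    "state": ["TX"] * 4 + ["NC"] * 4,
    "fiscal_year": [1997, 1998, 1999, 2001] * 2,
    "effective_fy": [1998] * 4 + [None] * 4,
})

# Years since implementation: zero before the effective date and in states
# that never enacted a law, replacing the simple postenactment indicator.
panel["years_since_law"] = (
    (panel["fiscal_year"] - panel["effective_fy"]).clip(lower=0).fillna(0).astype(int)
)
print(panel)
```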

Fourth, the states' enactment of AWP laws, which had the stated purpose of improving patient choice, preceded our observational period. Eight states (among states in our sample) had AWP laws covering physicians, and no AWP statute with physician coverage was enacted after 1994. Since we used state fixed effects, we could not assess the role of AWP per se. But we did test the hypothesis that patient protection laws had a greater impact on patient satisfaction in states without AWP laws covering physicians. The test involved interacting a binary variable indicating the absence of a physician AWP law with the patient protection law variables. The vast majority of interaction terms were statistically insignificant. The single exception implied that patient satisfaction improved more in states with physician AWP laws. We do not attach importance to this isolated result.
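The interaction test can be sketched in the same framework. In the simulated example below, 'no_awp' flags states assumed to lack a physician any-willing-provider law, and the interaction coefficients indicate whether the patient protection variables behave differently in those states; all names and data are hypothetical, and in the real data the main effect of a time-invariant AWP indicator would be absorbed by the state fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data; 'no_awp' flags states without a physician AWP law.
rng = np.random.default_rng(3)
n = 1500
df = pd.DataFrame({
    "satisfaction": rng.normal(4.0, 0.8, n),
    "law_year_of": rng.integers(0, 2, n),
    "law_post": rng.integers(0, 2, n),
    "no_awp": rng.integers(0, 2, n),
    "state": rng.choice(["TX", "NC", "OH", "CA", "GA", "PA"], n),
    "weight": rng.uniform(0.5, 2.0, n),
})

# Interact the law variables with the no-AWP indicator; the coefficients on
# the ":" terms are the quantities of interest.
fit = smf.wls(
    "satisfaction ~ C(state) + law_year_of * no_awp + law_post * no_awp",
    data=df, weights=df["weight"],
).fit()
print(fit.params.filter(like=":"))  # interaction coefficients
```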

In sum, our sensitivity analysis showed our findings to be robust, both to changes in equation specification and to changes in the sample. This gives greater confidence to the conclusion that patient protection laws did not substantially affect patient satisfaction with care or utilization of services. These nonfindings are reassuring to the extent that some commentators had expressed concern that managed care protection regulation is subject to the risk of capture by those who are regulated (Epstein 1999) and that the cure might be worse than any problems the laws attempt to address (Hyman 2000). While we find almost no evidence that patient protection legislation has directly improved patients' attitudes relating to managed care, the laws also do not seem to have substantially increased the cost of acute health care services either, at least directly. These nonfindings are somewhat puzzling, however, in view of reports elsewhere that managed care practices such as utilization review and gatekeeping restrictions have diminished substantially over the time period these laws were adopted (Mays et al. 2003). However, a qualitative study, reported elsewhere (Hall 2004b), suggests these changes were driven more by market forces than by legal mandates.

Several possibilities might explain our failure to find more significant results. First, these outcome measures might not have been precise enough, or the effects were too small to detect, using this analytical design. Yet, we had a very large sample, and these measures are state-of-the-art, especially the legal variables. Also, these measures were sufficient to detect a range of other effects not related to these laws. Thus, it appears likely that legal effects truly were mostly nonexistent.

A second reason these laws may have had no effect is that health plans may not be complying with them. However, elsewhere, reports indicate that compliance with these laws is high (Hall 2004a, b).

Third, we treated patient protection laws as exogenous variables. More realistically, adoption of patient protection laws may be endogenous to satisfaction and utilization. However, unlike many if not most state policies, virtually every state adopted patient protection laws. Thus, any approach that accounted explicitly for endogeneity would have to explain the precise timing of adoption rather than variation between adoption and nonadoption. The state of the art in predicting legislative enactments does not offer any clear way for us to explain why state X adopted patient protection in 1996 versus 1998. Thus, we relied on state fixed effects to account for any relevant differences among states at the start of our study period.

Fourth, plans may have changed practices across the board, both in states with and without these laws, and in advance of these laws being enacted. This might have been done out of anticipation that these enactments were becoming widespread, or because the general public and political discussion leading up to these laws may have been sufficient for health plans to change their practices even without being compelled to do so, realizing the level of dissatisfaction that existed. There are strong indications elsewhere that this in fact occurred (Hall 2004a, b).

However, if anticipatory or nondifferentiated changes were the main reason for finding no legal effects, we likely would have seen secular changes in these outcome measures over time. In fact, one report based on all three rounds of the CTS found that consumer confidence in the health care system and trust in physicians rose slightly between 1997 and 2001 (Reed and Trude 2002). The authors suggested that this improvement may have reflected the patient protection laws and a loosening of health plan restrictions. But these were descriptive findings rather than results from multivariate analysis. Controlling for other factors, we found no statistically significant trends in trust or satisfaction.

We did find changes over time in some of the utilization variables, however. Hospital stays decreased, and outpatient surgeries increased, in the most recent (2000–2001) survey round. This pattern cannot easily be explained in relation to patient protection laws, but the two changes are more easily understood as being in reaction to each other. The absence of any net changes in utilization, either overall or in reaction to patient protection laws, may explain why patients have not changed their views about the health care system.

A fifth explanation for lack of effects is that people's perceptions of health plans may differ from the plans' actual structure and behavior (Cunningham et al. 2001). Thus, health plans may have changed their practices, but enrollees may either lack knowledge of these changes or misperceive their nature. Moreover, even to the extent that people perceived changes, this may not have affected the attitudes that we measured because they relate more to experiences with physicians or other care providers than with health plans directly. Although health insurance affects provider behavior, many other factors do as well, so changes in managed care may be swamped by other, counteracting effects (for instance, reduced payment rates), or may be insignificant in view of the more fundamental features of treatment relationships.

Whatever the explanation, it does not appear that these laws, in and of themselves, have affected medical care delivery as experienced by patients. It remains to be seen whether these laws have had any great effect on the conditions of medical practice, as experienced by physicians.

Acknowledgments

Funding was provided by the Robert Wood Johnson Foundation, under its program Changes in Health Care Financing and Organization. These findings and conclusions are solely those of the authors and do not necessarily reflect the Foundation's views.

Footnotes

1. Too often, public policy researchers base their main independent or control variables on lists of laws generated by others, without any scrutiny of the accuracy or suitability of the information in these lists. Existing compilations are often done by industry, advocacy, or trade association groups for purposes other than research, which can introduce bias into how laws are identified and categorized, or can fail to capture the relevant conceptual schema. Also, existing compilations often are inadequate for time-series (longitudinal) studies because they fail to distinguish between enactment and effective dates. Finally, compilations of statutes almost invariably fail to account for widely varying implementation and enforcement by regulatory agencies (Paterson et al. 1999). Our survey methods are described more fully in Sloan and Hall (2002).

References

1. Agrawal G, Billi J, editors. The Challenge of Regulating Managed Care. Ann Arbor: University of Michigan Press; 2001.
2. Altman SH, Reinhardt UE, Shactman D. Regulating Managed Care: Theory, Practice, and Future Options. San Francisco: Jossey-Bass Publications; 1999.
3. American Tort Reform Association (ATRA). "Tort Reform Record, July 13, 2004 Edition." 2004 [accessed July 22, 2004]. Available at http://www.atra.org/files.cgi/7802_Record6-04.pdf.
4. Blendon RJ, Brodie M, Benson JM, Altman DE, Levitt L, Hoff T, Hugick L. "Understanding the Managed Care Backlash." Health Affairs. 1998;17(4):80–94. doi:10.1377/hlthaff.17.4.80.
5. Center for Studying Health System Change. "Community Tracking Study 2000–01 Household Survey Restricted Use File: User's Guide (Release 1)." Technical Publication No. 43. Washington, DC: 2003.
6. Cunningham PJ, Denk C, Sinclair M. "Do Consumers Know How Their Health Plan Works?" Health Affairs. 2001;20(2):159–66. doi:10.1377/hlthaff.20.2.159.
7. Dato V, Ziskin L, Fulcomer M, Martin RM, Knoblauch K. "Average Postpartum Length of Stay for Uncomplicated Deliveries—New Jersey, 1995." Morbidity and Mortality Weekly Report. 1996;45(32):700–5.
8. Doescher MP, Saver BG, Franks P, Fiscella K. "Racial and Ethnic Disparities in Perceptions of Physician Style and Trust." Archives of Family Medicine. 2000;9(10):1156–63. doi:10.1001/archfami.9.10.1156.
9. Dudley R, Luft HS. "Managed Care in Transition." New England Journal of Medicine. 2001;344(14):1087–92. doi:10.1056/NEJM200104053441410.
10. Encinosa W. "The Economics of Regulatory Mandates on the HMO Market." Journal of Health Economics. 2001;20(1):85–107. doi:10.1016/s0167-6296(00)00064-3.
11. Epstein RA. "Managed Care Under Siege." Journal of Medicine and Philosophy. 1999;24(5):434–60. doi:10.1076/jmep.24.5.434.2516.
12. Families USA. HMO Consumers at Risk: States to the Rescue. Washington, DC: Families USA Foundation; 1997.
13. Hall MA. "Managed Care Patient Protection or Provider Protection? A Qualitative Assessment." American Journal of Medicine. 2004a;117(2):932–7. doi:10.1016/j.amjmed.2004.06.042.
14. Hall MA. "The 'Death' of Managed Care: A Regulatory Autopsy." Unpublished; 2004b. doi:10.1215/03616878-30-3-427.
15. Henderson JW, Seward JA, Taylor BA. "State-Level Health Insurance Mandates and Premium Costs." Unpublished; 2003.
16. Hyman DA. "Regulating Managed Care: What's Wrong with a Patient Bill of Rights?" Southern California Law Review. 2000;73:221–75.
17. Jensen GA, Morrisey MA. "Employer-Sponsored Health Insurance and Mandated Benefit Laws." Milbank Quarterly. 1999;77:425–59. doi:10.1111/1468-0009.00147.
18. Korobkin R. "The Efficiency of Managed Care 'Patient Protection' Laws: Incomplete Contracts, Bounded Rationality, and Market Failure." Cornell Law Review. 1999;85:1–102.
19. Kemper P, Reschovsky JD, Tu HT. "Do HMOs Make a Difference? Summary and Implications." Inquiry. 1999/2000;36:419–25.
20. Kemper P, Tu HT, Reschovsky JD, Schaefer E. "Insurance Product Design and Its Effects: Trade-Offs along the Managed Care Continuum." Inquiry. 2002;39:101–17. doi:10.5034/inquiryjrnl_39.2.101.
21. Lake T. "Do HMOs Make a Difference? Consumer Assessments of Health Care." Inquiry. 1999/2000;36:411–8.
22. Liu Z, Dow WH, Norton EC. "Effect of Drive-through Delivery Laws on Postpartum Length of Stay and Hospital Charges." Journal of Health Economics. 2004;23:129–55. doi:10.1016/j.jhealeco.2003.07.005.
23. Marsteller JA, Bovbjerg BR. Federalism and Patient Protection: Changing Roles for State and Federal Government. Washington, DC: Urban Institute; 1999.
24. Mays GP, Hurley RE, Grossman JM. "An Empty Toolbox? Changes in Health Plans' Approaches for Managing Costs and Care." Health Services Research. 2003;38(1):375–93. doi:10.1111/1475-6773.00121.
25. Mechanic D. "The Managed Care Backlash: Perceptions and Rhetoric in Health Care Policy and Potential for Health Care Reform." The Milbank Quarterly. 2001;79(1):35–54. doi:10.1111/1468-0009.00195.
26. Miller TE. "Managed Care Regulation in the Laboratory of the States." Journal of the American Medical Association. 1997;278(13):1102–9. doi:10.1001/jama.278.13.1102.
27. Miller MJ, O'Connor ME, Carroll-Pankhurst C. "Impact of Short-Stay Legislation on Length of Stay, Cost of Care, and Rehospitalization for Infants Born Vaginally." Pediatric Research. 1997;41:205A.
28. Noble AA, Brennan TA. "The Stages of Managed Care Regulation: Developing Better Rules." Journal of Health Politics, Policy and Law. 1999;24(6):1275–305. doi:10.1215/03616878-24-6-1275.
29. Paterson R, Dallek G, Hadley E, Pollitz K, Tapay N. Implementation of Managed Care Consumer Protections in Missouri, New Jersey, Texas and Vermont. Georgetown University/The Henry J. Kaiser Family Foundation; 1999.
30. Patrick DL, Erickson P. Health Status and Health Policy: Quality of Life in Health Care Evaluation and Resource Allocation. New York: Oxford University Press; 1993.
31. Raube K, Merrell K. "Maternal Minimum-Stay Legislation: Cost and Policy Implications." American Journal of Public Health. 1999;89(6):922–3. doi:10.2105/ajph.89.6.922.
32. Reed MC, Trude S. "Who Do You Trust? Americans' Perspectives on Health Care, 1997–2001." Tracking Report No. 3. Washington, DC: Center for Studying Health System Change; 2002.
33. Reschovsky J, Kemper P, Tu HT. "Does Type of Health Insurance Affect Health Care Use and Assessments of Care among the Privately Insured?" Health Services Research. 2000;35(1):219–37.
34. Research Triangle Institute. SUDAAN User's Manual, Release 8.0. Research Triangle Park, NC: Research Triangle Institute; 2001.
35. Rodwin MA. "Consumer Protection and Managed Care: The Need for Organized Consumers." Health Affairs. 1996a;15(3):110–23. doi:10.1377/hlthaff.15.3.110.
36. Rodwin MA. "Consumer Protection and Managed Care: Issues, Reform Proposals, and Trade-Offs." Houston Law Review. 1996b;32(4):1321–81.
37. Schaefer E, Potter F, Williams S, Diaz-Tena N, Reschovsky JD, Moore G. "Community Tracking Study: Comparison of Selected Statistical Software Packages for Variance Estimation in the CTS Surveys." Technical Publication No. 40. Washington, DC: Center for Studying Health System Change; 2003.
38. Shi L, Forrest C, Von Schrader S, Ng J. "Vulnerability and the Patient–Practitioner Relationship: The Roles of Gatekeeping and Primary Care Performance." American Journal of Public Health. 2003;93(1):138–44. doi:10.2105/ajph.93.1.138.
39. Shi L, Starfield B, Politzer R, Regan J. "Primary Care, Self-Rated Health, and Reductions in Social Disparities in Health." Health Services Research. 2002;37(3):529–50. doi:10.1111/1475-6773.t01-1-00036.
40. Sloan FA, Hall MA. "Market Failures and the Evolution of State Regulation of Managed Care." Law & Contemporary Problems. 2002;65(4):169–206.
41. Symposium. "The Managed Care Backlash." Journal of Health Politics, Policy & Law. 1999;24(5):873–1218. doi:10.1215/03616878-24-5-1159.
42. Udom NU, Betley CL. "Effects of Maternity-Stay Legislation on Drive-through Deliveries." Health Affairs. 1998;17(5):208–15. doi:10.1377/hlthaff.17.5.208.
43. Volavka MP. "Minimum Maternity-Stay Legislation: Changes in Hospital Length of Stay for Childbirth." Pennsylvania Health Care Cost Containment Council Report #99–10/01–03; 1999. Available at http://www.phc4.org/reports/cdlos/Default.htm.
