Health Services Research. 2012 Sep 21;48(2 Pt 1):435–454. doi: 10.1111/j.1475-6773.2012.01465.x

California's Minimum Nurse Staffing Legislation: Results from a Natural Experiment

Barbara A Mark 1, David W Harless 2, Joanne Spetz 3, Kristin L Reiter 4, George H Pink 4
PMCID: PMC3626342  PMID: 22998231

Abstract

Objective

To determine whether, following implementation of California's minimum nurse staffing legislation, changes in acuity-adjusted nurse staffing and quality of care in California hospitals outpaced similar changes in hospitals in comparison states without such regulations.

Data Sources/Study Setting

Data from the American Hospital Association Annual Survey of Hospitals, the California Office of Statewide Health Planning and Development, the Hospital Cost Report Information System, and the Agency for Healthcare Research and Quality's Health Care Cost and Utilization Project's State Inpatient Databases from 2000 to 2006.

Study Design

We grouped hospitals into quartiles based on their preregulation staffing levels and used a difference-in-difference approach to compare changes in staffing and in quality of care in California hospitals to changes over the same time period in hospitals in 12 comparison states without minimum staffing legislation.

Data Collection/Extraction Methods

We merged data from the above data sources to obtain measures of nurse staffing and quality of care. We used Agency for Healthcare Research and Quality's Patient Safety Indicators to measure quality.

Principal Findings

With few exceptions, California hospitals increased nurse staffing levels over time significantly more than did comparison state hospitals. Failure to rescue decreased significantly more in some California hospitals, and infections due to medical care increased significantly more in some California hospitals than in comparison state hospitals. There were no statistically significant changes in either respiratory failure or postoperative sepsis.

Conclusions

Following implementation of California's minimum nurse staffing legislation, nurse staffing in California increased significantly more than it did in comparison states' hospitals, but the extent of the increases depended upon preregulation staffing levels; there were mixed effects on quality.

Keywords: Nurse staffing ratios, hospitals, quality of care, California AB394


In 1999, California became the first state in the United States to pass legislation requiring minimum nurse-to-patient staffing ratios in acute care hospitals. The legislation, for which nursing unions were outspoken advocates, was, in part, a response to a reported decline in hospitals' nurse staffing and skill mix induced by pressures from increasing managed care penetration. California Assembly Bill 394 (AB 394) required the California State Department of Health Services to establish unit-specific minimum staffing levels for licensed nurses (registered nurses [RNs] and licensed vocational nurses [LVNs]) in acute care hospitals. The draft regulations were released in January 2002 and, after a period of highly contentious public comment, implemented in January 2004. The ratios were phased in: beginning January 1, 2004, the staffing ratio for medical-surgical areas was set at 1:6; in March 2005, the ratio was strengthened to 1:5. In 2008, additional specialty units became subject to the regulations. Up to 50 percent of licensed nursing hours could be provided by LVNs (Spetz 2004).

Studies have concluded that the legislation led to increases in nurse staffing (Donaldson et al. 2005; Spetz et al. 2009; Aiken et al. 2010; Donaldson and Shapiro 2010; Serratt et al. 2011; Cook et al. 2012). McHugh et al. (2011) found that California hospitals increased their nurse staffing significantly more after the legislation than did hospitals in other states and did not simultaneously reduce their skill mix (the ratio of RNs to total nursing staff). However, their study included hospitals in states that had adopted other approaches, such as hospital-specific staffing requirements or public reporting of nurse staffing, potentially minimizing the extent to which the effects of the California legislation could be detected (American Nurses Association [ANA] 2011). Although these studies demonstrated improvements in aggregate RN staffing, the wide variability in hospitals' prelegislation staffing levels makes it likely that hospitals responded heterogeneously to the regulations and that some met the “minimum” requirements by differentially increasing their use of LVNs/LPNs.

The conclusions of research investigating whether quality improved following the legislation are mixed (Donaldson and Shapiro 2010). Neither Donaldson et al. (2005) nor Bolton et al. (2007) found significant decreases in falls, decubitus ulcers, or restraint use following implementation of the regulations. Using the Agency for Healthcare Research and Quality's (AHRQ) Patient Safety Indicators (PSIs), Spetz et al. (2009) found no significant improvement in postoperative sepsis, deep vein thrombosis, decubitus ulcers, mortality following pneumonia, or failure to rescue (FTR, death following a complication). Cook et al. (2012), using a panel of California hospitals from 2000 to 2006, found no statistically significant decrease in FTR or decubitus ulcers following implementation of the ratios, and Aiken et al. (2010), using cross-sectional data from California, New Jersey, and Pennsylvania, concluded that hospitals with staffing levels consistent with those mandated in California had significantly better nurse-reported quality and lower levels of mortality and FTR. More recently, Spetz et al. (2011), also using data from California hospitals from 2000 to 2006, found statistically significant decreases in postoperative respiratory failure and pressure ulcers, but no reductions in FTR or selected infections due to medical care, postoperative pulmonary embolism, or deep vein thrombosis. Finally, a trend analysis of state snapshots in AHRQ's most recent National Health Care Quality Report (http://www.ahrq.gov/qual/measureix.htm#quality) reveals that, from 2000 to 2007, overlapping the period of our study, rates of postoperative sepsis and infections due to medical care increased significantly more in California than in 25 other states (authors' unpublished analysis, 2012; see Appendix Table 1). However, the conclusions of these studies are limited by convenience sampling, cross-sectional designs, and failure to use a measure of nurse staffing that takes account of patients' needs for nursing care.

Table 1.

Variable Means (Standard Deviations) for California and Comparison States, by Preregulation Staffing Quartile*

Quartile 1 Quartile 2 Quartile 3 Quartile 4
Variable CA 12 States CA 12 States CA 12 States CA 12 States
Licensed nurse FTEs per 1,000 IPD 2.98 (0.65) 2.51 (0.75) 3.40 (0.77) 3.00 (0.84) 3.66 (0.59) 3.34 (0.75) 4.11 (0.74) 4.05 (1.07)
RN FTEs per 1,000 IPD 2.53 (0.69) 2.28 (0.78) 2.99 (0.77) 2.72 (0.84) 3.21 (0.70) 3.06 (0.80) 3.65 (0.82) 3.72 (1.08)
LPN/LVN FTEs per 1,000 IPD 0.45 (0.33) 0.23 (0.19) 0.41 (0.33) 0.28 (0.24) 0.45 (0.42) 0.28 (0.29) 0.45 (0.45) 0.33 (0.33)
NIW-adjusted licensed nurse FTEs per 1,000 IPD 3.01 (0.63) 2.63 (0.86) 3.44 (0.77) 3.12 (0.99) 3.58 (0.60) 3.46 (0.94) 3.98 (0.72) 4.29 (1.32)
NIW-adjusted RN FTEs per 1,000 IPD 2.56 (0.66) 2.39 (0.87) 3.02 (0.76) 2.82 (0.95) 3.12 (0.59) 3.17 (0.94) 3.52 (0.70) 3.93 (1.27)
NIW-adjusted LPN/LVN FTEs per 1,000 IPD 0.45 (0.34) 0.25 (0.22) 0.42 (0.35) 0.30 (0.26) 0.46 (0.45) 0.30 (0.33) 0.46 (0.48) 0.36 (0.38)
NIW index 0.99 (0.09) 0.96 (0.11) 0.99 (0.08) 0.98 (0.11) 1.03 (0.12) 0.98 (0.12) 1.04 (0.14) 0.96 (0.11)
Risk-adjusted percent PSI 4 13.20 (15.46) 14.05 (16.42) 13.99 (14.59) 11.32 (8.81) 14.73 (22.85) 11.75 (11.11) 11.54 (11.88) 11.62 (13.03)
Observed incidents PSI 4 2.0 (2.3) 3.7 (4.8) 2.7 (2.8) 4.3 (4.8) 4.2 (5.6) 5.2 (6.2) 4.1 (5.1) 3.6 (5.5)
Expected incidents PSI 4 2.4 (2.4) 4.0 (4.7) 2.8 (2.7) 5.2 (5.3) 5.6 (7.9) 6.2 (6.1) 5.2 (6.5) 4.7 (7.1)
Risk-adjusted percent PSI 7 0.19 (0.15) 0.18 (0.12) 0.16 (0.10) 0.21 (0.15) 0.19 (0.13) 0.21 (0.13) 0.21 (0.13) 0.19 (0.13)
Observed incidents PSI 7 8.7 (8.1) 12.8 (14.3) 10.5 (9.5) 15.6 (16.2) 14.1 (16.1) 18.5 (18.8) 15.8 (17.0) 11.6 (13.7)
Expected incidents PSI 7 9.8 (6.5) 12.9 (8.9) 12.6 (8.7) 13.8 (9.3) 13.0 (9.7) 16.5 (12.0) 13.1 (8.6) 11.1 (9.1)
Risk-adjusted percent PSI 11 1.15 (1.51) 1.02 (0.78) 0.97 (0.69) 1.06 (0.89) 0.90 (0.67) 0.97 (0.61) 0.81 (0.56) 0.92 (0.70)
Observed incidents PSI 11 6.4 (6.2) 11.3 (14.5) 7.8 (7.0) 13.3 (15.4) 12.8 (18.9) 15.7 (16.4) 12.3 (16.4) 11.2 (15.6)
Expected incidents PSI 11 6.4 (7.0) 10.3 (11.0) 7.9 (7.2) 11.4 (10.5) 11.5 (14.7) 14.7 (12.7) 14.7 (17.0) 10.8 (14.7)
Risk-adjusted percent PSI 13 1.37 (3.19) 1.11 (1.33) 1.18 (1.51) 1.25 (1.25) 1.26 (2.52) 1.15 (1.24) 1.11 (1.44) 1.00 (1.05)
Observed incidents PSI 13 2.1 (2.6) 3.5 (5.5) 2.4 (2.7) 4.6 (5.3) 3.5 (4.8) 4.8 (5.5) 3.4 (3.8) 3.3 (6.5)
Expected incidents PSI 13 2.0 (1.9) 3.3 (4.1) 2.4 (2.1) 4.0 (3.7) 3.6 (4.2) 4.6 (4.4) 3.9 (3.9) 3.4 (4.4)
Operating margin −0.05 (0.20) 0.03 (0.13) −0.07 (0.30) 0.00 (0.13) −0.08 (0.37) −0.02 (0.30) −0.03 (0.18) 0.01 (0.14)
% Medicare IPD 31.0 (12.2) 41.7 (11.9) 28.7 (11.7) 40.9 (13.1) 28.4 (12.6) 39.3 (13.4) 30.8 (10.8) 40.3 (13.4)
% Medicaid IPD 14.3 (10.7) 10.0 (5.9) 17.8 (12.1) 10.0 (7.2) 14.1 (11.3) 8.9 (6.9) 12.7 (8.2) 9.0 (7.1)
Number of beds 150.5 (80.8) 194.0 (104.2) 200.1 (138.0) 201.6 (116.4) 201.4 (121.6) 225.5 (166.6) 203.3 (126.8) 154.6 (124.9)
Saidin Index 2.96 (2.22) 4.29 (2.16) 3.90 (2.39) 4.58 (2.11) 4.27 (2.64) 5.00 (2.43) 4.24 (2.65) 4.09 (2.23)
Number of hospitals 44 109 44 106 44 107 43 103
Number of observations 161 499 163 490 176 533 189 505
* California staffing data come from the Office of Statewide Health Planning and Development; as explained in the text, comparison state staffing data are derived from the American Hospital Association (AHA). Quality of care data and measures come from the Health Care Cost and Utilization Project's State Inpatient Databases (SID). The NIW index is calculated using data from the SID and Nursing Intensity Weights from R. Knauf (personal communication, 2007). Data on operating margin, percent Medicare IPD, percent Medicaid IPD, and number of beds come from the Hospital Cost Report Information System. The Saidin Index is calculated using AHA information on technologies present at hospitals. FTE, full-time equivalents; IPD, inpatient days; NIW, Nursing Intensity Weight; PSI, Patient Safety Indicator.

We addressed these concerns by relying on the natural experiment provided by the passage of California's minimum nurse staffing legislation and by using a large panel of hospitals within a before-after comparison group design. We developed measures of acuity-adjusted nurse staffing using Nursing Intensity Weights (NIWs) (ANA 1997, 2000; Needleman et al. 2002; Mark and Harless 2011), which provide a more accurate measure of staffing relative to patients' needs for nursing care. We examined the following research questions:

  1. Following implementation of the nurse staffing legislation, did acuity-adjusted nurse staffing increase significantly more in California hospitals than in hospitals in comparison states?

  2. Did some California hospitals, especially those most affected by the staffing regulations, rely more on LVNs to meet the minimum staffing requirements than other hospitals?

  3. If California hospitals increased their acuity-adjusted nurse staffing significantly more than hospitals in comparison states, did quality of care improve significantly more in California hospitals than in hospitals in comparison states?

Linking Nurse Staffing and Quality of Care

Although no empirical research has identified the underlying mechanisms linking nurse staffing to quality, conceptually, increasing nurse staffing might lead to improvements in quality by enhancing nurse surveillance (Fagin 2001). Such surveillance involves direct patient observation, recognition of an actual or impending problem, and mobilization of an intervention that requires coordination of others' actions to save a patient's life. Nurses also have a critical role in patient and family teaching, which may increase the likelihood that patients and their families will be active participants in the patient's care, heighten their awareness of potentially dangerous signs and symptoms, and increase their comfort in reporting concerns to a nurse. When nurse staffing is inadequate, nurse workload increases and surveillance and patient and family teaching can be compromised, contributing to "missed nursing care" (errors of omission in providing care) with potentially deleterious effects on quality (Kalisch, Landstrom, and Williams 2009; Kalisch and Lee 2010). Increased workload due to poor staffing may also lead to errors. Finally, if hospitals with low staffing rely on LVNs/LPNs to work outside their scope of practice, expecting them to engage in surveillance and patient assessment (roles for which they are not trained), missed care and errors may increase. Thus, increasing nurse staffing might enhance nurse surveillance and reduce missed care and errors, leading to improvements in quality of care.

Methods

Sample

The sampling frame was hospitals in states that participated in AHRQ's Health Care Cost and Utilization Project (HCUP) State Inpatient Databases (SID) from 2000 through 2006. The SID includes the universe of hospital discharges in each participating state. We included general, short-term acute care hospitals. We excluded military hospitals, as well as hospital observations for which we were unable to match provider numbers across years and/or databases, for which average daily census was less than 20 (Needleman et al. 2002), or for which the reporting period was less than 360 days in the Centers for Medicare and Medicaid Services Hospital Cost Report Information System (HCRIS). Our final sample contained observations from 175 hospitals in California and 425 hospitals in 12 other states that did not have minimum nurse staffing legislation (Arizona, Colorado, Florida, Iowa, Kentucky, Maryland, North Carolina, New Jersey, Utah, Washington, Wisconsin, and West Virginia). Depending on the quality measure (discussed below), the sample ranged from 2,380 hospital-year observations comprising 32.6 million patient discharges for failure to rescue to 2,716 hospital-year observations comprising 34.7 million discharges for PSI 7 (infection due to medical care).

Variables and Data Sources

The staffing measures included registered nurse full-time equivalents per 1,000 patient days (RN FTEs) and licensed vocational/practical nurse full-time equivalents per 1,000 patient days (LVN/LPN FTEs). The minimum staffing legislation mandated a fixed rule for the minimum number of nurses and applied that rule to hospitals whose patients had substantially different average acuity levels. Under a 1:5 nurse-to-patient ratio, therefore, the nursing care demanded of each nurse could be comparatively modest in some hospitals and far more substantial in others. Acuity-adjusted staffing levels represent what might be considered "effective" staffing levels, and it is these acuity-adjusted staffing levels that matter in considering how the change in staffing affected quality of care.

To account for this, we used Nursing Intensity Weights (NIWs) (ANA 1997, 2000; Needleman et al. 2002; R. Knauf, personal communication, 2007; Mark and Harless 2011) to develop our measures of acuity-adjusted staffing. NIWs are based on patients' needs for nursing care, including assessment, planning, patient/family teaching, emotional support for the patient and/or family, medical needs, and physical needs. NIWs incorporate the assigned APR-DRG severity level, as well as the proportion of hospital days spent in acute care and in intensive care. These patient-level weights are then averaged over a hospital's patient population to obtain a mean NIW for each hospital in a given year and converted into an index by dividing by the average NIW for the entire sample of hospitals in the preregulation period. Normalizing the NIW values in this way yields an index in which 1.0 indicates a hospital with average nursing intensity; values greater (less) than 1.0 indicate higher (lower) intensity. In our sample, the NIW index ranged from 0.74 (patient acuity 26 percent lower than average) to 1.54 (patient acuity 54 percent higher than average).
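The index construction can be summarized in a few lines of code. The sketch below is illustrative only: it uses toy data and hypothetical column names in place of the SID discharge records, assumes each discharge already carries its NIW, and deflates staffing by the index in the manner implied by the adjusted values in Table 1.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy stand-in for SID discharge records (hypothetical columns): each row is a
# discharge carrying the NIW assigned from its APR-DRG severity level and its
# split between acute-care and intensive-care days.
discharges = pd.DataFrame({
    "hospital_id": rng.integers(1, 6, size=2000),
    "year": rng.integers(2000, 2007, size=2000),
    "niw": rng.normal(1.0, 0.15, size=2000).clip(0.5, 2.0),
})

# Mean NIW for each hospital-year.
mean_niw = discharges.groupby(["hospital_id", "year"])["niw"].mean()

# Convert to an index by dividing by the sample-wide mean NIW over the
# preregulation period (assumed here to be 2000-2001), so that 1.0 marks a
# hospital with average nursing intensity.
baseline = discharges.loc[discharges["year"] <= 2001, "niw"].mean()
niw_index = (mean_niw / baseline).rename("niw_index")

# Acuity-adjusted ("effective") staffing is then the raw staffing measure
# deflated by the index, e.g., rn_fte_per_1000_ipd / niw_index, so hospitals
# with higher-acuity patients receive a lower adjusted staffing level.
```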

We grouped hospitals in each state into quartiles based on their prelegislation nurse staffing levels for medical-surgical and pediatric services, which provide care for the majority of hospital patients. This provides some control for potential idiosyncratic differences among hospitals related to their initial staffing levels. We chose quartiles in an attempt to balance fit (grouping similar hospitals) and parsimony (avoiding excess parameters and divisions), an approach previously used in studies of the effects of California's minimum nurse staffing legislation (Reiter et al. 2011a, b). Thus, California hospitals in the lowest preregulation staffing quartile (Quartile 1) were expected to require the greatest increase in staffing to comply with the regulation, and those in the highest preregulation staffing quartile (Quartile 4) the least. Intensive care units were excluded from the staffing calculations: because California hospitals' intensive care units already operated under a minimum ratio of one nurse for every two patients, they were not subject to the new minimum staffing regulations, and their inclusion might have artificially inflated the observed staffing levels.
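As a concrete illustration of this grouping step, the following sketch assigns hospitals to preregulation staffing quartiles within each state; the data and column names are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Toy hospital-level data: mean preregulation licensed-nurse FTEs per 1,000
# inpatient days for medical-surgical and pediatric services (ICUs excluded).
hospitals = pd.DataFrame({
    "state": rng.choice(["CA", "AZ", "FL"], size=300),
    "pre_staffing": rng.normal(3.3, 0.8, size=300).clip(1.0, 6.0),
})

# Quartile 1 = lowest preregulation staffing, Quartile 4 = highest,
# assigned within each state.
hospitals["quartile"] = hospitals.groupby("state")["pre_staffing"].transform(
    lambda s: pd.qcut(s, q=4, labels=False) + 1
)
```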

To form staffing quartiles in California, we used Office of Statewide Health Planning and Development (OSHPD) annual disclosure reports, which contain RN and LPN/LVN productive hours by cost center. For comparison state hospitals, we obtained staffing data from the American Hospital Association (AHA) Annual Survey of Hospitals, but the AHA data distinguish neither inpatient from outpatient staffing nor ICU from non-ICU staffing, so a method of allocating staffing to the appropriate services was needed. A typical approach is the "adjusted patient day" method, which assumes that outpatient visits can be normalized to inpatient days using the ratio of gross outpatient to inpatient revenues, but this approach has been found to be biased (Needleman et al. 2001). Instead, we used nurse staffing data from the California OSHPD for the preregulation period as validation data and estimated the proportion of nurses assigned to medical-surgical and pediatric units using a regression model for proportions based on the beta distribution (Ferrari and Cribari-Neto 2004). We then used this model to estimate the proportion of RNs and LPN/LVNs caring for adult and pediatric inpatients in comparison state hospitals.
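The sketch below illustrates the general shape of this allocation step. It is not the authors' specification: the predictors, data, and column names are hypothetical, and it assumes the beta regression implementation (BetaModel) available in recent statsmodels releases (0.13 or later).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.othermod.betareg import BetaModel  # beta regression; requires statsmodels >= 0.13

rng = np.random.default_rng(2)

# Toy stand-in for the OSHPD validation data: the observed proportion of
# licensed-nurse hours in medical-surgical and pediatric cost centers, plus
# hypothetical hospital traits that would also be available in the AHA survey.
n = 300
beds = rng.integers(50, 500, size=n)
outpatient_share = rng.uniform(0.1, 0.6, size=n)
mu = 1 / (1 + np.exp(-(0.3 - 1.5 * outpatient_share + 0.1 * np.log(beds))))
prop_medsurg_ped = np.clip(rng.beta(mu * 25, (1 - mu) * 25), 1e-3, 1 - 1e-3)

# Beta regression for a proportion outcome (Ferrari and Cribari-Neto 2004):
# logit link for the mean, constant precision.
X = sm.add_constant(np.column_stack([np.log(beds), outpatient_share]))
fit = BetaModel(prop_medsurg_ped, X).fit()

# Predicted med-surg/pediatric share for comparison-state hospitals, for which
# the AHA reports only total nurse FTEs. The first X.shape[1] parameters are
# the mean-equation coefficients on the logit scale.
X_new = sm.add_constant(np.column_stack([np.log([120, 320]), [0.25, 0.45]]))
mean_params = np.asarray(fit.params)[: X.shape[1]]
prop_hat = 1 / (1 + np.exp(-(X_new @ mean_params)))
allocated_rn_fte = np.array([310.0, 780.0]) * prop_hat  # total FTEs x estimated share
```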

From AHRQ's Patient Safety Indicators (PSIs), we selected those that professional organizations such as the National Quality Forum and the ANA have endorsed as appropriate nursing quality indicators and for which there is research supporting a relationship with nurse staffing (Kane et al. 2007). The measures were failure to rescue (PSI 4); infections due to medical care, particularly those due to intravenous lines and catheters (PSI 7); postoperative respiratory failure (PSI 11); and postoperative sepsis (PSI 13). We applied AHRQ's PSI software to patient-level data obtained from the HCUP SID, which contains all necessary data elements. We excluded patients who were transferred into the hospital or discharged to another facility, as well as patients whose length of stay was 0, unknown, or longer than 365 days. We used AHRQ's built-in PSI risk-adjustment approach to calculate the expected number of adverse incidents.
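These patient-level exclusions amount to a simple filter over the discharge records before the PSI software is run. The sketch below is illustrative, with hypothetical column names rather than the actual SID variables or AHRQ software flags.

```python
import pandas as pd

def apply_psi_exclusions(sid: pd.DataFrame) -> pd.DataFrame:
    """Drop discharges excluded before computing the PSIs (hypothetical columns)."""
    keep = (
        ~sid["transferred_in"]                     # transferred into the hospital
        & ~sid["discharged_to_other_facility"]     # discharged to another facility
        & sid["length_of_stay"].between(1, 365)    # drops LOS of 0 or > 365 days; missing (unknown) LOS fails this test
    )
    return sid.loc[keep]
```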

Control variables included the number of beds, the percentages of Medicare and Medicaid discharges, and operating margin; we included operating margin because research has found that it decreased significantly for some California hospitals following passage of the legislation (Reiter et al. 2011a). These variables were obtained from HCRIS. We also included the Saidin index, a measure of technological sophistication (Spetz and Maiuro 2004) derived from AHA data.

Analytic Approach

We examined changes in acuity-adjusted staffing levels from the preregulation period to three subsequent periods: the transition period (2002–2003, between the release of the draft regulations and the start of implementation); the initial regulatory period (January 2004 through March 2005); and the final regulatory period, when the enhanced ratios were in effect (April 2005 through December 2006).

To examine whether the increase in staffing in California was significantly greater than the increase in comparison states, and whether hospitals responded heterogeneously to the staffing regulations depending on their preregulation staffing levels, we examined the mean percentage change calculated across hospitals; that is, we calculated the percentage change for each hospital, took the mean of these percentage changes within each group, and used a two-sample test for differences in means that allows unequal variances in the two groups. We examined changes in the numbers of FTE licensed nurses (i.e., the total of RN and LVN/LPN FTEs), RNs, and LVN/LPNs, all per 1,000 inpatient days (IPDs) and all adjusted for acuity.
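In code, this comparison reduces to a Welch two-sample test on the per-hospital percentage changes. The sketch below uses toy values (loosely echoing the Quartile 1 RN example discussed later) rather than the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy per-hospital percentage changes in acuity-adjusted staffing from the
# preregulation to the final regulatory period, expressed as a percentage of
# preregulation licensed nurse staffing, for one preregulation quartile.
ca_pct_change = rng.normal(35.0, 20.0, size=44)    # California hospitals
cmp_pct_change = rng.normal(26.0, 20.0, size=109)  # comparison state hospitals

# Difference in mean percentage change, and a two-sample test that allows
# unequal variances in the two groups (Welch's t-test).
diff_in_means = ca_pct_change.mean() - cmp_pct_change.mean()
t_stat, p_value = stats.ttest_ind(ca_pct_change, cmp_pct_change, equal_var=False)
```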

We then employed a difference-in-difference (DD) approach to examine changes in quality of care. DD estimation involves identifying a specific change (the staffing regulations) and then comparing the before-after difference in outcomes (the PSIs) for hospitals affected by the policy (California hospitals) to the before-after difference for hospitals in states unaffected by the policy (the comparison states). The parameters of interest are the changes in California hospitals' quality relative to their preimplementation levels minus the corresponding changes for hospitals in comparison states. Estimates for the DD models were obtained using Poisson fixed-effects regressions (Cameron and Trivedi 1998), which rely only on within-hospital variation over time in the observed and expected counts of adverse outcomes and in the other regressors. We controlled for differential risk of adverse incidents by using the expected counts from the PSI software as a measure of exposure (that is, we included the natural log of the expected number of adverse incidents as a regressor with its coefficient fixed at 1.0). In addition, we controlled for fixed effects for time. All measured or unmeasured differences between hospitals that do not vary over time are accounted for by the hospital fixed effects, which enter as a multiplicative, hospital-specific term equivalent to including a dummy variable for each hospital (Cameron and Trivedi 1998).
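To make the estimation concrete, the sketch below fits a Poisson regression with hospital dummies (equivalent to the multiplicative hospital fixed effects), calendar-year dummies, the expected PSI count as exposure, and period-by-quartile-by-California interactions, on a toy panel with hypothetical column names. It illustrates the general form of the specification, not the authors' exact code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Toy hospital-year panel (hypothetical columns). In the study, the period
# variables are the proportion of each hospital's HCRIS reporting period
# falling in the initial and final regulatory periods; here they are jittered
# year indicators to mimic staggered reporting periods.
hospitals = pd.DataFrame({"hospital_id": np.arange(40)})
hospitals["california"] = (hospitals["hospital_id"] < 12).astype(int)
hospitals["quartile"] = hospitals["hospital_id"] % 4 + 1
panel = hospitals.merge(pd.DataFrame({"year": np.arange(2000, 2007)}), how="cross")
frac = rng.uniform(0.6, 1.0, size=len(panel))
panel["initial_period"] = np.where(panel["year"] == 2004, frac, 0.0)
panel["final_period"] = np.where(panel["year"] >= 2005, frac, 0.0)
panel["expected"] = rng.uniform(2.0, 12.0, size=len(panel))   # expected PSI count
panel["observed"] = rng.poisson(panel["expected"])            # observed PSI count
for col in ["operating_margin", "pct_medicare", "pct_medicaid", "saidin"]:
    panel[col] = rng.normal(size=len(panel))
panel["beds"] = rng.integers(50, 500, size=len(panel))

# Poisson DD with hospital and year fixed effects; the expected count enters
# as exposure, i.e., ln(expected) is an offset with its coefficient fixed at 1.
formula = (
    "observed ~ C(hospital_id) + C(year)"
    " + initial_period:C(quartile) + final_period:C(quartile)"
    " + initial_period:C(quartile):california + final_period:C(quartile):california"
    " + operating_margin + pct_medicare + pct_medicaid + saidin + np.log(beds)"
)
dd_fit = smf.glm(
    formula, data=panel, family=sm.families.Poisson(), exposure=panel["expected"]
).fit(cov_type="cluster", cov_kwds={"groups": panel["hospital_id"]})

# The period x quartile x California coefficients are the DD parameters of
# interest; 100 * (exp(coefficient) - 1) is the differential percentage change
# in incidence of the kind reported in Table 3.
```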

Findings

Description of Study Sample

Descriptive statistics are presented in Table 1 for both California and comparison state hospitals by preregulation staffing quartile. Staffing and quality are discussed in detail below. With regard to other measures, operating margin was worse in California hospitals than in comparison state hospitals, and California hospitals treated a lower percentage of Medicare patients but a higher percentage of Medicaid patients than comparison hospitals. California Quartile 1 hospitals had fewer beds than Quartile 1 comparison state hospitals, but in Quartile 4, California hospitals were larger. With the exception of Quartile 4 hospitals, California hospitals provided less technologically sophisticated services (measured with the Saidin index) than comparison state hospitals.

Question 1

Did acuity-adjusted nurse staffing increase significantly more in California hospitals than in hospitals in comparison states? Table 2 provides summary information, by staffing quartile, on the mean level of acuity-adjusted nurse staffing in each regulatory period, the mean percentage change (as a percentage of preregulation licensed nurse staffing), and the differential percentage change between California and comparison state hospitals. Although the table presents the percentage difference from the preregulatory period for both the initial and the final regulatory period, we focus the discussion primarily on changes in the final regulatory period. (We also examined changes in unadjusted staffing; the results are consistent with the analysis of NIW-adjusted staffing.) Several narratives emerge from this table. The first is a pattern consistent with the expectation that California hospitals with the largest preregulation staffing shortfalls would increase staffing the most: Table 2 shows progressively smaller increases in licensed nurse and RN staffing in California hospitals in the higher preregulation staffing quartiles. The pattern of change for LPNs/LVNs, however, is mixed.

Table 2.

Mean Acuity-Adjusted Nurse Staffing FTEs per 1,000 Inpatient Days and Mean Percentage Change (as a Percentage of Preregulation Licensed Nurse Staffing), by Preregulation Staffing Quartile and Regulatory Period

Quartile 1 Quartile 2 Quartile 3 Quartile 4
Period CA 12 States Δ CA 12 States Δ CA 12 States Δ CA 12 States Δ
Licensed nurse Pre 2.27 2.33 2.79 3.05 3.51 3.52 3.88 4.77
Trans 2.75 2.51 3.27 3.04 3.36 3.49 3.83 4.31
Initial 3.21 2.73 3.53 3.15 3.68 3.37 3.94 4.15
Final 3.32 2.90 3.80 3.29 3.79 3.49 4.21 4.19
Difference from preregulatory period Initial 31.5% 17.1% 14.4 20.3% 5.7% 14.6 15.6% −0.8% 16.4§ 3.5% −5.8% 9.3
Final 38.3% 23.3% 15.0 30.2% 10.9% 19.2 18.9% 0.8% 18.1§ 9.7% −5.4% 15.0§
RN Pre 2.06 2.04 2.41 2.70 3.01 3.17 3.03 4.29
Trans 2.22 2.26 2.87 2.71 2.90 3.16 3.39 3.92
Initial 2.73 2.49 3.09 2.87 3.18 3.11 3.51 3.84
Final 2.90 2.67 3.35 3.03 3.38 3.23 3.81 3.91
Difference from preregulatory period Initial 27.3% 19.0% 8.3 18.2% 7.4% 10.8 12.9% 1.9% 11.0§ 7.5% −4.2% 11.7
Final 35.0% 25.9% 9.1 26.7% 13.3% 13.4 18.9% 3.6% 15.3§ 15.1% −3.0% 18.0§
LPN/LVN Pre 0.21 0.30 0.38 0.35 0.50 0.35 0.85 0.48
Trans 0.52 0.25 0.40 0.33 0.46 0.33 0.44 0.40
Initial 0.48 0.24 0.44 0.27 0.50 0.27 0.43 0.31
Final 0.42 0.23 0.46 0.25 0.41 0.26 0.40 0.28
Difference from preregulatory period Initial 4.2% −1.9% 6.1 2.1% −1.7% 3.8 2.7% −2.7% 5.4 −4.1% −1.6% −2.4
Final 3.3% −2.6% 5.9 3.5% −2.3% 5.8 −0.0% −2.9% 2.8 −5.4% −2.4% −3.0

Columns under the heading Δ show the difference in mean percentage change between CA and comparison state hospitals; markers indicate that the difference in mean percentage change is statistically significant at the p < .05, p < .01, or § p < .001 level.

The second narrative relates to absolute changes in staffing. In the preregulatory period, licensed staffing in California Quartile 1 and 2 hospitals was below that of comparison state hospitals in the same quartiles. Although both California and comparison state hospitals increased their staffing, California hospitals' increases were larger, so that by the final period, licensed nurse staffing in California Quartile 1 and 2 hospitals exceeded staffing in comparison state Quartile 1 and 2 hospitals. Quartile 3 hospitals had essentially the same preregulation licensed staffing levels, with California hospitals increasing their licensed staffing while comparison state hospitals remained stable. For Quartile 4, California hospitals had lower licensed staffing in the preregulatory period and increased their staffing during the study period, whereas comparison state Quartile 4 hospitals decreased their staffing slightly, so that licensed staffing in California and comparison state Quartile 4 hospitals was similar in the final period.

In the preregulatory period, RN staffing in California hospitals was either the same as (Quartile 1) or lower than (Quartiles 2, 3, and 4) RN staffing in comparison state hospitals. By the final period, California Quartile 1, 2, and 3 hospitals had higher RN staffing than comparison state hospitals, but California Quartile 4 hospitals' RN staffing was somewhat lower than that of comparison state Quartile 4 hospitals.

Finally, we examine the differential percentage changes in staffing. In both the initial and final regulatory period, California's Quartile 2, 3, and 4 hospitals had a higher mean percentage change in RN staffing compared to comparison state hospitals in the same quartiles. In Quartile 1, however, even though the mean percentage increases in RN staffing in California hospitals were larger than those for California hospitals in the other three quartiles, the differential change compared to comparison state hospitals was smaller and not statistically significant. For example, among Quartile 1 hospitals in California, the mean percentage change in acuity-adjusted RN staffing in the final period was 35 percent, but the mean percentage change among comparison state hospitals was 25.9 percent, so that the differential change was just 9.1 percentage points, and not statistically significant.

Question 2

Did some California hospitals, especially those most affected by the staffing regulations, rely more on LVNs to meet the minimum staffing requirements than other hospitals? Table 2 also demonstrates that during the final regulatory period, California Quartile 1 and 2 hospitals increased their LVN/LPN staffing while comparison state Quartile 1 and 2 hospitals decreased their LVN/LPN staffing, and the differential percentage change was statistically significant. In contrast, there was no statistically significant differential percentage change in LVN/LPN staffing levels in the final regulatory period in California Quartile 3 and 4 hospitals compared to hospitals in the same quartiles in comparison states. This pattern suggests that hospitals with the largest preregulation staffing shortfalls did rely more on LPNs/LVNs to meet the regulatory requirements than hospitals with smaller preregulation staffing shortfalls, but, as discussed earlier, these hospitals also increased RN staffing more than comparison state hospitals.

Question 3

If California hospitals increased their acuity-adjusted nurse staffing significantly more than hospitals in comparison states, did quality of care improve significantly more in California hospitals than in hospitals in comparison states? Table 3 provides the DD Poisson fixed-effects estimates of the differential percentage change in adverse event incidence, by staffing quartile, in the initial and final regulatory periods relative to the preregulation period. (The full regression results can be found in Appendix Table 2; DD results using unadjusted staffing were similar.)

Table 3.

Difference-in-Difference Fixed Effects Poisson Model Estimates of Percentage Change in Incidence of PSIs in California Relative to Comparison State Hospitals in the Same Preregulation Staffing Quartile in the Initial and Final Regulatory Periods

Quartile Regulatory Period PSI 4 PSI 7 PSI 11 PSI 13
1 Initial −37.1* (16.2) 10.2 (20.9) 1.1 (21.5) 17.1 (57.2)
Final −30.7* (14.1) 40.1 (22.5) 4.4 (24.9) 52.4 (72.0)
2 Initial −10.1 (15.6) 0.2 (17.2) −16.4 (15.0) −14.0 (24.9)
Final −11.6 (21.3) 11.5 (18.1) −10.6 (18.0) 21.3 (35.1)
3 Initial −19.6 (14.9) 34.1* (15.3) −9.6 (13.6) 20.4 (35.6)
Final −12.2 (17.9) 25.0 (17.2) −15.9 (12.6) 21.1 (27.6)
4 Initial −10.5 (19.5) 15.1 (13.5) −20.2 (13.4) 4.0 (23.1)
Final −32.9* (12.9) −2.7 (13.8) −14.4 (17.0) −5.1 (23.6)
Number of observations 2,380 2,716 2,589 2,431
Number of hospitals 534 600 573 538

Note. The standard errors (in parentheses) beneath the coefficient estimates are robust to heteroskedasticity and to possible correlation of errors within hospitals. We incorporated the differing hospital reporting period dates to match reporting periods to the regulatory periods as closely as possible. The DD Poisson regression contains dummy-like variables, defined as the proportion of days of the HCRIS reporting period falling in a regulatory period, for the initial and final regulatory periods; the interactions of these dummy-like variables with dummy variables for quartiles; and the interactions of these period-quartile variables with an indicator for location in California. Other control variables include operating margin, percent Medicare inpatient days, percent Medicaid inpatient days, the Saidin index, and the natural log of the number of beds.

*

p < .05.

Failure to rescue (PSI 4) decreased significantly more in California Quartile 1 hospitals than in comparison state hospitals in both the initial and final periods (differential reductions of 37.1 and 30.7 percent, respectively, p < .05). FTR also decreased significantly more in California Quartile 4 hospitals than in comparison state hospitals in the final period (differential reduction of 32.9 percent, p < .05). Differential change in PSI 7 (infections due to medical care) was statistically significant only for California Quartile 3 hospitals in the initial period and the sign on the coefficient was positive. There were no statistically significant differential changes in either PSI 11 (respiratory failure) or PSI 13 (postoperative sepsis). The signs on the coefficients for PSI 11 were consistently negative except for Quartile 1 hospitals, where they were positive but extremely small; no pattern in signs can be observed for PSI 13.

Discussion

The goals of California's minimum nurse staffing legislation were to increase licensed nurse staffing and to improve the quality of care. Our results indicate that, by the time the enhanced staffing ratios were implemented (i.e., the final regulatory period), there were both large absolute and statistically significant differential increases in licensed nurse staffing across all quartiles in California hospitals. It is worth noting, however, that even in the final regulatory period, California's Quartile 1 hospitals' NIW-adjusted licensed and RN staffing were still lower (3.31 and 2.90 FTEs per 1,000 inpatient days, respectively) than California's Quartile 4 hospitals prior to implementation of the legislation (3.88 and 3.03 FTEs, respectively). And, although California Quartile 1 hospitals increased their RN staffing substantially, the increase was not significantly greater than in Quartile 1 comparison hospitals. In addition, in California Quartile 4 hospitals, the differential increase in both licensed and RN staffing was significantly greater than in Quartile 4 comparison state hospitals. This raises the question of why hospitals with the smallest preregulation staffing shortfalls chose to increase RN staffing to such an extent. It may be that, with competing hospitals raising their staffing levels, California Quartile 4 hospitals needed to respond similarly to maintain the effectiveness of their recruitment and retention strategies.

Change in quality of care was more limited. We found a significantly greater reduction in FTR in California Quartile 1 hospitals than in Quartile 1 comparison state hospitals at both the initial and final regulatory periods. Two possible explanations come to mind. First, the differential increase in licensed staffing in California Quartile 1 hospitals was significantly greater than in Quartile 1 comparison state hospitals, but the differential change was not statistically significant for RN staffing. However, this lack of statistical significance results from the fact that, although California Quartile 1 hospitals increased their RN staffing 35 percent by the final regulatory period, Quartile 1 comparison hospitals also increased their staffing. In other words, statistically significant or not, the 35 percent increase is a very large absolute increase in RN staffing.

Second, it is conceivable, particularly for Quartile 1 hospitals, that staffing with more LVNs/LPNs, even given their limited scope of practice, might have allowed RNs additional time to engage in enhanced surveillance, a critical clinical process for preventing further deterioration in patients' conditions, thereby reducing missed care and ultimately decreasing FTR. From a policy perspective, this suggests that when hospitals have limited RN staffing, the strategic deployment of LVNs/LPNs may be important in reducing FTR.

We also saw a statistically significant differential decrease in FTR in California Quartile 4 hospitals, where the differential increase in RNs was slightly greater than in any other quartile. This finding, taken together with the statistically significant differential increases in staffing in California Quartile 2 and 3 hospitals without a corresponding differential reduction in FTR, might indicate that the reduction in FTR in California Quartile 4 hospitals was due to unmeasured work environment enhancements. For example, the Institute of Medicine (IOM) published its report Keeping Patients Safe: Transforming the Work Environment of Nurses in 2003 (Page 2003). The report focused not only on staffing but also made recommendations about work design that promotes patient safety, organizational cultures that continuously strengthen patient safety, and mechanisms that promote interdisciplinary coordination. If, in the face of increased competition to recruit and retain RNs as other hospitals raised their staffing levels, California hospitals with the highest preregulation staffing levels implemented such nonstaffing work enhancements to a greater extent than Quartile 2 or 3 hospitals did, these enhancements may have contributed to the observed reductions in FTR.

Our finding that PSI 7 (infections due to medical care) increased more in California Quartile 3 hospitals than in comparison state Quartile 3 hospitals may reflect increased detection of these events, rather than an actual increase in their numbers. If early detection improves and prompt treatment begins, then length of stay for patients with these complications should not increase. In support of this, both Mark and Harless (2010) and Spetz et al. (2011) found that when nurse staffing was associated with more complications, it was also associated with shorter lengths of stay.

The lack of observed effects of the legislation on adverse events may also be related to the limitations of our study. Perhaps the most critical is that the study was undertaken prior to Medicare's requirement that all secondary diagnosis codes in the patient discharge record indicate whether the condition was present on admission. Thus, even though AHRQ's PSI software has explicit inclusion and exclusion criteria for determining whether an adverse patient safety event has occurred, if patients were admitted with any of these conditions, their inclusion in the incidence calculations is a source of error (Mark and Harless 2010). Interestingly, it was for failure to rescue, a patient safety event that does not rely on present-on-admission status, that we saw statistically significant and consistent results. However, all the outcomes we investigated are rare events. Even though we applied a statistical model appropriate for exactly that circumstance, this rarity is reflected, for example, in the differing numbers of adverse events included in the count data regressions. As a result, hospitals with poor staffing or unsafe environments could still record zero adverse events.

Another limitation is our reliance on nurse staffing data from the American Hospital Association, which do not distinguish staffing for inpatient and outpatient services (Jiang, Stocks, and Wong 2006). Although we used California's OSHPD nurse staffing data from before the staffing regulations were announced as a validation data set, our measurement of inpatient nurse staffing may still be prone to error. In addition, because of the manner in which the minimum nurse staffing regulations were promulgated, even the OSHPD data do not allow a determination of whether any individual hospital was in compliance with the regulations.

Our dependence on NIWs to adjust for patient acuity is another limitation. NIWs, which quantify patients' needs for nursing care, are an improvement over the Medicare case mix index, which was designed to measure intensity of resource use (Mark and Harless 2011). However, although NIWs have been used in several landmark studies (ANA 1997, 2000; Needleman et al. 2002; Mark and Harless 2011), none of these studies has evaluated their reliability.

Finally, although we described a promising potential conceptual linkage underlying the association of nurse staffing to quality of care—that of nurse surveillance, missed care, and errors—these important constructs were not measured in this study and cannot be measured with secondary data. Research using primary data collection methods would facilitate a more micro-level theoretical understanding of the ways in which nurse staffing might contribute to quality of care.

Conclusion

Legislation such as that mandating minimum nurse staffing ratios is often a blunt instrument aimed at solving complex problems, and it often results in unintended consequences. Recent research has found that, following implementation of the minimum nurse staffing legislation, RN wages increased more in California than in states without such legislation (Mark, Harless, and Spetz 2009), some California hospitals' operating margins declined significantly (Reiter et al. 2011a), and some California hospitals significantly decreased the amount of uncompensated care they provided (Reiter et al. 2011b). These studies suggest that there was a measurable cost to the minimum nurse staffing legislation in California. However, the larger and so far unanswered question is whether the incremental increases in quality are worth the cost. A full assessment of the effects of minimum nurse staffing legislation on health system access, equity, quality, and costs is critical before other states or the federal government implement such legislation.

Acknowledgments

Joint Acknowledgment/Disclosure Statement: This project was funded through AHRQ grant 2R01HS10153. We wish to acknowledge the data management expertise of Jane Darter (UNC) and Carolina Herrera (UCSF).

Disclosures: None.

Disclaimers: None.

SUPPORTING INFORMATION

Additional supporting information may be found in the online version of this article:

Appendix S1: Author Matrix.

hesr0048-0435-SD1.doc (79KB, doc)

Table S1: Regression Results from State-Level PSI Rate Data from the AHRQ National Health Care Quality Reports, State Snapshots on Hospital Care.

Table S2: Complete Difference-in-Difference Poisson Fixed Effects Estimation Results.

hesr0048-0435-SD2.docx (19.5KB, docx)

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

References

  1. Aiken LH, Sloane DM, Cimiotti JP, Clarke SP, Flynn L, Seago JA, Spetz J, Smith HL. "Implications of the California Nurse Staffing Mandate for Other States". Health Services Research. 2010;45(4):904–21. doi: 10.1111/j.1475-6773.2010.01114.x.
  2. American Nurses Association. Nurse Staffing and Patient Outcomes in the Inpatient Hospital Setting. Washington, DC: American Nurses Association; 2000.
  3. American Nurses Association. Nurse Staffing Plans and Ratios. 2011 [accessed on June 24, 2011]. Available at http://www.nursingworld.org/MainMenuCategories/ANAPoliticalPower/State/StateLegislativeAgenda/StaffingPlansandRatios_1.aspx.
  4. American Nurses Association [ANA]. Implementing Nursing's Report Card: A Study of RN Staffing, Length of Stay and Patient Outcomes. Washington, DC: American Nurses Publishing; 1997.
  5. Bolton LB, Aydin CE, Donaldson N, Brown DS, Sandhu M, Fridman M, Aronow HU. "Mandated Nurse Staffing Ratios in California: A Comparison of Staffing and Nursing-Sensitive Outcomes Pre- and Post-Regulation". Policy, Politics, and Nursing Practice. 2007;8(4):238–350. doi: 10.1177/1527154407312737.
  6. Cameron AC, Trivedi PK. Regression Analysis of Count Data. Cambridge, England: Cambridge University Press; 1998.
  7. Cook A, Gaynor M, Stephens M, Taylor L. "The Effect of a Hospital Nurse Staffing Mandate on Patient Health Outcomes: Evidence from California's Minimum Staffing Regulation". Journal of Health Economics. 2012;31(2):340–8. doi: 10.1016/j.jhealeco.2012.01.005.
  8. Donaldson N, Shapiro S. "Impact of California Mandated Acute Care Hospital Nurse Staffing Ratios: A Literature Synthesis". Policy, Politics, and Nursing Practice. 2010;11(3):184–201. doi: 10.1177/1527154410392240.
  9. Donaldson N, Bolton LB, Aydin C, Brown D, Elashoff JD, Sandhu M. "Impact of California's Licensed Nurse-Patient Ratios on Unit-Level Nurse Staffing and Patient Outcomes". Policy, Politics, and Nursing Practice. 2005;6(3):198–210. doi: 10.1177/1527154405280107.
  10. Fagin C. When Care Becomes a Burden: Diminishing Access to Adequate Nursing. New York: Milbank Memorial Fund; 2001.
  11. Ferrari SLP, Cribari-Neto F. "Beta Regression for Modeling Rates and Proportions". Journal of Applied Statistics. 2004;31(7):799–815.
  12. Harless DW, Mark BA. "Addressing Measurement Error Bias in Nurse Staffing Research". Health Services Research. 2006;41:2006–24. doi: 10.1111/j.1475-6773.2006.00578.x.
  13. Jiang HJ, Stocks C, Wong CJ. "Disparities between Two Common Data Sources on Hospital Nurse Staffing". Journal of Nursing Scholarship. 2006;38(2):187–93. doi: 10.1111/j.1547-5069.2006.00099.x.
  14. Kalisch B, Landstrom GA, Williams R. "Missed Nursing Care: Errors of Omission". Nursing Outlook. 2009;57(1):3–9. doi: 10.1016/j.outlook.2008.05.007.
  15. Kalisch B, Lee K. "The Impact of Teamwork on Missed Nursing Care". Nursing Outlook. 2010;58(5):233–41. doi: 10.1016/j.outlook.2010.06.004.
  16. Kane R, Shamliyan TA, Mueller C, Duval S, Wilt TJ. "The Association of Registered Nurse Staffing Levels and Patient Outcomes: Systematic Review and Meta-Analysis". Medical Care. 2007;45(12):1195–204. doi: 10.1097/MLR.0b013e3181468ca3.
  17. Mark BA, Harless DW. "Nurse Staffing and Post-Surgical Complications Using the Present on Admission Indicator". Research in Nursing and Health. 2010;33(1):35–47. doi: 10.1002/nur.20361.
  18. Mark BA, Harless DW. "Adjusting for Patient Acuity in Measurement of Nurse Staffing: Two Approaches". Nursing Research. 2011;60(2):107–14. doi: 10.1097/NNR.0b013e31820bb0c6.
  19. Mark BA, Harless DW, Spetz J. "California's Minimum Nurse Staffing Legislation and Nurses' Wages". Health Affairs. 2009;28(2):w326–34. doi: 10.1377/hlthaff.28.2.w326.
  20. McHugh MD, Kelly LA, Sloane DM, Aiken LH. "Contradicting Fears, California's Nurse-To-Patient Mandate Did Not Reduce the Skill Level of the Nursing Workforce in Hospitals". Health Affairs. 2011;30(7):1299–306. doi: 10.1377/hlthaff.2010.1118.
  21. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse Staffing and Patient Outcomes in Hospitals. Washington, DC: US Department of Health and Human Services, Health Resources and Services Administration; 2001. Contract Number 230-99-0021.
  22. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. "Nurse-Staffing Levels and the Quality of Care in Hospitals". New England Journal of Medicine. 2002;346:1715–22. doi: 10.1056/NEJMsa012247.
  23. Page A. Keeping Patients Safe: Transforming the Work Environment of Nurses. Washington, DC: Institute of Medicine; 2003.
  24. Reiter KL, Harless DW, Pink GH, Mark BA. "Minimum Nurse Staffing Legislation and the Financial Performance of California Hospitals". Health Services Research. 2011a;47:1030–50. doi: 10.1111/j.1475-6773.2011.01356.x.
  25. Reiter KL, Harless DW, Pink GH, Spetz J, Mark B. "The Effect of Minimum Nurse Staffing Legislation on Uncompensated Care Provided by California Hospitals". Medical Care Research and Review. 2011b;68(3):332–51. doi: 10.1177/1077558710389050.
  26. Serratt T, Harrington C, Spetz J, Blegen M. "Changes before and after Mandated Nurse-To-Patient Ratios in California's Hospitals". Policy, Politics, and Nursing Practice. 2011;12:133–40. doi: 10.1177/1527154411417881.
  27. Spetz J. "California's Minimum Nurse-To-Patient Ratios: The First Few Months". Journal of Nursing Administration. 2004;34(12):571–8. doi: 10.1097/00005110-200412000-00007.
  28. Spetz J, Maiuro L. "Measuring Levels of Technology in Hospitals". Quarterly Review of Economics and Finance. 2004;44(3):430–47.
  29. Spetz J. "Nurse Staffing Ratios: Policy Options". In: Mason D, editor. Policy and Politics in Nursing and Health Care. 6th Edition. Philadelphia: W.B. Saunders Company; 2011.
  30. Spetz J, Chapman S, Herrera C, Kaiser J, Seago JA, Dower C. Assessing the Impact of California's Nurse Staffing Ratios on Hospitals and Patient Care. Oakland, CA: California HealthCare Foundation; 2009 [accessed on June 15, 2011]. Available at http://www.chcf.org/publications/2009/02/assessing-the-impact-of-californias-nurse-staffing-ratios-on-hospitals-and-patient-care.
  31. Spetz J, Mark BA, Harless DW, Herrera C. "Using Minimum Nurse Staffing Regulations to Measure the Relationship between Nursing and Hospital Quality". Paper presented at the Interdisciplinary Research Group on Nursing Issues, AcademyHealth; 2011.
