Health Services Research. 2004 Apr;39(2):279–300. doi: 10.1111/j.1475-6773.2004.00228.x

A Longitudinal Examination of Hospital Registered Nurse Staffing and Quality of Care

Barbara A Mark, David W Harless, Michael McCue, Yihua Xu
PMCID: PMC1361008  PMID: 15032955

Abstract

Objective

To evaluate previous research findings of the relationship between nurse staffing and quality of care by examining the effects of change in registered nurse staffing on change in quality of care.

Data Sources/Study Setting

Secondary data from the American Hospital Association (AHA)(nurse staffing, hospital characteristics), InterStudy and Area Resource Files (ARF) (market characteristics), Centers for Medicare and Medicaid Services (CMS) (financial performance), and Healthcare Cost and Utilization Project (HCUP) (quality measures—in-hospital mortality ratio and the complication ratios for decubitus ulcers, pneumonia, and urinary tract infection, which were risk-adjusted using the Medstat® disease staging algorithm).

Study Design

Data from a longitudinal cohort of 422 hospitals were analyzed for the period 1990–1995 to examine the relationships between nurse staffing and quality of care.

Data Collection/Extraction Methods

A generalized method of moments estimator for dynamic panel data was used to analyze the data.

Principal Findings

Increasing registered nurse staffing had a diminishing marginal effect on reducing mortality ratio, but had no consistent effect on any of the complications. Selected hospital characteristics, market characteristics, and financial performance had other independent effects on quality measures.

Conclusions

The findings provide limited support for the prevailing notion that improving registered nurse (RN) staffing unconditionally improves quality of care.

Keywords: Quality of care, HCUP, nurse staffing


The relationship between hospital nurse staffing and quality of care continues to be a significant concern for health services researchers, health care executives, policymakers, and consumers. Several early studies that included nurse staffing as a hospital characteristic found that higher levels of nurse staffing were associated with reduced mortality (Scott, Forrest, and Brown 1976; Hartz et al. 1989; Kuhn et al. 1991; Manheim et al. 1992). At least two studies, however, found no significant relationship between nurse staffing and adverse events (Wan and Shukla 1987) or mortality (Al-Haider and Wan 1991). Other studies have reported mixed results, depending on the quality measure. For example, Silber et al. (1995) found that hospitals with high nurse-to-bed ratios had higher than expected complication rates, but lower than expected mortality rates. In another study, Silber, Rosenbaum, and Ross (1995) found that a high ratio of registered nurses (RNs) to beds was associated with lower mortality and failure to rescue (death following a complication), but more complications than expected.

Recent studies have been designed specifically to examine the relationship between nurse staffing and quality of care (American Nurses Association 1997, 2000; Kovner and Gergen 1998; Lichtig, Knauf, and Milholland 1999; Kovner et al. 2002; Needleman et al. 2002). Reflecting the lack of an agreed-upon standard approach to these studies, there are inconsistencies among the studies in the measurement of nurse staffing, data sources, risk-adjustment methodologies, quality measures, and statistical approaches to data analysis. For example, the three studies sponsored by the American Nurses Association (ANA) used two separate definitions of nurse staffing: amount of nursing care, calculated as the number of licensed nursing hours per nursing intensity weight (which reflects the relative amount of nursing services required for patients in each DRG [diagnosis related group]) (Ballard et al. 1993); and skill mix, calculated as registered nurse hours as a proportion of total licensed nursing hours (American Nurses Association 1997, 2000; Lichtig, Knauf, and Milholland 1999). Kovner and Gergen's (1998) study measured staffing as the number of full-time equivalent RNs, while a more recent study converted FTE (full-time equivalent) RNs to hours using 2,040 hours per year worked (Kovner et al. 2002). Needleman et al. (2002) likewise calculated the number of hours of nursing care from FTEs, but used 2,080 hours per year worked.

In addition, risk-adjustment methodologies differ among studies. Studies by the ANA and Needleman et al. (2002) were based on New York's nursing intensity weights, while both of Kovner's studies used the Medicare case-mix index. Analytic approaches ranged from simple correlation and ordinary least squares (OLS) regression (American Nurses Association 1997, 2000; Lichtig, Knauf, and Milholland 1999; Kovner and Gergen 1998) to the general estimating equation (Kovner et al. 2002) and negative binomial regression (Needleman et al. 2002).

Finally, all of these studies used cross-sectional data or cross-sectional statistical methods. The conclusions derived from such studies may be biased if there are unobserved, time-invariant factors that affect hospital quality, and these factors are correlated with the explanatory variables of the model. We examined both the more commonly applied static, within-group (fixed effects) model to control for hospital heterogeneity and a dynamic panel model that addresses hospital fixed effects and controls for the influence of past circumstances through inclusion of the lagged value of the dependent variable.

Therefore, the primary purpose of our study was to evaluate previous research findings of the relationship between nurse staffing and quality of care by using panel data to examine the effects of change in nurse staffing on change in quality of care (in-hospital mortality and the nurse-sensitive outcome measures pneumonia, urinary tract infections, and decubitus ulcers) during the years 1990–1995. During that time period, hospitals also experienced increasing financial pressures brought about by increasing managed care penetration, market response to industry overcapacity, more stringent Medicare reimbursement policy, shorter lengths of stay, and an increase in patient acuity requiring the provision of more intensive nursing care. We therefore included a measure of hospital financial performance—operating margin—as a regressor in our model.

Methods

Sample

Our sample was the 422 hospitals in the 1990–1995 longitudinal cohort of the Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS). These 422 hospitals, 49 percent of the HCUP base year sample, are located in 11 states (Arizona, Colorado, Florida, Illinois, Iowa, Massachusetts, New Jersey, Oregon, Pennsylvania, Washington, and Wisconsin). Due to inability to match hospitals across all datasets, we eliminated 6 hospitals; 2 more hospitals were eliminated because data were for a system rather than an individual hospital, and 2 others were dropped because revenue information was missing from all CMS files. Hospital-year observations with staffing outliers (see note 1) or fewer than 15 expected deaths or complications were excluded.

Measures and Sources of Data

We measured five sets of variables: hospital characteristics (American Hospital Association Annual Survey, CMS case mix index file, CMS cost and capital file), market characteristics (Area Resources File, American Hospital Association Annual Survey, InterStudy data), financial performance (CMS cost and capital files; Solucient data), staffing (American Hospital Association Annual Survey, Online Survey Certification and Reporting System [OSCAR]) and quality of care (Healthcare Cost and Utilization Project data). Variable definitions and sources of data are displayed in Table 1. In general, measurement of these variables was straightforward. However, our approach to several of these variables requires additional explanation.

Table 1.

Variable Definitions, Property, and Sources of Data

Variable Definition Source of Data
Hospital Characteristics
Case-mix index Complexity of Medicare cases treated CMS
High-tech services (Saidin index) High-tech services provided AHA
Payer mix Medicare+Medicaid discharges/total discharges CMS
Beds Number of open and operating beds AHA
Ownership Not-for-profit, for-profit, or public CMS
Location Within an MSA or not CMS
System affiliation System affiliated or not AHA
Market Characteristics
HSA hospital use Inpatient days/1,000 HSA population ARF
Herfindahl index Sum of squared market shares in an HSA AHA
Number of HMOs Number of HMOs in an HSA InterStudy
HMO penetration HMO enrollment as % of total HSA population InterStudy
Financial Performance
Operating margin 100*(1−(operating expense/net patient revenue)) CMS, Solucient
Staffing
RN staffing RN FTEs/1,000 inpatient days AHA, OSCAR
LPN staffing LPN FTEs/1,000 inpatient days AHA, OSCAR
Nonnurse staffing Non-nurse FTEs/1,000 inpatient days AHA, OSCAR
Quality Measures
Mortality Risk-adjusted observed/expected mortality HCUP, Medstat
Pneumonia Risk-adjusted observed/expected pneumonias HCUP, Medstat
Decubitus ulcers Risk-adjusted observed/expected decubitus ulcers HCUP, Medstat
Urinary tract infection Risk-adjusted observed/expected urinary tract infections HCUP, Medstat

Notes: AHA = American Hospital Association; ARF = Area Resource Files; CMS = Centers for Medicare and Medicaid Services; HCUP = Healthcare Cost and Utilization Project; OSCAR = Online Survey Certification and Reporting System; HSA = health service area; MSA = metropolitan statistical area.

High Technology Services. We measured high technology services using a “Saidin index” (Spetz and Baker 1999), which is the weighted sum of the number of technologies and services available in a hospital, with the weights being the percentage of hospitals in the country that do not possess the technology or service. Thus, the index increases more with the addition of technologies that are relatively rare than with the addition of technologies that are more common.
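
To make the construction concrete, here is a minimal sketch of a Saidin-style index in Python; the technology names, national shares, and function name are illustrative assumptions, not the authors' actual variable list or code.

```python
from typing import Dict

def saidin_index(hospital_has: Dict[str, bool],
                 national_share_with: Dict[str, float]) -> float:
    """Weighted count of technologies, where each technology's weight is the
    share of hospitals nationally that do NOT offer it (Spetz and Baker 1999)."""
    return sum(
        (1.0 - national_share_with[tech])   # weight: fraction of hospitals lacking it
        for tech, present in hospital_has.items()
        if present
    )

# Illustrative values only: a rare service (offered by 5% of hospitals) adds 0.95
# to the index, while a common one (90% of hospitals) adds just 0.10.
example = saidin_index(
    hospital_has={"transplant_services": True, "ct_scanner": True, "mri": False},
    national_share_with={"transplant_services": 0.05, "ct_scanner": 0.90, "mri": 0.60},
)
print(round(example, 2))  # 1.05
```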

Definition of the Relevant Market. We used the health service areas (HSAs) approach developed by Makuc et al. (1991) in which counties are aggregated into geographic regions based on flows of inpatient hospital admissions.

Calendar Year Adjustment. In the CMS files, most hospitals had reporting periods different than calendar years and some hospitals had reporting periods covering a period less than 365 days. To appropriately match data from CMS reports and calendar year data on quality of care, staffing, and other variables, we converted CMS data to calendar year equivalent data using weighted averages. The weights depended on the number of days falling in a particular reporting period and the number of days covered by the report (Needleman, Buerhaus, and Mattke 2001).
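
A minimal sketch of one day-weighted conversion consistent with this description (the reporting periods and values are hypothetical, and the exact weighting used by Needleman, Buerhaus, and Mattke (2001) may differ in detail):

```python
from datetime import date

def days_overlap(start: date, end: date, year: int) -> int:
    """Number of report days falling within a given calendar year (inclusive dates)."""
    lo = max(start, date(year, 1, 1))
    hi = min(end, date(year, 12, 31))
    return max((hi - lo).days + 1, 0)

def calendar_year_value(reports, year: int) -> float:
    """Day-weighted average of report values for one calendar year.

    `reports` is a list of (start_date, end_date, value) tuples, where `value`
    is the measure from a CMS report covering that (possibly fiscal-year) period.
    Each report's weight is the share of its covered days that fall in the year.
    """
    weights = [days_overlap(s, e, year) / ((e - s).days + 1) for s, e, _ in reports]
    total = sum(weights)
    return sum(w * v for w, (_, _, v) in zip(weights, reports)) / total

# Hypothetical example: two fiscal-year (July-June) reports straddling calendar 1993.
reports = [
    (date(1992, 7, 1), date(1993, 6, 30), 2.0),   # covers January-June 1993
    (date(1993, 7, 1), date(1994, 6, 30), 4.0),   # covers July-December 1993
]
print(round(calendar_year_value(reports, 1993), 2))  # 3.01: day-weighted blend of the two reports
```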

Calculation of Hospital RN Staffing. Prior to 1993, the AHA annual survey required hospitals to report staffing separately by hospital unit and nursing home/long-term care unit. After 1993, the reporting was done only for the total facility. Nursing homes, however, are required by CMS to comply with the Online Survey Certification and Reporting system (OSCAR). For 1994 and 1995, we obtained data on hospitals with nursing homes from the OSCAR system, which allowed us to subtract nursing home staffing from total facility staffing to arrive at hospital staffing. The AHA survey does not distinguish nurse staffing for inpatient and outpatient services; without an appropriate allocation method, estimates relating nurse staffing to quality of care would be biased. We followed Kovner and Gergen (1998) and Kovner et al. (2002) in allocating staffing to the inpatient facility based on the ratio of inpatient to outpatient gross revenues (see note 2).
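
A minimal sketch of one common reading of this revenue-based allocation (the figures and function name are hypothetical):

```python
def inpatient_fte_allocation(total_ftes: float,
                             inpatient_gross_revenue: float,
                             outpatient_gross_revenue: float) -> float:
    """Allocate facility-level FTEs to inpatient care in proportion to the
    inpatient share of gross revenues (the Kovner and Gergen style adjustment)."""
    share = inpatient_gross_revenue / (inpatient_gross_revenue + outpatient_gross_revenue)
    return total_ftes * share

# Hypothetical hospital: 400 total RN FTEs, 75% of gross revenue from inpatient care.
print(inpatient_fte_allocation(400, 150_000_000, 50_000_000))  # 300.0 RN FTEs allocated to inpatients
```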

Risk-Adjustment of Quality Measures. Risk-adjustment was performed using Medstat's Disease Staging methodology (Gonnella, Hornbrook, and Louis 1984). Diseases are "staged" into four substages (no complications through death) based on standard UB-82/UB-92 information, including the patient's age, gender, admission type, admission source, and type of treatment (medical versus surgical). The disease staging methodology generated an estimated probability of death for every discharge; these probabilities were summed over a hospital's discharges to yield an estimate of the expected number of deaths for that hospital.

Medstat's complications-of-care (COC) software system uses ICD-9-CM diagnosis and procedure codes and other patient characteristics from a standard UB-82/UB-92 discharge record. For each complication, the COC algorithm defines conditions that must be present for a patient to be at risk; a patient can fall into more than one risk group and may be at risk for a number of complications. Medstat has constructed severity-adjusted models to predict the probability of each complication being present for a particular patient. The specific complications we examined were decubitus ulcers, pneumonia, and urinary tract infections.

Empirical Specification and Analytic Approach

Our analytic approach addressed three important weaknesses in prior studies of staffing and quality of care. First, in these studies, hospitals were generally assumed to differ only in the values of the measured attributes included as explanatory variables. Hospitals, however, are likely to have unmeasured attributes (e.g., different cultures and traditions) that may affect quality of care. These hospital-specific traits are almost surely correlated with the explanatory variables, and hence their exclusion leads to omitted variable bias. Our study controlled for hospital heterogeneity by incorporating hospital fixed effects (including an intercept for each hospital).

Second, previous studies are subject to another source of omitted variable bias in the assumption that quality of care is static, that is, explained solely by contemporaneous characteristics and circumstances. While contemporary circumstances affect quality of care, the organization's historical circumstances affect it as well—the dynamic effect. We included the lagged value of the dependent variable as a parsimonious way to represent the influence of past circumstances and control for this omitted variable bias. A significant coefficient for the lagged dependent variable indicates both the importance of the organization's history and that improvements in quality in one year resonate into subsequent years as well.

Third, previous studies typically assume that all hospital characteristics are strictly exogenous, that is, uncorrelated with the error term in all time periods. We did assume that market characteristics are strictly exogenous. But for all hospital variables, we allowed for the possibility of “feedback effects” which are most easily thought of as a type of endogeneity across time periods. For example, a change in quality of care in period [t] may feed back to changes in staffing in period [t+1]. Such feedback effects violate the typical assumption of strict exogeneity. We make the weaker assumption that staffing levels, financial performance, and hospital characteristics are “predetermined”: the error term is uncorrelated with the past and current value of the explanatory variable (or potential instruments), but the error term is potentially correlated with future values of the variable. Though we believe that the assumption that hospital variables are predetermined represents an improvement over the assumption of strict exogeneity, we do recognize that quality of care and some hospital variables might be simultaneously determined—exploring potential simultaneity is an important issue for future research.
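
Stated in standard moment-condition form (our restatement in generic notation, not notation taken from the article), for a regressor $x_{it}$ and idiosyncratic error $\varepsilon_{it}$:

$$
\underbrace{E[x_{is}\,\varepsilon_{it}] = 0 \;\; \text{for } s \le t}_{\text{predetermined}}
\qquad\text{versus}\qquad
\underbrace{E[x_{is}\,\varepsilon_{it}] = 0 \;\; \text{for all } s, t}_{\text{strictly exogenous}},
$$

so that after first-differencing, the lagged levels $x_{i1},\ldots,x_{i,t-1}$ satisfy $E[x_{is}\,\Delta\varepsilon_{it}]=0$ for $s \le t-1$ and remain valid instruments, even though future values of $x$ may respond to the current shock (the feedback effect described above).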

To address these problems, we applied a generalized method of moments (GMM) estimator (Arellano and Bond 1991) for our dynamic panel data model, which was designed to address problems that arise when the regressors (or their instruments) are not strictly exogenous. A standard panel-data method for addressing heterogeneity across cross-sectional units (that is, eliminating the intercepts for each hospital) is the "within-group" estimator, in which the OLS estimator is applied to data transformed by taking deviations from time-series means for each cross-sectional unit. In this circumstance, however, the within-group estimator is biased because the within transformation yields an error term containing the average error for each cross-sectional unit, and this error term is correlated with the deviations from the time means for the lagged value of the dependent variable (and other predetermined variables as well). Anderson and Hsiao (1981) proposed using a first-difference transformation to eliminate the hospital-specific intercepts, which leads to an error term amenable to consistent estimation using past values of the variables as instruments. The first-difference transformation is appealing because it relates changes in quality of care to changes in nurse staffing, changes in financial performance, changes in hospital characteristics, and changes in market characteristics (see note 3).
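
To illustrate the logic, the sketch below implements the simplest Anderson-Hsiao style version of this idea on simulated data: first-difference to remove the hospital intercepts, then instrument the differenced lagged dependent variable with the twice-lagged level. This is a stripped-down stand-in for the Arellano-Bond GMM estimator actually used in the study (which exploits all available lagged levels as instruments and handles predetermined covariates); all data and names here are hypothetical.

```python
import numpy as np

def anderson_hsiao_iv(y, X):
    """Sketch of an Anderson-Hsiao style estimator for the dynamic panel model
    y[i, t] = a_i + rho * y[i, t-1] + X[i, t] @ beta + e[i, t].

    First-differencing removes the hospital intercepts a_i; because the
    differenced lag Dy[i, t-1] is correlated with the differenced error, it is
    instrumented with the twice-lagged level y[i, t-2]. Covariates are treated
    as exogenous here for simplicity.

    y: (N, T) array; X: (N, T, K) array.
    """
    N, T = y.shape
    dy, dy_lag, y_lag2, dX = [], [], [], []
    for t in range(2, T):
        dy.append(y[:, t] - y[:, t - 1])          # differenced dependent variable
        dy_lag.append(y[:, t - 1] - y[:, t - 2])  # endogenous differenced lag
        y_lag2.append(y[:, t - 2])                # instrument: twice-lagged level
        dX.append(X[:, t, :] - X[:, t - 1, :])    # differenced covariates
    dy = np.concatenate(dy)
    W = np.column_stack([np.concatenate(dy_lag), np.vstack(dX)])   # regressors
    Z = np.column_stack([np.concatenate(y_lag2), np.vstack(dX)])   # instruments
    # Two-stage least squares: b = (W'PzW)^{-1} W'Pz dy, Pz = Z(Z'Z)^{-1}Z'
    ZtW = Z.T @ W
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    b = np.linalg.solve(ZtW.T @ ZtZ_inv @ ZtW, ZtW.T @ ZtZ_inv @ (Z.T @ dy))
    return b  # first element ~ rho, remaining elements ~ beta

# Tiny simulated check (hypothetical data, not the study sample).
rng = np.random.default_rng(0)
N, T, K = 400, 6, 2
a = rng.normal(size=N)
X = rng.normal(size=(N, T, K))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = a + 0.3 * y[:, t - 1] + X[:, t] @ np.array([0.5, -0.2]) + rng.normal(scale=0.5, size=N)
print(anderson_hsiao_iv(y, X).round(2))  # roughly [0.3, 0.5, -0.2]
```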

Our standardized mortality ratio equals observed in-hospital deaths divided by the expected number of in-hospital deaths (based on risk-adjustment performed by the Medstat Disease Staging risk-adjustment system). Thus, a mortality ratio greater than 1.0 indicates that the actual number of deaths exceeds the expected number, while a mortality ratio less than 1.0 indicates that the actual number of deaths was less than expected (ratios for complications are interpreted similarly). This measure of the dependent variable, however, has an error variance that depends on the number of expected and observed deaths. The standard error of the standardized mortality ratio equals the square root of observed deaths divided by expected deaths (Breslow and Day 1987). To normalize the error variance, we weighted the data by the mean of the inverse of this standard error within a panel, applying the same weight across years for a given hospital so that variation for a given hospital reflects changes in the variables, not changes in the weights across years. A similar weighting system was also appropriate for analyzing the complication ratios.
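
A minimal sketch of how these precision weights could be constructed (hypothetical counts and column names; not the authors' code):

```python
import numpy as np
import pandas as pd

# Hypothetical panel of observed and expected deaths for two hospitals over three years.
panel = pd.DataFrame({
    "hospital": ["A", "A", "A", "B", "B", "B"],
    "year":     [1990, 1991, 1992, 1990, 1991, 1992],
    "observed": [30, 28, 25, 110, 95, 90],
    "expected": [25.0, 26.0, 27.0, 100.0, 98.0, 97.0],
})

panel["smr"] = panel["observed"] / panel["expected"]          # standardized mortality ratio (O/E)
panel["se"] = np.sqrt(panel["observed"]) / panel["expected"]  # Breslow-Day standard error
# One weight per hospital: the mean of 1/SE across years, applied to every year,
# so within-hospital variation reflects the variables rather than changing weights.
panel["weight"] = panel.groupby("hospital")["se"].transform(lambda s: (1.0 / s).mean())
print(panel[["hospital", "year", "smr", "weight"]])
```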

The specification assumed quality of care in the current year was a linear function of the hospital-specific intercepts, the previous year's value of quality of care, the current year's values for staffing levels (measured by FTEs per 1,000 inpatient days) of RNs, LPNs, and nonnurses (as well as their squares and interactions), operating margin, hospital characteristics, and market characteristics. Our specification also included yearly dummy variables to measure secular changes in quality of care common to all hospitals. Teaching status was not included because there were too few changes in teaching status in our sample over the period of the study to evaluate the impact of that change.
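
In generic notation (our restatement of the specification just described, not an equation reproduced from the article):

$$
q_{it} = \alpha_i + \rho\, q_{i,t-1} + f(\mathrm{RN}_{it}, \mathrm{LPN}_{it}, \mathrm{NN}_{it}) + \gamma\,\mathrm{OM}_{it} + \mathbf{h}_{it}'\boldsymbol{\delta} + \mathbf{m}_{it}'\boldsymbol{\theta} + \lambda_t + \varepsilon_{it},
$$

where $q_{it}$ is the risk-adjusted quality ratio for hospital $i$ in year $t$, $\alpha_i$ is the hospital-specific intercept, $f(\cdot)$ collects the three staffing levels together with their squares and pairwise interactions, $\mathrm{OM}$ is the operating margin, $\mathbf{h}_{it}$ and $\mathbf{m}_{it}$ are the hospital and market characteristics, and $\lambda_t$ are the year dummies.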

We undertook two specification tests developed by Arellano and Bond (1991) to test whether their GMM estimation method was suitably applied. One specification test was for overidentifying restrictions. The second specification test was for second-order autocorrelation in the residuals (if the error term is autocorrelated, then lagged levels of the dependent variable cannot serve as valid instruments for the lagged difference of the dependent variable). All our estimation results satisfied these specification tests.

Results

Year-by-year descriptive statistics can be found in the Appendix.

Mortality Ratio

Table 2 displays the coefficients (and standard errors) for the mortality ratio. Coefficients for the dynamic panel data model are presented in the last column with the first two columns of coefficients indicating the results for the OLS and within-group (fixed-effects) models to show the extent to which our conclusions would be different had we assumed hospital homogeneity or had we controlled for heterogeneity but assumed a static model. There were more observations available in the OLS and within-group analyses since the lagged dependent variable was not included and we did not take first-differences.

Table 2.

Estimation Results Measuring Quality of Care with the Mortality Ratio (a)

Variable OLS Within-Group Dynamic Panel Model
Mortality Ratio (t−1) 0.196***
(0.038)
RN FTEs per 1,000 IPD −0.023 −0.026 −0.087***
(0.017) (0.017) (0.026)
RN FTEs per 1,000 IPD^2 −0.006* 0.002 0.009**
(0.003) (0.002) (0.003)
LPN FTEs per 1,000 IPD −0.071 0.003 0.019
(0.039) (0.044) (0.063)
LPN FTEs per 1,000 IPD^2 0.014 0.046** 0.012
(0.016) (0.017) (0.022)
Nonnurse FTEs per 1,000 IPD −0.008 −0.004 −0.006
(0.006) (0.006) (0.009)
Nonnurse FTEs per 1,000 IPD^2 −6.1E-5 2.3E-4 4.9E-5
(2.0E-4) (1.8E-4) (2.8E-4)
RN FTEs × LPN FTEs 0.016 −0.003 0.005
(0.009) (0.009) (0.011)
RN FTEs × Nonnurse FTEs 0.003* 8.3E-5 −2.4E-4
(0.001) (0.001) (0.002)
LPN FTEs × Nonnurse FTEs −0.003 −0.007* −0.005
(0.003) (0.003) (0.005)
Operating margin −0.001* 4.3E-4 3.5E-4
(3.5E-4) (3.6E-4) (0.001)
Case-mix index −0.121*** −0.202*** −0.032
(0.028) (0.051) (0.093)
Saidin index 0.010*** −0.007 −0.009
(0.003) (0.004) (0.007)
System 0.021** 0.005 −0.001
(0.008) (0.011) (0.016)
Public −0.054*** −0.016 0.232
(0.012) (0.068) (0.145)
Profit −0.120*** −0.009 0.061
(0.012) (0.039) (0.084)
Payer mix 5.5E-4 3.9E-5 −0.001
(2.8E-4) (0.001) (0.001)
Beds 1.6E-4*** −2.7E-4** 1.0E-4
(2.6E-5) (9.7E-5) (2.2E-4)
Hospital use 5.6E-6 −1.6E-6 −1.6E-5
(1.5E-5) (5.6E-5) (7.2E-5)
Herfindahl index 7.3E-4 0.001 4.1E-4
(4.1E-4) (0.003) (0.004)
Number of HMOs −0.002 −0.012*** −0.011***
(0.001) (0.002) (0.003)
HMO penetration 0.004*** 3.2E-4 −0.001
(0.001) (0.001) (0.001)
HMOs × HMO penetration −2.3E-4*** 3.9E-5 9.3E-5
(5.6E-5) (8.7E-5) (1.1E-4)
Dummy variables for years X X X
Number of observations 2,176 2,176 1,437
R^2 = 0.416 (OLS); R^2 = 0.570 (within-group)
a. Standard errors (in parentheses) beneath the coefficients.

* Significant at the .05 level; ** significant at the .01 level; *** significant at the .001 level.

The OLS and within-group models suggest few statistically significant coefficients for nurse staffing, while the dynamic panel data model indicates statistically significant coefficients for RN FTEs per 1,000 inpatient days and its square. These coefficients indicate a nonlinear relationship between RN staffing and the mortality ratio, in which increases in the RN staffing level decrease the mortality ratio for staffing levels up to 4.62 RN FTEs per 1,000 inpatient days (the 88th percentile value in this sample).
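
Using the coefficient numbering of note 5, the implied marginal effect of RN staffing on the mortality ratio, and the staffing level at which that effect reaches zero, are:

$$
\frac{\partial\,\mathrm{MR}}{\partial\,\mathrm{RN}} = \beta_2 + 2\beta_3\,\mathrm{RN} + \beta_8\,\mathrm{LPN} + \beta_9\,\mathrm{NN},
\qquad
\mathrm{RN}^{*} = -\,\frac{\beta_2 + \beta_8\,\mathrm{LPN} + \beta_9\,\mathrm{NN}}{2\beta_3}.
$$

The 4.62 figure corresponds to this turning point; its exact value depends on the unrounded coefficient estimates and on the LPN and nonnurse staffing levels at which the interaction terms are evaluated.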

Of obvious importance in the dynamic panel data model is the dynamic effect observed: the coefficient for the lagged value of the mortality ratio suggests a substantial degree of persistence in the ratio over time. Further, the conclusions concerning hospital characteristics differ substantially between the dynamic panel model and the OLS and within-group models. In Table 2, the OLS estimates suggest that case-mix index, Saidin index, system membership, public ownership, for-profit ownership, and bed size affect the mortality ratio; controlling for hospital-specific effects in the within-group model suggests that only bed size and case mix have a significant impact on the mortality ratio. But the dynamic panel data model indicates that changes in these hospital characteristics have no statistically significant impact on the mortality ratio.

Table 3.

Dynamic Panel Model Estimation Results Measuring Quality of Care with the Pneumonia, Urinary Tract Infection, and Decubitus Ulcer Complication Ratios (a, b)

Variable Pneumonia Complication Ratio Urinary Tract Infection Complication Ratio Decubitus Ulcer Complication Ratio
Complication Ratio (t−1) 0.135* 0.276*** 0.154**
(0.056) (0.050) (0.054)
RN FTEs per 1,000 IPD 0.032 0.053 0.094
(0.059) (0.069) (0.052)
RN FTEs per 1,000 IPD^2 −0.009 −0.008 −0.017**
(0.007) (0.008) (0.006)
LPN FTEs per 1,000 IPD 0.160 0.196 −0.037
(0.148) (0.170) (0.136)
LPN FTEs per 1,000 IPD^2 0.080 −0.007 0.097
(0.051) (0.059) (0.057)
Nonnurse FTEs per 1,000 IPD −0.024 −0.010 −0.011
(0.018) (0.023) (0.017)
Nonnurse FTEs per 1,000 IPD^2 0.001 0.001 −2.7E-5
(4.2E-4) (0.001) (4.9E-4)
RN FTEs × LPN FTEs −0.047* 0.007 −0.023
(0.023) (0.025) (0.029)
RN FTEs × Nonnurse FTEs 0.005 0.002 0.005
(0.003) (0.004) (0.003)
LPN FTEs × Nonnurse FTEs −0.006 −0.012 0.001
(0.011) (0.013) (0.012)
Operating margin −0.004*** −0.003 −0.001
(0.001) (0.001) (0.001)
Case-mix index −0.261 0.264 0.424*
(0.198) (0.224) (0.180)
Saidin index −0.031** −0.024 0.005
(0.012) (0.014) (0.011)
System −0.041 −0.062 −0.067*
(0.031) (0.038) (0.030)
Public 0.620 −0.270 −0.365
(0.358) (0.545) (0.212)
Profit 0.071 −0.326 −0.155
(0.164) (0.245) (0.170)
Payer mix 0.001 0.002 0.001
(0.002) (0.002) (0.002)
Beds −2.2E-4 −4.7E-4 −0.001***
(3.6E-4) (4.2E-4) (3.7E-4)
Hospital use 1.6E-7 −2.7E-4 4.7E-5
(1.2E-4) (1.5E-4) (1.2E-4)
Herfindahl index −0.012 0.002 −0.010
(0.009) (0.009) (0.006)
Number of HMOs 0.006 0.032*** 0.016**
(0.006) (0.007) (0.005)
HMO penetration −0.003 0.004 1.5E-4
(0.003) (0.003) (0.002)
HMOs × HMO penetration −1.3E-4 −0.001*** 2.0E-5
(2.1E-4) (2.3E-4) (1.9E-4)
Dummy variables for years X X X
Observations 864 984 945
a. Standard errors (in parentheses) beneath the coefficients.

* Significant at the .05 level; ** significant at the .01 level; *** significant at the .001 level.

The only other statistically significant effect observed in the dynamic panel model is that for the number of HMOs (health maintenance organizations): assuming a mortality ratio of 1.0 (see note 4) and the median HMO penetration level in the sample of 16.5 percent, the estimate suggests that adding an HMO decreased the mortality ratio by about 1 percent. Note that the within-group model implies essentially the same effect for the number of HMOs, but the OLS model implies a smaller effect, with the effect becoming larger as HMO penetration increases (rather than smaller, as in the dynamic panel and within-group models).
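
As an arithmetic check using the dynamic panel coefficients in Table 2 (number of HMOs, −0.011; HMOs × HMO penetration, 9.3E-5) at 16.5 percent penetration:

$$
-0.011 + (9.3\times10^{-5})(16.5) \approx -0.0095,
$$

which, against a mortality ratio of 1.0, is a decrease of roughly 1 percent per additional HMO.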

Complications

Results for the analyses of individual complications are given in Table 3. The sample sizes differed for each complication because, in addition to the previous exclusions, we excluded observations if the expected number of complications was fewer than 15 in time periods [t] and [t−1].

With regard to staffing, the coefficient for the RN × LPN interaction term indicates that, at higher levels of LPN staffing, increasing RN staffing decreases the pneumonia complication ratio by more (or increases it by less, depending on the level of RN staffing). The pattern of coefficients for the RN staffing variables suggests that increasing staffing levels increases complication ratios at lower levels of RN staffing, but decreases complication ratios at higher levels of RN staffing.

The lagged dependent variable is statistically significant across complications, suggesting an important effect of the past on the current complication ratio. In addition, hospital characteristics, financial characteristics, and market characteristics have statistically significant effects, but the effects differ across complications. For example, the coefficient for the system dummy variable is negative in each case, but statistically significant only for the decubitus ulcer complication ratio. Similarly, the coefficient for bed size is negative for all three complications, but statistically significant for the decubitus ulcer complication ratio only. The coefficient for the operating margin is negative in each case and statistically significant for the pneumonia complication ratio (indicating that an increase in the operating margin of 1 percentage point would decrease the pneumonia complication ratio by 0.004).

For the case-mix index and Saidin index, however, there are differences in effects across the complication ratios. The coefficient for case-mix index is positive and statistically significant only for decubitus ulcers, while the coefficient for Saidin index is negative and statistically significant only for pneumonia complications.

Among the market characteristics, the coefficient for the number of HMOs is positive in each case and significant for both the urinary tract infection and decubitus ulcer complication ratios. Again assuming a complication ratio of 1.0 and the median HMO penetration level of 16.5 percent, the coefficients suggest that adding another HMO increases the urinary tract infection complication ratio and the decubitus ulcer complication ratio by 1.6 percent (0.4 percent for the pneumonia complication ratio). The marginal effect is essentially unchanged at different levels of HMO penetration for the decubitus ulcer complication ratio, but is considerably larger at lower levels of HMO penetration (and smaller at higher levels of HMO penetration) for the urinary tract infection complication ratio.
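
The same calculation with the Table 3 coefficients at 16.5 percent penetration gives:

$$
\mathrm{UTI:}\; 0.032 - (0.001)(16.5) \approx 0.016, \qquad
\mathrm{Decubitus:}\; 0.016 + (2.0\times10^{-5})(16.5) \approx 0.016, \qquad
\mathrm{Pneumonia:}\; 0.006 - (1.3\times10^{-4})(16.5) \approx 0.004,
$$

consistent with the 1.6 percent and 0.4 percent figures cited above.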

Summary of Marginal Effects

Table 4 illustrates the marginal effect of a one-unit increase in RN FTEs per 1,000 inpatient days at three different levels of RN staffing (the 25th, 50th, and 75th percentile values of RN staffing in the sample for the mortality ratio) implied by the dynamic panel model as well as the OLS and within-group models. Standard errors of the marginal effects are provided as well (see note 5). For example, for the mortality ratio, the dynamic panel model and the within-group model imply essentially the same marginal effect at the median level of RN staffing: a 1.4 percent to 1.5 percent decrease in the mortality ratio when the RN staffing level increases by one unit (given an initial mortality ratio of 1.0). The dynamic panel model marginal effects are much larger in magnitude at the 25th percentile and much smaller at the 75th percentile values of RN staffing, unlike the within-group estimates, which change little at different staffing levels. The OLS marginal effects are largest at the 75th percentile value of RN staffing and smallest at the 25th percentile.

Table 4.

Illustration of the Effect of a One-Unit Increase in RN FTEs per 1,000 Inpatient Days across Measures of Quality of Care (a, b)

Percentile Value for RN Staffing per 1,000 Inpatient Days (25th = 2.66, 50th = 3.34, 75th = 4.02)

Measure of Quality / Estimation Method 25th 50th 75th
Mortality Ratio
OLS −0.022*** −0.030*** −0.038***
(0.006) (0.005) (0.007)
Within-group −0.016** −0.014** −0.011
(0.006) (0.005) (0.006)
Dynamic panel model −0.027*** −0.015* −0.002
(0.008) (0.007) (0.008)
Pneumonia Complication Ratio
OLS −0.064*** −0.041*** −0.018
(0.014) (0.011) (0.015)
Within-group −0.012 −0.019 −0.025
(0.013) (0.011) (0.013)
Dynamic panel model 1.6E-4 −0.012 −0.023
(0.017) (0.013) (0.015)
Urinary Tract Infection Complication Ratio
OLS −0.056*** −0.066*** −0.076***
(0.015) (0.012) (0.015)
Within-group 2.1E-4 0.005 0.010
(0.015) (0.012) (0.014)
Dynamic panel model 0.024 0.013 0.002
(0.021) (0.016) (0.017)
Decubitus Ulcer Complication Ratio
OLS −0.050*** −0.045*** −0.040**
(0.012) (0.010) (0.015)
Within-group 0.018 0.014 0.011
(0.011) (0.010) (0.012)
Dynamic panel model 0.023 6.8E-4 −0.024
(0.016) (0.013) (0.015)
a. Standard errors (in parentheses) beneath the estimates of the marginal effects.

b. Marginal effects are calculated assuming 0.55 LPN FTEs per 1,000 inpatient days and 9.28 nonnurse FTEs per 1,000 inpatient days (the median values in our sample).

* Significant at the .05 level; ** significant at the .01 level; *** significant at the .001 level.

Differences in the patterns of marginal effects between the models are even more pronounced for the complication ratios. With the exception of the 75th percentile value of RN staffing for the pneumonia ratio, the OLS marginal effects are highly significant and larger in magnitude than those of the other two models. After controlling for time-invariant hospital-specific effects (the within-group or dynamic panel model), the estimated marginal effects shrink in magnitude or become positive and are no longer statistically significant.

Although the marginal effects for the complication ratios in the dynamic panel model are not significant, the pattern of results is unexpected. For all three complications, at the 25th percentile of RN staffing, the marginal effects are positive. For the urinary tract infection ratio, the marginal effect approaches zero as the staffing level rises to the 75th percentile, while for the pneumonia and decubitus ulcer complication ratios, the marginal effects become negative and larger in magnitude at higher levels of RN staffing. This unexpected pattern in marginal effects is shared by the within-group model for the pneumonia and decubitus ulcer complication ratios.
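
To connect Tables 2 and 4, the following minimal sketch applies the formula in note 5 to the rounded dynamic panel coefficients reported in Table 2; because the published marginal effects presumably use unrounded estimates, small discrepancies are expected.

```python
def rn_marginal_effect(rn_from: float, rn_to: float,
                       lpn: float, nonnurse: float,
                       b_rn: float, b_rn2: float,
                       b_rn_lpn: float, b_rn_nn: float) -> float:
    """Change in the mortality ratio for an increase in RN FTEs per 1,000
    inpatient days from rn_from to rn_to, holding LPN and nonnurse staffing
    fixed (the formula in note 5)."""
    return (b_rn * (rn_to - rn_from)
            + b_rn2 * (rn_to ** 2 - rn_from ** 2)
            + b_rn_lpn * lpn * (rn_to - rn_from)
            + b_rn_nn * nonnurse * (rn_to - rn_from))

# Rounded dynamic panel coefficients from Table 2 and median LPN/nonnurse staffing.
me = rn_marginal_effect(rn_from=2.66, rn_to=3.66, lpn=0.55, nonnurse=9.28,
                        b_rn=-0.087, b_rn2=0.009, b_rn_lpn=0.005, b_rn_nn=-2.4e-4)
print(round(me, 3))  # about -0.030, close to the -0.027 reported in Table 4
```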

Discussion

Using panel data and an econometric model that improved upon prior studies, we found that increasing nurse staffing reduced the mortality ratio, a result consistent with earlier studies that used cross-sectional data (Scott, Forrest, and Brown 1976; Hartz et al. 1989; Kuhn et al. 1991). Unlike these studies, however, we found that the effect of increased RN staffing depended strongly on the current level of RN staffing; our results suggest there are levels of RN staffing beyond which further increases may lead to no measurable decrease in the mortality ratio.

We can only speculate on the organizational processes that might underlie these findings. For example, the diminishing marginal effect of RN staffing on the mortality ratio might be explained by the notion that adding RNs to a less well staffed facility improves RN surveillance, resulting in earlier recognition of and intervention in potential problems and thus averting deaths. In contrast, adding RNs to well-staffed hospitals leads to no further reduction in the mortality ratio, perhaps because of the work dynamics in the well-staffed institution: less pressure and urgency, and an assumption (perhaps erroneous) that other nurses are taking appropriate action. Another potential explanation is that as RN staffing increases, the additional RNs perform fewer critical tasks; in other words, the marginal product of adding the second nurse is lower than that of adding the first.

Increasing the number of HMOs in the hospital's market was another significant predictor of lowered mortality ratio. This may have resulted from increasing numbers of patients being subject to HMOs' strict utilization management strategies, clinical guidelines, and gate-keeping, which reduced length of stay and put pressure on hospitals to discharge patients to subacute or nursing home facilities, where they then died (Rosenthal and Newhouse 2002).

When we applied statistical models similar to those used in prior studies that assumed hospitals were homogeneous, we found large, statistically significant decreases in complication ratios as RN staffing increased. But these effects may be misleading: the large, statistically significant effects disappear after controlling for hospital-specific effects. In fact, moving from OLS to the dynamic panel methodology changes the qualitative results for six of the nine estimates involving the complication ratios. From a methodological perspective, this points out how different assumptions about the nature of the staffing-quality relationship and the selection of an estimation method can influence the findings. It further points out how using a method that controls for an important source of bias—hospital-specific effects—leads to different conclusions about the impact of nurse staffing on quality of care.

In contrast to the mortality ratio, where adding HMOs reduced the ratio, adding HMOs increased both the urinary tract infection ratio and the decubitus ulcer complication ratio. This finding is counterintuitive, since with the reduced length of stay wrought by managed care utilization management, one would expect that complications would become apparent following discharge rather than during increasingly short hospital stays. However, other cost-cutting efforts that were not measured in our study, for example, reduction in the availability of critical support services, may have had an impact on the work environment and on nurses' ability to adequately maintain the vigilance necessary to prevent complications from developing. In addition, the results may also reflect unmeasured case-mix differences due to HMOs' more aggressive medical management practices.

Conclusion

There are some limitations of our research. First, our findings need further confirmation in larger studies. Second, because our measure of operating expenses—a key data element in the computation of operating margin—includes costs related to nonpatient care activities, operating expenses may be overstated and hospital profitability may be understated. Finally, we recognize that our findings are conditional on the assumption of a "correct" risk-adjustment model. Although Medstat's disease-staging methodology is a commonly used risk-adjustment strategy, ours is the first study of nurse staffing and quality in which it has been used. The use of secondary administrative data sources has been criticized, particularly when they are used to identify patient complications (Lawthers et al. 2000). In particular, standard ICD-9-CM coding practices acknowledge the principal diagnosis as the condition responsible for hospitalization, while a secondary diagnosis is generally used as a trigger for software logic to designate whether a patient record contains a potential complication of hospital care. While the Medstat complications-of-care algorithm was developed to identify complications that developed during hospitalization, the HCUP data from which we derived our measures of quality did not distinguish whether a complication was present on admission, in which case it more correctly should be considered a comorbidity. The limitations of the risk-adjustment methodology may have contributed to the unexpected pattern of effects of increasing RN staffing on the complication ratios. Our findings also suggest the need for additional theoretical development in the identification of complications that are truly sensitive to nurse staffing. In-depth examination of unit-level nursing care processes, perhaps using qualitative methods, may help identify outcomes that are more sensitive to nurse staffing. However, the likelihood is small that data about such outcomes would be available in administrative datasets as they are currently structured.

Nevertheless, our study has important methodological and policy implications. The study is the first to document the strong impact of history (i.e., the dynamic model operationalized through the inclusion of a lagged value of the quality variable) on all quality indicators. In addition, by using a longitudinal dataset to evaluate change and by applying a generalized method of moments analytic technique, our study addresses possible omitted variable bias and feedback effects, neither of which has been considered in prior studies of nurse staffing and hospital quality. Further, the results of the study clearly demonstrate that the models tested, and the assumptions underlying those models, have a substantial impact on the findings. For all quality variables, there are stark differences between the results of the dynamic panel model and those of either the OLS or within-group (fixed-effects) models.

Although our study is the first to examine the relationship between change in nurse staffing and change in quality of care, like earlier studies, we continue to find mixed results. Our findings indicate the clear benefit of increasing nurse staffing to reduce hospital mortality; results are less clear for complications, and the reasons for the differences are not immediately apparent. Improvements in risk-adjustment methodologies, increasing the availability of more complete and reliable data elements about nurse staffing in large secondary databases, and identification and development of quality measures that are more sensitive to variations in nursing care are critical to advancing knowledge in the field, and may yield more consistent findings in studies examining the relationships between nurse staffing and quality. Further, even with adequate risk-adjustment, “clean” data, and appropriate outcomes, the quality and quantity of the work actually performed by RNs needs to be taken into account in explaining how RNs affect patient care quality. In an environment of a progressively severe nursing shortage, policy decisions related to effective and efficient deployment of an increasingly scarce resource—registered nurses—and how change in nurse staffing affects change in quality of care could not be more important.

Acknowledgments

This research was supported by grant no. 1R01HS10135 from the Agency for Healthcare Research and Quality.

Appendix

Descriptive Statistics for Final Sample for All Years (a)

1990 1991 1992 1993 1994 1995
N=373 N=366 N=361 N=357 N=361 N=358
Mean Mean Mean Mean Mean Mean
Hospital Characteristics
Case-mix index 1.25 (0.18) 1.28 (0.21) 1.31 (0.21) 1.31 (0.22) 1.31 (0.22) 1.33 (0.23)
High-tech services 2.93 (2.31) 3.07 (2.34) 3.22 (2.34) 3.41 (2.36) 3.63 (2.44) 3.70 (2.38)
Payer mix (%) 52.0 (13.4) 53.9 (13.2) 55.9 (13.6) 56.9 (14.2) 57.2 (14.1) 57.2 (13.8)
Number of beds 197 (175) 196 (176) 197 (175) 198 (176) 192 (171) 190 (172)
Occupancy (%) 50.2 (17.8) 49.6 (17.9) 48.7 (17.3) 47.3 (16.4) 45.5 (16.1) 44.9 (15.9)
Not-for-profit hospitals (%) 67.02 67.21 67.59 66.94 67.59 68.15
For-profit hospitals (%) 13.94 14.20 14.68 15.68 14.40 14.24
Public hospitals (%) 19.03 18.6 17.7 17.4 18.0 17.6
MSA hospitals (%) 64.34 64.48 65.37 66.11 65.10 65.64
System affiliated (%) 41.82 42.0 43.5 46.2 52.3 55.3
Market Characteristics
HSA hospital use 952 (321) 923 (324) 902 (311) 863 (308) 820 (296) 779 (288)
Herfindahl index 17.8 (15.8) 17.8 (15.7) 18.1 (16.1) 18.1 (16.2) 18.3 (16.3) 18.4 (16.4)
No. of HMOs 7.21 (6.39) 7.32 (6.52) 7.48 (6.48) 7.53 (6.02) 7.27 (5.66) 8.06 (5.33)
HMO penetration 13.9 (10.8) 14.7 (11.2) 15.7 (12.0) 16.6 (12.5) 18.9 (13.) 21.8 (14.2)
Financial Performance
Operating margin −2.38 (13.6) −1.69 (11.9) −0.01 (10.9) −0.07 (11.9) 0.80 (11.5) 0.90 (12.7)
Staff per 1,000 Inpatient Days
RN FTEs 3.02 (1.03) 3.14 (1.07) 3.20 (0.98) 3.36 (1.08) 3.50 (1.05) 3.60 (1.07)
LPN FTEs 0.68 (0.44) 0.69 (0.48) 0.66 (0.45) 0.64 (0.46) 0.59 (0.40) 0.59 (0.41)
Nonnurse FTEs 8.97 (2.76) 9.35 (2.80) 9.59 (2.78) 10.11 (3.20) 10.23 (3.23) 10.76 (3.50)
Quality
Mortality ratio 1.20 (0.24) 1.15 (0.21) 1.09 (0.22) 1.05 (0.22) 0.97 (0.22) 0.90 (0.20)
Pneumonia ratio 0.61 (0.36) 0.65 (0.35) 0.72 (0.39) 0.81 (0.41) 0.90 (0.40) 0.97 (0.45)
Sample size 217 212 214 215 216 219
Decubitus ulcer ratio 0.48 (0.31) 0.51 (0.38) 0.58 (0.42) 0.62 (0.43) 0.69 (0.46) 0.74 (0.42)
Sample size 251 244 242 236 234 233
Urinary tract infection ratio 1.18 (0.43) 1.17 (0.43) 1.17 (0.48) 1.14 (0.46) 1.11 (0.48) 0.98 (0.43)
Sample size 256 245 247 247 246 244
a. Standard deviations are indicated (in parentheses) beside the means.

Note: HSA = health service area; MSA = metropolitan statistical area.

Notes

1. We excluded observations where the number of RN FTEs per 1,000 inpatient days exceeded 8.3 (four standard deviations above the mean in our sample and just above the upper range observed in Kovner and Gergen [1998]) and observations where the number of nonnurse FTEs per 1,000 inpatient days exceeded 33 (the upper range of values observed in Kovner and Gergen [1998]).

2. Needleman et al. (2001) derived an allocation rule based on California data, which they applied to hospitals from all states. However, we were reluctant to assume that the California staffing allocation model applied uniformly to other states.

3. The simple first-difference estimator (not incorporating the lagged value of the dependent variable) may fail to detect effects when the adjustment period is longer than the period over which the first difference is taken (Baker, Benjamin, and Strange 1999). We believe our approach is appropriate because (i) in-hospital mortality and complications should be affected immediately by changes in staffing levels, not after a long adjustment period, and (ii) the influence of the past is incorporated through the lagged value of the dependent variable.

4. Note that given a mortality ratio of 1.0, the marginal effect represents both the percentage change in the mortality ratio and the percentage change in actual mortality.

5. The marginal effects for RN FTEs per 1,000 inpatient days are calculated using the median values of LPN FTEs per 1,000 inpatient days (0.55) and nonnurse FTEs per 1,000 inpatient days (9.28) for the mortality ratio sample (Table 3). Given the mortality ratio dynamic panel model

$$
\begin{aligned}
\text{Mortality Ratio}_{t} = {}& \alpha_i + \beta_1\,\text{Mortality Ratio}_{t-1} + \beta_2\,\text{RN} + \beta_3\,\text{RN}^2 + \beta_4\,\text{LPN} + \beta_5\,\text{LPN}^2 \\
&+ \beta_6\,\text{NN} + \beta_7\,\text{NN}^2 + \beta_8\,(\text{RN} \times \text{LPN}) + \beta_9\,(\text{RN} \times \text{NN}) + \cdots,
\end{aligned}
$$

where RN, LPN, and NN denote RN, LPN, and nonnurse FTEs per 1,000 inpatient days, the marginal effect of a one-unit increase in RN FTEs per 1,000 inpatient days from 2.66 to 3.66 equals $\beta_2 + \beta_3(3.66^2 - 2.66^2) + \beta_8(0.55) + \beta_9(9.28)$.

References

1. Al-Haider S, Wan TTH. "Modeling Organizational Determinants of Hospital Mortality." Health Services Research. 1991;26(3):302–23.
2. American Nurses Association. Implementing Nursing's Report Card: A Study of RN Staffing, Length of Stay and Patient Outcomes. Washington, DC: American Nurses Publishing; 1997.
3. American Nurses Association. Nurse Staffing and Patient Outcomes in the Inpatient Hospital Setting. Washington, DC: American Nurses Association; 2000.
4. American Nurses Association. Analysis of American Nurses Association Staffing Survey. Warwick, RI: Cornerstone Communications Group; 2001.
5. Anderson T, Hsiao C. "Estimation of Dynamic Models with Error Components." Journal of the American Statistical Association. 1981;76(375):598–606.
6. Arellano M, Bond S. "Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations." Review of Economic Studies. 1991;58(2):277–97.
7. Ballard K, Grey R, Knauf R, Uppal P. "Measuring Variations in Nursing Care per DRG." Nursing Management. 1993;24(4):33–41.
8. Baker M, Benjamin D, Strange S. "The Highs and Lows of the Minimum Wage Effect: A Time-Series Cross-section Study of the Canadian Law." Journal of Labor Economics. 1999;17(2):318–50.
9. Breslow N, Day N. Statistical Methods in Cancer Research, Volume II: The Design and Analysis of Cohort Studies. Lyon, France: International Agency for Research on Cancer; 1987.
10. Gonnella J, Hornbrook M, Louis D. "Staging of Disease: A Case-Mix Measurement." Journal of the American Medical Association. 1984;251(5):637–44.
11. Hartz A, Krakauer H, Kuhn E, Young M, Jacobson S, Gay G, Muenz L, Katzoff M, Bailey R, Rimm A. "Hospital Characteristics and Mortality Rates." New England Journal of Medicine. 1989;321(25):1720–4. doi: 10.1056/NEJM198912213212506.
12. Kovner C, Gergen P. "Nurse Staffing Levels and Adverse Events Following Surgery in U.S. Acute Care Hospitals." Image. 1998;30(4):315–21.
13. Kovner C, Jones C, Zhan C, Gergen P, Basu J. "Nurse Staffing and Post-Surgical Adverse Events: An Analysis of Administrative Data from a Sample of U.S. Hospitals, 1990–1996." Health Services Research. 2002;37(3):611–29. doi: 10.1111/1475-6773.00040.
14. Kuhn E, Hartz A, Gottlieb M, Rimm A. "The Relationship of Hospital Characteristics and the Results of Peer Review in Six Large States." Medical Care. 1991;29(10):1028–38. doi: 10.1097/00005650-199110000-00008.
15. Lawthers A, McCarthy E, Davis R, Peterson L, Palmer R, Iezzoni L. "Identification of In-Hospital Complications from Claims Data: Is It Valid?" Medical Care. 2000;38(8):785–95. doi: 10.1097/00005650-200008000-00003.
16. Lichtig L, Knauf R, Milholland D. "Some Impacts of Nursing on Acute Care Hospital Outcomes." Journal of Nursing Administration. 1999;29(2):25–32. doi: 10.1097/00005110-199902000-00008.
17. Makuc D, Haglund B, Ingram D, Kleinman J, Feldman J. "The Use of Health Service Areas for Measuring Provider Availability." Journal of Rural Health. 1991;7(4):347–56.
18. Manheim L, Feinglass J, Shortell S, Hughes F. "Regional Variation in Medicare Hospital Mortality." Inquiry. 1992;29(1):55–66.
19. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. "Nurse Staffing and Patient Outcomes in Hospitals." Final Report for the Health Resources and Services Administration, Contract No. 230-99-0021; 2001. Available online at http://bhpr.hrsa.gov/nursing/staffstudy.htm.
20. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. "Nurse-Staffing Levels and the Quality of Care in Hospitals." New England Journal of Medicine. 2002;346(22):1715–22. doi: 10.1056/NEJMsa012247.
21. Rosenthal M, Newhouse J. "Managed Care and Efficient Rationing." Journal of Health Care Finance. 2002;28(4):1–10.
22. Scott W, Forrest W, Brown B. "Hospital Structure and Postoperative Mortality and Morbidity." In: Shortell S, Brown M, editors. Inquiry: Organizational Research in Hospitals. Chicago: Blue Cross; 1976.
23. Silber J, Rosenbaum P, Ross R. "Comparing the Contributions of Groups of Predictors: Which Outcomes Vary with Hospital Rather Than Patient Characteristics?" Journal of the American Statistical Association. 1995;90(429):7–18.
24. Silber J, Rosenbaum P, Schwartz J, Ross R, Williams S. "Evaluation of the Complication Rate as a Measure of Quality of Care in Coronary Artery Bypass Surgery." Journal of the American Medical Association. 1995;274(4):317–23.
25. Spetz J, Baker L. Has Managed Care Affected the Availability of Medical Technology? San Francisco: Public Policy Institute of California; 1999.
26. Wan TTH, Shukla R. "Contextual and Organizational Correlates of the Quality of Hospital Nursing Care." Quality Review Bulletin. 1987;13(2):61–5. doi: 10.1016/s0097-5990(16)30108-7.
