Abstract
Objective
To investigate the effects of Medicare's Prospective Payment System (PPS) for skilled nursing facilities (SNFs) and associated rate changes on quality of care as represented by staffing ratios and regulatory deficiencies.
Data Sources
Online Survey, Certification and Reporting (OSCAR) data from 1996–2000 were linked with Area Resource File (ARF) and Medicare Cost Report data to form a panel dataset.
Study Design
A difference-in-differences model was used to assess effects of the PPS and the BBRA (Balanced Budget Refinement Act) on staffing and deficiencies, a design that allows the separation of the effects of the policies from general trends. Ordinary least squares and negative binomial models were used.
Data Collection Methods
The OSCAR and Medicare Cost Report data are self-reported by nursing facilities; ARF data are publicly available. Data were linked by provider ID and county.
Principal Findings
We find that professional staffing decreased and regulatory deficiencies increased with PPS, and that both effects were mitigated with the BBRA rate increases. The effects appear to increase with the percent of Medicare residents in the facility except, in some cases, at the highest percentage of Medicare. The findings on staffing are statistically significant. The effects on deficiencies, though exhibiting consistent signs and magnitudes with the staffing results, are largely insignificant.
Conclusions
Medicare's PPS and the associated rate cuts for SNFs have had a negative effect on staffing and regulatory compliance. Further research is necessary to determine whether these changes are associated with worse outcomes. Findings from this investigation could help guide policy modifications that support the provision of quality nursing home care.
Keywords: Nursing home quality, Medicare, prospective payment, staffing, deficiencies
The Balanced Budget Act of 1997 (BBA) mandated the largest decrease in payments for nursing home residents covered by Medicare since 1965. The BBA fundamentally changed the way Medicare pays skilled nursing facilities for the 9 percent of residents covered by Medicare. The new system—the Prospective Payment System (PPS)—which began in 1998, pays nursing homes on a prospective basis instead of through a retrospective cost-based system. Nursing homes are paid a fixed amount per day, with adjustments for health status, but no extra payments for additional services are made. As with most prospective payment systems, the goals were to reduce the rapid growth in spending while maintaining quality of care.
Total Medicare payments to skilled nursing facilities fell in the year after prospective payment was introduced. Reports of financial difficulties began appearing in the media, and more than 10 percent of facilities nationwide filed for Chapter 11 bankruptcy (Roadman 2000), including many of the largest chains (e.g., Vencor, Genesis Health Ventures, Mariner Post-Acute Network, Integrated Health Services, Sun Healthcare Group). There are substantial concerns that the change in payment system, combined with a reduction in total payments, may have reduced the quality of care.
We examine the effects of PPS and subsequent rate changes on two measures of nursing home quality of care. The first is a process-based measure—staffing hours per resident-day. Although the evidence is mixed on the exact nature of the relationship (Davis 1991; Zhang and Grabowski 2004), staffing has been shown in some studies to have an effect on resident outcomes, especially professional staffing (Centers for Medicare and Medicaid Services 2001; Cohen and Spector 1996; Castle 2000; Harrington et al. 2000; Johnson-Pawlson and Infeld 1996; Wunderlich and Kohler 2000). The second measure of quality is the number of regulatory deficiencies cited in Medicare and Medicaid recertification surveys. Each certified nursing facility is surveyed by a state agency at least once every 15 months to check for compliance with regulations on care practices and management. A nursing facility is cited with a deficiency if surveyors find it out of compliance with any one of several hundred individual requirements. These deficiencies are recorded in the Online Survey, Certification and Reporting (OSCAR) database used in our analysis. Deficiencies, like many other commonly used measures of quality, are imperfect proxies, but they do indicate changes in resident outcomes that cannot be gleaned from analysis of staffing alone.
Our study focuses on how changes in Medicare payment affect quality of care by comparing staffing and deficiencies in nursing homes before and after the implementation of PPS. We allow for differences between different types of nursing homes, and use a difference-in-differences design to separate the effects of payment changes from concurrent trends in the industry. Furthermore, in contrast to some other studies, we view the nursing facility as a whole, with revenue streams from one type of resident potentially affecting quality of care for all residents.
Background
Between 1986 and 1998, skilled nursing facility (SNF) care became the fastest-growing segment of Medicare expenditures, growing at an average of 30 percent per year nominally (U.S. General Accounting Office 1999). Facilities were reimbursed under a retrospective cost-based system, with limits on routine costs but no limits on ancillary services such as physical and occupational therapy. Although the growth was due in part to a large increase in the number of Medicare beneficiaries using SNF services (an 80 percent increase between 1990 and 1997 alone according to the Prospective Payment Assessment Commission [1997]), the system was widely seen as encouraging excessive use of ancillary therapies while providing few incentives for cost containment. In response, the BBA mandated the implementation of a PPS for skilled nursing facilities.
Accordingly, the Health Care Financing Administration (HCFA), now the Centers for Medicare and Medicaid Services (CMS), began a phased-in implementation of a per diem Medicare PPS for skilled nursing facilities in July 1998. Retrospective, cost-based reimbursement was replaced with prospective per diem rates for each Medicare resident. The case-mix categories used for differential reimbursement are based on the 44 groups in the Resource Utilization Group III (RUG-III) system (Fries et al. 1994). The RUG-III system was developed using time studies of nursing and therapy needs. The facility is responsible for providing appropriate care efficiently (including choosing the appropriate staffing levels) and is not reimbursed for costs incurred beyond the PPS rate.
The SNF PPS system was phased in over four years, with the start date corresponding to each facility's own fiscal year start date on or after July 1, 1998. In the first year, facilities were paid a blend of 25 percent of the federal rate and 75 percent of a facility-specific rate. The federal proportion of the rate increased by 25 percentage points each year until the rate was 100 percent federal. Rates were designed based on average costs per case-mix category in 1995; each facility's 1995 costs determined the facility-specific rate, and the average cost over all facilities determined the federal rate, updated for inflation. Adjustments are made for regional wage differences and rural status.
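In schematic terms, the blended per diem in transition year y can be written as follows (a sketch of the phase-in just described, not the exact regulatory formula):

\[
\text{rate}_{y} \;=\; w_{y} \times \text{federal rate} \;+\; (1 - w_{y}) \times \text{facility-specific rate}, \qquad w_{y} \in \{0.25,\ 0.50,\ 0.75,\ 1.00\},
\]

where the weight \(w_{y}\) rises by 25 percentage points in each successive year of the phase-in.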
The PPS was not intended to be budget-neutral. A major goal of the change to prospective rates was to slow the growth in Medicare SNF costs. The PPS rates were set to decrease average payments for the majority of facilities, with total savings in 1999 estimated ex-ante by the Congressional Budget Office at $1.2 billion; instead, expenditures were cut by $3.4 billion in 1999, more than double the intended amount (Lewin Group 2000). Concerns over the industry's financial viability led Congress to enact adjustments to the system in the Balanced Budget Refinement Act (BBRA) of 1999, implemented in April 2000, which slightly increased the federal rates in all RUG-III groups and raised them by 20 percent in 15 of the groups where rates were thought to be most problematic. The adjustments were seen as a temporary solution, scheduled to end when an improved classification system was developed. The BBRA also allowed facilities with lower facility-specific rates to move to the full federal rate with the start of the next cost-reporting period rather than following the four-year phase-in. Additional adjustments to the system were enacted in the Benefits Improvement and Protection Act of 2000 (BIPA), implemented in 2001. Provisions of BIPA are not considered in our analysis due to data limitations. Parts of the BBRA and BIPA were allowed to expire in 2002, but the adequacy of the rates remains controversial.
Evidence from the Literature
The literature on the relationship between payment and quality in nursing homes focuses mainly on the effect of Medicaid case-mix reimbursement and the effect of Medicaid rate changes. Feder and Scanlon (1989) found that the case-mix system implemented in Maryland resulted in no readily measurable declines in quality. Likewise, Schlenker (1991) found no differences in outcomes (catheter use, urinary tract infections, pressure ulcers, psychotropic drugs, confusion, and physical restraints) in states with case-mix versus other types of reimbursement systems. These studies were largely descriptive, using simple before-and-after comparisons in one or several states. On the other hand, Norton (1992) used a Markov model to find that monetary incentives to providers in a Medicaid-based case-mix experiment had beneficial effects on the quality of nursing home care. No general conclusions can be drawn from these studies as to the effects of prospective case-mix reimbursement for Medicaid residents on quality of care in nursing facilities.
Another segment of the literature on Medicaid reimbursement and quality of care concerns the effect of Medicaid rate increases in the presence of excess demand (Scanlon 1980). In an excess demand framework, facilities have a great deal of choice in which residents they choose to accept or reject; Medicaid residents are accepted last, and Medicaid demand exceeds supply. Under excess demand, some researchers have shown that higher Medicaid rates can actually lead to lower quality (Nyman 1985; Gertler 1989).
Recent studies have established that current market conditions no longer support the excess demand framework in most areas. Occupancy rates have been in steady decline since the mid-1990s (American Health Care Association 2001), forcing facilities to compete for all types of residents. Cohen and Spector (1996), using nationally representative data, found no effect of Medicaid reimbursement on outcomes that would support the excess demand theory. Grabowski (2001a) used national OSCAR data from 1995–1996 to find that an increase in Medicaid reimbursement rates improved quality as measured by professional staffing. In a similar study using other measures of quality (Grabowski 2001b), rates were found to have a small but statistically significant and positive effect on quality, again refuting the excess demand results using more current data. Thus, much of the historical literature on the effect of Medicaid payment changes on quality may no longer apply.
The literature described above on the effects of Medicaid payment changes on quality leaves many gaps when it comes to assessing the potential effects of skilled nursing facility PPS. Many of the earlier studies used limited methods and have limited generalizability because they used older data from one or several states. In comparison, the newer studies refuting the excess demand framework offer substantive and methodological insights as to the potential effects of Medicare reimbursement changes in nursing facilities in a competitive market. In the context of this analysis they are limited, however, by the focus on Medicaid. While Medicaid residents have generally been considered the least desirable residents due to low rates, they comprise the majority of residents, and Medicare residents have usually been considered a highly desirable minority. Effects on quality across all residents may therefore be very different for changes in payment for these two populations.
The research that directly addresses the effects of skilled nursing facility PPS on quality of care is still quite limited. Hodlewsky and colleagues found trends in professional staffing and deficiencies consistent with negative effects of PPS on quality, but the analysis was preliminary and did not control for competing explanations (Hodlewsky et al. 2001). Angelelli and colleagues used Medicare claims data to look at preliminary effects of PPS on rehospitalization rates, a standard measure of quality for postacute patients, and found no significant changes; the study used a simple pre–post design and was limited to the effect on Medicare residents (Angelelli et al. 2002). Similarly, McCall and colleagues used a pre–post design and claims data for all types of Medicare postacute care patients and found no increases in rehospitalization, emergency room visits, or mortality after the BBA was implemented (McCall et al. 2003). Neither of the latter studies considered potential effects on non-Medicare residents.
In sum, this study fills an important gap and adds to research in the field in several ways: It assesses the behavior of nursing facilities facing changes in Medicare reimbursement, which may differ significantly from behavior in response to changes in Medicaid; it addresses the important and timely policy question of whether the PPS and the BBRA affect quality of care in nursing homes; it looks at the nursing facility as a whole and includes potential effects on non-Medicare residents; and it uses a design and econometric methods that enable identification of the effects of SNF PPS separately from other trends in the industry.
Methods
Conceptual Framework
There are two reasons why nursing facilities would be expected to lower quality of care after the implementation of PPS (Cutler 1995; Norton et al. 2002). First, the elimination of reimbursement beyond the flat payment gives an incentive to minimize costs, which should make facilities more efficient (the intended effect), but it could also lead facilities to decrease quality in order to minimize costs. This negative incentive may be balanced by the need to compete with other providers and maintain reputation. Second, the average payment received also changes, depending on how the facility's previous reimbursements compare to the new rates. Under SNF PPS, the funding cuts led to a decrease in average payments for most facilities. The funding cut results in a separate pathway by which quality may decrease, that of facilities having fewer resources. Thus, quality is expected to decrease due to PPS and increase again due to the BBRA rate increases.
These incentives apply to both for-profit and nonprofit facilities, although the majority of facilities are for-profit (Norton 2000). In nursing facilities, it is often assumed that nonprofit facilities provide a higher level of quality than for-profit facilities, as found by Chou (2002). Nonprofits must still be concerned with staying in operation, however, and may behave similarly to for-profits if financial viability is at stake. Although the directions of the incentives are the same for both types, in our empirical work we allow the magnitude of the effects to differ by proprietary status. Similarly, because chain facilities may have experienced greater effects, as suggested by the bankruptcy filings reported in the media, we allow the magnitude to differ between chain and independent facilities. Finally, hospital-based facilities may have been disproportionately affected by PPS, because their costs were traditionally higher and PPS equalized rates between hospital-based and freestanding facilities. Because the underlying incentives remained the same, we analyzed hospital-based and freestanding facilities together but, again, allowed for differing magnitudes of effects.
Study Design
We use a model that can separate the effects of PPS from macro-effects that may be causing changes in the dependent variable over time. A difference-in-differences design generally takes advantage of a natural experiment to simulate the existence of treatment and control groups, thereby measuring effects of a policy only over and above the trend in the control group. It compares the change over time in the treatment group with the change over time in the control group. The approach here depends on the assumption that facilities with Medicare residents should experience the effects of PPS, while facilities with no Medicare residents (20 percent of facilities in the analysis) should not. Similarly, facilities with a higher proportion of Medicare residents should experience a stronger effect from PPS. Because the majority of residents in most facilities are Medicaid or private-pay, most facilities will experience similar trends in quality associated with Medicaid and other non-PPS causes—these are the pre–post differences. Facilities with Medicare residents may experience an additional shock due to the implementation of PPS, creating a difference in the trend between facilities that have Medicare residents and facilities that do not. This differential trend separates the effects of PPS from other factors.
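In its simplest two-group, two-period form (an illustrative sketch, not the exact specification estimated below), the difference-in-differences estimate of the PPS effect on an outcome Y is

\[
\hat{\delta}_{DD} \;=\; \left(\bar{Y}^{\,\text{Medicare}}_{\text{post}} - \bar{Y}^{\,\text{Medicare}}_{\text{pre}}\right) \;-\; \left(\bar{Y}^{\,\text{no Medicare}}_{\text{post}} - \bar{Y}^{\,\text{no Medicare}}_{\text{pre}}\right),
\]

so that any trend common to exposed and unexposed facilities differences out.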
Our design accounts for industry-wide and national trends unrelated to PPS, because we would see those trends in the reference group as well as in the Medicare facilities. For example, a general staffing shortage might lead to decreased quality over time for the average facility regardless of PPS status. A simple analysis of quality-of-care measures over time could not separate the effect of the staffing shortage from the effect of PPS. The difference-in-differences approach, however, takes into account the general staffing effect within a state or region and attributes only the additional variation in the dependent variable to the effect of PPS. Time-invariant state-to-state differences are accounted for by the state fixed effects. However, time-varying changes in individual state policy—such as cuts in Medicaid or rising costs due to litigation in particular states during the study period—could be problematic. With these types of changes, trends in the reference group become more heterogeneous and the design less valid. Within a particular state, however, the comparison to the reference group should still be valid. We therefore tested for this problem by running the analysis on several of the largest states individually and found that the effects did not change substantially, except that significance levels declined slightly as sample size declined. Thus, within-state changes over time appear not to be a problem.
Our design relies heavily on the use of the percent Medicare in the facility to represent the extent to which a facility is exposed, or vulnerable, to potential changes in Medicare policy. Certainly, the policy changes should have a larger effect on facilities that rely more heavily on Medicare revenues. The percent of residents in the facility whose care is paid for by Medicare is a good but imperfect measure of that exposure. Other potential measures would be the percent of revenues from Medicare and the percent of beds certified by Medicare, but these two measures are more problematic in practice: data on revenues from all sources are not readily available at the facility level, and the percent of certified beds has little correlation with reliance on Medicare. Many facilities simply certify all beds for both Medicaid and Medicare, and some facilities with 100 percent Medicare-certified beds consistently have no or few Medicare residents. Our measure is therefore preferable but has two potential drawbacks: the percent of Medicare residents a facility can attract depends not only on facility capacity, but also on market forces; and facilities with the same percent Medicare may have different financial exposure to policy changes depending on case mix. We control for the first by including time-varying measures of market competition and demand; the second remains a limitation of the data.
The design is strengthened slightly by the staggered implementation dates and the phase-in of PPS rates. The majority of facilities were subject to PPS rates beginning January 1, 1999, but an adequate number (approximately 23 percent) implemented PPS at other points between July 1, 1998, and June 30, 1999. If facilities implementing PPS earlier experience changes in quality earlier, the changes are more likely to be due to PPS than to general trends. The effect of PPS should also become stronger as the phase-in moves from the 25 percent federal rate toward the 100 percent federal rate, controlling for the effect of the BBRA; thus the phase-in also helps to identify effects specific to PPS.
Data and Measures
The primary data source for our analysis is CMS's Online Survey, Certification and Reporting Database (OSCAR) for 1996–2000, which provides ownership type, staffing levels, fiscal year, resident census, payer mix, and deficiencies cited during each standard recertification survey. OSCAR includes all facilities in the nation that are certified for Medicare or Medicaid, thus excluding an inconsequential percent of facilities that accept only private-pay residents. Of 65,801 observations in the original dataset, 140 observations were deleted because they were missing the fiscal year starting date, a key variable determining whether or not the facility was under the prospective payment system at any point in time. In addition, 5,378 observations with data deemed to be erroneous for key variables were excluded according to the criteria used by CMS in its report to Congress on staffing (Centers for Medicare and Medicaid Services 2001). Criteria for deletion of a survey were: facilities reporting more residents than beds; facilities reporting no registered nurse (RN) hours and 60 or more beds; facilities reporting more than 12 RN hours per resident day; facilities reporting less than 0.5 total hours per resident day; and facilities reporting zero residents. The final analysis file contained 60,283 surveys from 18,134 facilities, approximately half dated before the implementation of PPS and half after. Each facility had between one and five (average 3.3) observations during the five-year period.
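For illustration, these exclusion criteria could be applied as in the sketch below; the file name and column names are hypothetical, and the staffing variables are assumed to already be expressed as hours per resident-day.

```python
import pandas as pd

# Pooled 1996-2000 OSCAR extract (hypothetical file and column names).
oscar = pd.read_csv("oscar_1996_2000.csv")

# Drop surveys missing the fiscal year start date, which determines PPS status.
oscar = oscar.dropna(subset=["fiscal_year_start"])

# CMS staffing-report exclusion criteria described in the text
# (staffing columns assumed to already be hours per resident-day).
keep = (
    (oscar["residents"] > 0)                                   # reported at least one resident
    & (oscar["residents"] <= oscar["beds"])                    # no more residents than beds
    & ~((oscar["rn_hours_prd"] == 0) & (oscar["beds"] >= 60))  # no zero RN hours with 60+ beds
    & (oscar["rn_hours_prd"] <= 12)                            # at most 12 RN hours per resident-day
    & (oscar["total_hours_prd"] >= 0.5)                        # at least 0.5 total hours per resident-day
)
analysis = oscar.loc[keep]
```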
County-level demographic and socioeconomic data were obtained from the Bureau of Health Professions' Area Resource File 2000 (ARF). In addition, 1997 Medicare Cost Report data were used to supplement OSCAR measures of the percent Medicare in each facility.
Dependent Variables
The dependent variables are staffing ratios and regulatory deficiencies. Staffing ratios are defined as the number of staff (RN, LPN [licensed practical nurse], nurse aide) hours worked per resident-day, derived from OSCAR variables. The number of regulatory deficiencies is defined as an unweighted count of health deficiencies recorded in OSCAR. Our design controls for subjective measurement of deficiencies by isolating effects of the PPS and the BBRA from the underlying survey trends and allowing for baseline differences from state to state. A description of all variables in the analysis, including dependent and explanatory variables, can be found in Table 1.
Table 1. Descriptive Statistics
Variable | Mean | Standard Deviation |
---|---|---|
Dependent Variables (N=60,283) | ||
RN hours per resident-day | 0.52 | 0.72 |
Professional staffing (RN+LPN) hours per resident-day | 1.24 | 1.02 |
Nurse-aide hours per resident-day | 2.08 | 0.76 |
Number of deficiencies per survey | 5.43 | 5.65 |
Policy Variables (N=60,283) | ||
PPS (=0, .25, .5, .75 corresponding to phase of PPS rates) | 0.19 | 0.22 |
BBRA (=1 on or after April 1, 2000) | 0.17 | 0.38 |
Time Trend (N=60,283) | ||
Year 1996/97 (reference category) | 0.31 | 0.46 |
Year 1998 (yes=1) | 0.23 | 0.42 |
Year 1999 (yes=1) | 0.23 | 0.42 |
Year 2000 (yes=1) | 0.23 | 0.42 |
Baseline Facility Characteristics (N=18,134) | ||
Zero Medicare (reference category; no Medicare residents) | 0.20 | 0.40 |
Very Low Medicare (more than 0% but less than or equal to 6%) | 0.28 | 0.45 |
Low Medicare (more than 6% but less than or equal to 12%) | 0.30 | 0.46 |
Medium Medicare (more than 12% but less than or equal to 25%) | 0.12 | 0.33 |
High Medicare (more than 25% Medicare residents) | 0.10 | 0.30 |
Nonprofit facility (reference category) | 0.28 | 0.45 |
For-profit facility (yes=1) | 0.66 | 0.47 |
Government facility (yes=1) | 0.06 | 0.24 |
Chain facility (yes=1) | 0.53 | 0.50 |
Hospital-based facility (yes=1) | 0.13 | 0.38 |
Total number of beds | 106.77 | 74.80 |
Percent private-pay (payer other than Medicare or Medicaid) | 25.02 | 21.64 |
Facility offers ventilator care (yes=1) | 0.02 | 0.14 |
Facility offers physical therapy (yes=1) | 0.97 | 0.11 |
Facility offers occupational therapy (yes=1) | 0.95 | 0.21 |
ADL index (range 3–21) | 10.18 | 1.59 |
Skilled services index (range 0–4.4) | 0.20 | 0.27 |
Percent of residents depressed | 0.25 | 0.18 |
Percent of residents with psychiatric diagnosis | 0.12 | 0.14 |
Percent of residents with dementia | 0.42 | 0.20 |
Baseline County Characteristics (N=2,912) | ||
High-competition county (HHI<0.12) | 0.08 | 0.27 |
Low-competition county (HHI>0.5) | 0.49 | 0.50 |
High-demand county (county occupancy rate>0.95) | 0.25 | 0.44 |
Low-demand county (county occupancy rate<0.83) | 0.36 | 0.48 |
Percent of people in poverty | 14.66 | 7.11 |
Population (1,000) per square mile | 0.22 | 1.48 |
Median household income ($1,000s) | 33.45 | 8.30 |
Key Explanatory Variables
Three explanatory variables are of primary interest: PPS, BBRA, and percent Medicare. The PPS variable is a scaled indicator (0, .25, .50, .75, or 1) of the phase of PPS applicable to each facility during the relevant time period. For example, a facility subject to PPS rates starting on July 1, 1998, would have a PPS value of zero for all observations before that date, a value of .25 for July 1998–June 1999, and a value of .50 for July 1999–June 2000. Facilities new to Medicare after 1995, and therefore without 1995 cost reports, were not allowed a staggered entry and were coded as PPS=1 for all surveys after the appropriate fiscal year start date. Scaling this variable reflects the assumption that any financial incentive should grow in proportion to the percentage of Medicare payment that is prospective and not based on facility-specific costs. Since there was no phase-in for the BBRA rate adjustments, the BBRA variable is defined as a binary variable equal to one on or after April 1, 2000, and zero otherwise.
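A minimal sketch of how these policy variables could be constructed for each survey is shown below; the function names are hypothetical, and measuring PPS years in 365-day increments is a simplification introduced for illustration.

```python
from datetime import date

def pps_phase(survey_date: date, pps_start: date, new_to_medicare: bool = False) -> float:
    """Scaled PPS variable (0, .25, .5, .75, 1) for a survey, as described in the text.

    pps_start is the facility's first fiscal year start date on or after July 1, 1998.
    Facilities new to Medicare (no 1995 cost report) skip the phase-in.
    """
    if survey_date < pps_start:
        return 0.0                          # survey predates the facility's PPS start
    if new_to_medicare:
        return 1.0                          # no staggered phase-in for new facilities
    years_in = (survey_date - pps_start).days // 365   # 365-day years as a simplification
    return min(0.25 * (years_in + 1), 1.0)

def bbra_flag(survey_date: date) -> int:
    """BBRA indicator: 1 on or after April 1, 2000."""
    return int(survey_date >= date(2000, 4, 1))

# Example from the text: a facility whose PPS rates began July 1, 1998.
print(pps_phase(date(1999, 3, 15), date(1998, 7, 1)))   # 0.25 (first PPS year)
print(pps_phase(date(2000, 2, 1), date(1998, 7, 1)))    # 0.50 (second PPS year)
print(bbra_flag(date(2000, 6, 30)))                      # 1
```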
Percent Medicare is defined as the percent of total residents in a facility whose primary payer is Medicare, based on pre-PPS calculations, and categorized into five levels. Alternative constructions of the variable are tested in a sensitivity analysis. Categories were tested for homogeneity within groups and heterogeneity of effects across groups; the final categories used in the analysis were defined as 0 percent Medicare (reference category), 0–6 percent Medicare, 6–12 percent Medicare, 12–25 percent Medicare, and more than 25 percent Medicare. Since OSCAR measures of percent Medicare were found to be somewhat unstable, the proportion of resident-days (Total Medicare Inpatient Days [S39] as a percentage of Total Inpatient Days [S45]) from 1997 Medicare Cost Reports was used instead. For those facilities without 1997 Medicare Cost Reports, an average of baseline (pre-PPS) OSCAR values was used.
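For illustration, the five baseline categories could be constructed as follows (the column name and example values are hypothetical):

```python
import pandas as pd

# Baseline percent Medicare for each facility, e.g., the share of resident-days
# from the 1997 Medicare Cost Report, on a 0-100 scale (hypothetical values).
facilities = pd.DataFrame({"pct_medicare": [0.0, 4.5, 9.0, 18.0, 40.0]})

# Categories used in the analysis: 0%, (0-6], (6-12], (12-25], and >25 percent.
facilities["medicare_cat"] = pd.cut(
    facilities["pct_medicare"],
    bins=[-0.001, 0, 6, 12, 25, 100],
    labels=["Zero", "Very Low", "Low", "Medium", "High"],
)
print(facilities)
```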
Control Variables
Other explanatory variables include state fixed effects (50 dummy variables) and time fixed effects (dummy variables for each year) to account for state policies and underlying time trends in staffing or deficiencies that are unrelated to PPS; facility characteristics such as ownership, size, level of care, and resident case mix; and county economic and demographic factors such as the level of competition and demand for nursing home beds, income, and population density. The late 1996–1997 cycle of surveys serves as the reference category for the year variables.
Ownership characteristics (for-profit, government, chain, hospital-based) may reflect different objectives, resources, and management perspectives. Each characteristic is represented by an indicator variable in the analysis, with nonprofit, independent, freestanding facilities serving as reference categories. The number of beds in the facility is defined continuously and controls for differences in staffing and deficiencies that can be attributed to size or economies of scale. The percent of residents who are private-pay serves as a proxy for non-Medicare resources in the facility, since private-pay residents generally bring in higher profit margins than Medicaid residents. Finally, the level of care available in the facility and the resident case mix inevitably affect the need for staffing and the risk of deficiencies. Thus, the outcomes of interest are risk-adjusted through the inclusion of a measure of average functional dependence (activities of daily living [ADL] index); indicators of whether ventilator care, physical therapy, and occupational therapy are available; a measure of other skilled services provided (skilled services index); and the percentages of residents with depression, psychiatric diagnoses, and dementia. The ADL index is constructed as an average of the percent of residents who are bedfast or chairbound or need assistance with eating, toileting, and transferring, weighted by the amount of assistance needed; although the ADL measures available in OSCAR are at the facility level, the formula used for the index gives an approximate equivalent to the RUG-III ADL index at the resident level (Cowles Research Group 1996). The skilled services index is a sum of the percentages of residents utilizing intravenous therapy, suctioning, respiratory therapy, tracheostomy care, and parenteral feeding.
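As a small example, the skilled services index can be computed directly from the facility-level OSCAR shares (a sketch with hypothetical column names; the ADL index weighting is not reproduced here):

```python
import pandas as pd

# Facility-level shares of residents using each skilled service
# (hypothetical column names, expressed as proportions).
services = pd.DataFrame({
    "pct_iv_therapy": [0.05, 0.00],
    "pct_suctioning": [0.02, 0.01],
    "pct_resp_therapy": [0.03, 0.00],
    "pct_trach_care": [0.01, 0.00],
    "pct_parenteral_feeding": [0.04, 0.02],
})

# Skilled services index: sum of the five utilization shares for each facility.
services["skilled_index"] = services.sum(axis=1)
print(services["skilled_index"])
```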
County economic and demographic factors may affect demand for a facility's services as well as the facility's ability to attract and retain staff. If facilities compete on the basis of quality, facilities in more competitive areas may maintain higher quality than those in low-competition areas. Competition among nursing homes is measured by a standard Herfindahl index, defined as the sum of the squared market shares of all homes in a county. We define a high-competition county by a Herfindahl index less than .12, and a low-competition county by a score greater than .5. By similar reasoning, facilities in areas with high demand for nursing home beds have little problem filling beds and may therefore maintain lower quality than facilities in low-demand areas. Demand is measured by the county occupancy rate for nursing home beds, with high demand defined as occupancy greater than 95 percent and low demand as occupancy less than 83 percent. We chose cut-points for the competition and demand variables by examining the distribution of the variables and looking for natural breaks or thresholds. County-level nursing facility occupancy rates also serve as proxies for competitive forces in the market, such as the availability of other nursing homes, home health services, and assisted living facilities. These measures are supplemented with additional factors that may affect demand. Counties with higher median incomes, or lower poverty levels, would be expected to have higher demand for nursing facility care at any given level of quality. Finally, more densely populated areas have more potential nursing home residents and should be associated with higher demand.
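A sketch of the county-level competition and demand flags follows; the use of bed counts to define market shares and the column names are assumptions for illustration.

```python
import pandas as pd

# Hypothetical facility-level data: county, beds, and current residents.
df = pd.DataFrame({
    "county": ["A", "A", "A", "B", "B"],
    "beds": [120, 80, 100, 60, 140],
    "residents": [110, 70, 95, 50, 120],
})

# Herfindahl index per county: sum of squared (bed-based) market shares.
shares = df["beds"] / df.groupby("county")["beds"].transform("sum")
hhi = (shares ** 2).groupby(df["county"]).sum()

# County occupancy rate as the demand proxy.
occupancy = df.groupby("county")["residents"].sum() / df.groupby("county")["beds"].sum()

county = pd.DataFrame({"hhi": hhi, "occupancy": occupancy})
county["high_competition"] = (county["hhi"] < 0.12).astype(int)   # HHI < .12
county["low_competition"] = (county["hhi"] > 0.5).astype(int)     # HHI > .5
county["high_demand"] = (county["occupancy"] > 0.95).astype(int)  # occupancy > 95%
county["low_demand"] = (county["occupancy"] < 0.83).astype(int)   # occupancy < 83%
print(county)
```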
Endogeneity
An important methodological concern is that the percent Medicare, a key identifying variable, is potentially endogenous. The percent Medicare could be a function of quality and staffing (reverse causality). Facilities with higher quality may attract a higher percent Medicare. Furthermore, the percent Medicare could be a function of the PPS and the BBRA, allowing the bias to spread to the supposedly independent policy variables of interest. For example, if facilities decrease their percent Medicare in response to the PPS, they might exhibit lower staffing and lower deficiencies that do not represent changes in quality but rather changes in case mix. Failing to deal with the potential endogeneity of the percent Medicare would bias not only the coefficients on the percent Medicare but also those on the policy variables. To solve both problems, only a baseline, time-invariant measure of percent Medicare is used. The baseline measure is not affected by either future policy changes or contemporaneous changes in quality and staffing. The percent Medicare thereby represents the exposure of a facility to potential Medicare policy changes before they take effect.
One additional source of endogeneity may still persist. Despite using a time-invariant measure, the baseline percent Medicare may be a function of past quality and staffing, which is related to current quality and staffing. However, because the focus of the model is on measuring the effect of policy changes on staffing, any remaining bias to the policy coefficients after using the baseline percent Medicare is likely to be small.
Estimation Procedure
Following a standard difference-in-differences specification, the PPS variable represents the time trend, the percent Medicare variable represents the treatment, and the interaction term between the two represents the differential effect of interest. Staffing and deficiencies are modeled as functions of PPS, BBRA, percent Medicare, interactions between PPS/BBRA and percent Medicare, and facility- and county-level control variables (ownership, resources, case mix, competition, demand). Because potential correlation exists among facilities within a state, the model is estimated using state-level fixed effects. The state fixed effects control for baseline differences in staffing or deficiencies due to state regulatory, fiscal, economic, or other factors. Time fixed effects account for underlying time trends unrelated to the policies of interest. The basic model for facility f at time t has the following form, using the staffing equation as an example:
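A schematic version of this specification, with illustrative coefficient labels, where MedCat^k_f denotes the four non-zero percent-Medicare categories, X_ft the facility- and county-level controls, μ_s state fixed effects, τ_t year fixed effects, and ε_ft the error term, is

\[
\text{Staffing}_{ft} = \beta_{0} + \beta_{1}\,\text{PPS}_{ft} + \beta_{2}\,\text{BBRA}_{ft} + \sum_{k}\gamma_{k}\,\text{MedCat}^{k}_{f} + \sum_{k}\delta_{k}\left(\text{PPS}_{ft}\times\text{MedCat}^{k}_{f}\right) + \sum_{k}\theta_{k}\left(\text{BBRA}_{ft}\times\text{MedCat}^{k}_{f}\right) + X_{ft}\lambda + \mu_{s} + \tau_{t} + \varepsilon_{ft},
\]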
where t ranges from 1 to 5 depending on the number of surveys the facility had during the study period. The same equation applies in the analysis of deficiencies with a change in the dependent variable. In all cases, robust standard errors are used to account for correlation among observations from the same facility over time.
The base model contains interactions only between the policy variables and the percent Medicare. To assess differential effects of the policies on certain types of facilities such as for-profit, chain, and hospital-based facilities, an additional set of interactions is included. For example, the triple interaction of the for-profit indicator, PPS, and percent Medicare gives the effect of PPS on for-profits over and above the effect on nonprofit and government facilities.
We estimate an ordinary least squares (OLS) regression to predict the facility staffing ratio, which is continuous. Analysis of the number of deficiencies, a count variable, requires a different approach. Negative binomial regression is used in the deficiency analysis because OLS could result in biased coefficients with count data. All analyses are conducted using Stata 7.0 (StataCorp 2001).
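As a rough illustration of this estimation strategy in a scripting environment (not the original Stata code; the data file, variable names, and reduced control set are hypothetical), the two models could be fit as follows:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file; the real control set is richer than shown here.
df = pd.read_csv("analysis_file.csv")

controls = ("+ C(ownership) + chain + hospital_based + beds + pct_private "
            "+ adl_index + skilled_index + C(state) + C(year)")
rhs = "pps * C(medicare_cat) + bbra * C(medicare_cat) " + controls

# OLS for the continuous staffing ratio, standard errors clustered by facility.
staffing = smf.ols("prof_hours_prd ~ " + rhs, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["provider_id"]}
)

# Negative binomial regression for the deficiency count, same clustering.
deficiencies = smf.negativebinomial("deficiencies ~ " + rhs, data=df).fit(
    method="bfgs", maxiter=500,
    cov_type="cluster", cov_kwds={"groups": df["provider_id"]}
)

print(staffing.summary())
print(deficiencies.summary())
```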
We test four main hypotheses: that staffing decreased in response to PPS and increased in response to BBRA, and that deficiencies increased in response to PPS and decreased in response to BBRA. We test both by looking at the interaction terms between the policy variables and the percent Medicare, which give the policy effects over and above the trends in non-Medicare facilities. If the staffing hypotheses are correct, we would expect to see negative coefficients on the interactions between PPS and percent Medicare and positive coefficients on the interactions between BBRA and percent Medicare. Since regulatory deficiencies are a negative measure of quality, we would expect the PPS interactions to be positive and the BBRA interactions to be negative. Furthermore, we would expect the magnitudes of each to be increasing with the percent Medicare in all cases.
Results
The PPS has the strongest negative effect on the sum of RN and LPN hours, a measure of professional staffing (see Table 2). The PPS interactions are negative, significant, and strictly increasing in magnitude with the percent Medicare. The BBRA effects are positive and mostly significant, but the magnitudes do not increase with the percent Medicare. The RN regression shows similar results to the RN+LPN regression except that coefficients at the highest levels of Medicare are less significant and do not conform to the pattern of increasing magnitudes, even changing sign in the case of BBRA effects. The nurse-aide regression shows no significant policy effects from PPS or BBRA.
Table 2. Regression Results for Staffing Hours per Resident-Day
Variable | RN Hours | RN+LPN Hours | Nurse-Aide Hours |
---|---|---|---|
Constant | 0.393*** | 1.100*** | 1.632*** |
(0.047) | (0.065) | (0.070) | |
Policy Effects | |||
PPS × Very Low Medicare | −0.159*** | −0.254*** | −0.016 |
(0.027) | (0.046) | (0.049) | |
PPS × Low Medicare | −0.160*** | −0.259*** | −0.031 |
(0.027) | (0.046) | (0.049) | |
PPS × Medium Medicare | −0.195*** | −0.266*** | −0.032 |
(0.032) | (0.052) | (0.058) | |
PPS × High Medicare | −0.122* | −0.400*** | −0.086 |
(0.062) | (0.080) | (0.072) | |
BBRA × Very Low Medicare | 0.046*** | 0.080*** | −0.012 |
(0.016) | (0.028) | (0.030) | |
BBRA × Low Medicare | 0.037** | 0.064** | −0.012 |
(0.015) | (0.028) | (0.030) | |
BBRA × Medium Medicare | 0.043** | 0.069** | −0.031 |
(0.018) | (0.031) | (0.037) | |
BBRA × High Medicare | −0.053 | 0.051 | −0.101** |
(0.049) | (0.065) | (0.051) | |
Main Effects/Time Trends | |||
PPS | 0.170*** | 0.247*** | 0.031 |
(0.031) | (0.050) | (0.052) | |
BBRA | −0.032** | −0.041 | 0.060** |
(0.015) | (0.027) | (0.028) | |
Very Low Medicare | 0.0363*** | 0.033*** | 0.001 |
(0.0087) | (0.013) | (0.015) | |
Low Medicare | 0.0610*** | 0.078*** | −0.001 |
(0.0097) | (0.014) | (0.017) | |
Medium Medicare | 0.125*** | 0.163*** | 0.032 |
(0.013) | (0.018) | (0.021) | |
High Medicare | 1.242*** | 1.698*** | 0.277*** |
(0.036) | (0.045) | (0.035) | |
Year 1998 | −0.0051 | 0.0026 | −0.0102 |
(0.0044) | (0.0063) | (0.0066) | |
Year 1999 | −0.0157** | 0.025*** | 0.007 |
(0.0071) | (0.010) | (0.011) | |
Year 2000 | −0.057*** | −0.030** | −0.025 |
(0.011) | (0.015) | (0.016) | |
Facility Characteristics | |||
For-profit facility | −0.0653*** | −0.096*** | −0.212*** |
(0.0080) | (0.011) | (0.012) | |
Government facility | −0.052*** | −0.012 | 0.106*** |
(0.017) | (0.022) | (0.022) | |
Chain facility | −0.0089 | −0.0116 | −0.0833*** |
(0.0063) | (0.0083) | (0.0093) | |
Hospital-based facility | 0.391*** | 0.550*** | 0.039* |
(0.021) | (0.027) | (0.022) | |
Total number of beds | −0.00034*** | −0.00048*** | −0.00018** |
(0.00006) | (0.00008) | (0.00008) | |
Percent private-pay | 0.00080*** | 0.00106*** | 0.00219*** |
(0.00020) | (0.00029) | (0.00029) | |
Facility offers ventilator care | −0.023 | −0.041 | −0.042 |
(0.026) | (0.032) | (0.031) | |
Facility offers physical therapy | −0.022 | −0.066* | 0.052 |
(0.022) | (0.036) | (0.040) | |
Facility offers occupational therapy | 0.031** | 0.033 | 0.024 |
(0.015) | (0.022) | (0.022) | |
ADL index | −0.0217*** | −0.0152*** | 0.0481*** |
(0.0031) | (0.0038) | (0.0038) | |
Skilled services index | 0.358*** | 0.801*** | 0.168*** |
(0.028) | (0.034) | (0.030) | |
Percent depressed | −0.034** | −0.066*** | 0.024 |
(0.014) | (0.019) | (0.020) | |
Percent with psychiatric diagnosis | −0.185*** | −0.178*** | −0.191*** |
(0.017) | (0.024) | (0.028) | |
Percent with dementia | −0.210*** | −0.227*** | 0.059** |
(0.016) | (0.023) | (0.026) | |
County Characteristics | |||
High-competition county | 0.0305*** | 0.0238** | 0.016 |
(0.0074) | (0.0098) | (0.011) | |
Low-competition county | −0.0117 | −0.078*** | 0.004 |
(0.0083) | (0.012) | (0.012) | |
High-demand county | −0.0174*** | −0.0358*** | −0.0179* |
(0.0061) | (0.0085) | (0.0093) | |
Low-demand county | 0.0371*** | 0.0860*** | 0.0501*** |
(0.0062) | (0.0083) | (0.0093) | |
Percent of people in poverty | −0.00132 | −0.0060*** | 0.0014 |
(0.00082) | (0.0011) | (0.0012) | |
Population (1,000) per square mile | 0.00652*** | −0.0009 | −0.0028** |
(0.00092) | (0.0011) | (0.0014) | |
Median household income ($1,000s) | 0.00483*** | 0.00150** | 0.00051 |
(0.00047) | (0.00060) | (0.00067) | |
R-squared | 0.58 | 0.59 | 0.15 |
Notes: Regression uses state fixed effects. Robust standard errors in parentheses.
*Significant at 10%; **significant at 5%; ***significant at 1%.
The magnitudes of the professional staffing results are substantial. Given a mean ratio of 1.2 professional hours per resident day, marginal effects of .2–.4 hours translate roughly to a 17–33 percent reduction attributed to PPS. For the BBRA results, the magnitudes of effects are smaller, increases of .05–.08 hours, or a 4–7 percent increase on average.
Focusing on the professional staffing regression, the main effect and time trend results indicate an underlying trend toward more professional staffing during the time of PPS and a somewhat weaker decrease during the time of BBRA. The policy effects described above are net of these trends. As expected, facilities with higher percent Medicare at baseline started with higher staffing levels. For-profit facilities exhibited lower staffing at baseline than nonprofits, while hospital-based facilities exhibited higher staffing than freestanding facilities. Chain facilities started with lower nurse-aide staffing ratios than independent facilities. Other control variables provided results that were largely as expected.
In the negative binomial regression on total number of regulatory deficiencies, the policy effects of PPS are positive, with two out of the four being statistically significant (see Table 3). Policy effects of BBRA show consistently negative coefficients, though only one effect is significant. In both cases, magnitudes of the effects increase with the percent Medicare up to the final category. Main effect and time trend variables show that high Medicare facilities differed significantly in baseline numbers of deficiencies from other facilities, and that there appeared to be a general trend toward increasing deficiencies during the course of the study period. Again, other control variables gave expected results.
Table 3. Negative Binomial Regression Results for Number of Deficiencies
Variable | Estimated Effect on Number of Deficiencies |
---|---|
Constant | 1.900*** |
(0.082) | |
Policy Effects | |
PPS × Very Low Medicare | 0.083 |
(0.071) | |
PPS × Low Medicare | 0.126* |
(0.072) | |
PPS × Medium Medicare | 0.258*** |
(0.083) | |
PPS × High Medicare | 0.054 |
(0.086) | |
BBRA × Very Low Medicare | −0.004 |
(0.041) | |
BBRA × Low Medicare | −0.036 |
(0.040) | |
BBRA × Medium Medicare | −0.117* |
(0.047) | |
BBRA × High Medicare | −0.014 |
(0.057) | |
Main Effects/Time Trends | |
PPS | −0.114 |
(0.072) | |
BBRA | 0.022 |
(0.036) | |
Very Low Medicare | 0.021 |
(0.021) | |
Low Medicare | 0.044** |
(0.022) | |
Medium Medicare | 0.023 |
(0.026) | |
High Medicare | −0.438*** |
(0.037) | |
Year 1998 | 0.067*** |
(0.010) | |
Year 1999 | 0.235*** |
(0.015) | |
Year 2000 | 0.312*** |
(0.022) | |
Facility Characteristics | |
For-profit facility | 0.153*** |
(0.014) | |
Government facility | −0.028 |
(0.024) | |
Chain facility | 0.057*** |
(0.011) | |
Hospital-based facility | −0.068*** |
(0.023) | |
Total number of beds | 0.00181*** |
(0.00010) | |
Percent private-pay | −0.00467*** |
(0.00029) | |
Facility offers ventilator care | 0.111*** |
(0.034) | |
Facility offers physical therapy | 0.060 |
(0.049) | |
Facility offers occupational therapy | −0.047* |
(0.027) | |
ADL index | 0.0304*** |
(0.0039) | |
Skilled services index | −0.171*** |
(0.024) | |
Percent of residents depressed | −0.177*** |
(0.026) | |
Percent of residents with psychiatric diagnosis | 0.148*** |
(0.034) | |
Percent of residents with dementia | −0.187*** |
(0.028) | |
County Characteristics | |
High-competition county | 0.042*** |
(0.013) | |
Low-competition county | −0.049*** |
(0.016) | |
High-demand county | −0.037*** |
(0.014) | |
Low-demand county | 0.048*** |
(0.011) | |
Percent of people in poverty | 0.0013 |
(0.0015) | |
Population (1,000) per square mile | −0.0343*** |
(0.0031) | |
Median household income ($1,000s) | −0.00146* |
(0.00084) |
Notes: Regression uses state fixed effects. Robust standard errors in parentheses.
*Significant at 10%; **significant at 5%; ***significant at 1%.
Magnitudes of the policy effects of PPS and BBRA on the number of deficiencies cannot be read directly from the negative binomial regression results but were calculated from them. Average predicted values of the dependent variable were calculated with and without the policy effects and subtracted to get estimated marginal effects. Non-Medicare facilities experienced a decline in deficiencies during the time of PPS, while facilities with Medicare experienced an increase; the marginal effect of PPS is therefore defined as the difference between the two changes. The estimated marginal effect of PPS after the full phase-in is an increase in deficiencies of .64 per survey, or about a 12 percent increase over the mean number of deficiencies (5.4). Under BBRA, the opposite was true: non-Medicare facilities experienced an increase in deficiencies, while facilities with Medicare experienced a decrease. The estimated marginal effect that we can attribute to BBRA is a decrease in deficiencies of .18 per survey, or about a 3 percent decrease.
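A sketch of this prediction-based calculation, reusing the hypothetical fitted negative binomial model and data frame from the estimation example above (variable names remain hypothetical):

```python
# Predicted deficiencies with PPS fully phased in versus with no PPS,
# holding BBRA off in both scenarios and other covariates at observed values.
on, off = df.copy(), df.copy()
on["pps"] = 1.0          # full phase-in
off["pps"] = 0.0         # no PPS
on["bbra"] = 0
off["bbra"] = 0

change = deficiencies.predict(on) - deficiencies.predict(off)
has_medicare = df["medicare_cat"] != "Zero"

# Difference-in-differences of average predictions: change among facilities with
# Medicare residents net of the change among facilities with none.
marginal_effect_pps = change[has_medicare].mean() - change[~has_medicare].mean()
print(marginal_effect_pps)
```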
Finally, Table 4 shows the results of the analysis of differential effects on for-profit, chain, and hospital-based facilities. In each case, the entire sample is used in the analysis, and triple interaction terms give the policy effects of interest; only these effects are shown. Differential effects on a type of facility would be evidenced by a pattern of consistent signs and significant coefficients. No strong pattern is found for any of the three groups, though some indication of potential differences exists. For-profit facilities at the highest levels of Medicare exhibit larger changes in staffing than do nonprofits. Chain facilities exhibit consistently larger decreases in staffing under PPS and also larger increases under BBRA than independent facilities, but the differences are statistically significant only at the medium level of Medicare. Hospital-based facilities appear to exhibit slightly larger changes than freestanding facilities, as expected.
Table 4. Differential Policy Effects by Facility Type
Variable | Interaction with For-Profit | Interaction with Chain | Interaction with Hospital-Based |
---|---|---|---|
Differential Policy Effects | |||
PPS × Very Low Medicare | 0.132 | −0.113 | −0.35* |
(0.094) | (0.095) | (0.20) | |
PPS × Low Medicare | 0.083 | −0.093 | −0.64*** |
(0.096) | (0.096) | (0.21) |
PPS × Medium Medicare | −0.16 | −0.30*** | −0.24 |
(0.11) | (0.11) | (0.26) | |
PPS × High Medicare | −1.00*** | −0.19 | −0.46* |
(0.25) | (0.22) | (0.25) | |
BBRA × Very Low Medicare | −0.067 | 0.069 | 0.09 |
(0.057) | (0.058) | (0.14) | |
BBRA × Low Medicare | −0.060 | 0.079 | 0.31** |
(0.058) | (0.058) | (0.15) | |
BBRA × Medium Medicare | −0.030 | 0.131* | 0.13 |
(0.072) | (0.069) | (0.15) | |
BBRA × High Medicare | 0.35** | 0.08 | 0.51*** |
(0.16) | (0.14) | (0.17) |
Notes: Table displays only a subset of variables included in the regression. Robust standard errors in parentheses.
*Significant at 10%; **significant at 5%; ***significant at 1%.
Sensitivity analyses show that the main results are robust to changes in the construction of the PPS and percent Medicare variables. Similar changes in staffing and deficiencies are evident if a continuous measure of percent Medicare is used, or if smaller or larger categories are defined. If percent Medicare is allowed to vary over time, results appear to be stronger, but the use of a baseline measure is preferred in order to avoid endogeneity bias. Changes in the construction of the PPS variable can lead to different magnitudes of effects but do not change the general pattern and significance.
Discussion
The hypotheses in this analysis—that the prospective payment system and associated Medicare rate cuts would potentially decrease quality of care and that the rate increases under BBRA would potentially increase quality of care—are supported by these results. The results for the PPS interactions suggest that professional staffing has suffered due to the implementation of PPS and the associated rate cuts. Results for the BBRA interactions show evidence of an improvement in professional staffing with the BBRA rate increases, though not as compelling as the evidence for decreases with PPS. There are several reasons why the BBRA results might be weaker. First, the funding increase under BBRA was smaller than the funding decrease under PPS, so a smaller effect would be expected. Second, the BBRA rate increases were designed to be temporary. Facilities' responses under uncertainty may be expected to be weaker, as they may be hesitant to make permanent changes in staffing. Finally, the BBRA was implemented in April of the final year of this analysis, resulting in a short window of only nine months in which we could observe an effect.
The analysis of regulatory deficiencies provides weak evidence that the staffing changes translated into changes in quality as measured by regulatory compliance. Signs of effects were generally as expected, but statistical significance and the pattern of magnitudes were not compelling. One potential reason for the lack of strong results is that deficiencies may be too inexact a proxy for real changes in quality; an analysis of relevant outcomes at a resident level would be necessary to make firmer conclusions about the effects of Medicare rate changes on residents. Nonetheless, the deficiency results weakly support and do not contradict findings from the staffing results that PPS led to decreased quality and BBRA led to increased quality.
The staffing findings appear to be fairly consistent across types of facilities. No strong pattern of differential effects was found between for-profits and nonprofits or chain and independent facilities, though the results are not conclusive. There is somewhat stronger evidence that hospital-based facilities reacted more strongly than freestanding facilities. These differences are not very robust to changes in specification and definition of the PPS variable, perhaps because some of the categories contain relatively few facilities. We can interpret these results as preliminary evidence that hospital-based facilities may have been affected to a greater extent by PPS and BBRA, consistent with expectations but in need of further research.
The study has several limitations. First, the percent Medicare residents is a reasonable but imperfect proxy for the exposure of a facility to Medicare policy changes. Second, staffing levels are reported with errors in OSCAR and represent staffing only in the two-week period directly preceding a recertification survey, with variability between surveys unknown. We control for this problem to some extent by removing facilities with outlier staffing values. Third, under the BBRA, facilities were allowed to move directly to the full federal rate in 2000 if they wished. It is not known which facilities, or even how many, chose to take that option, resulting in some measurement error in the scaled PPS variable. If we assume in a sensitivity analysis that all facilities take the option, results become stronger in magnitude and significance; we are therefore probably underestimating the effects of PPS and BBRA due to the missing information. Another limitation is the use of facility-level data, which allow for the analysis of staffing but are weak for the purpose of examining resident-level outcomes. Further research at a resident level could provide more insight into the effects of Medicare rate changes on outcomes of care.
The policy implications of these findings are straightforward. In a nursing home industry that is competitive but relies largely on public funds, payment rates need to be high enough to support a desired level of quality of care, and cost-containment measures must be viewed in terms of the tradeoff with quality. Based on a study of 10 states, CMS recently concluded that 23–56 percent of nursing facilities fail to provide minimum levels of professional staffing below which quality of care may be seriously impaired (Centers for Medicare and Medicaid Services 2001). In this context, cost-containment strategies that result in lower staffing levels may not be worth the tradeoff. Furthermore, Medicare rates cannot be viewed in terms of the effect on Medicare residents only, as effects may be experienced throughout the facility on all types of residents. Because staffing is responsible for the majority of a facility's operating expenses, staffing is likely to be a primary target for cost containment when funding decreases, and staffing ratios affect Medicaid and private-pay residents as well as Medicare residents. A long-term view of quality must look at the entire spectrum of residents and services within the nursing facility.
The policy debate about skilled nursing facility PPS will continue for some time. The evidence presented here suggests that PPS and changes in Medicare rates have a potentially important effect on outcomes of care in nursing homes through the effect on professional staffing. These results could be used to inform the continuing policy debate on adjustments to the SNF PPS system and to anticipate nursing facility behavior in response to future changes.
Acknowledgments
The authors thank Bill Roper for his insights into the Medicare policy development process and Virender Kumar for his help in creating the analysis dataset and early versions of the analysis. We also want to thank two anonymous reviewers for their extraordinarily detailed and helpful comments and suggestions.
Footnotes
This study was funded in part by Beverly Enterprises, Inc. Dr. Konetzka's time was funded by the Royster Foundation of the Graduate School at the University of North Carolina at Chapel Hill.
References
- Angelelli J, Gifford D, Intrator O, Gozalo P, Laliberte L, Mor V. “Access to Post-Acute Nursing Home Care before and after the BBA.” Health Affairs (Millwood) 2002;21(5):254–64. doi: 10.1377/hlthaff.21.5.254.
- American Health Care Association. Facts and Trends: The Nursing Facility Sourcebook. Washington, DC: American Health Care Association; 2001.
- Castle N G. “Differences in Nursing Homes with Increasing and Decreasing Use of Physical Restraints.” Medical Care 2000;38(12):1154–63. doi: 10.1097/00005650-200012000-00002.
- Centers for Medicare and Medicaid Services. Phase II of Final Report to Congress: Appropriateness of Minimum Nurse Staffing Ratios in Nursing Homes. Baltimore, MD: Centers for Medicare and Medicaid Services; 2001.
- Chou S Y. “Asymmetric Information, Ownership and Quality of Care: An Empirical Analysis of Nursing Homes.” Journal of Health Economics 2002;21(2):293–311. doi: 10.1016/s0167-6296(01)00123-0.
- Cohen J W, Spector W D. “The Effect of Medicaid Reimbursement on Quality of Care in Nursing Homes.” Journal of Health Economics 1996;15(1):23–48. doi: 10.1016/0167-6296(95)00030-5.
- Cowles Research Group. Nursing Home Statistical Yearbook. Montgomery Village, MD: Cowles Research Group; 1996.
- Cutler D M. “The Incidence of Adverse Medical Outcomes under Prospective Payment.” Econometrica 1995;63(1):29–50.
- Davis M A. “On Nursing Home Quality: A Review and Analysis.” Medical Care Review 1991;48(2):129–66. doi: 10.1177/002570879104800202.
- Feder J F, Scanlon W. “Case-Mix Payment for Nursing Home Care: Lessons from Maryland.” Journal of Health Politics, Policy and Law 1989;14(3):523–47. doi: 10.1215/03616878-14-3-523.
- Fries B E, Schneider D P, Foley W J, Gavazzi M, Burke R, Cornelius E. “Refining a Case-Mix Measure for Nursing Homes: Resource Utilization Groups (RUG-III).” Medical Care 1994;32(7):668–85. doi: 10.1097/00005650-199407000-00002.
- Gertler P J. “Subsidies, Quality, and the Regulation of Nursing Homes.” Journal of Public Economics 1989;38(1):33–52.
- Grabowski D C. “Does an Increase in the Medicaid Reimbursement Rate Improve Nursing Home Quality?” Journal of Gerontology Series B: Social Sciences 2001a;56(2):S84–93. doi: 10.1093/geronb/56.2.s84.
- Grabowski D C. “Medicaid Reimbursement and the Quality of Nursing Home Care.” Journal of Health Economics 2001b;20(4):549–69. doi: 10.1016/s0167-6296(01)00083-2.
- Harrington C, Zimmerman D, Karon S L, Robinson J, Beutel P. “Nursing Home Staffing and Its Relationship to Deficiencies.” Journal of Gerontology Series B: Psychological Sciences and Social Sciences 2000;55(5):S278–87. doi: 10.1093/geronb/55.5.s278.
- Hodlewsky R T, Kumar V, Yi D, Kilpatrick K E. “Trends in Deficiencies and Staffing Associated with Nursing Facility PPS.” Seniors Housing and Care Journal 2001;9(1):27–42.
- Johnson-Pawlson J, Infeld D L. “Nurse Staffing and Quality of Care in Nursing Facilities.” Journal of Gerontological Nursing 1996;22(8):36–45. doi: 10.3928/0098-9134-19960801-11.
- Lewin Group. “Briefing Chartbook on the Effect of the Balanced Budget Act of 1997 and the Balanced Budget Refinement Act of 1999 on Medicare Payments to Skilled Nursing Facilities.” 2000 [accessed on April 21, 2003]. Available at: http://www.ascp.com/public/ga/2000/pdfs/SNFBBACH.PDF.
- McCall N, Korb J, Petersons A, Moore S. “Reforming Medicare Payment: Early Effects of the 1997 Balanced Budget Act on Postacute Care.” Milbank Quarterly 2003;81(2):277–303. doi: 10.1111/1468-0009.t01-1-00054.
- Norton E C. “Incentive Regulation of Nursing Homes.” Journal of Health Economics 1992;11:105–28. doi: 10.1016/0167-6296(92)90030-5.
- Norton E C. “Long-Term Care.” In: Culyer A J, Newhouse J P, editors. Handbook of Health Economics. Amsterdam: Elsevier Science; 2000. pp. 955–94.
- Norton E C, VanHoutven C H, Lindrooth R C, Normand S T, Dickey B. “Does Prospective Payment Reduce Inpatient Length of Stay?” Health Economics 2002;11(5):377–87. doi: 10.1002/hec.675.
- Nyman J. “Prospective and ‘Cost-Plus’ Medicaid Reimbursement, Excess Medicaid Demand, and the Quality of Nursing Home Care.” Journal of Health Economics 1985;4:237–59. doi: 10.1016/0167-6296(85)90031-1.
- Prospective Payment Assessment Commission. Medicare and the American Health Care System, Report to Congress. Washington, DC: Prospective Payment Assessment Commission; 1997.
- Roadman C. Testimony before the Senate Select Committee on Aging, September 5, 2000. Washington, DC: American Health Care Association; 2000.
- Scanlon W J. “A Theory of the Nursing Home Market.” Inquiry 1980;17(1):25–41.
- Schlenker R E. “Comparison of Medicaid Nursing Home Payment Systems.” Health Care Financing Review 1991;13(1):93–109.
- StataCorp. Stata Statistical Software: Release 7.0. College Station, TX: StataCorp LP; 2001.
- U.S. General Accounting Office. “Skilled Nursing Facilities: Medicare Payment Changes Require Provider Adjustment but Maintain Access.” Washington, DC: U.S. General Accounting Office; 1999.
- Wunderlich G S, Kohler P, editors. Improving the Quality of Long-Term Care. Committee on Improving Quality in Long-Term Care, Division of Health Care Services, Institute of Medicine. Washington, DC: National Academy of Sciences; 2000.
- Zhang X, Grabowski D. “Nursing Home Staffing and Quality under the Nursing Home Reform Act.” Gerontologist 2004;44:13–23. doi: 10.1093/geront/44.1.13.