Author manuscript; available in PMC: 2023 Mar 2.
Published in final edited form as: Am J Manag Care. 2013 Nov;19(11):933–939.

Creating Peer Groups for Assessing and Comparing Nursing Home Performance

Margaret M Byrne 1,*, Christina Daw 2, Ken Pietz 3, Brian Reis 3, Laura A Petersen 3
PMCID: PMC9980409  NIHMSID: NIHMS1840038  PMID: 24511989

Abstract

Objective:

Publicly reported performance data for hospitals and nursing homes are becoming ubiquitous. For such comparisons to be fair, facilities must be compared to their peers. We adapted a previously published methodology for developing hospital peer groupings to be applicable to nursing homes, and compared the characteristics of “nearest neighbor” peer groupings with those of peer groups developed by classical cluster analysis.

Study Design:

Analysis of Department of Veterans Affairs (VA) administrative databases and nursing home facility characteristics.

Methods:

The nearest neighbor methodology for developing peer groupings involves calculating the Euclidean distance between facilities based on facility characteristics. We describe our steps in selection of facility characteristics, describe the characteristics of nearest neighbor peer groups, and compare them to peer groups derived through classical cluster analysis.

Results:

The facility characteristics most pertinent to nursing home groupings were found to be different from those that were most relevant for hospitals. Unlike classical cluster groups, nearest neighbor groups are not mutually exclusive, and the nearest neighbor methodology resulted in nursing home peer groupings that were substantially less diffuse than those nursing home peer groups created using traditional cluster analysis.

Conclusion:

It is essential that health care policy makers and administrators have a means of fairly grouping facilities for the purposes of quality, cost, or efficiency comparisons. In this research, we show that a previously published methodology can be successfully applied to a nursing home setting. This suggests that the same approach could be applied in other clinical settings such as primary care.

Keywords: nursing homes, peer groupings, cluster analysis, comparisons

Summary:

A methodologically sound, empirically based approach to creating peer groupings can and should be adapted to fit the setting of nursing homes.

Introduction

Given the competitive nature of the health care industry, as well as continually rising health care costs, administrators of health care systems are increasingly interested in evaluating the quality of care provided in their system as well as the efficiency of resource use. To do these evaluations, administrators often compare quality of care or efficiency performance across health care systems or across units within a single health care system. Comparisons across units within a system can be useful in determining where resources should be allocated, where incentives or bonuses might be warranted, or where quality improvement is needed. However, to ensure that comparisons across units are equitable, it is important that the units or facilities that are being compared are similar to one another (peer facilities or peers).

In the research presented here, we consider the identification of peers among nursing home facilities within the Veterans Affairs (VA) Health Care System. We use a methodology that we previously developed for identification of peers among VA hospitals, modified to accommodate the special characteristics of VA nursing homes.1 VA nursing homes (VANHs) are quite divergent in their characteristics; thus, it is important that comparable peers are identified for equitable comparisons to be made across facilities. This can be done using a peer groupings methodology. By determining appropriate VANH peers, we can facilitate the equitable benchmarking of nursing homes in areas of quality or financial evaluation and comparisons.

Here we describe our nearest neighbor methodology for developing VANH peer groupings. We then explore the characteristics of the VANH peer groups developed using the nearest neighbor methodology, and compare them to peer groups developed using traditional cluster analysis. We emphasize that this methodology is agnostic to health care setting (i.e., VA vs. non-VA) and can be applied to a wide variety of settings.

Methods

Development of peer facilities using nearest neighbor and traditional cluster methodologies

We included all 130 nursing facilities in the VA in our analyses. To address the widely disparate scales of the measures that we use, all variables are standardized by converting the measure to a Z-score for the population.

In the nearest neighbor methodology, the software program Clustan (Clustan Software, Edinburgh, UK) is used to calculate the general squared Euclidean distance (GSED) coefficient, which is a multidimensional distance between all pairs of facilities based on the selected variables. Clustan allows for a combination of both continuous and binary measures. The GSED coefficient for the distance between any two facilities j and i is calculated as:

d_{ij}^2 = \frac{\sum_k w_{ijk} (x_{ik} - x_{jk})^2}{\sum_k w_{ijk}}

where x_{ik} is the value of variable k for facility i, and w_{ijk} is a weight of 1 or 0 depending on whether the comparison is valid for the kth variable. If differential variable weights are specified, w_{ijk} takes the weight of the kth variable, or 0 if the comparison is not valid. We report the Euclidean distance, which is the square root of the GSED.
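As a concrete illustration, the standardization and GSED calculation above can be sketched in Python with NumPy. The original analysis used the Clustan software; the facility values and weights below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

def standardize(X):
    """Convert each column (facility measure) to a Z-score across facilities."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def gsed(xi, xj, w):
    """General squared Euclidean distance (GSED) between two facilities.

    xi, xj: 1-D arrays of standardized measure values (NaN marks a
    comparison that is not valid for that measure, i.e., w_ijk = 0).
    w: per-measure weights (1 by default, higher for emphasized measures).
    """
    valid = ~(np.isnan(xi) | np.isnan(xj))
    return np.sum(w[valid] * (xi[valid] - xj[valid]) ** 2) / np.sum(w[valid])

# Hypothetical data: 4 facilities x 3 measures (e.g., beds, % RN hours, CMI).
X = standardize(np.array([[100.0, 0.30, 1.1],
                          [120.0, 0.40, 0.9],
                          [ 60.0, 0.70, 1.4],
                          [200.0, 0.20, 1.0]]))
w = np.array([2.0, 3.0, 1.0])          # illustrative differential weights
d01 = np.sqrt(gsed(X[0], X[1], w))     # reported Euclidean distance
```

The weight vector plays the same role as the differential variable weights described in the Methods: a measure with weight 3 contributes three times as much to the squared distance as an unweighted measure.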

We identify the facilities most closely surrounding each reference facility in the multidimensional space, providing each facility with a customized group of similar facilities. In this way, there is the same number of peer groups as there are facilities.
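The peer group construction described above can be sketched as follows, assuming a precomputed matrix of pairwise Euclidean distances; the facility positions here are hypothetical.

```python
import numpy as np

def peer_groups(D, k):
    """Return, for each reference facility, the indices of its k nearest
    peers by Euclidean distance (the facility itself is excluded)."""
    groups = {}
    for i in range(D.shape[0]):
        order = np.argsort(D[i])
        groups[i] = [j for j in order if j != i][:k]
    return groups

# Hypothetical facilities placed on a line; distance = position difference.
pos = np.array([0.0, 1.0, 2.5, 6.0])
D = np.abs(pos[:, None] - pos[None, :])
g = peer_groups(D, 2)
# Each facility is the hub of its own group, and the relation is not
# symmetric: facility 3 counts facility 2 as a peer, but facility 2's two
# nearest peers are facilities 1 and 0.
```

Because every facility generates its own group, the number of groups equals the number of facilities, and the same facility can appear in many groups.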

For comparison with this methodology, we generated peer groups using a standard two-stage cluster analysis technique.2 We used the same variables and data as in the nearest neighbor approach. Ward’s method of hierarchical clustering was used to generate cluster seeds.3 We then used these seeds as input to the standard k-means iterative algorithm for cluster analysis. Based on the R-squared statistic, which is the proportion of variation in distance explained by the clustering, we elected to create 8 peer groups from the set of VANHs.
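A minimal sketch of this two-stage approach, using SciPy's Ward linkage for the seeds and a hand-rolled k-means iteration; the original analysis used dedicated statistical software, and the toy data below are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def two_stage_clusters(X, k, iters=50):
    """Stage 1: Ward's hierarchical clustering generates k seed centroids.
    Stage 2: a standard k-means iteration is initialized at those seeds."""
    Z = linkage(X, method="ward")                       # stage 1: Ward's method
    seed_labels = fcluster(Z, t=k, criterion="maxclust")
    centers = np.vstack([X[seed_labels == c].mean(axis=0)
                         for c in range(1, k + 1)])     # cluster seeds
    for _ in range(iters):                              # stage 2: k-means
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new = np.vstack([X[labels == c].mean(axis=0) for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

# Hypothetical data: two well-separated groups of "facilities" in 3 measures.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 3)), rng.normal(5.0, 0.1, (10, 3))])
labels = two_stage_clusters(X, 2)
```

Unlike the nearest neighbor method, this assigns each facility to exactly one group, and the group sizes fall out of the clustering rather than being chosen by the analyst.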

There are several differences between peers developed with classical cluster analysis and with nearest neighbor analysis. In classical cluster analysis, membership of a facility in peer groups is mutually exclusive, the size of the peer groups is determined by the results of the clustering, and a facility may be on the “edge” of the sole group that it is in with respect to a certain measure or measures. For example, a facility may have the largest number of beds of any facility in its peer grouping by a substantial margin. Our modification of cluster analysis addresses these concerns. With the nearest neighbor method, the number of facilities that are designated as peers to the reference facility can be selected depending on the needs of the analysis or comparison. Facilities can and do appear in more than one peer grouping, and each facility is by design the hub of its own peer grouping. Finally, the peer relationship is not symmetric: if facility B is in facility A’s peer group, facility A may or may not be in facility B’s peer group.

Identifying measures to use for peer groupings

A number of steps, outlined below, were taken to determine which variables to use in developing peer groups for this research. At each step in the process, we took the latest results to a panel of VA nursing home experts who provided feedback on whether our findings and conclusions mapped to their experience and observations.

Step 1: we examined literature on risk-adjustment, case-mix reimbursement, determinants of quality of care, and evaluations of quality improvement initiatives.4–15 We also queried our expert panel on variables that they viewed as important in categorizing nursing homes into peer groups. We included: variables related to the resources needed to run the facility, maintain a certain level of quality, or treat patients; variables that characterized the facility and gave information about how costly it would be to treat patients there or about the facility’s environment; and, importantly, facility characteristics that cannot easily be changed by administrators yet affect per-patient cost. For example, a care center in an urban area with an academic affiliation would be expected to have higher costs than a small rural facility.

We identified an extensive list of potential measures that might be important in developing peer groupings. These measures fall into the following domains that we determined were critical for assessing similarities and differences among facilities: Size; Academic mission; Workload/case-mix; Patient Population; and Community environment. The expert panel approved the complete list.

Next, we took steps to narrow the variables to a working list to use in the peer group algorithm. For this task, we used a combination of technical and practical considerations. Step 2: we gave precedence to variables that were continuous rather than binary or categorical, as continuous variables provide more refined information to use in the algorithm (technical consideration). Step 3: we considered the availability of each variable, giving preference to more easily obtainable and more frequently updated data (practical consideration). Step 4: for parsimony in our peer groupings model, we wanted to eliminate variables that did not add new or unique information on the nursing homes. Therefore, we estimated correlations among all of the candidate variables in each domain, and where correlations within a domain were high, we eliminated one of the correlated variables (mean correlation [sd] of variables within same domain was 0.283 [0.218], range 0.005–0.910). The expert panel approved the reduced list.
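Step 4 can be sketched as a greedy pruning pass over the correlation matrix. The variable names and the 0.8 threshold below are hypothetical, chosen only to illustrate the idea of dropping one member of each highly correlated pair.

```python
import numpy as np

def prune_correlated(X, names, threshold=0.8):
    """Greedily keep a variable only if its correlation with every
    already-kept variable is below the threshold (earlier-listed wins)."""
    corr = np.corrcoef(X, rowvar=False)
    keep = []
    for j in range(len(names)):
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return [names[j] for j in keep]

# Hypothetical domain with three candidate variables; "b" is essentially
# a rescaled copy of "a" and so adds no unique information.
rng = np.random.default_rng(1)
a = rng.normal(size=200)
X = np.column_stack([a,
                     2.0 * a + rng.normal(scale=0.01, size=200),
                     rng.normal(size=200)])
kept = prune_correlated(X, ["a", "b", "c"])
```

In practice, which of the two correlated variables to keep would be decided on the practical grounds described in Steps 2 and 3 (continuous over binary, more easily obtainable data), not simply by list order.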

Development of measure weights for use in peer groupings

In order to calibrate the relative influence of the various measures, we weighted some of the model variables higher than others before inclusion in the algorithms, with a weight of 1 as the baseline. Selection of the variables to weight higher than 1 was based on two considerations: salience in the peer group development or nursing home literature, and the importance of the variable as judged by our expert panel. The weights greater than 1 are shown in Table 1 in parentheses; they ranged from 2, for total operating beds, to 6, for mean ADL. The expert panel concurred with the choice of weights.

Table 1.

Final measure used in the nearest neighbor modeling for nursing home peer groupings; weights different from 1 are shown in parentheses.

Domain Measure
Size Number of nursing home operating beds (2)
Academic mission GRECC status (yes/no)
Number of geriatric residents > 1 (yes/no) (includes geriatric medicine and geriatric psychiatry)
Workload/case-mix % census/total stays (3)
% bed days of care in short skilled treating specialties (4)
% bed days of care in long stay dementia and/or psych
Mean case mix index (CMI) (5)
% RN hours/total nursing hours
Patient population Mean activities of daily living (ADL) (6)
Average DxCG® patient relative risk score
% unique patients with Millennium Bill status
Community Environment Urban indicator (2)
State level Medicaid HCBS waiver participants/total adults in poverty

Exploration of peer group characteristics and comparison to traditional cluster analysis

We explored the attributes of the peer groups created with the nearest neighbor methodology, and compared them to peer groups formed using traditional cluster analysis. These analyses mirror closely those presented for hospitals as a whole in previous research.1

Non-discrete nature of nearest neighbor peer groups:

We present nearest neighbor peer groups for 2 reference nursing facilities (here using 20 VANHs per group) to demonstrate the non-mutually exclusive nature of groups developed with this methodology.

Effect of number of VANHs in a peer group on distance to furthest peer:

To explore how the number of VANHs in a peer group affects the Euclidean distance from the reference VANH to furthest peer, we developed 8 sets of peer groups using increments of 5 (i.e., 5, 10, 15, … 40) as the number of VANHs per peer group. We then calculated for each peer group, in each of the sets, the Euclidean distance from the reference VANH to farthest peer. We present graphically the distribution of these Euclidean distances for each of these sets of peer groups.
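This calculation can be sketched as follows; the small distance matrix below is a hypothetical stand-in for the 130-facility GSED-based matrix, and the swept group sizes are scaled down accordingly.

```python
import numpy as np

def farthest_peer_distances(D, k):
    """Distance from each reference facility to the farthest member of
    its k-member nearest neighbor peer group."""
    out = []
    for i in range(D.shape[0]):
        others = np.sort(np.delete(D[i], i))   # distances to the other facilities
        out.append(others[k - 1])              # k-th nearest = farthest peer
    return np.array(out)

# Hypothetical 5-facility distance matrix from positions on a line.
pos = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
D = np.abs(pos[:, None] - pos[None, :])
# As in the analysis, sweep the peer group size and collect a distribution
# of farthest-peer distances for each size.
dist_by_size = {k: farthest_peer_distances(D, k) for k in (1, 2, 3, 4)}
```

By construction, the farthest-peer distance is nondecreasing in the group size, so the question of interest is how quickly the distribution shifts right as more peers are added.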

Comparison to traditional cluster analysis:

We compared the two peer grouping methodologies on how diffuse the resulting peer groups were by computing the square root of the sum of squares (RSS) of the Euclidean distances between all pairs of members of the same group, for all peer groups. The RSS of distances is a metric that represents the diffusion of the peer group with higher values indicating a more diffuse group. We present minimum and maximum RSS of distances across peer groups for the nearest neighbor groups, and the RSS of distance measures for each of the 8 peer groups created using traditional cluster analysis.
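The RSS diffusion metric can be sketched directly from its definition; the group memberships and distances below are hypothetical, with one outlying facility included to show how a single distant member inflates the metric.

```python
import numpy as np
from itertools import combinations

def rss_diffusion(D, members):
    """Square root of the sum of squared pairwise distances between all
    members of one peer group; higher values mean a more diffuse group."""
    return float(np.sqrt(sum(D[i, j] ** 2
                             for i, j in combinations(members, 2))))

# Hypothetical distances from positions on a line; facility 3 is an outlier.
pos = np.array([0.0, 1.0, 2.0, 10.0])
D = np.abs(pos[:, None] - pos[None, :])
tight = rss_diffusion(D, [0, 1, 2])   # sqrt(1 + 4 + 1)    = sqrt(6)
loose = rss_diffusion(D, [0, 1, 3])   # sqrt(1 + 100 + 81) = sqrt(182)
```

Because every pair of members contributes, the metric also grows mechanically with group size, which is why the comparison below is made at matched group sizes.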

Results

Measures used for peer groupings

The final measures that were included in our model are shown in Table 1. Definitions of each of these variables, and data sources for construction of these variables, are available in the Appendix.

Characteristics of nearest neighbor peer groups

Non-discrete nature of nearest neighbor peer groups:

Table 2 shows an extreme example of the non-mutually exclusive nature of the nearest neighbor peer groups: Minneapolis is the closest peer in the grouping for which Chicago is the reference VANH, but Chicago is not in Minneapolis’ peer group at all. This occurs because the peer group that has Minneapolis as a reference is much less dispersed than the group for Chicago.

Table 2.

Example of two peer groupings and the Euclidean distance from reference VANH to each peer

Peer Rank | Peer Group for Chicago | Euclidean Distance to Peer | Peer Rank | Peer Group for Minneapolis | Euclidean Distance to Peer
1 | Minneapolis MN | 1.4781 | 1 | Vancouver WA | 0.34
2 | St Louis MO | 1.4867 | 2 | St Louis MO | 0.5456
3 | Fargo ND | 1.65 | 3 | Boise ID | 0.6733
4 | Ann Arbor MI | 1.7451 | 4 | Leavenworth KS | 0.8292
5 | Vancouver WA | 1.8554 | 5 | Saginaw MI | 0.8966
6 | Oklahoma City OK | 1.9245 | 6 | Ann Arbor MI | 0.9246
7 | Tucson AZ | 1.9483 | 7 | Tucson AZ | 0.9655
8 | Saginaw MI | 2.1424 | 8 | Grand Island NE | 0.9791
9 | Sioux Falls SD | 2.2203 | 9 | Sioux Falls SD | 1.0794
10 | Leavenworth KS | 2.5168 | 10 | Roseburg OR | 1.2018
11 | Boise ID | 2.5515 | 11 | Dallas TX | 1.2132
12 | Grand Junction CO | 2.6634 | 12 | Clarksburg WV | 1.251
13 | Dallas TX | 2.6703 | 13 | Manchester NH | 1.2524
14 | Cleveland-Brecksv. OH | 2.6852 | 14 | Cleveland-Brecksv. OH | 1.2682
15 | San Diego CA | 2.6963 | 15 | Gainesville FL | 1.2938
16 | Grand Island NE | 2.7528 | 16 | Lexington-Leestown KY | 1.3184
17 | Manchester NH | 2.8458 | 17 | Oklahoma City OK | 1.3936
18 | Clarksburg WV | 2.913 | 18 | Grand Junction CO | 1.3944
19 | Spokane WA | 2.914 | 19 | Iron Mountain MI | 1.4493
20 | Albuquerque NM | 2.9494 | 20 | Topeka KS | 1.4573

Effect of number of VANHs in a peer group on distance to furthest peer:

Figure 1 shows the distribution of Euclidean distances from the reference VANH to the farthest VANH for peer groupings of different sizes (5–40) for the 130 VANH peer groupings. Using a peer group size of 5, for example, Figure 1 shows a concentration of Euclidean distances from reference VANH to farthest peer at around 0.50, with a long right-skewed tail. The concentration of Euclidean distances from reference to farthest peer moves to the right as the number of peers increases, but not quickly: the distribution for peer groupings with 35 members, for example, peaks at a distance only about twice that of the groups with 5 members.

Figure 1: Distribution of distances from reference nursing home to farthest neighbor in the peer group; calculated for the 130 peer groups, with membership composed of 5–40 nursing homes in 5-unit increments.


Comparison to traditional cluster analysis:

Table 3 shows a comparison of the nearest neighbor peer groupings to peer groupings created by traditional cluster analysis. Because these methodologies are fundamentally different, some of the comparisons are constrained by the methodology. The nearest neighbor methodology produces one peer group for each unique VANH, whereas in traditional cluster analysis the number of groups is determined by the analysis parameters. Also, in nearest neighbor groupings, the investigator chooses the number of facilities in a group, whereas the cluster analysis itself determines the number of facilities in each grouping, although this can be manipulated by changing analysis parameters.

Table 3:

Comparison of peer groupings from traditional cluster analysis and nearest neighbor analysis; including summary measures for the RSS of Euclidean distances (measure of diffusion).

Measure | Cluster analysis* | Nearest neighbor†
Total number of groups | 8 (mutually exclusive) | 130 (non-mutually exclusive)
Number of peers per group: range (mean, median) | 8–31 (16.25, 13.5) | Discretionary
RSS across peer groupings | 8-member: 6.57 | 5-member: 0.65–189.58
 | 10-member: 7.12 | 10-member: 1.19–290.24
 | 11-member: 9.20 | 15-member: 1.71–375.04
 | 12-member: 6.07 | 20-member: 2.28–449.75
 | 15-member: 8.17 | 25-member: 2.84–528.17
 | 20-member: 8.52 | 30-member: 3.41–606.14
 | 23-member: 12.63 | 35-member: 4.12–681.00
 | 31-member: 17.56 | 40-member: 4.98–753.65
Median RSS | 8.34 | 5-member: 2.14
 | | 10-member: 3.54
 | | 15-member: 4.78
 | | 20-member: 6.20
 | | 25-member: 7.61
 | | 30-member: 9.04
 | | 35-member: 10.44
 | | 40-member: 11.81

*For cluster analysis: number of members in the peer group and the RSS of that group.
†For nearest neighbor analysis: number of members in the peer groupings and the range of RSS across peer groups with that specified number of members.

Table 3 also shows how the RSS of distances between members compares between traditional cluster and nearest neighbor analysis. The nearest neighbor peer groups show substantial variation in diffusion at each of the different peer group sizes. For example, when we included 10 facilities in each peer grouping, the RSS ranged from 1.19 to 290.24. At each comparable peer group size, the least diffuse nearest neighbor group was substantially less diffuse than the traditional group of that size, whereas the most diffuse nearest neighbor group was dramatically more diffuse. For example, comparing the 20-member groupings, the cluster analysis RSS was 8.52, whereas the nearest neighbor groupings ranged from 2.28 to 449.75. When comparing the median RSS between the two methodologies, the three nearest neighbor groupings with the most members have greater diffusion than the cluster analysis groupings, whereas the opposite is true for the five nearest neighbor groupings with fewer members.

Discussion

In Byrne et al,1 we described a two-step process to identify peer groups for medical facilities and applied it to VA medical centers. In the present article, we show how the same technique can be applied to nursing homes. We believe that our method has strong face validity and is practical, because it is grounded in the clinical insights of health care providers, uses readily available databases, and applies methodologically sound clustering techniques. We believe that the two-step methodology described here is generalizable to a wide range of clinical and health care settings; no aspect of the process is unique to the VA or to nursing homes. However, it is important that the analyst applying this methodology take the time to determine which attributes of the health care setting are most important for grouping the facilities. It is also necessary that the health care system implementing the methodology have the data needed to adequately describe the facilities of interest. With these two steps completed, the technique can be applied to any large collection of medical facilities (e.g., outpatient clinics) in which there is sufficient diversity that they cannot accurately be compared as a homogeneous group, and can be used, for example, to compare the facilities on the basis of efficiency, cost, or another outcome measure. It can also be used to look for problem areas in quality of care, i.e., to determine whether patients are at greater risk in facilities with certain characteristics.

In this paper, we also explore the characteristics of the nearest neighbor groups that we developed and compare them to traditional cluster analysis. We find that the nearest neighbor peer groups are less diffuse than those developed with traditional cluster analysis. In addition, the distributions of Euclidean distances from reference facility to farthest peer were right-skewed across a range of peer group sizes, and the peak of the distribution did not move substantially to the right as the number of facilities in a peer group increased. If “tight-knit” clusters are a desirable trait for peer groups, then researchers and policy makers should consider a nearest neighbor approach when developing peer facility groupings.

Increasingly, health care providers are being compared based on quality, efficiency, and finances. These comparisons are being made across a wide range of health care service providers. For example, the Hospital Compare website provides information that allows consumers to directly compare performance measures across multiple hospitals.16 For such comparisons to be fair, and to secure buy-in by facility leaders and staff, providers must be compared to their peers, and careful consideration of who are truly “peers” is essential.

Summary Statement (“Take-away Points”).

Comparison of health care providing facilities can be done to assess efficiency, as a step in quality improvement or for resource allocation purposes, to name a few reasons. However:

  • Comparisons must be done across “like” facilities;

  • Groupings should be done on multiple attributes and should be compact;

  • Our methodology presents a feasible alternative to classical cluster analysis that results in peer groups of facilities with similar characteristics;

  • Methodology is not unique to the VA, and is generalizable across health care settings and organizations;

  • Results provide managers a scientific and flexible means of grouping their facilities for comparison purposes;

  • This will allow for improved quality, comparison of efficiency, and better resource allocation, which will strengthen health care reform.

Acknowledgments

We would like to sincerely thank the members of our Expert Panel, who provided background information and professional perspective on VA nursing home care.

• Ursula Braun, MD, Michael E. DeBakey VA Medical Center (MEDVAMC) Geriatrician

• John Culberson, MD, MEDVAMC Extended Care Line Physician

• Beulah Hadrick, RN, MEDVAMC Extended Care Line Executive

• Kim House, MD, Atlanta VAMC Medical Director, NHCU

• Ted Johnson, MD, Atlanta VAMC Director, Geriatric and Extended Care

• William Korchik, MD, Medical Director, VISN 23 Extended Care and Rehabilitation and Dwight Nelson, MSW, EC&R Administrative Officer

• Mark Kunik, MD, MPH, MEDVAMC Geropsychiatrist

• Robert Morgan, PhD, Researcher on mental health care in nursing homes

• Aanand Naik, MEDVAMC Geriatrician

• Pat Parmelee, PhD, Emory University and VISN 7 researcher in nursing home quality

• Bettye Rose Connell, PhD, Atlanta VAMC Rehabilitation Research and Development Center

• Lynn Snow, PhD, VA Birmingham, psychologist researcher with long-term care specialty

Funding Information:

This research was conducted with support from a VA contract, (Project XVA 33-102) at the request of Veterans Integrated Service Networks 1, 12 and 23. This work was supported in part by a VA HSR&D Center of Excellence award (HFP90-020), and the first author was a National Cancer Institute career development awardee (K07 CA101812) at the time that this research was conducted.

APPENDIX

Appendix 1: Data Sources and Variable Definitions

Data Sources

We obtained data on facility, patient, and community characteristics from the following sources:

  • VA National Patient Care Database (Extended Care Inpatient Files)

  • VISN Service Support Center (VSSC) and Bed Control Data / Workload Data

  • DxCG Relative Risk Scores (Diagnostic Cost Group method of risk adjustment)

  • Minimum Data Set Survey (VA nursing homes)

  • VA Office of Academic Affiliations Residents List

Definitions of Final Measures

Size
  • 1

    Nursing home operating beds. The number of CLC operating beds was obtained from Bed Control Data File. (For two facilities, we obtained bed data directly from the site administrators.)

Academic mission
  • 2

    Geriatric Research, Education and Clinical Center (GRECC) status (yes/no). GRECC status is a binary indicator noting the presence of a specialized GRECC program. The GRECCs are “centers of geriatric excellence” designed for the advancement and integration of research, education, and clinical achievements in geriatrics and gerontology into the total VA healthcare system.

  • 3

    Number of geriatric residents greater than 1.0 (yes/no). A binary indicator noting the CLC has more than 1.0 geriatric residents/fellows.

Workload/case-mix
  • 4

    Percent census/total stays. This measure serves as an indicator of turnover in the facility.

    = FY09 inpatients on 9/30/09 (census stays) / (FY09 discharges + FY09 census stays)

  • 5

    Percent days of care in short skilled treating specialties. The percentage of all FY09 days of care (discharges + census stays) delivered in short skilled treating specialties.

  • 6

    Percent long-stay dementia and/or psych days. The percentage of all FY09 days of care (discharges + census stays) delivered in long stay dementia and/or psychiatric care treating specialties. The expert panel suggested inclusion of this variable, as the acuity of these patients may not be properly represented in the Case Mix Index from the Minimum Data Set.

  • 7
    Mean Case Mix Index (CMI). This measure reflects a combination of patient characteristics and treatment intensity. To develop a facility level metric, we identified all full assessments in the FY09 MDS data as those with:
    • Primary reason for assessment is: 1) admission assessment (required by day 14), 2) annual assessment, or 3) significant change in status assessment (two or more ADL areas); OR
    • Primary reason for assessment is ‘none of the above’ and has been given a Medicare 5 day assessment.

For each CLC we calculated the mean CMI value for each month of FY09 and then averaged across all months.

  • 10

    Percent RN hours /total nursing hours. This measure is meant to capture patient complexity and severity of case-mix for the CLC.

    = Total RN hours / (RN + LVN + Nurse Assistant hours)

For each CLC we calculated the mean percent RN hours for each month of FY09 and then averaged across all months.

Patient population
  • 11

    Mean Activities of Daily Living (ADL) index. This measure is intended to reflect the functional status of patients being served by a CLC. To develop a facility level metric, we identified all full assessments as with the Mean CMI measure (see above). For each CLC we calculated the mean ADL value for each month of FY09 and then averaged across all months.

  • 12

    Average Patient Relative Risk Score (RRS). This is the average of a facility’s patient-level Relative Risk Scores, a measure of overall patient illness burden. The individual patient’s DxCG® RRS is the ratio of predicted costs based on diagnoses to average observed costs.

We derived diagnostic cost group (DCG)-based RRSs from the Houston HSR&D VA DCG model for users in priority groups 1–8, then calculated and scaled these RRSs across the VA to yield a population mean of 1. We used DxCG® RiskSmart 2.3 software.

The average RRS at a facility is the sum of RRSs of all patients at that facility divided by the number of unique patients at the facility in FY 2009. A RRS value of 1.00 reflects an average illness burden of the national VA population.

  • 13

    Percent of FY09 unique patients with “Millennium Bill” status. This variable is the percent of unique patients at the CLC with service connection status >= 70%.

Community environment
  • 14

    Urban status (yes/no). This measure identifies a CLC as “urban,” where urban indicates the CLC is located in a county within a metropolitan area with a population of 250,000 or greater in 2003. This threshold was determined by applying the USDA Economic Research Service Rural-Urban Continuum Codes, with codes 1 and 2 classified as urban.

  • 15

    Ratio of Medicaid Home and Community Based Services (HCBS) waiver participants/total adults in poverty. We identified the number of Medicaid HCBS waiver participants for each state, excluding those for children and mental retardation/developmental delays, and divided by the total number of adults in poverty for that state. This is a measure of alternative resources for low-income, frail Veterans (Kaiser Commission on Medicaid and the Uninsured, 2009).

Footnotes

Conflict of Interest: There are no conflicts of interest.

References

  • 1. Byrne M, Daw C, Nelson H, Urech T, Pietz K, Petersen LA. A method to develop health care peer groups for quality and financial comparisons across hospitals. Health Serv Res. 2009;44(2 Pt 1):577–592.
  • 2. Everitt BS, Landau S, Leese M, Stahl D. Cluster Analysis. 5th ed. John Wiley & Sons, Ltd; 2011.
  • 3. Punj G, Stewart DW. Cluster analysis in marketing research: review and suggestions for application. J Mark Res. 1983;20:134–148.
  • 4. Grabowski DC, Angelelli JJ, Mor V. Medicaid payment and risk-adjusted nursing home quality measures. Health Aff. 2004;23(5):243–252.
  • 5. Rosen A, Wu J, Chang BH, et al. Risk adjustment for measuring health outcomes: an application in VA long term care. Am J Med Qual. 2001;16:118–127.
  • 6. Zinn J, Spector W, Hsieh L, Mukamel DB. Do trends in the reporting of quality measures on the nursing home compare web site differ by nursing home characteristics? Gerontologist. 2005;45(6):720–730.
  • 7. Rask K, Parmelee PA, Taylor JA, et al. Implementation and evaluation of a nursing home fall management program. J Am Geriatr Soc. 2007;55(3):342–349.
  • 8. Hallenbeck J, Hickey E, Czarnowski E, Lehner L, Periyakoil VS. Quality of care in a Veterans Affairs’ nursing home-based hospice unit. J Palliat Med. 2007;10(1):127–135.
  • 9. Bjorkgren MA, Fries BE. Applying RUG-III for reimbursement of nursing facility care. Int J Healthcare Tech Manag. 2006;7(1/2):82–99.
  • 10. Baier RR, Gifford DR, Patry G, et al. Ameliorating pain in nursing homes: a collaborative quality-improvement project. J Am Geriatr Soc. 2004;52(12):1988–1995.
  • 11. Parmelee PA. Quality improvement in nursing homes: the elephants in the room. J Am Geriatr Soc. 2005;52(12):2138–2140.
  • 12. McCarthy JF, Blow FC, Kales HC. Disruptive behaviors in Veterans Affairs nursing home residents: how different are residents with serious mental illness? J Am Geriatr Soc. 2004;52(12):2031–2038.
  • 13. Schonfeld L, King-Kallimanis B, Brown LM, et al. Wanderers with cognitive impairment in Department of Veterans Affairs nursing home care units. J Am Geriatr Soc. 2007;55(5):692–699.
  • 14. Grabowski DC, Stewart KA, Broderick SM, Coots LA. Predictors of nursing home hospitalization: a review of the literature. Med Care Res Rev. 2008;65(1):3–39.
  • 15. Mukamel DB, Spector WD. Nursing home costs and risk-adjusted outcome measures of quality. Med Care. 2000;38(1):78–89.
  • 16. Hospital Compare. www.medicare.gov/hospitalcompare/. Accessed December 11, 2012.
