Author manuscript; available in PMC: 2015 Dec 17.
Published in final edited form as: Med Care. 2014 Jul;52(7):619–625. doi: 10.1097/MLR.0000000000000144

Are Comparisons of Patient Experiences Across Hospitals Fair? A Study in Veterans Health Administration Hospitals

Paul D Cleary*, Mark Meterko†, Steven M Wright§, Alan M Zaslavsky
PMCID: PMC4682878  NIHMSID: NIHMS742902  PMID: 24926709

Abstract

Background

Surveys are increasingly used to assess patient experiences with health care. Comparisons of hospital scores based on patient experience surveys should be adjusted for patient characteristics that might affect survey results. Such characteristics are commonly drawn from patient surveys that collect little, if any, clinical information. Consequently, some hospitals, especially those treating particularly complex patients, have been concerned that standard adjustment methods do not adequately reflect the challenges of treating their patients.

Objectives

To compare scores for different types of hospitals after making adjustments using only survey-reported patient characteristics and using more complete clinical and hospital information.

Research Design

We used clinical and survey data from a national sample of 1858 veterans hospitalized for an initial acute myocardial infarction (AMI) in a Department of Veterans Affairs (VA) medical center during fiscal years 2003 and 2004. We used VA administrative data to characterize hospitals. The survey asked patients about their experiences with hospital care. The clinical data included 14 measures abstracted from medical records that are predictive of survival after an AMI.

Results

Comparisons of scores across hospitals adjusted only for patient-reported health status and sociodemographic characteristics were similar to those that also adjusted for patient clinical characteristics; the Spearman rank-order correlations between the 2 sets of adjusted scores were >0.97 across 9 dimensions of inpatient experience.

Conclusions

This study did not support concerns that measures of patient care experiences are unfair because commonly used models do not adjust adequately for potentially confounding patient clinical characteristics.

Keywords: patient experiences, case-mix, hospitals


Increasingly, consumers, providers, and purchasers are interested in using patient surveys to assess patient experiences with medical care.1–4 In 1995, the Agency for Healthcare Research and Quality launched what is now referred to as the Consumer Assessment of Healthcare Providers and Systems (CAHPS) project to develop standardized surveys to assess patient experiences,5–9 initially focusing on ambulatory care. The Centers for Medicare & Medicaid Services (CMS) later funded the CAHPS consortium to develop an instrument for assessment of patient experiences at acute care hospitals, Hospital CAHPS (HCAHPS).10,11 CMS now administers the HCAHPS survey nationally using a standard protocol.12 Since 2008, HCAHPS results have been publicly reported quarterly.12,13 The Patient Protection and Affordable Care Act of 2010 established Medicare's Hospital Value-Based Purchasing (HVBP) program. In fiscal year 2013, 30% of participating hospitals' HVBP payment was determined by their performance on the HCAHPS survey.12,14

As with any quality indicator,15,16 comparisons of patient experience scores across physicians, health plans, or hospitals may be affected by patient characteristics unrelated to the quality of care provided, for at least 2 reasons.17 First, some processes of care are likely to vary with patient characteristics.17,18 Second, patients' characteristics can influence how they respond to survey questions. Varying distributions of such characteristics across hospitals might affect assessments of care, making comparisons among hospitals misleading and giving hospitals an incentive to attract patients likely to give higher ratings and avoid those most likely to report problems.

To address these issues, standard CAHPS analyses use statistical models to predict what each hospital's ratings would have been for a standard patient population, thereby removing from comparisons the predictable effects of differences in patient characteristics that vary across hospitals. These analyses typically rely on measures of patient characteristics available from the survey or administrative data.17,19–22 Some hospital administrators and clinicians have been concerned that their facility is disadvantaged by this limitation.23,24 One frequently voiced concern is that large, academic health centers have more “complex” patients that are harder to treat and that available case-mix variables do not adequately capture that complexity.

Initial analyses of HCAHPS data used both survey data and clinical information, specifically the hospital service line (medical, surgical, maternity care) and diagnostic groups based on the Diagnostic-Related Group code.19 Of the 20 medical conditions modeled, only having a circulatory disorder was among the most important case-mix variables, and it affected scores only for those treated on a surgical service. Although the overall impact of case-mix adjustment is typically modest,17,19–22 the rankings of some hospitals may be substantially affected.17 CMS now uses similar, quarterly estimated, adjustment models with patient characteristics from the survey and hospital administrative data, but not the more specific Diagnostic-Related Group–based variables, when making hospital comparisons (http://www.hcahpsonline.org/modeadjustment.aspx).

To assess the incremental effect of adjusting for clinical variables in addition to the patient characteristics typically collected on surveys, we estimated and compared several models that used measures of patient-centered care (PCC) as dependent variables and, as independent variables, patient sociodemographic characteristics plus detailed clinical and care-process measures for patients treated for a heart attack.

Methods

Sample

Data for patients treated in VA hospitals for an AMI were obtained from the VA's External Peer Review Program (EPRP), administered by the Office of Analytics and Business Intelligence, Office of Performance Measurement (previously the VA Office of Quality and Performance). Those data are part of a national performance measurement program that collects clinical data, using chart audit by trained abstractors, to assess the quality of care for all AMI inpatients. In a separate program, the VA also uses a survey, the Survey of Healthcare Experiences of Patients (SHEP), to assess the experiences of hospitalized patients. The VA sends a SHEP survey to a random sample of patients discharged to the community from each acute care VA hospital from 6 major services: medicine, surgery, psychiatry, rehabilitation, neurology, and spinal cord injury.

During fiscal years 2003 and 2004, all AMI patients selected for the EPRP study were also included in the SHEP sample and were mailed a survey 4–6 weeks after the end of the month in which they were discharged. Of the 2815 AMI patients, 1858 (66%) responded. Thus, the quality improvement database contained measures of both technical quality and perceptions of care for AMI patients. For patients with multiple AMI events, we selected the first (index) hospitalization within the 2-year period.

Clinical Condition, Medical History, and Admission Process

The EPRP chart abstraction coded data on 14 clinical measures that previous research25 has shown to be predictive of 30-day mortality among veterans with AMI. All these variables were considered in the present study as potential controls for severity of illness and aspects of admission that might affect the process of care (Table 1). Patients' date of birth and sex were obtained from VA administrative data at the time of sampling.

Table 1. Patient Demographic, Clinical, and Admission Process Characteristics.

Characteristics (n = 1858) Parameters* Missing (%)
Demographic characteristics (Source: SHEP Patient Survey)†
 Age at admission [mean (SD)] (y) 68.0 (11.1) 10.4
 Sex (male %) 98.2 5.8
 Education (%) 3.4
  High school (HS) or less 57.7
  Some college or post HS 28.4
  Four years college or more 13.9
 Racial background (%) 5.1
  White 85.9
  African American 10.4
  Other minority 3.6
 In general, would you say your health is (%) 3.3
  Poor 13.3
  Fair 35.8
  Good 33.5
  Very good 14.5
  Excellent 3.0
Clinical condition and history (Source: EPRP Chart Abstraction)
 History of cancer (yes %) 6.1 7.7
 History of lipid disorders (yes %)‡ 69.9 7.6
 History of CHF (yes %) 32.8 0
 History of dementia (yes %) 7.2 0
 Stroke within past 5 y (yes %)§ 2.2 0
 Highest serum creatinine [mean (SD)] 1.55 (1.26) 11.7
 First troponin level was negative (yes %)§ 31.9 0
 Heart rate upon hospital arrival [mean (SD)] 84.1 (22.0) 27.3
 Systolic BP upon hospital arrival [mean (SD)] 145.4 (27.5) 27.3
 Pain symptoms (%) 27.3
  Chest pain 39.5
  Chest pressure 16.2
  Radiating pain 26.9
Admission process characteristics (Source: EPRP Chart Abstraction)
 Night admission (yes %)∥ 25.1 2.1
 Weekend admission (yes %)¶ 32.1 2.2
 Transfer from ED of another hospital (yes %) 0.9 3.2
 In hospital already when had AMI (yes %) 2.9 0.2
* On the basis of total nonmissing cases.
† Except for age and sex, which in this study were obtained from the EPRP chart abstraction.
‡ And/or on lipid-lowering medications before hospitalization.
§ Available for FY04 cases only.
∥ Between 11:00 pm and 8:00 am, any day.
¶ Between 5:00 pm Friday and 7:00 am Monday.

Patient Evaluations of Care

The inpatient SHEP survey includes a modified version of the PCC questionnaire developed by the Picker Institute3,26 and several questions about patient sociodemographic characteristics. Although the SHEP and CAHPS surveys differ in several respects, such as response scales and the specific experiences asked about, both were developed following similar principles27,28 including a focus on reports about—versus evaluations of—care.29,30 Thus, the SHEP is an appropriate survey for examining factors that might affect CAHPS responses.

The Picker PCC component of the SHEP survey consists of 55 questions asking patients to evaluate 9 domains of their inpatient experience: access, courtesy, information about their illness and care, coordination of care, attention to patient preferences, emotional support, family involvement, physical comfort, and preparation for transition to outpatient care. Response categories were: “yes, always,” “yes, sometimes,” and “no.” We calculated the proportion of questions that were answered “yes, always” for each domain. We also calculated a total PCC score as the equally weighted average of the 9 domain scores. For presentation, scores were multiplied by 100 and can be read as the percentage of responses indicating favorable perceptions of care (Table 2). The Picker survey domain scores were conceptualized as indices of substantive domains, rather than as measures of a purely unidimensional construct, but nevertheless had moderate to excellent estimated reliabilities in the study sample [coefficient α = access (0.53); coordination (0.58); courtesy (0.66); information (0.76); emotional support (0.83); patient preferences (0.67); family involvement (0.73); physical comfort (0.62); transition (0.85); total score (0.91)]. The SHEP survey also asked respondents to report their education, marital status, employment status, race, and total household income for the previous year.

Table 2. Components of the Patient-centered Care (PCC) Index: Patient-level Basic Descriptive Statistics (n = 1858).

Components Mean SD
Access to providers 79.9 25.7
Courtesy 90.8 19.2
Information about illness and care 69.3 32.7
Coordination of care 79.9 24.1
Attention to patient preferences 74.7 29.4
Emotional support 66.4 35.6
Family involvement 72.5 35.1
Physical comfort 85.9 24.4
Preparation for transition to outpatient 64.6 38.5
Total score 76.52 22.64
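
To make the scoring rule concrete, here is a minimal Python sketch. It is illustrative only, not the authors' code: the item names, the 2-item domains, and the choice to divide by the number of answered items are assumptions.

```python
import pandas as pd

# Placeholder item-to-domain map; the real Picker PCC instrument has 55
# items across 9 domains, and these names are invented for illustration.
DOMAINS = {
    "access": ["access_q1", "access_q2"],
    "courtesy": ["courtesy_q1", "courtesy_q2"],
    # ...the remaining 7 domains would map their items here
}

def pcc_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Per-patient domain scores (0-100) and the total PCC index."""
    scores = pd.DataFrame(index=responses.index)
    for domain, items in DOMAINS.items():
        answered = responses[items].notna().sum(axis=1)
        favorable = responses[items].eq("yes, always").sum(axis=1)
        # Percent of the domain's answered items marked "yes, always"
        scores[domain] = 100 * favorable / answered
    # Total PCC index: equally weighted average of the domain scores
    scores["total_pcc"] = scores[list(DOMAINS)].mean(axis=1)
    return scores

demo = pd.DataFrame({
    "access_q1": ["yes, always", "no"],
    "access_q2": ["yes, always", "yes, sometimes"],
    "courtesy_q1": ["yes, always", "yes, always"],
    "courtesy_q2": ["yes, sometimes", None],
})
print(pcc_scores(demo))  # patient 2's courtesy score uses 1 answered item
```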

Hospital Characteristics

We used VA administrative data to classify hospitals by size, dichotomized as small (100 beds or fewer) or large (>100 beds), urban location, and teaching status (based on membership in the Council of Teaching Hospitals and Health Systems of the Association of American Medical Colleges). We also grouped facilities according to their complexity level (5 ordinal categories). These complexity levels are used within the VA to identify comparable hospitals for comparing clinical performance and to set pay levels for directors and other facility leadership. The complexity level is a composite index of 7 factors, including patient volume, level of intensive care provided, patient acuity, number and distribution of residency training slots, total research funding managed by the facility, and number of physician specialists.31,32 For some analyses, hospitals were categorized as complex (complexity level = 1A, 1B, or 1C) or not complex (complexity level of 2 or 3). Table 3 shows the distributions of hospitals and patients by these characteristics.

Table 3. Raw and Adjusted Picker Patient-Centered Care (PCC) Summary Scores Stratified by Selected Hospital Characteristics.

PCC Means, Adjusted for

Hospital Characteristic (No. Hospitals = 120) No. Patients (n = 1858) Raw PCC Means* Model 1: Sociodemographic Characteristics Only† Model 2: Clinical Factors Only‡ Model 3: Admission Process Only§ Model 4: All Covariates∥
Size
 Large (76) 1520 76.34 76.27 76.33 76.35 76.28
 Small (44) 338 77.34 77.63 77.37 77.29 77.58
 Difference L–S −1.00 −1.36 −1.04 −0.94 −1.30
P for difference 0.4926 0.2971 0.4460 0.4879 0.3195
Urban
 Yes (100) 1762 76.40 76.40 76.40 76.40 76.41
 No (20) 96 78.73 78.63 78.71 78.69 78.60
 Difference Y–N −2.33 −2.23 −2.31 −2.29 −2.19
P for difference 0.3253 0.3392 0.3252 0.3349 0.3525
Teaching
 Yes (67) 1356 76.08 76.16 76.04 76.08 76.09
 No (53) 502 77.71 77.50 77.86 77.73 77.68
 Difference Y–N −1.63 −1.34 −1.82 −1.65 −1.59
P for difference 0.1665 0.1952 0.1181 0.1594 0.1554
Complexity
 1A High (34) 981 76.05 75.86 76.09 76.02 75.87
 1B (16) 278 76.74 77.37 76.85 76.85 77.41
 1C (16) 264 74.78 75.20 74.62 74.82 75.12
 2 (31) 203 80.72 79.83 80.64 80.79 79.83
 3 Low (23) 132 76.57 77.19 76.64 76.49 77.14
 Difference: (1A, 1B, 1C)–(2, 3) −3.13 −2.73 −3.11 −3.14 −2.73
P for difference 0.0218 0.0374 0.0228 0.0213 0.0384
* Higher scores on the Picker PCC summary scale indicate more favorable perceptions of care.
† Model 1 includes age at admission, education (3-category ordinal variable), minority racial background (y/n), and general health status (5-point ordinal variable).
‡ Model 2 includes history of cancer (y/n), history of CHF (y/n), history of dementia (y/n), highest serum creatinine level, heart rate at admission, systolic blood pressure at admission, and pain symptom count (1–3; chest pain, chest pressure, radiating pain).
§ Model 3 consisted of 1 variable: whether the subject was already an inpatient when the AMI occurred (y/n).
∥ Model 4 includes all covariates from models 1–3.

Analyses

To assess potential bias due to survey nonresponse, we compared our final sample of AMI patients who returned a survey (n = 1858) with those who did not (n = 957) on 4 variables available for both groups: sex (by χ2 test) and age, length of hospital stay, and technical quality of care (by t tests).
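
For illustration, these nonresponse checks take roughly the following form; the counts and distributions below are synthetic stand-ins, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Chi-square test for sex: rows are respondents / nonrespondents,
# columns are male / female (synthetic counts, not the study data).
sex_table = np.array([[1825, 33],
                      [935, 22]])
chi2, p_sex, dof, expected = stats.chi2_contingency(sex_table)

# t test for a continuous variable, for example age at admission.
age_resp = rng.normal(67.6, 11.0, size=1858)     # respondents (synthetic)
age_nonresp = rng.normal(69.7, 11.0, size=957)   # nonrespondents (synthetic)
t, p_age = stats.ttest_ind(age_resp, age_nonresp)
print(f"sex: P = {p_sex:.2f}; age: t = {t:.2f}, P = {p_age:.4f}")
```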

The 1858 patients in the present study received their AMI care at 120 different VA hospitals. Our sample comprised all patients meeting our criteria, irrespective of the hospital in which they were treated; it was therefore not a clustered sample. Nevertheless, we assessed the generalizability of our results by computing the potential impact on standard errors of treating our data as a clustered sample from a hypothetical larger population of hospitals. We computed the intraclass correlation coefficient of the Picker scores using the between-hospital and within-hospital variance estimates from an unconditional means hierarchical linear model. We also reestimated the main models taking clustering into account.
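
A minimal sketch of this ICC calculation, on synthetic data and with assumed column names (the original analysis was conducted in SAS, not Python):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Synthetic stand-in for the analysis file: one row per patient, with a
# hospital identifier and a total PCC score (column names are assumed).
df = pd.DataFrame({
    "hospital_id": rng.integers(0, 120, size=1858),
    "total_pcc": rng.normal(76.5, 22.6, size=1858),
})

# Unconditional means (intercept-only) model with a random intercept
# per hospital; the ICC is the between-hospital share of total variance.
fit = smf.mixedlm("total_pcc ~ 1", data=df, groups=df["hospital_id"]).fit()
between = float(fit.cov_re.iloc[0, 0])  # between-hospital variance
within = fit.scale                      # within-hospital (residual) variance
icc = between / (between + within)      # the paper reports ICC = 0.0073
```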

We multiply imputed all missing values in all predictors using model-based procedures, making possible inferences using all available data while properly reflecting the uncertainty in the imputed values.33 We generated 5 sets of imputed values34 using the Markov Chain Monte Carlo method of PROC MI in SAS 9.1 and combined analyses of the 5 completed datasets using PROC MIANALYZE to produce point estimates and adjusted standard errors.
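
A rough Python analogue of this workflow is sketched below. It is not the authors' code: statsmodels' MICE imputes by chained equations rather than the MCMC method of PROC MI, and the data frame, variable names, and analysis model are invented for illustration. The pooling step applies Rubin's combining rules, as PROC MIANALYZE does.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(1)
# Invented analysis file with missing values in one predictor.
df = pd.DataFrame({
    "total_pcc": rng.normal(76.5, 22.6, size=500),
    "age": rng.normal(68.0, 11.1, size=500),
    "creatinine": rng.normal(1.55, 1.26, size=500),
})
df.loc[rng.random(500) < 0.12, "creatinine"] = np.nan

imp_data = mice.MICEData(df)  # chained-equations imputation engine
analysis = mice.MICE("total_pcc ~ age + creatinine", sm.OLS, imp_data)
# Fit the analysis model on 5 completed datasets and pool the point
# estimates and standard errors with Rubin's combining rules.
pooled = analysis.fit(n_burnin=10, n_imputations=5)
print(pooled.summary())
```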

Our main analyses assessed the differential effects of several case-mix model specifications on hospital scores grouped by each of the 4 hospital characteristics described above. First, to define a baseline, we calculated the unadjusted mean total PCC index score for patients cared for in the facilities that fell into each category of each hospital characteristic (for example, patients treated in small and in large facilities). Next, we estimated 4 sets of linear models that included that hospital characteristic in combination with different sets of covariates. Model 1 included variables that are available on most surveys and often used in adjustment models: age at admission, education (3-category ordinal variable), minority racial background (y/n), and self-reported general health status (excellent, very good, good, fair, or poor). Model 2 included only clinical factors: history of cancer (y/n), history of CHF (y/n), history of dementia (y/n), highest serum creatinine level, heart rate at admission, systolic blood pressure at admission, and pain symptom count (1–3; chest pain, chest pressure, radiating pain). Model 3 included only whether the subject was already an inpatient when the AMI occurred (y/n). Model 4 included all the covariates from models 1 to 3. For each model, we calculated the adjusted total PCC index means for the various categories of hospitals and the differences between those category means, which could then be compared with the results based on the unadjusted scores. Finally, we computed the Spearman rank-order correlations between the model 1 hospital means (adjusted for survey-based patient self-reported demographics and health status) and the model 4 hospital means (adjusted for all covariates) to assess the impact on hospital performance rankings of adding clinical and hospital care process measures to the patient-reported demographic characteristics and health status in the adjustment. This was done for hospital means on each of the 9 dimensions of patient experience as well as for the PCC index score. Institutional Review Boards at the Veterans Administration and Yale University approved the study.
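
The sketch below illustrates the core of this comparison on synthetic data: adjusted hospital means under a reduced (model 1-style) and a fuller (model 4-style) covariate set, followed by the Spearman correlation between the two sets of means. The residual-based adjustment shown here is one common approach and is an assumption; the authors' exact models may differ, and all variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
# Synthetic patient-level file (column names are assumptions).
df = pd.DataFrame({
    "hospital_id": rng.integers(0, 120, size=1858),
    "age": rng.normal(68.0, 11.1, size=1858),
    "health_status": rng.integers(1, 6, size=1858).astype(float),
    "creatinine": rng.normal(1.55, 1.26, size=1858),
})
df["total_pcc"] = 80 - 0.1 * df["age"] + rng.normal(0, 22, size=1858)

def adjusted_hospital_means(df, covariates):
    """Grand mean plus each hospital's mean residual from a patient-level
    regression on the covariates (one common style of case-mix adjustment;
    not necessarily the exact model the authors estimated)."""
    fit = smf.ols("total_pcc ~ " + " + ".join(covariates), data=df).fit()
    resid = df["total_pcc"] - fit.fittedvalues
    return df["total_pcc"].mean() + resid.groupby(df["hospital_id"]).mean()

m1 = adjusted_hospital_means(df, ["age", "health_status"])
m4 = adjusted_hospital_means(df, ["age", "health_status", "creatinine"])
rho, _ = spearmanr(m1, m4)  # the paper reports rho > 0.97 on the real data
```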

Results

There were no significant differences between the AMI subjects who did and did not return a SHEP survey on sex, length of stay, or technical quality of care; P-values ranged from 0.12 (sex) to 0.85 (length of stay). However, the SHEP survey respondents were, on average, about 2 years younger (67.6 y) than those who did not return a survey (69.7 y; t = 4.42; P < 0.001). The intraclass correlation coefficient estimated from the unconditional means hierarchical linear model was only 0.0073; therefore, we did not take clustering by hospital into account in the models presented.

The unadjusted PCC means in Table 3 show that patients in smaller, nonurban, and nonteaching hospitals tended to report better experiences with care than patients at larger, urban, and teaching hospitals, although these differences were not statistically significant. The PCC scores for patients in less complex hospitals (complexity 2 and 3) were significantly higher (P < 0.05) than those for patients in more complex hospitals (complexity 1A, 1B, 1C). Adjusting for patients' sociodemographic characteristics (model 1) had little impact on the contrasts, and the difference between complex and noncomplex hospitals remained significant (P < 0.05). The differences in adjusted scores produced by models 2, 3, and 4 were generally similar. For the contrast between teaching and nonteaching hospitals, the differences between adjusted scores generated by model 4 were larger than those generated by model 1 and, therefore, were more unfavorable to the teaching hospitals.

Summary scores may mask differences in component scores; therefore, we examined comparable models for each Picker dimension score (data not shown). Although the total score was not significantly different between teaching and nonteaching hospitals, nonteaching hospitals had significantly better scores for access (P < 0.05), coordination (P < 0.01), and respect for preferences (P < 0.05). Adjusting the scores for sociodemographic characteristics reduced these differences somewhat, but the differences were still significant. Adjusting for all covariates increased the differences somewhat for each dimension score and all contrasts were still significant. When we compared complex and noncomplex hospitals on the specific dimension scores, lower complexity hospitals had significantly higher (more favorable) scores than high-complexity hospitals on the access (P < 0.01), coordination (P < 0.01), and respect for personal preferences (P < 0.05) dimensions. These differences persisted after correcting for sociodemographic characteristics. The contrasts were comparable after adjusting for all the covariates (model 4), although the difference in respect for personal preferences was nonsignificant (P = 0.06).

We also tested whether the differences by hospital category under model 1 differed significantly from those under model 4 by creating a new dataset in which each case was entered twice: once with values for the variables in model 1 and once with values for the variables in model 4, along with dummies for hospital category. PROC SURVEYMEANS (SAS 9.1) with clustering by subject provided an appropriate standard error and significance test for the difference between the hospital category differences in the 2 models. None of the contrasts were statistically significant.
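
One way to set up such a stacked, patient-clustered test in Python is sketched below; the original analysis used PROC SURVEYMEANS, and all data and variable names here are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1858
# Invented per-patient adjusted values under the 2 case-mix models.
df = pd.DataFrame({
    "patient_id": np.arange(n),
    "complex_hospital": rng.integers(0, 2, size=n),
})
df["adj_m1"] = rng.normal(76.5, 22.6, size=n)
df["adj_m4"] = df["adj_m1"] + rng.normal(0, 2, size=n)

# Each case enters twice, once per model; clustering the standard errors
# on the patient accounts for the duplication.
stacked = pd.concat([
    df.assign(model="m1", adj=df["adj_m1"]),
    df.assign(model="m4", adj=df["adj_m4"]),
], ignore_index=True)
fit = smf.ols("adj ~ C(model) * C(complex_hospital)", data=stacked).fit(
    cov_type="cluster", cov_kwds={"groups": stacked["patient_id"]})
# The interaction term estimates how much the complex-vs-noncomplex
# difference changes between the 2 adjustment models.
```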

The Spearman rank-order correlations between model 1-adjusted hospital means and model 4-adjusted hospital means exceeded 0.97 for all 9 individual dimensions of inpatient experience as well as for the total PCC index score. These correlations were computed twice, once using all hospitals (n = 67) with ≥10 study patients and once using only those hospitals (n = 24) with ≥25 study patients; the results were virtually the same for both subsamples (data not shown).

To understand better why adding clinical variables to the case-mix model did not have a larger effect on the contrasts reported, we examined the values for teaching and nonteaching and complex and noncomplex hospitals. Means for none of the clinical variables used were significantly different between teaching and nonteaching hospitals and only dementia differed significantly between high-complexity and low-complexity hospitals. Low-complexity hospitals had more study patients with a history of dementia, although the effect size was very small (ϕ = 0.05).

To test the effect of adjusting for clustering, we reestimated the models in Table 3 including a random effect for hospitals. As expected, the standard errors increased, but for all models and all of the stratifying variables (facility size, urban location, teaching status, and complexity), the direction of the group differences and their significance were the same, with 2 exceptions: the P-value for the difference between more and less complex facilities changed from 0.04 to 0.08 under both models 1 and 4. In the comparisons wherein the significance of results did not change, the differences remained nonsignificant and were smaller when clustering was taken into account. Thus, our conclusions would be the same had we accounted for clustering.

Discussion

It is now well known that there are large regional and organizational variations in many care process and technical quality measures35–37 and in patient-reported care experiences.8,38–45 Although the possible confounding effects of patient characteristics have been a concern for a variety of measurement systems,15 questions about the fairness of using patient reports have sometimes been raised by teaching hospitals,23 which often have lower scores, on average, than nonteaching hospitals.37

In this study, we used standardized measures of patient clinical characteristics in adjustment models to see whether they explained differences in PCC scores not explained by the simple patient characteristics commonly collected on patient surveys. We found that patient clinical characteristics had relatively little impact on comparisons across hospitals. This may be because a simple self-rating of global health, such as the measure used in the Picker and CAHPS surveys, captures well the influence of many of the characteristics of concern. For example, the self-reported health rating used in these analyses had a correlation of 0.60 with the Short-Form 12 physical health scale,46 a more extensive self-assessment of health.

This study has several potential limitations. The sample was predominantly male and consisted of veterans seeking care within the VA system for an AMI. Another important limitation for testing the impact of case-mix adjustment was the relative uniformity of clinical severity across hospitals. Because we studied only 1 condition, and because the patients treated in the VA are more homogeneous than those in a broader sample of hospitals, one would expect the effect of adjusting for patient characteristics to be smaller in this study than in a larger study with a more heterogeneous set of conditions and hospitals. The coefficients for the clinical adjustors in this study were small. If this also holds in a more representative sample of conditions and patients, the adjustments would not have a large effect in such a sample either. However, if the associations were stronger for other conditions and patients, the effect of adjustment would be larger.

Although they did not differ with regard to sex, length of stay, or technical quality of care, the SHEP survey respondents in this study were about 2 years younger than those who did not return a survey. As VA patients tend to be older than patients in other hospitals, this is not a serious concern for generalizability. In general, younger patients tend to be healthier than older patients, but they also tend to report more problems with care. Thus, although nonrespondents often differ from respondents in ways not reflected by measured variables, we would expect that if nonresponse introduced any bias, it would be toward lower PCC scores in the present sample, which might reduce the effect size of adjustments in our models. It is unlikely, however, that an average age difference of 2 years would introduce a substantial bias in this regard.

Because these data were originally compiled to study the impact of patient-centered care on outcomes for patients with an AMI,47 for patients with multiple AMI events we used information from the first hospitalization within the 2-year period. Some of the patients in our study were readmitted before completing the survey triggered by their first visit; although survey instructions included a reference to the date of the initial admission, some of these patients may have based their responses in part on the subsequent admission. We do not think, however, that this would bias comparisons between different models using the same data. Finally, our modest per-hospital sample size forced us to analyze the effects of adjustment on categories of hospitals rather than individual hospitals.

Despite these potential limitations, these results suggest that traditional methods of adjusting for possible confounders of patient-reported experiences capture the main measured sources of spurious variation between hospital types, such as small and large hospitals or academic and nonacademic health centers. Although these classes of hospitals frequently treat patients with different medical complexity, fully accounting for measured clinical characteristics did not change the main comparisons of interest in this study. In future research, it would be desirable to assess these issues in a more heterogeneous group of hospitals with more heterogeneous patients.

Acknowledgments

Supported by a grant from the Picker Institute, a grant from the Department of Veterans Affairs Health Services Research and Development Service (Grant number IIR-07-244-3), and a cooperative agreement from the Agency for Healthcare Research and Quality (#U18HS016978).

Footnotes

M.M. had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

The authors declare no conflict of interest.

References

  • 1.Cleary PD. The increasing importance of patient surveys. Br Med J. 1999;319:720–721. doi: 10.1136/bmj.319.7212.720. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Cleary PD, McNeil BJ. Patient satisfaction as an indicator of quality care. Inquiry. 1988;25:25–36. [PubMed] [Google Scholar]
  • 3.Cleary PD, Edgman-Levitan S, Roberts M, et al. Patients evaluate their hospital care: a national survey. Health Aff. 1991;10:254–267. doi: 10.1377/hlthaff.10.4.254. [DOI] [PubMed] [Google Scholar]
  • 4.Goldstein E, Cleary PD, Langwell KM, et al. Medicare Managed Care CAHPS: a tool for performance improvement. Health Care Finan Rev. 2001;22:101–107. [PMC free article] [PubMed] [Google Scholar]
  • 5.Hargraves JL, Hays RD, Cleary PD. Psychometric properties of the Consumer Assessment of Health Plans (CAHPS®) 2.0 Adult Core Survey. Health Serv Res. 2003;38:1509–1527. doi: 10.1111/j.1475-6773.2003.00190.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Homer CJ, Fowler FJJ, Gallagher PM, et al. The Consumer Assessment of Health Plans Study (CAHPS) survey of children's health care. Jt Comm J Qual Improv. 1999;25:369–378. doi: 10.1016/s1070-3241(16)30452-7. [DOI] [PubMed] [Google Scholar]
  • 7.Daniels AS, Shaul JA, Greenberg P, et al. The Experience of Care and Health Outcomes Survey (ECHO): a consumer survey to collect ratings of behavioral health care treatment, outcomes and plans. In: Maruish ME, editor. The Use of Psychological Testing for Treatment Planning and Outcomes Assessment. Fairfax, VA: Lawrence Erlbaum Assoc; 2004. [Google Scholar]
  • 8.Landon BE, Zaslavsky AM, Bernard SL, et al. Comparison of performance of traditional Medicare vs. Medicare managed care. JAMA. 2004;291:1744–1752. doi: 10.1001/jama.291.14.1744. [DOI] [PubMed] [Google Scholar]
  • 9.Gallagher P, Ding L, Ham HP, et al. Development of a new patient-based measure of pediatric ambulatory care. Pediatrics. 2009;124:1348–1354. doi: 10.1542/peds.2009-0495. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Keller S, O'Malley AJ, Hays RD, et al. Methods used to streamline the CAHPS® hospital survey. Health Serv Res. 2005;40:2057–2077. doi: 10.1111/j.1475-6773.2005.00478.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.O'Malley AJ, Zaslavsky AM, Hays RD, et al. Exploratory factor analyses of the CAHPS® Hospital Pilot Survey responses across and within medical, surgical and obstetric services. Health Serv Res. 2005;40:2078–2095. doi: 10.1111/j.1475-6773.2005.00471.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Giordano LA, Elliott MN, Goldstein E, et al. Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev. 2010;67:27–37. doi: 10.1177/1077558709341065. [DOI] [PubMed] [Google Scholar]
  • 13.Centers for Medicare & Medicaid Services. HCAHPS Website. HCAHPS Hospital Consumer Assessment of Healthcare Providers and Systems. [Accessed May 29, 2012];2012 Available at: http://www.hcahps.org/home.aspx.
  • 14.Ray J. Medicare to begin basing hospital payments on patient-satisfaction scores. [Accessed January 2, 2012];2011 Available at: http://www.kaiserhealthnews.org/Stories/2011/April/28/medicare-hospital-patient-satisfaction.aspx.
  • 15.Zaslavsky AM, Hochheimer JN, Schneider EC, et al. Impact of sociodemographic case mix on the HEDIS measures of health plan quality. Med Care. 2000;38:981–992. doi: 10.1097/00005650-200010000-00002. [DOI] [PubMed] [Google Scholar]
  • 16.Zaslavsky AM, Epstein AM. How patients' sociodemographic characteristics affect comparisons of competing health plans in California on HEDIS (R) quality measures. Int J Qual Health Care. 2005;17:67–74. doi: 10.1093/intqhc/mzi005. [DOI] [PubMed] [Google Scholar]
  • 17.Zaslavsky AM, Zaborski LB, Ding L, et al. Adjusting performance measures to ensure equitable plan comparisons. Health Care Finan Rev. 2001;22:109–126. [PMC free article] [PubMed] [Google Scholar]
  • 18.Hargraves JL, Wilson IB, Zaslavsky A, et al. Adjusting for patient characteristics when analyzing reports from patients about hospital care. Med Care. 2001;39:635–641. doi: 10.1097/00005650-200106000-00011. [DOI] [PubMed] [Google Scholar]
  • 19.O'Malley AJ, Zaslavsky AM, Elliott MN, et al. Case-mix adjustment of the CAHPS® Hospital survey responses. Health Serv Res. 2005;40:2078–2095. doi: 10.1111/j.1475-6773.2005.00470.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Eselius LL, Zaslavsky AM, Huskamp HA, et al. Casemix adjustment of consumer reports about managed behavioral health care and health plans. Health Serv Res. 2008;43:2014–2032. doi: 10.1111/j.1475-6773.2008.00894.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Kim M, Zaslavsky AM, Cleary PD. Adjusting Pediatric CAHPS scores to ensure fair comparison of health plan performances. Med Care. 2005;43:44–52. [PubMed] [Google Scholar]
  • 22.Elliott MN, Zaslavsky AM, Goldstein E, et al. Effects of survey mode, patient mix, and nonresponse on CAHPS Hospital Survey scores. Health Serv Res. 2009;44:501–508. doi: 10.1111/j.1475-6773.2008.00914.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Press I. Quality conundrum. Patient satisfaction cannot be judged on just one measure. Mod Healthc. 2011 Available at: http://www.modernhealthcare.com/article/20111010/MAGAZINE/310109981. Posted October 10, 2011. [PubMed]
  • 24.Zusman EE. HCAHPS replaces Press Ganey Survey as quality measure for patient hospital experience. Neurosurgery. 2012;71:N21–N24. doi: 10.1227/01.neu.0000417536.07871.ed. [DOI] [PubMed] [Google Scholar]
  • 25.Maynard C, Lowy E, Rumsfeld J, et al. The prevalence and outcomes of in-hospital acute myocardial infarction in the Department of Veterans Affairs Health System. Arch Intern Med. 2006;166:1410–1416. doi: 10.1001/archinte.166.13.1410. [DOI] [PubMed] [Google Scholar]
  • 26.Cleary PD, Edgman-Levitan S, Walker JD, et al. Using patient reports to improve medical care: a preliminary report from 10 hospitals. Qual Manag Health Care. 1993;2:31–38. [PubMed] [Google Scholar]
  • 27.Cleary PD, Edgman-Levitan S. Health care quality. Incorporating consumer perspectives. JAMA. 1997;278:1608–1612. [PubMed] [Google Scholar]
  • 28.Edgman-Levitan S, Cleary PD. What information do consumers want and need? Health Aff. 1996;15:42–56. doi: 10.1377/hlthaff.15.4.42. [DOI] [PubMed] [Google Scholar]
  • 29.Cleary PD. Satisfaction may not suffice: a commentary on “A patient's perspective”. Int J Technol Assess Health Care. 1998;14:35–37. doi: 10.1017/s0266462300010503. [DOI] [PubMed] [Google Scholar]
  • 30.Cleary PD, Lubalin J, Hays R, et al. Debating survey approaches. Health Aff. 1998;17:256–266. doi: 10.1377/hlthaff.17.1.265. [DOI] [PubMed] [Google Scholar]
  • 31.Stefos T, LaVallee N, Holden F. Fairness in prospective payment: a clustering approach. Health Serv Res. 1992;27:215–237. [PMC free article] [PubMed] [Google Scholar]
  • 32.Workforce Committee. 2011 Facility Complexity Level Model. Executive decision memo to Under Secretary for Health, Department of Veterans Affairs. 2012 [Google Scholar]
  • 33.Rubin DB. Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons; 1987. [Google Scholar]
  • 34.Schafer JL. Multiple imputation: a primer. Stat Meth Med Res. 1999;8:3–15. doi: 10.1177/096228029900800102. [DOI] [PubMed] [Google Scholar]
  • 35.Wennberg J. The Dartmouth Atlas of Health Care. Hanover, New Hampshire: American Hospital Publishing Inc.; 1998. [PubMed] [Google Scholar]
  • 36.Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries. A profile at state and national levels. JAMA. 2000;284:1670–1676. doi: 10.1001/jama.284.13.1670. [DOI] [PubMed] [Google Scholar]
  • 37.Lehrman WG, Elliott MN, Goldstein E, et al. Characteristics of hospitals demonstrating superior performance in patient experience and clinical process measures of care. Med Care Res Rev. 2010;67:38–55. doi: 10.1177/1077558709341323. [DOI] [PubMed] [Google Scholar]
  • 38.Zaslavsky AM, Zaborski LB, Cleary PD. Plan, geographical, and temporal variation of consumer assessments of ambulatory health care. Health Serv Res. 2004;39:1467–1485. doi: 10.1111/j.1475-6773.2004.00299.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Zaslavsky AM, Landon BE, Beaulieu ND, et al. How consumer assessments of managed care vary within and among markets. Inquiry. 2000;37:146–161. [PubMed] [Google Scholar]
  • 40.Mittler JN, Landon BE, Fisher ES, et al. Market variations in intensity of Medicare service use and beneficiary experiences with care. Health Serv Res. 2010;45:647–669. doi: 10.1111/j.1475-6773.2010.01108.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Jha AK, Orav EJ, Zheng J, et al. Patients' perception of hospital care in the United States. New Engl J Med. 2008;359:1921–1931. doi: 10.1056/NEJMsa0804116. [DOI] [PubMed] [Google Scholar]
  • 42.Jha AK, Li Z, Orav EJ, et al. Care in U.S. hospitals—the Hospital Quality Alliance program. New Engl J Med. 2005;353:265–274. doi: 10.1056/NEJMsa051249. [DOI] [PubMed] [Google Scholar]
  • 43.Keeler EB, Rubenstein LV, Kahn KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268:1709–1714. [PubMed] [Google Scholar]
  • 44.Goldman LE, Dudley RA. United States rural hospital quality in the Hospital Compare database-accounting for hospital characteristics. Health Policy. 2008;87:112–127. doi: 10.1016/j.healthpol.2008.02.002. [DOI] [PubMed] [Google Scholar]
  • 45.Keenan P, Landon BE, Cleary PD, et al. Geographic area variations in the Medicare health plan era. Med Care. 2010;48:260–266. doi: 10.1097/MLR.0b013e3181ca410a. [DOI] [PubMed] [Google Scholar]
  • 46.Ware JE, Jr, Kosinski M, Keller SD. A 12-item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity. Med Care. 1996;34:220–233. doi: 10.1097/00005650-199603000-00003. [DOI] [PubMed] [Google Scholar]
  • 47.Meterko M, Wright S, Lin H, et al. Mortality among patients with acute myocardial infarction: the influences of patient-centered care and evidence-based medicine. Health Serv Res. 2010;45:1188–1204. doi: 10.1111/j.1475-6773.2010.01138.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
