Journal of Patient Experience. 2015 Nov 1;2(2):29–36. doi: 10.1177/2374373515615977

Correlation of Inpatient Experience Survey Items and Domains With Overall Hospital Rating

Kyle Kemp 1,2, Brandi McCormack 1, Nancy Chan 1, Maria J Santana 2, Hude Quan 2
PMCID: PMC5513631  PMID: 28725821

Abstract

Objective:

To determine which individual patient experience questions and domains were most correlated with overall inpatient hospital experience.

Methods:

Within 42 days of discharge, 27 639 patients completed a telephone survey based upon the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) instrument. Patients rated their overall experience on a scale of 0 (worst care) to 10 (best care). Correlation coefficients were calculated to assess the relationships of individual survey questions and domains with overall experience.

Results:

Questions on provider coordination and nursing care were most correlated with overall experience. Hospital cleanliness, quietness, and discharge information questions showed poor correlation. Correlation with overall experience was strongest for the “communication with nurses” domain.

Conclusions:

Our individual question results are novel, while the domain-based findings replicate those of US-based providers, results which had not yet been reported in the Canadian context—one with universal health care coverage. Our results suggest that our large health care organization may attain initial inpatient experience improvements by focusing upon personnel-based initiatives, rather than physical attributes of our hospitals.

Keywords: inpatient experience, HCAHPS, correlation, domains

Introduction

Inpatient experience is a patient-reported outcome that assesses the perceived quality of the health care interactions and services delivered over the course of a given hospital stay. In the United States, the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey was introduced to measure hospital experience in a rigorous, systematic fashion. This validated survey allows for meaningful comparison between multiple hospitals, something that was previously not possible with ad hoc, in-house instruments that varied between institutions (1–4). On a larger scale, HCAHPS results in the United States now play a crucial role in the value-based purchasing program introduced under the Affordable Care Act of 2010 (5). As such, patient experience results now directly affect a portion of hospital funding, creating a clear incentive not only to collect patient experience data but also to act upon it.

The HCAHPS survey contains over 30 questions and touches upon 9 different domains (communication with doctors, communication with nurses, responsiveness of hospital staff, pain management, communication about medicines, discharge information, cleanliness of the hospital environment, quietness of the hospital environment, and transition of care) (1, 6). For the US-based data, the Centers for Medicare & Medicaid Services (CMS; the central repository for HCAHPS data) reports correlations between each of these domains and overall experience (7), which is asked of patients using the following wording:

We want to know your overall rating of your stay at <HOSP>. This is the stay that ended around <DDATE>. Please do not include any other hospital stays in your answer. Using any number from 0 to 10, where 0 is the worst hospital possible and 10 is the best hospital possible, what number would you use to rate this hospital during your stay?

By determining which area(s) may provide the most benefit to the overall care experience, this type of correlational analysis overcomes one of the perceived challenges of implementing organizational change from HCAHPS data. Simply put, the results can inform the end user as to the specific domains which, when improved, would theoretically provide the greatest gains in overall patient experience (8, 9). However, the reported data do not examine individual questions, nor do they address these comparisons within a Canadian setting, one which employs a universal health care model (10). Therefore, the purposes of this project were to assess which (a) individual questions and (b) HCAHPS domains are most correlated with overall inpatient hospital experience in our Canadian data set.

Methods

Survey Instrument

Our organization’s province-wide inpatient hospital experience survey has been administered on a continuous basis since 2009. The overall rating of care question is one of 16 publicly reported performance measures (11). The survey comprises 51 questions, including 32 core HCAHPS items and 19 other items that address organization-specific policies and procedures. Surveys are administered by a trained team of health research interviewers using computer-assisted telephone interview (CATI) software (Voxco, version 1.10; Montreal, Canada). Potential respondents are contacted Monday to Friday from 9 am to 9 pm and on Saturdays from 9 am to 4 pm. To ensure standardization, 10% of all calls are monitored for quality assurance and training purposes.

Each survey typically requires 15 to 20 minutes to complete and follows a standard script with a list of prompts and responses to frequently asked questions. Responses to each survey question are recorded on Likert-type scales. Certain questions ask the respondent to rate aspects of their care on a scale of 0 (worst) to 10 (best), while other items employ categorical responses (eg, always, usually, sometimes, and never). Detailed information about the development, validity, and American results of HCAHPS is publicly available at www.hcahpsonline.org (1, 6).

Sample Derivation and Dialing Protocol

Across our province, acute care admission, discharge, and transfer information is captured in a series of clinical databases which are updated daily. On a biweekly basis, eligible discharges are extracted using a standard script, which filters all inpatient records against our survey exclusion criteria. These include age less than 18 years, inpatient stay of less than 24 hours, death during the hospital stay (no proxy surveys are permitted), any day surgery or ambulatory procedures, and any psychiatric unit or psychiatric physician service on record (12). For compassionate reasons, our organization also excludes any records containing dilation and curettage procedures, as well as visits relating to stillbirths or those associated with a baby with a length of stay greater than 6 days (eg, complication/neonatal intensive care unit stay). The 4 data extracts are combined into 1 complete provincial list, with duplicate entries (if present) being filtered out.
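To make this filtering step concrete, a minimal sketch is shown below. It assumes a pandas data frame with hypothetical column names (age, los_hours, visit_id, and boolean exclusion flags); these are illustrative only and do not reflect the organization’s actual database schema.

```python
import pandas as pd

def filter_eligible(discharges: pd.DataFrame) -> pd.DataFrame:
    """Apply the survey exclusion criteria to a combined discharge extract."""
    eligible = discharges[
        (discharges["age"] >= 18)                        # adults only
        & (discharges["los_hours"] >= 24)                # inpatient stay of 24 hours or more
        & (~discharges["died_in_hospital"])              # no proxy surveys are permitted
        & (~discharges["day_surgery_or_ambulatory"])     # exclude day surgery/ambulatory procedures
        & (~discharges["psychiatric_unit_or_service"])   # exclude psychiatric unit/physician service
        & (~discharges["compassionate_exclusion"])       # D&C, stillbirth, or neonate stay > 6 days
    ]
    # Duplicate entries (if present) are filtered out once extracts are combined
    return eligible.drop_duplicates(subset="visit_id")
```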

Once compiled, the complete list of eligible inpatient discharges is imported into the CATI software program and stratified at the hospital level. At the time of interview, random dialing is performed on the sample, until a quota of 5% of eligible discharges across all of our province’s 94 acute care hospitals is met. Patients are contacted up to 42 days postdischarge. To maximize the potential for survey completion, each dialed number is called up to 9 times on varying days and times or until a definitive result (eg, survey completed, refusal, etc) is obtained.
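A rough sketch of the hospital-stratified 5% draw follows. It approximates the dialing protocol with a simple random fraction per hospital, whereas the real process dials randomly within each stratum until the quota is met; hospital_id is again a hypothetical column name.

```python
import pandas as pd

def draw_quota_sample(eligible: pd.DataFrame, quota: float = 0.05,
                      seed: int | None = None) -> pd.DataFrame:
    """Draw roughly 5% of eligible discharges within each hospital stratum."""
    return eligible.groupby("hospital_id").sample(frac=quota, random_state=seed)
```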

Analysis

To account for differences in individual question response scales (eg, 3-point, 4-point, and 11-point), all inpatient experience responses were converted to a normalized scale of 0 (worst possible score) to 100 (best possible score). For example, for questions with a response scale of never, sometimes, usually, and always, scores were converted to 0, 33.33, 66.66, and 100, respectively. For the overall rating of care, respondents were asked to rate the overall care that they received during their inpatient visit on an 11-point scale of 0 to 10.
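The rescaling described above can be sketched as follows; the mapping mirrors the never/sometimes/usually/always example and the 0-to-10 overall rating, and the exact codes used in the survey software may differ.

```python
# Normalize survey responses to a common 0 (worst) to 100 (best) scale.
FOUR_POINT = {"never": 0.0, "sometimes": 33.33, "usually": 66.66, "always": 100.0}

def normalize_four_point(response: str) -> float:
    """Convert a never/sometimes/usually/always answer to the 0-100 scale."""
    return FOUR_POINT[response.lower()]

def normalize_overall_rating(score: int) -> float:
    """Convert the 0-10 overall rating to the 0-100 scale (1 -> 10, 2 -> 20, ...)."""
    return score * 10.0
```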

These scores were converted to the same 0 (worst possible) to 100 (best possible) scale, where a rating of 1 equals 10, 2 equals 20, and so on. With respect to HCAHPS domains, mean scores were calculated using the following formula:

Domain score = (sum of normalized question scores in the domain) / (number of questions in the domain)

For example, the mean domain score for nurse communication, which comprises 3 questions, was calculated by dividing the sum of the nurse respect, nurse listening, and nurse explanation normalized question scores by 3. This process was completed for each domain score. A listing of the domains with their corresponding questions is provided in Table 1.

Table 1.

Inpatient Survey Domains and Question Composition.

Domain: Component questions
Communication from nurses: Nurse respect; Nurse listening; Explanations from nurses
Communication from doctors: Doctor respect; Doctor listening; Explanations from doctors
Responsiveness of hospital staff: Call button response; Bathroom assistance
Pain management: Help with pain; Pain control
Communication about medicines: New medicine purpose; New medicine side effects
Cleanliness of hospital: Room cleanliness
Quietness of hospital: Room quietness
Discharge information: Help after discharge; Symptoms after discharge
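As a small illustration of the domain-score formula, the sketch below averages the normalized item scores for one domain from Table 1; the item column names are hypothetical stand-ins for the survey’s actual variables.

```python
import pandas as pd

# Hypothetical normalized (0-100) item columns for the communication from nurses domain
NURSE_COMMUNICATION = ["nurse_respect", "nurse_listening", "nurse_explanations"]

def domain_mean_score(df: pd.DataFrame, items: list[str]) -> pd.Series:
    """Sum of normalized question scores in the domain divided by the number of questions."""
    return df[items].sum(axis=1) / len(items)
```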

The characteristics of the sample were generated using descriptive statistics. The relationship between normalized individual questions and the overall rating of care (objective a) was assessed using the Spearman correlation statistic. The relationship between normalized domain scores and the overall rating of care (objective b) was assessed using the Pearson correlation statistic. All analyses were performed using SAS version 9.3 for Windows (SAS Institute, Cary, North Carolina). P values of less than .05 were considered statistically significant for all comparisons.
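The correlation step can be sketched roughly as follows, assuming a survey-level data frame of normalized (0-100) scores with a hypothetical overall_rating column; pairwise deletion of missing responses is an assumption here, not a detail reported in the methods.

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

def item_correlation(df: pd.DataFrame, item: str):
    """Objective (a): Spearman correlation of one normalized item with the overall rating."""
    sub = df[[item, "overall_rating"]].dropna()
    return spearmanr(sub[item], sub["overall_rating"])

def domain_correlation(df: pd.DataFrame, items: list[str]):
    """Objective (b): Pearson correlation of a domain mean score with the overall rating."""
    sub = df[items + ["overall_rating"]].dropna()
    domain_mean = sub[items].mean(axis=1)  # equivalent to the domain-score formula above
    return pearsonr(domain_mean, sub["overall_rating"])
```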

Results

Over the 3-year study period (April 2011 to March 2014), 27 492 inpatient experience surveys were completed. One hundred twenty-three patients did not provide a response to the “overall rating of care” question. These surveys were excluded from analysis, resulting in a final sample of 27 369 completed surveys (99.6% of the original cohort). Characteristics of the final sample are provided in Table 2. Our survey cohort was primarily female (64.7%) and between 25 and 74 years of age (74.8%). The majority of respondents were born in Canada (85.3%) and primarily spoke English at home (90.5%). The mean age of the cohort was 53.8 ± 20.0 years (median = 56.0), and the mean length of stay was 5.4 ± 9.3 days (median = 3.0).

Table 2.

Characteristics of Sample.a

Characteristic n Percentage of Sample
Sex
 Male 9665 35.3
 Female 17 704 64.7
Age, in years
 18-24 1863 6.8
 25-34 5028 18.4
 35-44 2916 10.7
 45-54 3439 12.6
 55-64 4683 17.1
 65-74 4424 16.2
 75-79 2011 7.4
 80 and older 3005 11.0
Birth location (n = 27 351)
 Canada 23 323 85.3
 Outside Canada 4028 14.7
Primary language spoken at home (n = 27 338)
 English 24 728 90.5
 Other 2610 9.5
Marital status (n = 27 203)
 Single (never married) 2811 10.3
 Married/common-law/living with partner 18 886 69.4
 Divorced/separated/widowed 5506 20.2
Maximum education level (n = 26 214)
 Elementary or junior high 3349 12.8
 Senior high (some or complete) 8617 32.9
 College/technical school (some or complete) 8602 32.8
 Undergraduate level (some or complete) 4447 17.0
 Postgraduate degree complete 1199 4.6
Length of hospital stay, days (n = 27 368)
 1.0-2.0 8149 29.8
 2.01-4.0 9433 34.5
 4.01-8.0 4866 17.8
 Greater than 8.0 4920 18.0

a. n = 27 369 unless otherwise stated.

Correlation results between individual questions and the patients’ overall rating of care are presented in Table 3. The ranked results, a brief description of each question, the possible answers, and the wording of each item (as read verbatim to the patient) are also presented. The question pertaining to provider coordination was most correlated with overall rating of care (n = 27 258, r = .54, P < .001). Other top-5 ranking questions pertained to nurse follow-up (n = 26 533, r = .46, P < .001), nurse listening (n = 27 253, r = .45, P < .001), help with pain (n = 20 775, r = .42, P < .001), and nurse respect (n = 27 243, r = .41, P < .001). All nursing questions ranked within the top 7, with the lowest-ranking nursing question being nurse explanations (n = 27 131, r = .38, P < .001). Items pertaining to the physical attributes/environment of the hospital showed poor correlation with overall rating of care; these included room cleanliness (ranked 15th of 24; n = 26 944, r = .35, P < .001) and room quietness (ranked 18th of 24; n = 27 112, r = .31, P < .001). The 2 lowest-ranked questions were related to discharge information: help after discharge (n = 24 103, r = .23, P < .001) and symptoms after discharge (n = 24 826, r = .16, P < .001).

Table 3.

Individual Item Correlations With Overall Hospital Experience.

Rank Item Description Possible Answers Wording of Question/Item Spearman’s r
1 Provider coordination 1: Excellent How would you describe how well all of the health care professionals coordinated their efforts to serve your needs? 0.54 (n = 27 258)
2: Very good
3: Good
4: Fair
5: Poor
2 Nurse follow-up 1: Never How often did nurses follow-up on your concerns and observations? 0.46 (n = 26 533)
2: Sometimes
3: Usually
4: Always
3 Nurse listening 1: Never How often did nurses listen carefully to you? 0.45 (n = 27 253)
2: Sometimes
3: Usually
4: Always
4 Help with pain 1: Never How often did the hospital staff do everything they could to help you with your pain? 0.42 (n = 20 775)
2: Sometimes
3: Usually
4: Always
5 Nurse respect 1: Never How often did nurses treat you with courtesy and respect? 0.41 (n = 27 243)
2: Sometimes
3: Usually
4: Always
6 Physician follow-up 1: Never How often did doctors follow-up on your concerns and observations? 0.39 (n = 25 756)
2: Sometimes
3: Usually
4: Always
7 Nurse explanations 1: Never How often did nurses explain things in a way that you could understand? 0.38 (n = 27 131)
2: Sometimes
3: Usually
4: Always
8 Call button response 1: Never After you pressed the call button, how often did you get help as soon as you wanted it? 0.38 (n = 20 424)
2: Sometimes
3: Usually
4: Always
9 New medicine side effects 1: Never Before giving you any new medicine, how often did hospital staff describe possible side effects in a way you could understand? 0.38 (n = 14 261)
2: Sometimes
3: Usually
4: Always
10 Patient involvement in decisions 1: No, I wanted to be more involved Did you have enough involvement in decisions about your treatment? 0.38 (n = 25 588)
2: Yes, somewhat
3: Yes, definitely
11 Patient preferences 1: Strongly disagree The hospital staff took your preferences and those of your family or caregiver into account in deciding what your health care needs would be when you left the hospital. 0.37 (n = 25 431)
2: Disagree
3: Agree
4: Strongly agree
12 Bathroom assistance 1: Never How often did you get help getting to the bathroom or in using a bedpan as soon as you wanted? 0.36 (n = 11 545)
2: Sometimes
3: Usually
4: Always
13 Physician listening 1: Never How often did doctors listen carefully to you? 0.35 (n = 26 945)
2: Sometimes
3: Usually
4: Always
14 Pain control 1: Never How often was your pain well controlled? 0.35 (n = 20 729)
2: Sometimes
3: Usually
4: Always
15 Room cleanliness 1: Never How often were your room and bathroom kept clean? 0.35 (n = 26 944)
2: Sometimes
3: Usually
4: Always
16 Physician explanations 1: Never How often did doctors explain things in a way that you could understand? 0.33 (n = 27 008)
2: Sometimes
3: Usually
4: Always
17 Physician respect 1: Never How often did doctors treat you with courtesy and respect? 0.32 (n = 27 073)
2: Sometimes
3: Usually
4: Always
18 Room quietness 1: Never How often was the area around your room quiet at night? 0.31 (n = 27 112)
2: Sometimes
3: Usually
4: Always
19 Patient discharge information 1: Strongly disagree When you left the hospital, you had a clear understanding of the things that you were responsible for in managing your health. 0.31 (n = 27 003)
2: Disagree
3: Agree
4: Strongly agree
20 New medicine purpose 1: Never Before giving you any new medicine, how often did hospital staff tell you what the medicine was for? 0.30 (n = 14 620)
2: Sometimes
3: Usually
4: Always
21 Patient discharge medications 1: Strongly disagree When you left the hospital, you clearly understood the purpose for taking each of your medications. 0.27 (n = 25 438)
2: Disagree
3: Agree
4: Strongly agree
22 Family involvement 1: Not as much as I wanted During your hospital stay, how much did hospital staff include your family or someone close to you in decisions about your care? 0.26 (n = 19 719)
2: As much as I wanted
3: More than I wanted
23 Help after discharge 1: Yes During your hospital stay, did doctors, nurses or other hospital staff talk with you about whether you would have the help you needed when you left the hospital? 0.23 (n = 24 103)
2: No
24 Symptoms after discharge 1: Yes During this hospital stay, did you get information, in writing, about what symptoms or health problems to look out for, after you left the hospital? 0.16 (n = 24 826)
2: No

a. All correlations were significant at the P < .01 level.

Patient-level domain correlation results are shown in Table 4. Communication with nurses was the domain most correlated with overall rating of care (r = .60, P < .001). Four domains showed similar correlation with overall rating of care. These included responsiveness of hospital staff (r = .49, P < .001), pain management (r = .48, P < .001), communication with doctors (r = .43, P < .001), and communication about medicines (r = .42, P < .001). Cleanliness of the hospital (r = .35, P < .001), quietness of the hospital (r = .30, P < .001), and discharge information (r = .29, P < .001) were the 3 domains that showed least correlation with overall rating of care.

Table 4.

Patient-Level Domain Correlations.

Column order: (1) Communication with nurses; (2) Communication with doctors; (3) Responsiveness of hospital staff; (4) Pain management; (5) Communication about medicines; (6) Cleanliness of hospital; (7) Quietness of hospital; (8) Discharge information; (9) Overall hospital rating; (10) Recommend the hospital. Dashes mark the symmetric lower triangle.

(1) Communication with nurses: 1, 0.43, 0.56, 0.52, 0.45, 0.30, 0.23, 0.28, 0.60, 0.54
(2) Communication with doctors: –, 1, 0.32, 0.34, 0.38, 0.18, 0.16, 0.29, 0.43, 0.41
(3) Responsiveness of hospital staff: –, –, 1, 0.47, 0.37, 0.28, 0.26, 0.22, 0.49, 0.43
(4) Pain management: –, –, –, 1, 0.35, 0.23, 0.22, 0.24, 0.48, 0.43
(5) Communication about medicines: –, –, –, –, 1, 0.23, 0.20, 0.37, 0.42, 0.37
(6) Cleanliness of hospital: –, –, –, –, –, 1, 0.25, 0.11, 0.35, 0.28
(7) Quietness of hospital: –, –, –, –, –, –, 1, 0.08, 0.30, 0.22
(8) Discharge information: –, –, –, –, –, –, –, 1, 0.29, 0.29
(9) Overall hospital rating: –, –, –, –, –, –, –, –, 1, 0.68
(10) Recommend the hospital: –, –, –, –, –, –, –, –, –, 1

Abbreviation: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems.

a. Patient-level Pearson correlations of rescaled linear means of HCAHPS measures for patients discharged between April 2011 and March 2014 (27 369 surveys). All correlations are significant at the P < .001 level.

Discussion

The present study provides novel information on the relationship of individual HCAHPS questions to the overall rating of care in the inpatient setting, something that to our knowledge has not been reported previously. Second, correlation results comparing HCAHPS domains with the overall rating of care in the Canadian context are shown. Our main finding was that in the inpatient setting, staff-based questions (eg, staff coordination, nurse follow-up, and nurse listening) and domains (eg, communication with nurses and responsiveness of hospital staff) were more strongly correlated with overall rating of care than items/domains pertaining to physical features (eg, hospital cleanliness and hospital quietness) and care processes (eg, discharge information). The domain-based findings are similar to those observed in the United States, as published by the CMS (7). Similar results have also been reported in settings as remote as rural China (13), which speaks to the robustness of these findings.

Our study results are timely. With the introduction of the Affordable Care Act, hospital reimbursement now focuses, in part, upon the quality of services delivered, as opposed to volume. HCAHPS performance is now directly tied to a portion of hospital funding, providing a clear incentive to improve the care that is delivered to patients. Poor performance on the HCAHPS and on other CMS programs, such as the Readmissions Reduction Program (14, 15) and the Hospital-Acquired Condition Reduction Program (16, 17), now results in financial penalties to underperforming hospitals. Although hospitals that use the HCAHPS instrument may routinely obtain reports of their results from their survey vendor (eg, Press Ganey; Deidre Mylod, personal communication, April 16, 2015), the information and methodology contained within these reports have yet to be made available in the public domain. As such, we suggest that publicly reported results not only explore the correlations between domains and overall ratings of care but also include the results of individual questions. Additionally, the methodology in the current manuscript presents an analytic plan that may allow organizations that conduct their own survey to reliably assess the key survey items that drive the overall experience scores of their inpatients.

Although somewhat intuitive given the “first face” role that nurses play with their patients, our findings document the importance of nursing questions and domains in contributing to the overall rating of care. In addition to coordination among providers, nurse listening, follow-up, respect, and explanations all figured prominently in our correlational analysis. Simply put, if patients have a good experience with their nurse(s), they tend to report a pleasant overall hospital experience in our inpatient setting. Recognizing this powerful relationship, strategies to engage nurses are a means of improving the patient experience.

A primary study limitation is that our survey was conducted by telephone. As such, our results may not apply when other modes, such as mail-outs or interactive voice response, are used. Prior to organization-wide implementation, we conducted a pilot study which found differences in response rates and response patterns between mail and phone survey modes. This finding has been replicated in other health surveys (including HCAHPS), where telephone respondents typically rate their care experience more positively when compared to paper-based questionnaires (18–22). For this very reason, the CMS employs a mode adjustment algorithm when comparing survey results obtained through different modes (23, 24).

A secondary limitation pertains to the study location. As the study was conducted in Canada, a country with universal health care coverage, a similar investigation in the United States may be warranted due to inherent differences in funding structure. Another potential limitation is that prospective participants with a strongly negative opinion of their inpatient care may have refused to take the survey. Given the low percentage of outright refusals over the study period (approximately 5% of all dialed numbers), we feel this is of minimal concern.

A final limitation lies within the interpretation of our results. Despite showing a poor correlation with overall experience, some items/domains may still provide excellent opportunities for quality improvement. In our own analysis, hospital cleanliness was among the items least correlated with overall experience scores. However, we do not advocate simply discounting hospital cleanliness, as it would be unwise not to consider it a priority. Patients view hospital cleanliness as a marker of quality, one that has been associated with hospital-acquired infections (25). Qualitative reports of what patients deem important may provide additional value.

In summary, our findings replicate those of the US-based HCAHPS-reporting hospitals, which showed that staff-based domains were most correlated with overall hospital experience. Our investigation has delved one level deeper by examining the relationships between individual questions and the overall rating of care. As with the domains, staff-based items, particularly those relating to staff coordination and nursing care, were most correlated with overall rating of care. Interestingly, hospital cleanliness, quietness, and questions pertaining to discharge planning did not have a high degree of correlation with overall rating of care. Our results provide excellent opportunities for targeted quality improvement initiatives in our jurisdiction as well as the broader Canadian context. Based on our findings, we advocate that our health care organization should aim to improve overall inpatient care by commencing with initiatives to improve staff-related items (eg, staff coordination and interactions with patients), as these were most correlated with the overall rating. Perhaps most importantly, other organizations may use our methodology to determine additional areas in which to focus their quality improvement efforts.

Acknowledgments

The authors wish to recognize the contributions of the trained team of interviewers from Primary Data Support, Analytics (Data Integration, Measurement and Reporting (DIMR), Alberta Health Services), as well as the patients who graciously participated in our survey.

Author Biographies

Kyle Kemp is a Consultant with Alberta Health Services and is currently completing his PhD studies at the University of Calgary. His research focuses on the relationship between patient experience, health outcomes, and health utilization patterns in the Canadian system.

Brandi McCormack is the Director of Primary Data Support at Alberta Health Services. Her team provides methodological expertise to quality improvement teams and captures patient-reported measures across the health system.

Nancy Chan is the Lead of Alberta Health Services' inpatient experience survey project. She is responsible for the day-to-day administration of the survey.

Maria J Santana is an Assistant Research Professor at the University of Calgary, W21C Research and Innovation Centre, O’Brien Institute for Public Health, and the Department of Community Health Sciences. She received her PhD in Clinical Epidemiology from the University of Alberta and is a patient-reported outcome measures methodologist.

Hude Quan is a Professor at the Department of Community Health Sciences at the University of Calgary and Director of the World Health Organization Collaborating Centre in Classification, Terminology and Standards at the O’Brien Institute for Public Health. Dr. Quan is the Lead for Alberta’s Strategies for Patient Oriented Research SUPPORT Unit Methods Support & Development Platform. He has published over 250 papers in peer reviewed journals, and in 2014 and 2015, Thomson Reuters listed him as one of the world’s highly cited researchers.

Footnotes

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

