Abstract
Objective
The purpose of this study was to assess the relationship between emergency medicine (EM) resident and attending physician patient satisfaction scores.
Methods
We added four resident questions to the standard Press Ganey survey used at a large, urban, university hospital with a PGY-1 to -4 EM residency. The resident questions were identical to the traditional attending questions. Press Ganey distributed the modified survey to a random sample of 30% of discharged patients. We assessed the correlation between resident and attending top-box Press Ganey scores using Pearson's correlation coefficients. Two-tailed, two-sample comparisons of proportions were used to compare top-box responses between residents and attendings.
Results
From September 1, 2012, to August 31, 2015, a total of 66,216 patients received surveys and 7,968 responded, a 12.03% response rate similar to Press Ganey response rates at comparable peer institutions. Patients were able to discriminate between residents and attendings; however, 751 surveys did not contain responses for residents, leaving a total of 6,957 surveys. All 64 EM residents had at least five surveys returned. There was a high degree of correlation between resident and attending top-box scores, with correlation coefficients ranging from 0.75 to 0.80. However, the proportion of top-box scores was consistently higher for residents (p < 0.05).
Conclusions
There is a high degree of correlation between resident and attending top‐box scores on Press Ganey surveys, with residents scoring slightly higher than attendings. The addition of resident questions to the standard Press Ganey survey does not appear to decrease overall attending scores.
Patient satisfaction in the ED has become an important initiative for providers, patients, hospitals, and regulatory agencies. Increasing patient satisfaction has been linked to improved health outcomes and is increasingly being used as a quality indicator.1 The vast majority of hospitals and emergency departments (EDs) in the United States use patient satisfaction as a quality metric, and in some institutions it is used to partially determine payment and bonus incentives. The Centers for Medicare and Medicaid Services (CMS) is currently developing an ED‐specific survey tool (ED‐CAHPS) that will likely be mandatory and publicly reported.2 CMS plans to implement this metric to form part of the value‐based purchasing program, and thus reimbursements will likely be tied to performance on these surveys.
Individual hospitals often use formal survey instruments developed by vendors to measure the patient experience. The Press Ganey survey is the most widely used instrument in the United States and was used by more than 1,000 acute care hospitals in 2012, including the majority of University Healthsystem Consortium hospitals. The Press Ganey survey is a validated instrument that includes a section on physician care consisting of an overall score that is a composite of four separate questions: "courtesy of the doctor," "doctor's concern for your comfort while treating you," "doctor's concern to keep you informed about your treatment," and "degree to which the doctor took time to listen to you." Patients are asked to respond to each question using a five-point Likert scale (1 = very poor, 2 = poor, 3 = fair, 4 = good, 5 = very good). While commonly used by EDs both with and without resident physicians, these survey instruments have rarely been customized to capture feedback specific to resident physicians. As of 2014, fewer than five hospitals measured resident patient satisfaction on the Press Ganey survey, and none did so with questions focused directly on resident care.3
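For illustration, the sketch below shows this scoring scheme in Python (our choice of language). The field names follow the question wording above, but treating the composite as the simple mean of the four items is an assumption; Press Ganey's actual export format and weighting are not described in this paper.

```python
# Minimal sketch of the physician-care section described above: four items
# rated on a 1-5 Likert scale (1 = very poor ... 5 = very good), reported
# per item and as an overall composite. Treating the composite as the simple
# mean of the four items is an assumption, not Press Ganey's documented method.

PHYSICIAN_ITEMS = (
    "courtesy of the doctor",
    "doctor's concern for your comfort while treating you",
    "doctor's concern to keep you informed about your treatment",
    "degree to which the doctor took time to listen to you",
)

LIKERT_LABELS = {1: "very poor", 2: "poor", 3: "fair", 4: "good", 5: "very good"}


def composite(ratings: dict) -> float:
    """Average the four 1-5 item ratings into one physician-care score."""
    return sum(ratings[item] for item in PHYSICIAN_ITEMS) / len(PHYSICIAN_ITEMS)


example = dict.fromkeys(PHYSICIAN_ITEMS, 5)   # all items rated "very good"
example["courtesy of the doctor"] = 4         # except one rated "good"
print(composite(example))                     # 4.75
```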
Importance
Multiple studies have evaluated the impact of various factors on Press Ganey scores, including length of stay, patient demographics, ongoing construction, and even postdischarge call-backs.4, 5, 6 Medical student involvement in patient care in the ED has previously been shown not to detract from overall satisfaction scores.7 However, the impact of emergency medicine (EM) residents on patient satisfaction is unknown. The literature suggests that ED teamwork correlates highly with patient satisfaction.8, 9 Despite this, questions about residents are traditionally not included in these surveys, even though residents are a key component of the ED team and provide direct patient care. Few institutions currently include questions related to trainees, and none include a resident-specific section in the Press Ganey survey instrument. The effect of adding resident questions to the Press Ganey survey on an ED's overall patient satisfaction scores is unknown, and this uncertainty may lead to hesitancy to include these questions on a formal survey.
Goals of This Investigation
The purpose of this study was to measure patient satisfaction scores for EM residents, to assess the correlation between resident and attending Press Ganey survey scores, and to determine whether resident scores differ from attending scores.
Methods
Study Design and Setting
We conducted a prospective observational cohort study of patient satisfaction scores at a large, urban, university hospital with a PGY-1 to -4 EM residency. The study was deemed exempt by the institutional review board of the study institution.
Study Protocol
We added four resident questions to the standard Press Ganey survey used at the study institution. The resident questions exactly replicated the attending ("doctor") questions and were as follows: on a 1 to 5 scale (5 = very good), "rate the courtesy of the resident physician," "rate the degree to which the resident physician took the time to listen to you," "rate the resident physician's concern to keep you informed about your treatment," and "rate the resident physician's concern for your comfort while treating you." The existing attending physician questions were revised to say "supervising physician" instead of "physician" to better differentiate residents from attendings (Data Supplement S1, available as supporting information in the online version of this paper at http://onlinelibrary.wiley.com/doi/10.1002/aet2.10039/full). The study institution had used the Press Ganey survey since 1994, and the survey was last revised in 2001. All attendings were aware of the specific questions asked about them on the survey, and residents were notified before implementation that resident-specific questions had been added. All residents rotating through the department, including rotating residents from other departments, were included in each study year.
Selection of Participants
The survey was sent via U.S. mail or e‐mail by Press Ganey to a random sample of 30% of discharged patients. The randomization process was performed by Press Ganey. We included all returned surveys from September 1, 2012, to August 31, 2015, in the study.
Methods and Measurements
Press Ganey collected the data and the patient services department of the study institution compiled them for our analysis. Attending and resident physician identifiers were removed and the database was deidentified prior to our analysis.
Outcomes
We had two primary outcomes. The first was whether a correlation exists between resident and attending top-box responses on the Press Ganey survey; top box was defined as a score of "5" on the 1 to 5 Likert scale for each individual question. The second was whether the proportion of top-box scores differs between attending and resident surveys.
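As an illustration of this outcome definition, the following sketch computes top-box indicators and proportions from 1-5 Likert ratings. The list-of-ratings representation and the handling of missing responses are illustrative assumptions, not the study's actual data pipeline.

```python
# Minimal sketch of the top-box outcome: a response counts as "top box" only
# if the 1-5 Likert rating equals 5. Missing responses (None) are skipped.

def top_box(rating):
    """Return True when a 1-5 Likert rating is the highest score (5)."""
    return rating == 5


def top_box_proportion(ratings):
    """Proportion of answered responses scoring 5/5; missing values are skipped."""
    answered = [r for r in ratings if r is not None]
    return sum(top_box(r) for r in answered) / len(answered)


# Toy data, not study data.
attending_courtesy = [5, 4, 5, 3, 5, None, 5]
resident_courtesy = [5, 5, 5, 4, 5, 5, None]
print(top_box_proportion(attending_courtesy))  # 0.666...
print(top_box_proportion(resident_courtesy))   # 0.833...
```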
Analysis
We explored key variables in our data set using means and standard deviations or medians and interquartile ranges, as appropriate. We determined the correlation between resident and attending top-box Press Ganey scores using Pearson's product-moment correlation coefficients. Two-tailed, two-sample tests of proportions were used to compare the proportions of top-box responses for residents and attendings. For all analyses, alpha was set at 0.05.
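The authors do not report the statistical software used. The sketch below shows how such an analysis could be carried out in Python with SciPy and statsmodels on placeholder data, assuming a per-survey table of paired attending and resident top-box indicators; the tools, data layout, and random placeholder values are all our assumptions.

```python
# Sketch of the analysis described above on placeholder data, assuming a
# per-survey table of paired attending and resident top-box indicators
# (1 = item rated 5/5, 0 = otherwise). NumPy/SciPy/statsmodels are our
# choice of tools; the paper does not state which software was used.
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)
n = 6_840                                    # nonmissing pairs for one item (Table 1)
attending_top = rng.integers(0, 2, size=n)   # placeholder indicators, not study data
resident_top = rng.integers(0, 2, size=n)

# Pearson's product-moment correlation between the paired top-box indicators
r, p_corr = pearsonr(attending_top, resident_top)

# Two-tailed, two-sample comparison of the proportions of top-box responses
counts = np.array([resident_top.sum(), attending_top.sum()])
nobs = np.array([n, n])
z, p_prop = proportions_ztest(counts, nobs, alternative="two-sided")

print(f"r = {r:.3f} (p = {p_corr:.4f}); z = {z:.2f} (p = {p_prop:.4f})")
```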
Results
From September 1, 2012, to August 31, 2015, a total of 66,216 patients received surveys and 7,968 responded, a 12.03% response rate similar to Press Ganey response rates at comparable peer institutions. Of the 7,698 surveys obtained during the study period, 751 (9.8%) did not provide data regarding residents and were excluded, yielding 6,947 (90.2%) surveys for analysis. Sixty-five percent of respondents were female. The majority (70%) of respondents were white, 18% were black, 9% were Latino, and 3% were Asian. There was a high degree of correlation between resident-specific and attending-specific survey responses (Table 1). Residents received consistently higher percentages of top-box responses (Figure 1), and the proportion of top-box scores was significantly higher for residents (Table 2). Overall attending scores during the study period were not significantly different from those of the prior 2 years.
Table 1. Correlation Between Resident and Attending Responses to Each Survey Question

Question | Correlation | n |
---|---|---|
Courtesy of the doctor/resident | 0.7464a | 6,840 |
Degree to which the doctor/resident took the time to listen to you | 0.7520a | 6,831 |
Doctor's/resident's concern to keep you informed about your treatment | 0.7998a | 6,814 |
Doctor's/resident's concern for your comfort while treating you | 0.8023a | 6,767 |

Correlation denotes Pearson's product-moment correlation coefficient.
n = number of nonmissing survey responses.
aDenotes significant correlation between doctor and resident scores (p < 0.05).
Table 2. Proportion of Top-Box Responses for Attendings and Residents

Question | Proportion Top Box | 95% CI | p-value |
---|---|---|---|
Courtesy of the doctor | 0.6778 | 0.6667–0.6889 | 0.0064 |
Courtesy of the resident | 0.6993 | 0.6885–0.7101 | |
Doctor's concern for your comfort while treating you | 0.6150 | 0.6035–0.6266 | <0.0001 |
Resident's concern for your comfort while treating you | 0.6508 | 0.6396–0.6620 | |
Doctor's concern to keep you informed about your treatment | 0.6114 | 0.5999–0.6230 | <0.0001 |
Resident's concern to keep you informed about your treatment | 0.6464 | 0.6351–0.6577 | |
Degree to which the doctor took the time to listen to you | 0.6566 | 0.6453–0.6678 | 0.0009 |
Degree to which the resident took the time to listen to you | 0.6832 | 0.6723–0.6942 | |

Top box denotes the highest score (5/5) on a standard Likert scale.
Each p-value compares the attending and resident proportions for the paired question.
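As a rough check, the intervals and p-values in Table 2 can be approximately reproduced from the reported summary statistics alone under the assumptions of a normal-approximation (Wald) 95% CI, a pooled two-proportion z-test, and the nonmissing counts from Table 1 reused as denominators (the exact denominators for Table 2 are not reported). The sketch below does this for the "courtesy" question pair; small rounding differences are expected.

```python
# Rough check of the "courtesy" rows of Table 2 from the reported summary
# statistics alone. Assumptions: Wald (normal-approximation) 95% CIs, a
# pooled two-proportion z-test, and n = 6,840 (Table 1) as the denominator
# for both groups; exact denominators are not reported, so small rounding
# differences are expected.
from math import sqrt

from scipy.stats import norm

n = 6_840
p_attending, p_resident = 0.6778, 0.6993  # courtesy of the doctor / resident


def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for a proportion."""
    half = z * sqrt(p * (1 - p) / n)
    return round(p - half, 4), round(p + half, 4)


pooled = (p_attending + p_resident) / 2
se = sqrt(2 * pooled * (1 - pooled) / n)
z_stat = (p_resident - p_attending) / se
p_value = 2 * norm.sf(abs(z_stat))

print(wald_ci(p_attending, n))  # (0.6667, 0.6889) -- matches Table 2
print(wald_ci(p_resident, n))   # (0.6884, 0.7102) -- Table 2 reports 0.6885-0.7101
print(round(p_value, 4))        # ~0.0066 -- Table 2 reports 0.0064
```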
Discussion
Our study is the first to demonstrate a high degree of correlation between resident and attending top‐box scores on Press Ganey surveys, with residents scoring slightly higher than attendings. Importantly, the resident scores did not decrease the department average when resident and attending scores were evaluated together.
The residents in our study received higher patient satisfaction scores than the attendings. In an academic environment, this might be due to the increased time residents spend with patients compared with attendings. Additionally, residents are often responsible for delivering test results, diagnoses, discharge plans, and instructions. These communication events are likely lengthier and more memorable for patients than the attending examination, which may be the only time the attending interacts with the patient. The exact reasons for this difference require further research.
Since we have shown that resident-specific questions do not lower attending scores, we believe it is reasonable and beneficial to add them. As patient satisfaction becomes increasingly important in the current healthcare environment, residents will benefit from direct feedback from their patients, and residency programs have an obligation to prepare their trainees for this critical aspect of their future careers. During the study period, each trainee received specific feedback about their patient satisfaction scores at mid- and end-of-year evaluations, along with individually tailored advice from faculty on strategies for improvement. Direct feedback from patients familiarizes new residents with the concept of patient satisfaction, gives them an understanding of how they are perceived by patients, and allows them to take action while still in training. Because these data are provided as part of a structured review session with program leadership, residents are able to act on this feedback in a timely manner.
Our study did not directly address the benefit to trainees by adding resident‐specific scores. Future study could evaluate whether adding these scores, coupled with periodic feedback, results in higher future scores for individual residents.
Limitations
According to Press Ganey, a 50% response rate or a sample size of at least 30 surveys is needed to consider the survey data reliable. Many individual providers, at both the resident and the attending level, did not have 30 surveys returned. By convention, residents and attendings with five or more returned surveys received data on how they compared with other providers, but it is unclear whether these individual-level scores are statistically reliable. Additionally, many patients are seen by more than one attending because of change-of-shift handoffs and by more than one resident on junior/senior resident teams.
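To illustrate this reliability concern, the sketch below computes the half-width of a normal-approximation 95% CI around an individual provider's top-box proportion at several sample sizes. The 0.65 top-box rate is a placeholder roughly in line with the departmental values in Table 2, not a per-provider figure from the study.

```python
# Back-of-the-envelope illustration of the per-provider reliability concern:
# with few returned surveys, the 95% CI around an individual provider's
# top-box proportion is very wide. The 0.65 rate is a placeholder only.
from math import sqrt


def ci_half_width(p, n, z=1.96):
    """Half-width of a normal-approximation 95% CI for a proportion."""
    return z * sqrt(p * (1 - p) / n)


for n in (5, 30, 300):
    print(n, round(ci_half_width(0.65, n), 3))
# 5   -> +/- 0.418
# 30  -> +/- 0.171
# 300 -> +/- 0.054
```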
In this study, the patient satisfaction scores collected by Press Ganey were attributed to the attending physician at the time of discharge and the resident who wrote the patient's note. At least in some cases, the feedback may have been intended for the original treating physicians rather than the discharge team. However, it is unlikely this would affect our primary outcomes as we were comparing cumulative resident and attending scores.
Residents were aware that questions evaluating their individual patient satisfaction scores had been added to the survey. This may introduce an observer (Hawthorne) effect.
At the study institution, residents are instructed to introduce themselves as the resident physician, and attendings introduce themselves as the supervising doctor or attending physician. This scripted introduction likely helps patients differentiate between the two groups. However, previous studies have shown that patients and their families have difficulty understanding the various training levels of the physicians taking care of them.10, 11 The degree to which this lack of differentiation affects our results remains unclear.
Further, Press Ganey data are not necessarily directly related to the quality of patient care, and many institutions and individuals have spoken out against the multitude of factors outside the physician's control that affect the data.12, 13 However, to date, Press Ganey remains the industry standard for measuring patient satisfaction and was therefore the focus of this study.
We carried out this study at a single large, urban, PGY-1 to -4 EM program. These results are likely generalizable to similar residencies; however, it is unclear whether they generalize to differently structured EM residency programs. Further study is warranted.
Finally, at the study institution it is quite rare for attendings to evaluate patients without residents. Further study could evaluate differences in patient satisfaction scores between patients seen primarily by attendings and those seen by residents.
Conclusions
In summary, patient satisfaction is an increasingly important, albeit controversial, quality metric in current emergency medicine practice. It is feasible to add resident-specific questions to a Press Ganey survey without decreasing overall department scores, and doing so allows residents to receive targeted feedback directly from patients and to learn how to navigate the current system of patient satisfaction.
Supporting information
Data Supplement S1. The modified Press Ganey survey instrument described in the Methods.
AEM Education and Training 2017;1:179–184.
The authors have no relevant financial information or potential conflicts to disclose.
References
- 1. ACEP. Patient Satisfaction. Available at: https://www.acep.org/patientsatisfaction/. Accessed Feb 16, 2016.
- 2. Centers for Medicare & Medicaid Services. Emergency Department Patient Experiences with Care (EDPEC) Survey. Available at: http://www.cms.gov/Research-Statistics-Data-and-Systems/Research/CAHPS/ed.html. Accessed Feb 16, 2016.
- 3. Watase T, Yarris LM, Fu R, Handel DA. Educating emergency medicine residents in emergency department administration and operations: needs and current practice. J Grad Med Educ 2014;6:770–3.
- 4. Sayah A, Lai-Becker M, Kingsley-Rocker L, Scott-Long T, O'Connor K, Lobon LF. Emergency department expansion versus patient flow improvement: impact on patient experience of care. J Emerg Med 2016;50:339–48.
- 5. Handel DA, French LK, Nichol J, Momberger J, Fu R. Associations between patient and emergency department operational characteristics and patient satisfaction scores in an adult population. Ann Emerg Med 2014;64:604–8.
- 6. Guss DA, Gray S, Castillo EM. The impact of patient telephone call after discharge on likelihood to recommend in an academic emergency department. J Emerg Med 2014;46:560–6.
- 7. Bernard A, Martin D, Moseley M, et al. The impact of medical student participation in emergency medicine patient care on departmental Press Ganey scores. West J Emerg Med 2015;16:830–8.
- 8. Johnson MB, Castillo EM, Harley J, Guss DA. Impact of patient and family communication in a pediatric emergency department on likelihood to recommend. Pediatr Emerg Care 2012;28:243–6.
- 9. Kipnis A, Rhodes KV, Burchill CN, Datner E. The relationship between patients' perceptions of team effectiveness and their care experience in the emergency department. J Emerg Med 2013;45:731–8.
- 10. Santen SA, Hemphill RR, Prough EE, Perlowski AA. Do patients understand their physician's level of training? A survey of emergency department patients. Acad Med 2004;79:139–43.
- 11. Hemphill RR, Santen SA, Rountree CB, Szmit AR. Patients' understanding of the roles of interns, residents, and attending physicians in the emergency department. Acad Emerg Med 1999;6:339–44.
- 12. 2+2=7? Seven Things You May Not Know About Press Ganey Statistics. Emergency Physicians Monthly. Available at: http://epmonthly.com/article/227-seven-things-you-may-not-know-about-press-gainey-statistics/. Accessed Feb 16, 2016.
- 13. Farley H, Enguidanos ER, Coletti CM, et al. Patient satisfaction surveys and quality of care: an information paper. Ann Emerg Med 2014;64:351–7.