Western Journal of Emergency Medicine. 2016 Nov 8;18(1):56–59. doi: 10.5811/westjem.2016.9.30667

Teaching the Emergency Department Patient Experience: Needs Assessment from the CORD-EM Task Force

Kory S London, Jeffrey Druck, Matthew Silver, Douglas Finefrock
PMCID: PMC5226764  PMID: 28116009

Abstract

Introduction

Since the creation of Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) patient satisfaction (PS) scores, patient experience (PE) has become a metric that can profoundly affect the fiscal balance of hospital systems, reputation of entire departments and welfare of individual physicians. While government and hospital mandates demonstrate the prominence of PE as a quality measure, no such mandate exists for its education. The objective of this study was to determine the education and evaluation landscape for PE in categorical emergency medicine (EM) residencies.

Methods

This was a prospective survey analysis of the Council of Emergency Medicine Residency Directors (CORD) membership. Program directors (PDs), assistant PDs and core faculty who are part of the CORD listserv were sent an email link to a brief, anonymous electronic survey. Respondents were asked their position in the residency, the name of their department, and questions regarding the presence and types of PS evaluative data and PE education they provide.

Results

We obtained 168 responses from 139 individual residencies, representing 72% of all categorical EM residencies. This survey found that only 27% of responding residencies provide PS data to their residents. Of those programs, 61% offer simulation scores, 39% provide third-party attending data on cases with resident participation, 37% provide third-party acquired data specifically about residents and 37% provide internally acquired quantitative data.

Only 35% of residencies reported having any organized PE curricula. Of the programs that provide an organized PE curriculum, most offer multiple modalities: 96% provide didactic lectures, 49% small group sessions, 47% simulation sessions and 27% specifically use standardized patient encounters in their simulation sessions.

Conclusion

The majority of categorical EM residencies do not provide either PS data or any organized PE curriculum. Those that do use a heterogeneous set of data collection modalities and educational techniques. American Osteopathic Association and Accreditation Council for Graduate Medical Education residencies show no significant differences in their resident PS data provision or formal curricula. Further work is needed to improve education given the high stakes of PS scores in the emergency physician’s career.

INTRODUCTION

In 1976, Ware, Snyder and Wright published the first rigorous and validated patient satisfaction (PS) healthcare questionnaire, the PSQ.1,2 Within a decade, two Notre Dame professors, Irwin Press and Rod Ganey, founded Press Ganey Associates, whose mission of “improving the patient experience through compassionate, connected care” became the basis of a healthcare revolution.3 Hospitals saw the competitive advantage that could be gained by measuring their patients’ satisfaction and comparing these scores with those of similar organizations. Service quality, as measured through PS scores, became a key component of measuring the quality and value of healthcare.4

As the single largest payer of healthcare dollars in the United States, the federal government followed suit. In 2002, through a partnership with the Agency for Healthcare Research and Quality (AHRQ), the Centers for Medicare and Medicaid Services (CMS) first developed and then implemented the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. As part of the Deficit Reduction Act of 2005, and further through the Patient Protection and Affordable Care Act of 2010, hospitals received financial incentives for participating in the HCAHPS survey. The HCAHPS data are not only used to provide financial incentives to hospitals, but are also publicly reported on CMS’ consumer-oriented website,5 further emphasizing the importance of these scores to hospital systems and their administrators.

Several studies have linked PS to improved outcome measures,6–10 but physicians remain skeptical of the link between satisfaction and quality. A well-publicized trial published by Fenton et al in 2012 further sparked the controversy, revealing that higher PS scores were associated with higher overall healthcare and prescription drug expenditures, and increased mortality.10

Despite the conflicting evidence, PS scores have become a key component in the metric-driven environment in which physicians practice today. The Accreditation Council for Graduate Medical Education (ACGME), through the Next Accreditation System and Milestones, developed a framework for the assessment of residents in each of several core competency areas.11 Included in the milestones are several competencies relating to how well residents connect with their patients, including professionalism, interpersonal and communication skills, and system-based practice. Residency programs will need to train their residents in effective communication strategies, educate them on the importance of PS scores and prepare them for a practice where metrics drive hospital reimbursement and physician performance assessment.

The objective of this study was to determine the education and evaluation landscape for patient experience in categorical emergency medicine (EM) residencies in the U.S.

METHODS

The needs assessment survey was created using plain language and consensus questions developed by the authors and task force. In the interest of acquiring a large dataset, we kept the number of questions to a minimum out of respect for the varied duties of the respondents. Survey questions were tested for content and response-process issues by the authors’ own departmental leadership prior to survey release. Further validity evidence was not collected. We collected data about participants’ departmental role and residency name, but that information was used solely to cull duplicate program responses and to analyze ACGME vs. American Osteopathic Association (AOA) differences, respectively. All data relating to identity were strictly separated from program responses. The institutional review board reviewed this study and deemed it exempt.

We obtained access to the faculty through the Council of Emergency Medicine Residency Directors (CORD-EM) faculty listserv. The CORD-EM membership includes the departments of categorical U.S. residencies, prospective U.S. residencies and select international EM residency programs. Specifically, the membership is restricted to program directors (PDs), assistant PDs and core faculty of the departments’ education divisions. While patient experience is an international movement, we limited participation to existing categorical U.S. residencies.

The only inclusion criteria were that respondents work at a currently operating U.S. categorical residency and participate in the CORD-EM faculty listserv. Exclusion criteria were international faculty and faculty of residencies not yet in operation. Given the likelihood of multiple responses from some institutions, we decided that in cases of disagreement the most senior respondent’s data would be used (PD > APD > core faculty), as sketched below.
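For illustration only, this tie-breaking rule can be written as a few lines of Python. This is a minimal sketch of the stated logic; the record layout and field names are hypothetical and are not taken from the survey's actual export.

    # Keep one response per program, preferring the most senior respondent.
    # NOTE: field names ("program", "role") are hypothetical, for illustration.
    SENIORITY = {"PD": 0, "APD": 1, "core faculty": 2}  # lower value = more senior

    def dedupe(responses):
        kept = {}
        for resp in responses:
            current = kept.get(resp["program"])
            if current is None or SENIORITY[resp["role"]] < SENIORITY[current["role"]]:
                kept[resp["program"]] = resp
        return list(kept.values())

    # Example: the APD's conflicting answer is discarded in favor of the PD's.
    survey = [
        {"program": "A", "role": "APD", "provides_ps_data": True},
        {"program": "A", "role": "PD", "provides_ps_data": False},
    ]
    print(dedupe(survey))  # [{'program': 'A', 'role': 'PD', 'provides_ps_data': False}]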

The listserv contains 194 residencies: 30 AOA or joint AOA/ACGME accredited programs and 164 ACGME accredited programs. The AOA and joint accredited programs were combined for analysis, given that AOA accreditation was the variable being studied. The survey itself was administered using the online survey service SurveyMonkey®. An initial attempt at data collection was made with a form email sent through the listserv. When responses began to taper, we sent a second form email through the listserv to encourage those who had overlooked the first request. Finally, during the third and final round of data collection, program directors of non-responsive departments were sent targeted emails asking for participation.

The authors analyzed data using the built-in tools from SurveyMonkey and Microsoft Excel. We performed comparisons between AOA and ACGME programs using chi-square testing, with significance set at p = 0.05.
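As a worked example, the AOA/joint vs. ACGME response-rate comparison reported in the Results can be reproduced from the published counts (20 of 30 AOA/joint programs and 119 of 164 ACGME programs responded). The 2×2 table below is our reconstruction, not the authors' actual analysis code; notably, the uncorrected chi-square statistic reproduces the reported p=0.51.

    # Chi-square test of response rates by accreditation type, reconstructed
    # from the counts reported in the Results section of this paper.
    from scipy.stats import chi2_contingency

    #            responded  did not respond
    observed = [[20,        10],   # AOA or joint AOA/ACGME programs (30 total)
                [119,       45]]   # ACGME programs (164 total)

    # correction=False (no Yates continuity correction) reproduces the
    # reported p-value; with the correction applied, p rises to ~0.66.
    chi2, p, dof, expected = chi2_contingency(observed, correction=False)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")  # chi2 = 0.43, dof = 1, p = 0.51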

RESULTS

We received a total of 168 individual responses from 139 programs. This represents a program participation rate of 72%. Of the 139 programs that provided data, 15 were AOA accredited, 119 were ACGME and five were joint AOA/ACGME. This represents 62.5% of AOA residencies that participate in CORD-EM, 72% of ACGME residencies and 83% of joint accreditation programs. There was no significant difference in rates of response between AOA/joint and ACGME programs (p=0.51).

Of those 168 responses, 107 were from PDs, 46 from APDs and 15 from academic core faculty. After reconciling the 29 programs with multiple responses, the final participant count was 107 PDs (77%), 24 APDs (17%) and eight academic core faculty (6%). No program had more than two responses. Categorical EM programs exist in 43 states and Puerto Rico; we obtained responses from programs in 41 of those 44 jurisdictions.

This survey found that only 27% of responding residencies provide any PS data to their residents. Of those programs, most offer multiple modalities: 37% provide internally acquired quantitative data, 21% provide internally acquired anecdotal data, 37% provide third-party metrics specifically about residents, 39% provide third-party attending metrics about resident cases, 61% provide simulation scores (quantitative data taken from simulation encounters), and 21% use other modalities.

Only 35% of residencies provided any organized patient experience (PE) curriculum. Of these programs, again, most offer multiple modalities: 96% provide didactic lectures, 49% small group sessions, 47% simulation sessions, and 27% specifically use standardized patient encounters in their simulation sessions. Finally, 35% provide online or asynchronous resources for their residents. There was no significant difference in the numbers of AOA and ACGME programs providing curricula (p=0.32).

Of the programs that do provide PE education, 47% describe the differences between PS surveys. Again, there was no significant difference between AOA and ACGME programs (p=0.27). Finally, 100% of programs that provide a PE curriculum describe methods to improve PS scores.

DISCUSSION

Our study demonstrates that residency programs do not have a uniform approach to resident instruction on PE training or satisfaction measurement, with 65% of all residency programs having no formal curriculum on PE at all. Other aspects of communication have also been assessed in resident education and appear to be taught more consistently than the patient experience. Hern et al recently found that 57% of residency programs have a curriculum focused on transitions of care (handoffs), a much higher percentage than for PE.12 Another study found that 93% of residency programs had a curriculum focused on operations and administration.13

AOA and ACGME rates were similar and suboptimal. There were nonsignificant trends suggesting AOA programs were more likely to provide scores and education to their residents. PE will likely only fall farther down the list of AOA program priorities given the preparation required for the AOA’s merger with the ACGME, due to be completed in 2020.

Why is PE training a neglected area of medical education? Although PE is a relatively new topic in medical care, private practice emphasis and incentive-based compensation have skewed dramatically toward PS scores.14 It is possible that, because academic institutions have been slower to emphasize this, it has taken longer to introduce this critical element into residency education. Only 37% of programs provided resident-specific survey information about PE; in private practice, almost all facilities provide provider-specific patient data in the form of PS scores. It is also possible that academic practitioners discount the value of patient satisfaction, as there is controversy over the usefulness of PS scores as a proxy for excellent care. Alternatively, as PE is a relatively new field, there is less definitive evidence regarding the elements that contribute to a successful patient experience, possibly making educators less willing to teach a subject they know little about and believe has been inadequately studied.

LIMITATIONS

Our study does have a number of limitations. First, our response rate was not universal. The most likely bias is that responders were more likely to have a curriculum; as a result, we expect that our results overestimate the implementation of curricula and data collection for residents. In addition, we had 29 instances in which two faculty members of the same residency program responded. Of those 29 programs, 13 had concordant responses (45%) and another four had the same responses except on a single question (14%), leaving 12 with large and varied degrees of disagreement (41%). This variance has a minimal effect on the overall statistics, but it does deserve further evaluation. While the ultimate cause of this discordance is unclear, it likely represents further evidence of a paucity of focus on PE in EM graduate medical education.

CONCLUSION

The overall message of our study is the need for a more robust emphasis on patient experience education for EM residents. As PS is a metric on which physicians are judged and penalized, EM residencies do their residents a disservice by not preparing them adequately for clinical practice. We hope future research on PS will demonstrate best practices in resident education and drive national standardization of curricular elements that help improve the EM patient experience and EM physician patient-satisfaction scores.

Footnotes

Section Editor: Sally A. Santen, MD, PhD

Full text available through open access at http://escholarship.org/uc/uciem_westjem

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

REFERENCES

1. Ware JE, Snyder MK, Wright WR. Development and validation of scales to measure patient satisfaction with Medical Care Services. Vol I, Part A: review of literature, overview of methods, and results regarding construction of scales. Springfield, VA: National Technical Information Service; 1976. NTIS Publication No. PB 288–329.
2. Ware JE, Snyder MK, Wright WR. Development and validation of scales to measure patient satisfaction with Medical Care Services. Vol I, Part B: results regarding scales constructed from the patient satisfaction questionnaire and measures of other health care perceptions. Springfield, VA: National Technical Information Service; 1976. NTIS Publication No. PB 288–300.
3. Press Ganey. History and Mission. Available at: http://www.pressganey.com/about/history-mission.
4. Graff L, Stevens C, Spaite D, et al. Measuring and improving quality in emergency medicine. Acad Emerg Med. 2002;9(11):1091–107. doi: 10.1111/j.1553-2712.2002.tb01563.x.
5. Medicare.gov. Hospital Compare. The Official U.S. Government Site for Medicare. Available at: http://www.medicare.gov/hospitalcompare.
6. Zolnierek KB, Dimatteo MR. Physician communication and patient adherence to treatment: a meta-analysis. Med Care. 2009;47(8):826–34. doi: 10.1097/MLR.0b013e31819a5acc.
7. Glickman SW, Boulding W, Manary M, et al. Patient satisfaction and its relationship with clinical quality and inpatient mortality in acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2010;3(2):188–95. doi: 10.1161/CIRCOUTCOMES.109.900597.
8. Boulding W, Glickman SW, Manary MP, et al. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41–8.
9. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):e001570. doi: 10.1136/bmjopen-2012-001570.
10. Fenton JJ, Jerant AF, Bertakis KD, et al. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405–11. doi: 10.1001/archinternmed.2011.1662.
11. Nasca TJ, Philibert I, Brigham T, et al. The next GME accreditation system--rationale and benefits. N Engl J Med. 2012;366(11):1051–6. doi: 10.1056/NEJMsr1200117.
12. Hern HG Jr, Gallahue FE, Burns BD, et al; representing the Council of Residency Directors Transitions of Care Task Force. Handoff practices in emergency medicine: are we making progress? Acad Emerg Med. 2016;23(2):197–201. doi: 10.1111/acem.12867.
13. Watase T, Yarris LM, Fu R, et al. Educating emergency medicine residents in emergency department administration and operations: needs and current practice. J Grad Med Educ. 2014;6(4):770–3. doi: 10.4300/JGME-D-14-00192.1.
14. Zgierska A, Rabago D, Miller MM. Impact of patient satisfaction ratings on physicians and clinical care. Patient Prefer Adherence. 2014;8:437–46. doi: 10.2147/PPA.S59077.
