AEM Education and Training
. 2022 Apr 1;6(2):e10729. doi: 10.1002/aet2.10729

Emergency medicine resident clinical experience vs. in‐training examination content: A national database study

Melinda A Kizziah 1, Krystin N Miller 1, Jason J Bischof 1, Geremiha Emerson 1, Sorabh Khandelwal 1, Jennifer Mitzman 1, Lauren T Southerland 1, David P Way 1, Katherine M Hunold 1,
PMCID: PMC8908307  PMID: 35368501

Abstract

Objectives

Emergency medicine (EM) residents take the In‐Training Examination (ITE) annually to assess medical knowledge. Question content is derived from the Model of Clinical Practice of Emergency Medicine (EM Model), but it is unknown how well clinical encounters reflect the EM Model. The objective of this study was to compare the content of resident patient encounters from 2016–2018 to the content of the EM Model represented by the ITE Blueprint.

Methods

This was a retrospective cross‐sectional study utilizing the National Hospital Ambulatory Medical Care Survey (NHAMCS). Reason for visit (RFV) codes were matched to the 20 categories of the American Board of Emergency Medicine (ABEM) ITE Blueprint. All analyses were done with weighted methodology. The proportion of visits in each of the 20 content categories and 5 acuity levels were compared to the proportion in the ITE Blueprint using 95% confidence intervals (CIs).

Results

Both resident and nonresident patient visits demonstrated content differences from the ITE Blueprint. Regardless of resident involvement, the most common EM Model category was visits with only RFV codes related to signs, symptoms, and presentations. The musculoskeletal disorders (nontraumatic), psychobehavioral disorders, and traumatic disorders categories were overrepresented in resident encounters. Cardiovascular disorders and systemic infectious diseases were underrepresented. When residents were involved with patient care, visits had a higher proportion of RFV codes in the emergent and urgent acuity categories compared to those without a resident.

Conclusions

Resident physicians see higher‐acuity patients with varied presentations, but the distribution of their encounters differs by content category from that represented by the ITE Blueprint.

INTRODUCTION

Emergency medicine (EM) residents take the In‐Training Examination (ITE) to gauge medical knowledge gained during each year of residency training. The content of the ITE and the associated Qualifying Examination (QE) is derived from the Model of Clinical Practice of Emergency Medicine (EM Model). 1 The EM Model is a regularly updated consensus document that reflects EM standards of care and educational expectations for residents training in EM. 1 The design of the ITE and the QE is based on an examination blueprint that is considered to reflect the common patient case mix confronted by emergency physicians.

Ideally, the clinical patient encounters that residents experience daily in the emergency department (ED) are reflected accurately by the EM Model, as these encounters provide the foundation of residency training. However, due to their complexity, the case mix of patient care encounters is difficult to characterize. 2, 3 A recent study by Bischof et al. examined resident encounters in a single academic training program and demonstrated that the clinical case mix encountered by residents differed significantly from the ITE Blueprint. The following categories were overrepresented: signs, symptoms, and presentations; psychobehavioral disorders; and abdominal and gastrointestinal disorders. The following were underrepresented: procedures and skills, systemic infectious disorders, and thoracic‐respiratory disorders. 4 It is unknown whether these single‐site results are generalizable to all of emergency medicine practice or to training programs nationwide.

To expand on what we know from the single‐site study, we used the National Hospital Ambulatory Medical Care Survey (NHAMCS) database to further characterize the relationship between EM resident visits and the EM Model represented by the ITE content blueprint. Secondarily, we present the same data for nonresident encounters to describe any differences or similarities that may exist. Because the NHAMCS database estimates data for all emergency department patient visits across the U.S., we hope that our findings provide a more generalizable picture of how EM residents' clinical experience relates to the content knowledge assessed in their professional examinations.

METHODS

This was a retrospective cross‐sectional study utilizing the National Hospital Ambulatory Medical Care Survey (NHAMCS) from calendar years 2016–2018. The NHAMCS is conducted annually by the U.S. Census Bureau using a probability sampling design to characterize U.S. ED care and includes a representative sample of U.S. EDs. Trained Census interviewers travel to each hospital during its 4‐week reporting period and collect data on a simple random sample of visits. The sampling design allows the assignment of weights to each visit, enabling national estimates from this dataset. Data collected include but are not limited to patient characteristics, presenting complaint, vital signs, medications administered, testing completed, diagnoses, disposition, and hospital characteristics. A complete description of NHAMCS is available from the National Center for Health Statistics. 5

Calendar years 2016–2018 were chosen as these were the three most recent complete years available since the publication of the 2016 American Board of Emergency Medicine (ABEM) ITE Blueprint. 6 The 2016 version of the EM Model is the current standard for testing and training. The NHAMCS variable RESINT was used to identify all ED visits with physician trainee ("resident") involvement. Patient acuity for each visit was determined using the NHAMCS variable IMMEDR, which ranks the immediacy with which a patient should be seen into five categories from immediate to nonurgent. The immediacy variable was known to have missing data, either because the hospital does not assign acuity levels or because the value was truly missing. For this reason, data are presented both with and without missing data for completeness.

For each visit in the NHAMCS dataset, we matched each Reason for Visit (RFV) code to one of the 20 content categories of the ITE Blueprint 6 (Appendix 1). This strategy allowed for direct comparison of our results to the study by Bischof et al. 4 as the RFV codes represent the chief complaint. For example, someone presenting with chest pain would be assigned RFV code 1050.2 for chest heaviness or 1050.3 for chest burning as appropriate. Two board‐certified or board‐eligible EM physicians independently categorized all RFV codes according to the EM Model. Only one category was assigned to each RFV. Disagreements between the two physician reviewers were adjudicated by a third independent board‐certified EM physician. In the rare event in which all three reviewers disagreed, categorization was planned through consensus discussion among all authors. The two initial reviewers agreed on XX% of their initial categorizations. There were no disagreements after review by the third reviewer, and the two initial reviewers agreed with the final decision. All three reviewers are involved in resident education.

In the NHAMCS dataset each resident–patient encounter could contain up to five RFV codes, presented in random order. Therefore, categories 2–20 were not mutually exclusive as each visit could be categorized into more than one category based on the up to five RFV codes per encounter. Many patients fit into multiple categories and/or have multiple complaints. By allowing each visit to fall into multiple EM Model Categories, we captured this reality of clinical care. A visit was categorized in category 1 (signs, symptoms, and presentations) or as unclassifiable only if other categories were not applicable. This was done to avoid overrepresentation of this general category.
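The categorization rules above can be sketched in code. Note that the RFV‐to‐category mapping below is a hypothetical placeholder for illustration, not the actual crosswalk developed by the study's reviewers.

```python
# Sketch of the visit-categorization logic described above.
# The RFV-to-category mapping is a hypothetical placeholder,
# not the study's actual physician-adjudicated crosswalk.

SIGNS_SYMPTOMS = 1  # EM Model category 1

# Hypothetical mapping of RFV codes to EM Model categories 1-20.
RFV_TO_CATEGORY = {
    "1050.2": 1,   # chest heaviness -> signs, symptoms, and presentations
    "1050.3": 1,   # chest burning   -> signs, symptoms, and presentations
    "5820.0": 18,  # hypothetical traumatic-disorders code
}

def categorize_visit(rfv_codes):
    """Return the set of EM Model categories for one visit (up to 5 RFVs).

    Categories 2-20 are not mutually exclusive; a visit counts toward
    category 1 (or unclassified) only when no RFV maps to a more
    specific category, avoiding overrepresentation of category 1.
    """
    specific = set()
    has_signs_symptoms = False
    for code in rfv_codes:
        category = RFV_TO_CATEGORY.get(code)
        if category is None:
            continue                      # unmapped code
        if category == SIGNS_SYMPTOMS:
            has_signs_symptoms = True
        else:
            specific.add(category)
    if specific:
        return specific
    if has_signs_symptoms:
        return {SIGNS_SYMPTOMS}
    return {"unclassified"}
```

For example, a visit with RFVs for chest heaviness and a traumatic‐disorders code counts only toward traumatic disorders, while a visit with chest heaviness alone falls into category 1.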

The number of unweighted and weighted visits included in the analysis was reported. Characteristics of patient visits were described using survey‐weighted percentages and associated 95% confidence intervals (CIs) for all encounters following standard NHAMCS methodology and stratified by resident involvement. The proportion of visits in each of the 20 content categories and 5 acuity levels was compared to the expected proportion based upon the ITE Blueprint using 95% CIs. P values are not presented because, given the large sample size, statistical significance may be reached without clinical significance; weighted confidence intervals are therefore more informative.
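The core estimate here, a survey‐weighted proportion with a 95% CI, can be sketched as follows. This is a deliberately simplified illustration: a proper NHAMCS analysis must use the full complex‐survey design (strata and primary sampling units) for variance estimation, as the authors' Stata workflow does; the Kish effective‐sample‐size approximation below ignores those design effects.

```python
import math

def weighted_proportion_ci(flags, weights, z=1.96):
    """Survey-weighted proportion with a normal-approximation 95% CI.

    Simplified sketch only: real NHAMCS variance estimation must
    account for strata and PSUs (e.g., Stata's svy commands). Here
    the effective sample size uses Kish's approximation for
    unequally weighted data.
    """
    total = sum(weights)
    # Weighted proportion of visits meeting the condition (flag True).
    p = sum(w for f, w in zip(flags, weights) if f) / total
    # Kish effective sample size: (sum w)^2 / sum(w^2).
    n_eff = total ** 2 / sum(w * w for w in weights)
    se = math.sqrt(p * (1 - p) / n_eff)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)
```

Each visit's flag would indicate membership in one EM Model category, and its weight would be the NHAMCS visit weight, yielding the category percentages and CIs reported in Tables 2 and 3.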

All data management was conducted in SAS 9.4 (SAS Institute, Inc., Cary, NC) and all analyses in STATA 16 (Stata Corp., College Station, TX) using the NHAMCS weights to obtain nationally representative estimates.

RESULTS

The NHAMCS dataset contained 56,467 unweighted visits in calendar years 2016–2018 representing 414,542,510 weighted visits: 145,591,209 in 2016, 138,977,360 in 2017, and 129,973,941 in 2018. Of these, 42,143,956 (10.2%) were seen by a resident: 8.2% in 2016, 11.3% in 2017, and 11.2% in 2018. Most patients were between the ages of 18–64 (60.5%). A majority of the patients were female (55.3%) and Caucasian (70.9%). The age distribution of resident encounters was similar to that of nonresident encounters. However, residents saw more patients from communities of color and more patients who arrived by emergency medical services (EMS) (Table 1).

TABLE 1.

Characteristics of visits with resident/intern involvement

All (n = 414,542,510)  No resident/intern encounter (n = 372,398,554)  Resident/intern encounter (n = 42,143,956)
Age
<18 23.0 (20.6–25.5) 22.2 (19.8–24.8) 29.6 (23.7–36.1)
18–64 60.5 (58.4–62.5) 61.1 (58.9–63.2) 55.1 (50.8–59.4)
>64 16.6 (15.5–17.7) 16.7 (15.6–17.9) 15.3 (11.8–19.7)
Female 55.3 (54.6–56.0) 55.7 (54.9–56.4) 52.0 (50.0–54.1)
Race
White 70.9 (67.5–74.1) 71.7 (68.2–75.1) 63.2 (58.6–67.6)
Black 25.5 (22.3–28.9) 24.8 (21.5–28.4) 31.2 (26.9–35.9)
Other 3.7 (3.1–4.4) 3.4 (2.9–4.1) 5.6 (4.2–7.3)
Arrived by EMS 15.4 (14.3–16.6) 14.5 (13.5–15.6) 23.3 (19.8–27.1)

Data presented as weighted counts (n) and percentages (95% confidence intervals).

Overall, 59.0% of visits had only one RFV code; fewer had two (27.8%), three (9.8%), four (2.3%), or five (0.5%). The majority of resident visits also had one RFV code (54.6%); fewer had two (28.3%), three (12.1%), four (4.1%), or five (0.7%). The most common EM Model category was signs, symptoms, and presentations regardless of resident involvement (40.3% overall, 40.7% without resident, and 36.6% with resident); thus, this category was overrepresented in clinical encounters compared to what is tested on the ITE (9%). Similarly, the musculoskeletal disorders (nontraumatic), psychobehavioral disorders, and traumatic disorders categories were overrepresented in clinical practice, though to a lesser degree. Conversely, cardiovascular disorders and systemic infectious diseases were underrepresented in clinical practice (5.1% and 1.7%, respectively) when compared to the ITE (10% and 5%, respectively) (Table 2, Figure 1).

TABLE 2.

Percentage of visits with a reason for visit code in each of the American Board of Emergency Medicine (ABEM) categories

ABEM category  All (n = 414,542,510)  No resident encounter (n = 372,398,554)  Resident encounter (n = 42,143,956)  ITE percentage
1. Signs, symptoms, and presentations a 40.3 (38.2–42.4) 40.7 (38.6–42.9) 36.6 (34.0–39.2) 9
18. Traumatic disorders 16.5 (15.6–17.5) 16.6 (15.6–17.6) 15.9 (14.3–17.6) 10
11. Musculoskeletal disorders (nontraumatic) 7.1 (6.6–7.6) 7.2 (6.7–7.8) 6.1 (5.3–7.1) 3
14. Psychobehavioral disorders 6.8 (6.2–7.5) 6.5 (5.9–7.1) 9.7 (8.3–11.2) 4
7. Head, ear, eye, nose, and throat disorders 6.3 (6.0–6.7) 6.4 (6.0–6.8) 5.8 (4.9–6.8) 5
2. Abdominal and gastrointestinal disorders 6.3 (5.7–6.9) 6.1 (5.6–6.8) 7.3 (6.1–8.8) 8
12. Nervous system disorders 4.5 (4.1–4.9) 4.3 (4.0–4.7) 5.9 (4.8–7.2) 5
3. Cardiovascular disorders 4.2 (3.5–5.0) 4.1 (3.4–4.8) 5.1 (3.4–7.4) 10
4. Cutaneous disorders 3.9 (3.6–4.2) 3.9 (3.6–4.2) 4.0 (3.3–4.8) 1
19. Procedures and skills 3.8 (3.4–4.2) 3.7 (3.3–4.1) 5.1 (4.4–6.0) 8
16. Thoracic‐respiratory disorders 3.5 (3.0–4.0) 3.3 (2.9–3.8) 5.0 (3.8–6.4) 8
13. Obstetrics and gynecology 2.8 (2.5–3.0) 2.8 (2.5–3.1) 2.5 (2.0–3.1) 4
5. Endocrine, metabolic, and nutritional disorders 2.1 (1.8–2.6) 2.1 (1.7–2.5) 2.7 (1.8–4.0) 2
20. Other components 2.1 (1.7–2.6) 2.0 (1.6–2.5) 2.5 (1.8–3.4) 3
15. Renal and urogenital disorders 2.0 (1.8–2.2) 2.0 (1.8–2.3) 1.8 (1.5–2.2) 3
17. Toxicologic disorders 1.7 (1.5–1.9) 1.6 (1.4–1.8) 2.6 (1.9–3.4) 5
10. Systemic infectious diseases 1.4 (1.2–1.6) 1.3 (1.1–1.6) 1.7 (1.3–2.1) 5
6. Environmental disorders 1.3 (1.2–1.5) 1.4 (1.2–1.6) 0.9 (0.6–1.3) 3
9. Immune system disorders 0.7 (0.6–0.8) 0.7 (0.6–0.8) 0.9 (0.4–1.7) 2
8. Hematologic disorders 0.7 (0.5–0.8) 0.6 (0.5–0.8) 1.3 (0.9–1.7) 2
Unclassified a 0.8 (0.7–0.9) 0.8 (0.6–0.9) 1.0 (0.7–1.5)

Categories are not mutually exclusive. All counts (n), percentages and confidence intervals are presented using survey weights.

a Visits with only RFVs belonging to this category.

FIGURE 1. Percentage and 95% confidence intervals (CI) of resident visits with a reason for visit code in each of the categories from the 2016 American Board of Emergency Medicine (ABEM) model of care compared to the percentage on the ABEM In‐Training Examination (ITE)

Regarding acuity, most patients presented with urgent (33.6%) or semi‐urgent (23.5%) acuity, or had missing/unknown (28.7%) acuity. Visits with a resident had a higher proportion in the emergent and urgent acuity categories compared to those without a resident. Fewer resident visits were of immediate acuity than the proportion tested on the ITE (Table 3).

TABLE 3.

Acuity of visits with resident/intern involvement compared to the American Board of Emergency Medicine (ABEM) In‐Training Examination (ITE)

All (n = 414,542,510)  No resident encounter (n = 372,398,554)  Resident encounter (n = 42,143,956)  ITE percentage
Acuity (including missing/unknown)
Immediate 0.8 (0.5–1.1) 0.7 (0.5–1.1) 1.6 (0.9–2.7) 30 (Critical)
Emergent 9.6 (8.3–11.1) 8.6 (7.4–10.0) 18.2 (14.9–22.0) 40 (Emergent)
Urgent 33.6 (30.4–36.9) 32.7 (29.4–36.2) 41.0 (36.4–45.8) 21 (Lower Acuity)
Semi‐urgent 23.5 (21.4–25.8) 24.3 (22.0–26.8) 16.7 (13.1–21.0)
Nonurgent 3.8 (3.0–4.7) 3.9 (3.1–4.9) 2.9 (2.1–3.9)
Missing/unknown 28.7 (23.2–35.0) 29.8 (24.0–36.3) 19.7 (12.8–28.9) 9 (None)
Acuity (excluding missing/unknown)
Immediate 11.0 (7.6–15.8) 9.9 (6.5–14.9) 19.7 (11.6–33.4) 30 (Critical)
Emergent 13.5 (12.1–14.9) 12.3 (11.0–13.7) 22.6 (19.1–26.6) 40 (Emergent)
Urgent 47.1 (45.0–49.2) 46.6 (44.3–48.9) 51.1 (48.0–54.1) 21 (Lower Acuity)
Semi‐urgent 33.0 (31.1–34.9) 34.6 (32.6–36.6) 20.8 (16.8–25.4)
Nonurgent 5.3 (4.3–6.5) 5.5 (4.4–6.7) 3.6 (2.6–4.8)

Data presented with and without missing data as weighted counts (n) and percentages (95% confidence intervals).

In addition to discrepancies between resident encounters and the ITE, discrepancies were observed between resident encounters and all physician encounters, as seen in Table 2. Specifically, signs, symptoms, and presentations made up 36.6% of resident encounters compared to 40.3% of all physician encounters. Residents also saw a higher percentage of hematologic disorders, nervous system disorders, psychobehavioral disorders, thoracic‐respiratory disorders, toxicologic disorders, and procedures and skills.

DISCUSSION

Using the NHAMCS dataset, we categorized RFV codes to describe the distribution of resident patient visits across the EM Model. Overall, our results demonstrated that resident physicians see a wide variety of patient presentations, high acuity patients, and a case mixture similar to general practice across the United States; all of these are desirable to help ensure high‐quality residency training. We did find both under‐ and overrepresentation of the EM Model compared to the ITE. We found that the signs, symptoms, and presentations; musculoskeletal disorders (nontraumatic); and traumatic disorders categories were overrepresented in general clinical practice and in resident encounters compared to the ITE, while cardiovascular disorders and systemic infectious diseases were relatively underrepresented. Resident visits also had a higher acuity compared to nonresident visits.

These discrepancies may be appropriate for several reasons. Some presentations in clinical emergency medicine are rare yet require an immediate and skilled response. Accordingly, it is imperative that emergency medicine residents know what to do when confronted with those rare situations. Since residents are not likely to experience these rare events during clinical encounters, they must be prepared for them in alternative ways. Since assessment drives curriculum, the specialty of emergency medicine can reinforce the importance of these rare topics by overemphasizing them on content examinations. Thus, it is appropriate for residents to be tested on managing rare but critical case presentations such as a need for a cricothyroidotomy (procedures and skills) or hypothermic arrest (environmental disorders).

Patients present with symptoms, and it is the physician's job to make a diagnosis. Therefore, the large proportion of visits with RFVs classifiable only as signs and symptoms makes sense and is a logical consequence of the physician's role. An analysis of diagnostic codes might address this but would have other limitations. For example, diagnostic codes fail to reliably identify important diagnoses such as sepsis 7 and pulmonary embolism 8 in ED data.

In many EDs, advanced practice providers, such as nurse practitioners or physician assistants, are utilized to see low‐acuity patients. Though not addressed by our analysis, this may help explain why resident visits were of higher acuity than nonresident visits. While residents should be encouraged to see complex, high‐acuity patients, care should also be taken by residency program leadership to ensure that residents are receiving appropriate education and exposure to the care of low‐acuity complaints that are seen in higher proportions in general EM practice.

A recent manuscript evaluated clinical encounters at a single academic medical center and, consistent with our findings, demonstrated discrepancies between the clinical experience of its resident physicians and the ITE content. 4 The overrepresentation of the signs, symptoms, and presentations and psychobehavioral disorders categories seen in the NHAMCS dataset is consistent with the findings of Bischof et al. 4; however, the abdominal and gastrointestinal disorders category was not overrepresented in our national sample. This may be explained by patient population variation between sites and highlights the potential impact of site‐by‐site variation in clinical experience.

Unique to this analysis and not present in the previous manuscript is a comparison of patient characteristics to all ED encounters in the United States. Based upon the overlapping confidence intervals, the resident experience is similar to the overall experience of emergency physicians, but some important differences were observed between encounters with resident involvement and those without. Resident encounters included a higher proportion of patients from communities of color and patients who arrived by EMS. There are several possible explanations for this difference, though the actual reason is unknown. One hypothesis is the location of residency programs at safety‐net hospitals that treat a higher proportion of these patients.

Taken together, this analysis and the previous single‐site analysis suggest the importance of understanding the pathology residents are exposed to in their clinical encounters and its alignment with the EM Model. To ensure residents are adequately prepared for the ITE, the ABEM certification process, and independent clinical practice, gaps in the clinical environment must be supplemented through a variety of means including on‐shift teaching, simulation, and resident didactic programming.

LIMITATIONS

The limitations of the NHAMCS database are well described 9; we have followed all best practices and believe that the objective of this manuscript is well informed by this dataset despite the limitations. The primary critique of NHAMCS is data accuracy; a previous example is the recorded disposition of intubated patients to locations other than the intensive care unit. 10 Several limitations specific to this analysis were imposed by the database. First, we do not know how accurately the RFV code represents the patient encounter. For example, a patient with an RFV of headache could present with a variety of diagnoses that would fit into other categories, such as an infectious disease or a neurological disorder, among others. However, alternate methodologies such as diagnosis codes would have similar limitations that could only be overcome by reviewing the entirety of a patient encounter. Second, we do not know the resident's level of training for each encounter in the NHAMCS database. Thus, we cannot comment on how the distribution of the EM Model of Clinical Practice may change as a trainee progresses through residency. For example, we do not know if senior residents see more high‐acuity patients than junior residents. Third, we do not know the type of residency program involved (for example, academic or community), and thus we cannot comment on how the distribution may differ between program types.

CONCLUSION

The NHAMCS database provided the opportunity for an overview of resident physician encounters in the United States. Resident encounters have a different distribution of acuity and clinical presentations than are tested on the ITE. Importantly, resident physicians see varied patient presentations and high‐acuity patients with only slight differences from clinical practice across the United States in this dataset. This information may be vital to help guide the curricula of resident training programs.

CONFLICT OF INTEREST

No conflicts of interest to report.

AUTHOR CONTRIBUTION

KMH and JJB conceived the idea for this manuscript. KMH, MAK, KNM, and JJB performed the data analysis. MAK, KNM, JJB, and KMH drafted the manuscript. GE, SK, JM, LTS, and DPW were responsible for critical revision of the manuscript for important intellectual content and their content expertise.

APPENDIX 1. The 20 content categories

1. Signs, symptoms and presentations.

2. Abdominal and gastrointestinal disorders.

3. Cardiovascular disorders.

4. Cutaneous disorders.

5. Endocrine, metabolic, and nutritional disorders.

6. Environmental disorders.

7. Head, ear, eye, nose and throat disorders.

8. Hematologic disorders.

9. Immune system disorders.

10. Systemic infectious diseases.

11. Musculoskeletal disorders (nontraumatic).

12. Nervous system disorders.

13. Obstetrics and gynecology.

14. Psychobehavioral disorders.

15. Renal and urogenital disorders.

16. Thoracic‐respiratory disorders.

17. Toxicologic disorders.

18. Traumatic disorders.

19. Procedures and skills.

20. Other components.

Kizziah MA, Miller KN, Bischof JJ, et al. Emergency medicine resident clinical experience vs. in‐training examination content: A national database study. AEM Educ Train. 2022;6:e10729. doi: 10.1002/aet2.10729

Funding information

This project was unfunded.

Decision Editor: Jaime Jordan, PhD.

REFERENCES

1. Beeson MS, Ankel F, Bhat R, et al. The 2019 model of the clinical practice of emergency medicine. J Emerg Med. 2020;59:96‐120.
2. Langdorf MI, Strange G, Macneil P. Computerized tracking of emergency medicine resident clinical experience. Ann Emerg Med. 1990;19:764‐773.
3. Douglass A, Yip K, Lumanauw D, Fleischman RJ, Jordan J, Tanen DA. Resident clinical experience in the emergency department: patient encounters by postgraduate year. AEM Educ Train. 2019;3:243‐250.
4. Bischof JJ, Emerson G, Mitzman J, Khandelwal S, Way DP, Southerland LT. Does the emergency medicine in‐training examination accurately reflect residents' clinical experiences? AEM Educ Train. 2019;3:317‐322.
5. About the Ambulatory Health Care Surveys. Centers for Disease Control and Prevention. 2021. https://www.cdc.gov/nchs/ahcd/about_ahcd.htm
6. In‐Training Examination. American Board of Emergency Medicine. 2021. Accessed May 12, 2021. https://www.abem.org/public/for‐program‐directors/in‐training‐examination
7. Ibrahim I, Jacobs IG, Webb SA, Finn J. Accuracy of International Classification of Diseases, 10th revision codes for identifying severe sepsis in patients admitted from the emergency department. Crit Care Resusc. 2012;14:112‐118.
8. Burles K, Innes G, Senior K, Lang E, McRae A. Limitations of pulmonary embolism ICD‐10 codes in emergency department administrative data: let the buyer beware. BMC Med Res Methodol. 2017;17:89.
9. McCaig LF, Burt CW. Understanding and interpreting the National Hospital Ambulatory Medical Care Survey: key questions and answers. Ann Emerg Med. 2012;60(6):716‐721.e1.
10. Green SM. Congruence of disposition after emergency department intubation in the National Hospital Ambulatory Medical Care Survey. Ann Emerg Med. 2013;61(4):423‐426.e8.

Articles from AEM Education and Training are provided here courtesy of Wiley
