AEM Education and Training
2024 Mar 22;8(2):e10954. doi: 10.1002/aet2.10954

SAEM systematic online academic resource (SOAR) review: Gastrointestinal illnesses

Lisa Zhao 1, Sabrina Tom 1, Neil Patel 1, Patricia Fermin 1, Ryan Pedigo 2, Shirley Whiinh Bae 3, JooYeon Jung 4, Teresa Chan 5,6,7, Jonie Hsiao 1
PMCID: PMC10958398  PMID: 38525362

Abstract

Background and Objectives

Free open access medical education (FOAM) has become an essential tool for emergency medicine (EM) education and can be valuable to clinicians as a point‐of‐care resource. The development of the revised Medical Education Translational Resources Impact and Quality (rMETRIQ) tool provides a standardized means of quality assessment. Previous entries of the Society for Academic Emergency Medicine systematic online academic resource (SOAR) series have focused on renal, endocrine, and sickle cell disorders. In this iteration, we strive to identify, curate, and describe FOAM topics specific to acute gastrointestinal (GI) illnesses.

Methods

We searched 389 keywords across 11 GI topics, modified from the 2019 Model of the Clinical Practice of EM (EM Model), using the search engine Google FOAM and the top 50 websites listed on Academic Life in Emergency Medicine's Social Media Index. The sites underwent preliminary screening to eliminate resources that were not relevant to EM or GI illnesses. Identified resources were evaluated with the rMETRIQ tool by five board-certified EM physicians who received rMETRIQ rater training.

Results

After duplicates of the initial 39,505 resources were eliminated, 8059 remained. Primary screening resulted in a final 1202 resources. The most common categories were large bowel (18%), small bowel (13%), stomach (11%), esophagus (11%), biliary (11%), and liver (10%). Many resources covered multiple topics and subtopics. The final mean intraclass correlation coefficient among the five physicians was 0.95 (95% CI 0.92–0.98) for rMETRIQ scoring. We identified 254 sites considered “high quality,” with a rMETRIQ score of 16 or higher as designated in prior reviews.

Conclusions

This iteration of the SOAR review yielded the highest number of high-quality resources of any SOAR review to date, with 21% of resources scoring ≥ 16. A final list of high-quality resources can guide trainees, educator recommendations, and FOAM authors.

Keywords: FOAM, gastrointestinal, SOAR

INTRODUCTION

Free open access medical education (FOAM) has become an increasingly common tool for emergency medicine (EM) trainee education. 1 , 2 , 3 It has been reported that primary sources of literature have seen declining subscriber rates. 2 Meanwhile, asynchronous learning has become more prominent in residency programs and is now used as an adjunct to traditional synchronous didactic learning. 2 , 3 , 4 , 5 , 6 Not surprisingly, given its ease of use and accessibility, residents appear to prefer online resources over traditional primary literature and textbooks, which may affect the depth and quality of information learned. 6 , 7 Mobile point-of-care resources are also utilized by seasoned physicians for teaching and clinical guidance. 8 Moreover, online resources can often augment primary literature viewership by highlighting and summarizing significant peer-reviewed content. 9

FOAM has been criticized for its decentralized nature and inconsistent quality review processes, which make it difficult to assess content reliability. 2 Anyone can publish information and opinions online, regardless of quality. Given these criticisms of FOAM, it is imperative for educators to guide trainees to more reliable and reputable resources. Evaluation by gestalt alone, particularly by smaller numbers of raters, has been shown to be unreliable. 10 , 11 , 12

The Society for Academic Emergency Medicine's (SAEM) systematic online academic resource (SOAR) series aims to comprehensively evaluate and curate FOAM resources within a specific EM category using the revised Medical Education Translational Resources Impact and Quality (rMETRIQ) score. 13 , 14 , 15 The rMETRIQ score (Table 1) covers three main domains: content, credibility, and peer review, assessed with seven quality-related questions on a 4-point scale for a maximum of 21 points. 16 Thus far, reviews have been performed for renal, endocrine, and sickle cell diseases for the SOAR bank of reviews. 13 , 14 , 15 This review focuses on acute gastrointestinal (GI) conditions relevant to EM.

TABLE 1.

rMETRIQ table.

Question Options
Q1: Does the resource provide enough background information to situate the user?

3—Yes, the resource provides sufficient background information to situate the user and also directs users to other valuable resources related to the topic.

2—Yes, the resource provides sufficient background information to situate the user.

1—No, the information presented within the resource cannot be situated within its broader context, but users are directed to resources with this information.

0—No, the information presented within the resource cannot be situated within its broader context without looking up information independently.

Q2: Does the resource contain an appropriate amount of information for its length?

3—No unnecessary, redundant, or missing content; all content was essential.

2—Some unnecessary, redundant, or missing content, but most content was essential.

1—Lots of unnecessary, redundant, or missing content.

0—Insufficient content.

Q3: Is the resource well written and formatted?

3—The resource is very well written and formatted in a way that optimizes and benefits learning.

2—The resource is reasonably well written and formatted, but aspects of the organization or presentation are distracting or otherwise detrimental to learning.

1—The resource is somewhat well written and formatted but could benefit from substantive editing (e.g., grammatical errors are seen or could be better organized).

0—The resource is poorly written and/or formatted and should not be a resource for learning.

Q4: Does the resource cite its references?

3—Yes, the references are cited and clearly map to specific statements within the resource, and all statements of fact that are not common knowledge are supported with a reference.

2—Yes, the references are cited and clearly map to specific statements within the resource, but statements of fact that are not common knowledge are made without the support of a reference.

1—Yes, there are references listed, but they do not map to specific statements within the resource.

0—No, no references are cited.

Q5: Is it clear who created the resource and do they have any conflicts of interest?

3—Yes, the identity and qualifications of the author are clear and they specify that they have no relevant conflicts of interest.

2—Yes, the identity and qualifications of the author are clear, but they do not disclose whether they have any conflicts of interest.

1—Yes, the identity of the author is clear, but they do not list their qualifications or disclose whether they have any conflicts of interest.

0—No, the author of the resource has significant conflicts of interest or is not clearly identified (e.g., no name or a pseudonym is used).

Q6: Are the editorial and prepublication peer review processes that were used to create the resource clearly outlined?

3—Yes, a clear review process is described on the website and it was clearly applied to the resource.

2—Yes, a clear review process is described on the website, but it was not clear whether it was applied to the resource.

1—Yes, a review process is mentioned on the website, but it was not clearly described.

0—No, it is unclear whether or not the website has a review process, or there is no process.

Q7: Is there evidence of post‐publication commentary on the resource's content by its users?

3—Yes, a robust discussion of the resource's content has occurred that expands on the content of the resource.

2—Yes, some comments have been made on the resource, but a robust discussion about the resource's content has not occurred.

1—There was a mechanism to leave comments but none had been made.

0—No, there was no mechanism to leave comments, or the comments that were present were either unrelated to the post or unprofessional.

Abbreviation: rMETRIQ, revised Medical Educational Translational Resources Impact and Quality.

METHODS

Study design

The design of this study followed that of previous SOAR reviews, adhering whenever possible to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Although the SOAR is not considered a systematic review, the PRISMA approach was adopted to ensure an unbiased review of FOAM. 17 The content reviewers (LZ, JH, ST, NP, RP, TF) are all board-certified emergency physicians who are affiliated with a U.S. academic institution and received their EM residency training in the United States, specifically southern California. The reviewers, four women and two men of Caucasian, Asian, or mixed heritage, were recruited on a voluntary basis. All reviewers are based in the United States; only blogs in English were reviewed.

Topic identification

Consistent with prior SOAR reviews, the initial search list was compiled based on the 2019 Model of the Clinical Practice of EM (EM Model) section on “abdominal and gastrointestinal disorders.” 18 The EM Model is reviewed and updated every 3–5 years by a collaborative task force (American Board of Emergency Medicine, American College of Emergency Physicians, Council of Emergency Medicine Residency Directors, Emergency Medicine Residents’ Association, Residency Review Committee for EM, and SAEM) and represents the minimum core content deemed necessary for practicing EM physicians. A total of 12 main topics, the same as those listed in the aforementioned section of the EM Model, were then expanded into a list of keywords by study reviewers (LZ, JH, ST, NP, RP). Reviewers then met three times virtually and communicated through email over 2 months to reach final consensus on the search list. Broad search terms were preferred over specific ones because they yield more search results. Any disagreements were decided by majority vote. A final comprehensive list of 389 keywords was compiled.

Database search

The search was conducted on Google FOAM using the chosen keywords and the top 50 websites listed on Academic Life in Emergency Medicine's (ALiEM) Social Media Index (SMi). 19 A computer program, developed in previous reviews, was used to automate a search of each keyword within each website and then extract the top 100 results and combine the results into a single spreadsheet file. Duplicate results were removed, and the file was then reviewed manually to confirm accuracy of the search (JJ, SB).
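The automation program itself is not published with the paper, but the pipeline it describes — query each keyword against each SMi site, keep the top results, merge everything into one spreadsheet, and drop duplicates — can be sketched as follows. This is a minimal illustration, not the authors' code: the `search` function is a hypothetical stand-in for the Google FOAM query step, and the site and keyword names are made up.

```python
import csv
from itertools import islice

def search(keyword, site, limit=100):
    # Hypothetical stand-in for the real search step. An actual
    # implementation would call a search API with a site-restricted
    # query (e.g., f"{keyword} site:{site}") and return result URLs.
    return [f"https://{site}/post-{keyword}-{i}" for i in range(3)]

def run_search(keywords, sites, limit=100):
    """Query every keyword against every site, keep up to `limit`
    results per query, and drop duplicate URLs across all queries."""
    seen, rows = set(), []
    for site in sites:
        for kw in keywords:
            for url in islice(search(kw, site, limit), limit):
                if url not in seen:  # deduplicate across keywords/sites
                    seen.add(url)
                    rows.append({"keyword": kw, "site": site, "url": url})
    return rows

def write_spreadsheet(rows, path):
    # Combine all results into a single spreadsheet (CSV) file.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["keyword", "site", "url"])
        writer.writeheader()
        writer.writerows(rows)

rows = run_search(["appendicitis", "cholecystitis"], ["example-em-blog.org"])
```

The resulting file would then be reviewed manually, as the authors describe, to confirm the accuracy of the automated search.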

Inclusion criteria

Open-access FOAM resources identified by the search of the 389 keywords were included. Written summaries posted by podcasts and videocasts were included in this review. Some resources covered several topics within the webpage or diagnoses from multiple systems; if included, only the section pertaining to GI illnesses was scored.

Exclusion criteria

The resulting resources were then distributed among the authors (LZ, JH, ST, NP, RP) for initial review for exclusions. Resources were excluded if they did not contain at least one section pertaining to GI illnesses or did not pertain to EM. Of note, if a resource was considered better categorized under another EM Model topic, it was not included. For example, resources pertaining to traumatic or vascular abdominal injuries were excluded, as they are categorized within other sections of the EM Model, and to ensure a feasible workload, as this review already contained the largest number of total resources of any SOAR study to date. Resources that did not contain substantive information regarding a GI illness, for example, case studies with multiple-choice answers but no discussion, were also excluded. Likewise, resources containing no or minimal text, such as podcast notes with only a brief description, were excluded. Lastly, resources that were paid or required a subscription were excluded from the review.

Data extraction

During the study, one of the participating physicians could not continue (RP) and was replaced with another physician (TF), who performed the final scoring of resources. To assess consistency of scoring, reviewers (LZ, JH, ST, NP, TF) underwent tool rater training using 20 resources randomly selected from the total pool. Reviewers scored the same sites individually and then evaluated their scoring decisions together for consensus building. Postscoring analysis was performed; if there was a singular discrepancy involving one reviewer, that reviewer was provided feedback. Any discrepancies among multiple reviewers were discussed until consensus was reached. The reviewers then underwent a second tool rater training involving 30 other sites from the total pool to ensure that the initial training was effective and to provide inter-rater reliability estimates for the tool.
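The paper reports inter-rater reliability as an intraclass correlation coefficient (ICC) but does not specify the ICC form; a common choice for a fixed panel of raters scoring the same items is ICC(2,1) (two-way random effects, absolute agreement, single rater). As an illustrative sketch under that assumption, a point estimate can be computed from an items-by-raters score matrix; the ratings below are the classic Shrout and Fleiss worked example, not data from this study.

```python
def icc_2_1(x):
    """Point estimate of ICC(2,1): two-way random effects, absolute
    agreement, single rater. `x` is an n-items by k-raters matrix
    (list of lists) with no missing scores."""
    n, k = len(x), len(x[0])
    grand = sum(sum(row) for row in x) / (n * k)
    row_means = [sum(row) / k for row in x]
    col_means = [sum(row[j] for row in x) / n for j in range(k)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # items
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # raters
    sst = sum((v - grand) ** 2 for row in x for v in row)
    mse = (sst - msr * (n - 1) - msc * (k - 1)) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Shrout & Fleiss (1979) worked example: 6 targets rated by 4 judges.
ratings = [[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
           [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]]
icc = icc_2_1(ratings)
```

For the study's confidence intervals, a statistics package computing ICC variants with CIs would be used in practice rather than a hand-rolled point estimate.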

Data were abstracted and organized in individual Excel files that included rMETRIQ scoring, topic and subtopic classification, type of media included, and audience level according to the reviewer's gestalt. Additional information was obtained from sites scoring ≥ 12, including authors, specialty other than EM, and academic affiliation. The author's intended audience was not captured in this review because, in previous reviews, it was seldom stated by the author. Unlike prior reviews, this review also did not designate an appropriate usage for each resource, such as journal club, postshift reading, or flipped-classroom resource, because, when polled, reviewers lacked confidence in assigning resources to these categories accurately and reliably. Additionally, the publication date was not compiled as it was in previous reviews because some sites did not readily display this information, and searching for and documenting it was considered too time-consuming by the reviewers.

The remaining sites after tool rater training were divided amongst five physicians (LZ, JH, ST, NP, TF) to perform rMETRIQ scoring independently. A score of ≥16 was designated a high‐quality resource, consistent with prior reviews, based on a modified Angoff score. 20

RESULTS

The initial search resulted in 39,505 resources. After duplicates and primary journal articles were excluded, 8059 resources remained. An initial screen eliminating resources not related to GI illnesses or EM led to 1286 resources. An additional 84 resources were further excluded due to dead links or having no content at the time of the final review. Of the final 1202 resources, 254 sites were deemed to be high quality based on our rMETRIQ scoring (Figure 1). Inter‐rater reliability scoring of the first 20 training resources was 0.93 (95% CI 0.87–0.97).

FIGURE 1.


Flow diagram from search results of GI resources to final resources after primary and secondary review. FOAM, free open access medical education; GI, gastrointestinal.

For the final 30 training resources, the intraclass correlation coefficient was 0.95 (95% CI 0.92–0.98), indicating very high agreement between reviewers. Of the final 1202 resources, the highest score was 20 (n = 1), the lowest score was 2 (n = 1), the median score was 13, and the mean (±SD) score was 13 (±3.23). The most common score was 14. The distribution of scores is displayed in Figure 2.
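The descriptive statistics reported here (mean, SD, median, mode, and the count of resources at or above the high-quality cutoff) follow directly from the list of per-resource rMETRIQ totals. A minimal sketch of that computation, using a small hypothetical score list rather than the study's 1202 actual scores:

```python
import statistics

# Hypothetical rMETRIQ totals for a handful of resources; the study's
# actual data are not reproduced here.
scores = [14, 13, 16, 10, 14, 17, 12, 14, 9, 13]

summary = {
    "mean": statistics.mean(scores),
    "sd": statistics.stdev(scores),        # sample SD, reported as +/-SD
    "median": statistics.median(scores),
    "mode": statistics.mode(scores),       # most common score
    "high_quality": sum(s >= 16 for s in scores),  # rMETRIQ >= 16 cutoff
}
```

On the study's full score list, the same five lines would reproduce the reported median of 13, mean (±SD) of 13 (±3.23), mode of 14, and 254 high-quality resources.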

FIGURE 2.


Distribution of total rMETRIQ scores for all reviewed resources within the study. Resources scoring ≥ 16 have been indicated with a box as high quality. rMETRIQ, revised Medical Educational Translational Resources: Impact and Quality.

Topic coverage

Resources were sorted into their relevant EM Model topic and subtopic and could be associated with multiple topics or subtopics. Reviewers also grouped resources under common EM topics that were not part of the EM Model, such as abdominal pain and GI bleed. If a resource did not fit into any subtopic, it was placed into an “other” category; a few examples include jaundice, splenic abscess, and abdominal distention. Other topic categories were then added based on common themes to address what the reviewers determined were potential gaps in coverage.

During the review, we selected up to three topics and subtopics for each resource. Very few resources (2%) covered a third topic. The most common topics were large bowel (18%), small bowel (13%), stomach (11%), esophagus (11%), biliary (11%), and liver (10%). The most common subtopic was appendicitis (6%), followed by small bowel obstruction (5%). Additional topics based on common themes added later included imaging, pediatrics, point‐of‐care ultrasound (POCUS), and procedures. Imaging included radiologist‐read imaging studies only and therefore is separate from bedside POCUS. Of these, imaging (10%) and pediatrics (9%) were the most common. The least common topics were spleen (2%) and postsurgical complications (2%). The topic prevalence of high‐quality sites mirrored the total sites, with large bowel (20%) and small bowel (16%) predominating. Table 2 displays the subtopic distribution within the reviewed resources.

TABLE 2.

Subtopic distribution of reviewed resources. a

Subtopic Percent of total Percent of high‐quality
Imaging 10.2 14.5
Pediatrics 8.6 9.1
Appendicitis 6.2 5.5
Small bowel obstruction 5.3 5.1
Procedures 3.8 4.3
Cholecystitis 3.2 2.4
Cholelithiasis/choledocholithiasis 3.1 0.8
Pancreatitis 2.7 4.7
Esophageal foreign body 2.7 3.9
Esophageal varices 2.7 4.7
POCUS 2.7 2.8
Diverticula 2.3 2.3
Small bowel ischemia 2.1 3.1
Hernia 2.1 2.3
PUD with hemorrhage or perforation 1.8 3.5
PUD 1.5 2
Cholangitis 1.2 1.6
Spontaneous bacterial peritonitis 1.2 3.1

Abbreviations: POCUS, point‐of‐care ultrasound; PUD, peptic ulcer disease.

a Resources typically covered more than one topic/subtopic.

Target audience

Of the total sites, only 56% were deemed appropriate for preclerkship trainees, while 84%, 92%, 94%, and 82% were deemed appropriate for clerkship trainees, junior residents, senior residents, and faculty, respectively, based on the reviewers' clinical gestalt. For the high-quality sites, 54%, 80%, 94%, 98%, and 96% were deemed appropriate for preclerkship trainees, clerkship trainees, junior residents, senior residents, and faculty, respectively.

Specialties of authors and academic affiliations

Outside of EM, high-quality resources as determined by rMETRIQ in our review were written by radiologists and critical care physicians. Compared to the total pool of sites (36%), the high-quality sites had a higher proportion of authors affiliated with a university or training program rather than community practice (61%).

Quality assessment

Of the total number of resources, 254 met our high-quality designation (rMETRIQ ≥ 16) and are listed in Appendix S1. Among all resources, the most common sources were Radiopaedia (29%), emDocs.net (10%), and WikEM (10%). Among the 254 high-quality sites, Radiopaedia accounted for the largest share (33%), followed by emDocs.net (29%), WikEM (7%), and EMCrit (6%). However, relative to each site's total number of resources, CoreEM (70%), emDocs.net (65%), CanadiEM (65%), and EMCrit (47%) had the highest percentages of high-quality resources related to GI illnesses.

Of the resources that did not meet our high-quality designation (rMETRIQ ≥ 16), 276 scored 14 or 15. The lowest scoring questions for these resources were compared to those of the high-quality resources (Table 3). The mean scores for Question 4, citation of references (1.2 for intermediate-scoring vs. 2.6 for high-scoring resources), and Question 3, formatting (2.4 vs. 2.8), showed the biggest differences. Question 7, the presence of postpublication commentary (0.6 vs. 0.9), also showed a notable difference. Overall, Question 7 had the lowest average score: about 86% of intermediate-scoring and 78% of high-scoring resources received a 0 or 1. Additionally, 53% of the intermediate-scoring resources had no postpublication commentary available or were “locked” so that comments could not be made, compared to 38% of high-quality resources.

TABLE 3.

Mean score distribution of each question in rMETRIQ including all resources, resources meeting high‐quality cutoff, and resources falling short of the high‐quality cutoff.

Mean scores Q1 Q2 Q3 Q4 Q5 Q6 Q7
All resources 2.04 2.1 2.3 1.56 2.15 1.76 0.67
High quality 2.56 2.58 2.84 2.57 2.57 2.55 0.90
Scores 14–15 2.32 2.21 2.39 1.22 2.51 2.47 0.59

Abbreviation: rMETRIQ, revised Medical Educational Translational Resources Impact and Quality.

DISCUSSION

One of the criticisms of FOAM is its disproportionate focus on specific topics, which may limit the educational scope of trainees who frequent the platform for self-directed learning. 21 , 22 As in previous SOAR reviews, topic distributions were uneven; the most common topics in this study were large bowel and small bowel. The proportions of these topics in our designated high-quality sites were similar. All topics were covered at least once. Several topics did not fit into the categories defined by the EM Model, including ones that did not belong to one specific system and instead bridged multiple topics.

The appropriate audience for our designated high-quality resources shifted from preclerkship and clerkship trainees toward senior residents and faculty, likely because increasing complexity aligns with higher scores, e.g., use of references and peer review. In general, FOAM is more accessible to, and therefore more appropriate for, users with more advanced knowledge, who require less in-depth background information than less experienced trainees limited by their stage of education. Additionally, higher-quality resources had a higher proportion of authors in academic medicine. While the majority of authors of these resources identified EM or pediatric EM as their specialty, several other specialties were featured. Most notably, radiology was the most common non-EM specialty, owing to the volume of Radiopaedia sites. Critical care was the second most common due to the high number of resources from PulmCrit, Dr. Josh Farkas's critical care site associated with EMCrit.

This SOAR review resulted in the highest number of resources compared to prior reviews on other EM topics, likely because GI illnesses are more commonly covered in EM FOAM than the topics of prior reviews. 21 , 22 Of the 254 designated high-quality resources, Radiopaedia contributed the highest percentage across all sites, but this reflects Radiopaedia also having the largest number of total resources. When comparing the proportion of high-quality resources to each site's total number of resources, CoreEM, EMDocs, CanadiEM, and EMCrit posted high-quality content most frequently. This may help determine which sites could be considered higher quality and provide more reliable content for users, including educators and learners, with the usual caveat that judging the accuracy and practical applicability of any online content remains at the user's discretion.

Of the 276 resources that scored 14–15 and did not meet our high-quality designation, the questions with the greatest room for improvement were Questions 3 (formatting), 4 (citation of references), and 7 (postpublication commentary). Of all the questions, Question 7 had the lowest average across all sites. More than half of the intermediate-scoring resources had no postpublication commentary available or were “locked,” compared to 38% of high-scoring resources. Intermediate-scoring resources could reach our high-quality designation by improving organization, mapping references to specific statements, and allowing postpublication commentary.

Interestingly, Question 6 (peer review process) scores were similar between intermediate and high‐quality resources. However, it was one of the lower averages across all resources included in the study, which indicates that those resources scoring < 14 would likely improve their scores if a clearly delineated peer review process was included.

Another important theme for lower scoring sites was the difficulty in determining the name and/or credentials of the author or finding a declaration of conflicts of interest. Many resources utilized pseudonyms or failed to identify the full name or credentials of the author. At times, conflict of interest information was buried within other sections in the website and not readily accessible. Having this information clearly displayed would make it much easier for users to assess the quality and reliability of the resource.

LIMITATIONS

There were several limitations to this study. As stated in prior studies, the SOAR series does not aim to assess the accuracy of content. The challenge of verifying the accuracy of any information online, including educational resources, is not unique to the FOAM space, and the potential for unreliability must be acknowledged as a risk by both content creators and readers. Though components of the rMETRIQ score, such as the peer review process and postpublication commentary, can aid in evaluating accuracy, the review itself does not cross-reference content for accuracy. 13 , 14

Several other tools exist for the evaluation of FOAM. 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 The rMETRIQ score is a more recently developed tool that has yet to be formally assessed for validity and reliability, though initial testing indicates improved reliability. 16 In addition, we continued to use the high-quality designation (rMETRIQ ≥ 16) applied in prior SOAR reviews. Though this cutoff reflects a general gestalt agreement among our reviewers on high quality, resources that others may still consider suitable for educational purposes may have scored lower. Our analysis suggests that these intermediate-scoring sites may lack only one or two components, most likely Questions 3, 4, and 7. Additionally, one might consider a lower score on formatting less consequential than a lower score on peer review.

Our reviewers were recruited on a voluntary basis, were all trained and employed in the same geographic area, and were of similar race and professional status; all of these factors could introduce bias. During the initial review, one of the participating physicians could not continue with the study (RP) and was replaced with another physician (TF), who performed the tool rater training and was included in the inter-rater reliability evaluation. This delay, coupled with the lengthy process required to rate the large number of resources, means that many new resources published in the interim are not included. Additionally, some sites had changed, removed content, or migrated to paid content. Another consideration is that the initial search was performed using the top 50 sites on the ALiEM SMi, which is updated quarterly and will likely have changed by the time any topic review is completed and published. 19 One way to address this would be to assess a smaller volume of resources, e.g., dedicating entire reviews to specific subtopics rather than an entire organ system.

This also highlights the need to revisit these reviews, but an ideal time interval is yet to be determined. Potentially, a repository of these reviews can be created along with a reassessment of new resources in a specific cycle.

CONCLUSIONS

To date, this systematic online academic resource review of gastrointestinal illnesses resulted in the largest number of high‐quality resources, compared to prior systematic online academic resource reviews of other topics. This review provides an understanding of the landscape of free open access medical coverage of emergency medicine–related gastrointestinal topics and may help inform content creators, who could bridge potential gaps or bolster areas where content is less robust. We hope that this list of designated high‐quality resources may help guide educators and learners.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflicts of interest.

Supporting information

Appendix S1.


Zhao L, Tom S, Patel N, et al. SAEM systematic online academic resource (SOAR) review: Gastrointestinal illnesses. AEM Educ Train. 2024;8:e10954. doi: 10.1002/aet2.10954

Presented at the Society for Academic Emergency Medicine Annual Meeting, Austin, TX, May 2023.

Supervising Editor: Jaime Jordan

REFERENCES

1. Cadogan M, Thoma B, Chan TM, Lin M. Free open access Meducation (FOAM): the rise of emergency medicine and critical care blogs and podcasts (2002–2013). Emerg Med J. 2014;31:e76-e77.
2. Brindley PG, Byker L, Carley S, Thoma B. Assessing on-line medical education resources: a primer for acute care medical professionals and others. J Intensive Care. 2022;23(3):340-344.
3. Scott KR, Hsu CH, Johnson NJ, Mamtani M, Conlon LW, DeRoos FJ. Integration of social media in emergency medicine residency curriculum. Ann Emerg Med. 2014;64:396-404.
4. Reiter DA, Lakoff DJ, Trueger NS, Shah KH. Individual interactive instruction: an innovative enhancement to resident education. Ann Emerg Med. 2013;61:110-113.
5. Khadpe J, Willis J, Silverberg MA, Grock A, Smith T. Integration of a blog into an emergency medicine residency curriculum. West J Emerg Med. 2015;16:936-937.
6. Mallin M, Schlein S, Doctor S, Stroud S, Dawson M, Fix M. A survey of the current utilization of asynchronous education among emergency medicine residents in the United States. Acad Med. 2014;89:598-601.
7. Purdy E, Thoma B, Bednarczyk J, Migneault D, Sherbino J. The use of free online educational resources by Canadian emergency medicine residents and program directors. CJEM. 2015;17:101-106.
8. Patocka C, Lin M, Voros J, Chan T. Point-of-care resource use in the emergency department: a developmental model. AEM Educ Train. 2018;2:221-228.
9. Hoang JK, McCall J, Dixon AF, Fitzgerald RT, Gaillard F. Using social media to share your radiology research: how effective is a blog post? J Am Coll Radiol. 2015;12:760-765.
10. Thoma B, Sebok-Syer S, Krishnan K, et al. Individual gestalt is unreliable for the evaluation of quality in medical education blogs: a METRIQ study. Ann Emerg Med. 2017;70(3):394-401.
11. Krishnan K, Thoma B, Trueger NS, Lin M, Chan TM. Gestalt assessment of online educational resources may not be sufficiently reliable and consistent. Perspect Med Educ. 2017;6:91-98.
12. Woods J, Chan TM, Roland D, Riddell J, Tagg A, Thoma B. Evaluating the reliability of gestalt quality ratings of medical education podcasts: a METRIQ study. Perspect Med Educ. 2020;9:302-306.
13. Grock A, Bhalerao A, Thoma B, Wescott AB, Trueger NS. Systematic online academic resource (SOAR) review: renal and genitourinary. AEM Educ Train. 2019;3:375-386.
14. Hsiao JJ, Pedigo R, Bae SW, et al. Systematic online academic resource (SOAR) review: endocrine, metabolic, and nutritional disorders. AEM Educ Train. 2021;5:e10716.
15. Alavian S, Asare-Agbo P, Chan T. Systematic online academic resources (SOAR) review: sickle cell disorders. AEM Educ Train. 2022;6:e10812.
16. Colmers-Gray IN, Krishnan K, Chan TM, et al. The revised METRIQ score: a quality evaluation tool for online educational resources. AEM Educ Train. 2019;3:387-392.
17. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.
18. Beeson MS, Ankel F, Bhat R, et al. The 2019 Model of the Clinical Practice of Emergency Medicine. J Emerg Med. 2020;59:96-120.
19. Thoma B, Sanders JL, Lin M, Paterson QS, Steeg J, Chan TM. The social media index: measuring the impact of emergency medicine and critical care websites. West J Emerg Med. 2015;16:242-249.
20. George S, Haque MS, Oyebode F. Standard setting: comparison of two methods. BMC Med Educ. 2006;6:46.
21. Stuntz R, Clontz R. An evaluation of emergency medicine core content covered by free open access medical education resources. Ann Emerg Med. 2016;67:649-653.
22. Grock A, Chan W, Aluisio AR, Alsup C, Huang D, Joshi N. Holes in the FOAM: an analysis of curricular comprehensiveness in online educational resources. AEM Educ Train. 2021;5:1-8.
23. Thoma B, Chan T, Kapur P, et al. The social media index as an indicator of quality for emergency medicine blogs: a METRIQ study. Ann Emerg Med. 2018;72(6):696-702.
  • 24. Paterson QS, Thoma B, Milne WK, Lin M, Chan TM. A systematic review and qualitative analysis to determine quality indicators for health professions education blogs and podcasts. J Grad Med Educ. 2015;7(4):549‐554. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Chan T, Thoma B, Krishnan K, et al. Derivation of two critical appraisal scores for trainees to evaluate online educational resources: a METRIQ study. West J Emerg Med. 2016;17:574‐584. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Lin M, Joshi N, Grock A, et al. Approved instructional resources series: a national initiative to identify quality emergency medicine blog and podcast content for resident education. J Grad Med Educ. 2016;8:219‐225. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Thoma B, Sebok‐Syer SS, Colmers‐Gray I, et al. Quality evaluation scores are no more reliable than gestalt in evaluating the quality of emergency medicine blogs: a METRIQ study. Teach Learn Med. 2018;30:294‐302. [DOI] [PubMed] [Google Scholar]
  • 28. Grock A, Jordan J, Zaver F, Colmers‐Gray IN, Thoma B. The revised approved instructional resources score: an improved quality evaluation tool for online educational resources. AEM Educ Train. 2021;5:e10601. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Chan TM, Grock A, Paddock M, Kulasegaram K, Yarris LM, Lin M. Examining reliability and validity of an online score (ALiEM AIR) for rating free open access medical education resources. Ann Emerg Med. 2016;68(6):729‐735. [DOI] [PubMed] [Google Scholar]
  • 30. Lo A, Shappell E, Rosenberg H, et al. Four strategies to find, evaluate, and engage with online resources in emergency medicine. CJEM. 2018;20:293‐299. [DOI] [PubMed] [Google Scholar]

Associated Data
Supplementary Materials

Appendix S1.

AET2-8-e10954-s001.docx (49.2KB, docx)

Articles from AEM Education and Training are provided here courtesy of Wiley