Radiology: Artificial Intelligence. 2024 Oct 23;6(6):e240101. doi: 10.1148/ryai.240101

The RSNA Abdominal Traumatic Injury CT (RATIC) Dataset

Jeffrey D Rudie, Hui-Ming Lin, Robyn L Ball, Sabeena Jalal, Luciano M Prevedello, Savvas Nicolaou, Brett S Marinelli, Adam E Flanders, Kirti Magudia, George Shih, Melissa A Davis, John Mongan, Peter D Chang, Ferco H Berger, Sebastiaan Hermans, Meng Law, Tyler Richards, Jan-Peter Grunz, Andreas Steven Kunz, Shobhit Mathur, Sandro Galea-Soler, Andrew D Chung, Saif Afat, Chin-Chi Kuo, Layal Aweidah, Ana Villanueva Campos, Arjuna Somasundaram, Felipe Antonio Sanchez Tijmes, Attaporn Jantarangkoon, Leonardo Kayat Bittencourt, Michael Brassil, Ayoub El Hajjami, Hakan Dogan, Muris Becircic, Agrahara G Bharatkumar, Eduardo Moreno Júdice de Mattos Farina, Errol Colak; for the Dataset Curator Group, the Dataset Contributor Group, and the Dataset Annotator Group
PMCID: PMC11605137  PMID: 39441109

Abstract

Supplemental material is available for this article.

Keywords: Trauma, Spleen, Liver, Kidney, Large Bowel, Small Bowel, CT


Summary

The RSNA Abdominal Traumatic Injury CT (ie, RATIC) dataset contains 4274 abdominal CT studies with annotations related to traumatic injuries and is available at https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection and https://imaging.rsna.org/dataset/5.

Key Points

  ■ The RSNA Abdominal Traumatic Injury CT (ie, RATIC) dataset is the largest publicly available adult abdominal traumatic injury CT dataset, with contributions from 23 institutions across 14 countries and six continents.

  ■ The dataset consists of medical images, segmentations, and image-level annotations, which were generated by subspecialist radiologists from the American Society of Emergency Radiology and the Society of Abdominal Radiology.

  ■ This dataset was used for the Radiological Society of North America 2023 Abdominal Trauma Detection competition and is made freely available to the research community for noncommercial use.

Introduction

Trauma is the leading cause of death in the United States among individuals younger than 45 years. Globally, an estimated 6 million individuals die of traumatic injuries each year (1). Early, accurate diagnosis and grading of traumatic injuries are critical in guiding clinical management and improving patient outcomes. CT plays a central role in the initial evaluation of hemodynamically stable patients (2,3). For blunt and penetrating abdominal trauma, the American Association for the Surgery of Trauma (AAST) organ injury grading system is the most widely recognized system for grading solid organ injuries (4,5) and is critical in triaging patients for surgery, minimally invasive intervention, or conservative management (6).

While the AAST grading system is an important guide for assessing solid organ injury, rapid interpretation of trauma studies is challenging given the large number of images to review and the potential for subtle findings. Diagnostic errors in the interpretation of trauma imaging are common (7), and interrater variability in the AAST grading system is high (8,9). Furthermore, the large variation in protocols used at different hospitals, including single portal venous phase, multiphasic, and split bolus approaches (10), can further complicate this task.

Automated assessment of traumatic abdominal injuries is an excellent use case for artificial intelligence (AI) algorithms, given the potential to prioritize studies that may require more expedient interpretation, as well as to augment radiologist accuracy and efficiency, which may be particularly valuable in areas where subspecialists are in short supply. Recent work on AI-based assessment of abdominal trauma includes studies on automated detection of splenic (11–14) and liver (15) injuries, hemoperitoneum (16), and pneumoperitoneum (17). However, prior studies have typically been limited in scope to single organs and single institutions, limiting generalizability to clinical practice. Thus, large, multi-institutional, publicly available annotated abdominal trauma datasets are needed to address this challenge.

The Radiological Society of North America (RSNA) collaborated with the American Society of Emergency Radiology (ASER) and the Society of Abdominal Radiology (SAR) to curate a large, publicly available expert-labeled dataset of abdominal CT images for traumatic injuries focusing on injuries to the liver, spleen, kidneys, bowel, and mesentery and active extravasation. This dataset was used for the RSNA 2023 Abdominal Trauma Detection competition, which attracted 1500 competitors from around the world to develop innovative machine learning (ML) models that detect traumatic injuries at abdominal CT.

Dataset Curation and Annotation

Figure 1 shows a flowchart of the RSNA Abdominal Traumatic Injury CT (RATIC) dataset curation and annotation process, with a detailed description provided in Appendix S1. In brief, sites provided initial labels for the presence of different traumatic injuries according to clinical reports. Radiologist annotators recruited from the ASER and SAR then independently annotated solid organ injury grades and the locations of bowel and mesenteric injuries and active extravasation. The annotator pool, drawn from 38 different institutions, consisted of 43 attending radiologists (32 academic, four private practice, six hybrid, and one government) with a mean of 10.2 years ± 6.9 (SD) of attending experience, as well as two fellows. Annotator subspecialties consisted of 19 abdominal radiologists, 21 emergency radiologists, four general radiologists, and one interventional radiologist. Each annotator labeled an average of 144 cases ± 123. Reference standard labels for the grading of each solid organ injury were established using majority voting among three different annotators and divided into low-grade (AAST grades I–III) and high-grade (grades IV and V) injury groups. In the event of label disagreement among all three annotators, a member of the organizing committee acted as an adjudicator. Image-level labels for bowel and mesenteric injuries and active extravasation were based on the consensus of different annotators. Voxelwise segmentations (Fig 2) were generated by an nnU-Net (18) trained on the TotalSegmentator dataset (19) and then manually corrected, covering only the organs evaluated in the challenge: liver, spleen, left kidney, right kidney, and bowel (representing a combination of esophagus, stomach, duodenum, small bowel, and colon).
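
As a rough illustration of this segmentation bootstrapping step, the sketch below uses the publicly released TotalSegmentator package to produce draft organ masks and merges the gastrointestinal structures into a single bowel label. This is an approximation under stated assumptions, not the authors' exact pipeline (which trained a dedicated nnU-Net); the per-structure file names follow TotalSegmentator's conventions and should be verified against the installed version.

```python
# Sketch: draft organ masks via the public TotalSegmentator package, merged
# into the five challenge labels. Approximates, but is not, the authors'
# pipeline; structure file names are assumptions to verify.
import numpy as np
import nibabel as nib
from totalsegmentator.python_api import totalsegmentator

LABELS = {"liver": 1, "spleen": 2, "kidney_left": 3, "kidney_right": 4, "bowel": 5}
GI_STRUCTURES = ["esophagus", "stomach", "duodenum", "small_bowel", "colon"]

def draft_segmentation(ct_path: str, out_dir: str) -> np.ndarray:
    totalsegmentator(ct_path, out_dir)  # writes one NIfTI mask per structure

    def mask(name: str) -> np.ndarray:
        return nib.load(f"{out_dir}/{name}.nii.gz").get_fdata() > 0

    combined = np.zeros_like(mask("liver"), dtype=np.uint8)
    for name in ("liver", "spleen", "kidney_left", "kidney_right"):
        combined[mask(name)] = LABELS[name]
    for name in GI_STRUCTURES:  # collapse the GI tract into one bowel label
        combined[mask(name)] = LABELS["bowel"]
    return combined  # draft mask, handed to annotators for manual correction
```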

Figure 1:

Summary of the data curation and annotation process. * = bowel and mesenteric injuries were reviewed by two annotators. DICOM = Digital Imaging and Communications in Medicine.

Figure 2:

Example of abdominal organ segmentation, with each color representing different organs. (A) Axial CT DICOM image demonstrates a splenic laceration (arrow). (B) Image illustrates the segmentations for the liver (red), spleen (green), left kidney (blue), and gastrointestinal tract (brown) in the axial plane. (C) Image shows segmentation masks overlaying the corresponding CT image. (D) Image shows segmentation masks overlaying the corresponding organs on a reconstructed coronal CT DICOM image. DICOM = Digital Imaging and Communications in Medicine.

Dataset Description and Usage

The RATIC dataset is composed of CT scans of the abdomen and pelvis in 4274 adult (≥18 years) patients, with a total of 6481 image series from 23 institutions across 14 countries and six continents. A detailed breakdown of patient demographics and injuries across the different institutions is provided in Table 1. The demographic and case-level composition of the competition training and test sets is presented in Table 2. The breakdown of injury severity of solid organs for the training set is provided in Table 3.

Table 1:

Distribution of Positive and Negative Cases for Abdominal Injury with Breakdown of Injury Class per Institution


Table 2:

Distribution of Demographic and Case-level Breakdown for Abdominal Injury across Dataset Training and Test Subsets


Table 3:

Distribution of Injury Grades for Training Portion of Dataset


CT images are in Digital Imaging and Communications in Medicine (ie, DICOM) format. Study-level injury annotations and demographic information are provided in four comma-separated value files. The train_2024.csv file contains information about the presence of traumatic abdominal injuries (liver, kidney, spleen, bowel, and mesentery and active extravasation) for each patient. The image_level_labels_2024.csv file provides image-level labels for bowel and mesenteric injuries and active extravasation. The train_series_meta.csv file contains information regarding the phase of imaging and anatomic coverage of each CT series. The train_demographics_2024.csv file contains information about patient demographics. Pixel-level segmentations of abdominal organs are provided in Neuroimaging Informatics Technology Initiative (ie, NIfTI) format for a subset of 206 series from the training set. Data are available at https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection and https://imaging.rsna.org/dataset/5.
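
To illustrate how these files might be combined, here is a minimal pandas sketch. The file names come from the description above, while column names such as patient_id and the segmentation path are assumptions for illustration only and should be checked against the actual files.

```python
# Minimal sketch of loading the RATIC annotation files with pandas.
import pandas as pd
import nibabel as nib

labels = pd.read_csv("train_2024.csv")                     # study-level injury labels
image_labels = pd.read_csv("image_level_labels_2024.csv")  # bowel/mesentery, extravasation
series_meta = pd.read_csv("train_series_meta.csv")         # imaging phase, coverage
demographics = pd.read_csv("train_demographics_2024.csv")  # patient demographics

# Combine study-level labels with demographics on an assumed patient identifier.
train = labels.merge(demographics, on="patient_id", how="left")

# Organ segmentations for a subset of 206 series are provided as NIfTI volumes;
# the file naming scheme below is illustrative only.
seg = nib.load("segmentations/12345.nii.gz").get_fdata()
```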

Discussion

We curated a large, high-quality dataset of abdominal trauma CT studies, with contributions from 23 institutions in 14 countries and six continents. This represents the largest and most diverse publicly available dataset of abdominal trauma CT scans. This dataset provides annotations relating to injuries of the liver, spleen, kidneys, bowel, and mesentery, as well as active extravasation. This rich dataset has further utility for investigators, as other injuries such as hematomas, fractures, and lower thoracic injuries are present within the dataset but were not explicitly annotated due to the challenge timeline and the intent of the challenge to focus on critical findings of the highest clinical importance for patients with trauma.

We chose broad inclusion criteria for the CT scans in the dataset. Our initial survey of potential contributing sites showed great variety in the protocols used for imaging patients with abdominal trauma. In fact, some institutions had multiple protocols and selected among them based on the severity of the trauma. Aspects that varied across protocols included the parts of the body that were imaged, the phases of imaging, and the section thickness. Stringent inclusion criteria that limited the dataset to scans from a single homogeneous protocol (eg, thin-section, multiphasic CT of the abdomen and pelvis) would severely constrain the size and potential generalizability of this dataset. For this reason, we widened the inclusion criteria to facilitate a larger and more diverse dataset that could be used to train more robust ML models. Biphasic (arterial and portal venous), split bolus, and portal venous phase protocols were all considered acceptable.

Participating sites were asked to enrich the dataset with representative injuries given the relatively low prevalence of traumatic abdominal injuries at CT encountered in clinical practice. Despite this request, the number of cases with injuries submitted was lower than the organizing committee had anticipated. Addressing class imbalances in curated datasets is particularly important in improving ML model robustness and reducing bias (20,21). An explicit effort was made by the organizing committee to reduce potential biases in the dataset by considering factors such as sex, age, injuries, and contributing site when assigning scans to the training, public test, and private test datasets.
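
A simplified sketch of such bias-aware assignment is shown below, using scikit-learn's stratified splitting over a composite key. This is not the committee's actual procedure, and the column names (age, sex, any_injury, site_id) are hypothetical.

```python
# Illustrative sketch: assign cases to training/public test/private test
# splits while balancing sex, age group, injury status, and contributing
# site. All column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("train_2024.csv").merge(
    pd.read_csv("train_demographics_2024.csv"), on="patient_id")

# Composite stratification key over the factors the committee considered.
# Very rare strata may need to be merged for stratification to succeed.
df["age_group"] = pd.cut(df["age"], bins=[17, 40, 65, 120])
df["stratum"] = (df["sex"].astype(str) + "_" + df["age_group"].astype(str)
                 + "_" + df["any_injury"].astype(str) + "_" + df["site_id"].astype(str))

train, test = train_test_split(df, test_size=0.3, stratify=df["stratum"], random_state=0)
public_test, private_test = train_test_split(
    test, test_size=0.5, stratify=test["stratum"], random_state=0)
```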

A challenge we faced in curating this dataset was the dramatic variation in z-axis coverage among the included CT scans. For example, some sites imaged from the skull vertex to the feet, while many limited imaging to the abdomen and pelvis. To reduce the size of the dataset and to aid ML model training by reducing the search space, we limited scans to the abdomen and pelvis, using the mid heart as the upper bound and the proximal femurs as the lower bound via an automated pipeline (22), and manually reviewed the processed scans.
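
The cropping step itself reduces to slicing the volume between two automatically detected z bounds. A minimal sketch follows, assuming the mid heart and proximal femur slice indices have already been located by a localization model such as the one cited above (22):

```python
# Simplified illustration of the z-axis cropping step. Assumes slices are
# ordered superior to inferior and the anatomic bounds are already known.
import numpy as np

def crop_to_abdomen_pelvis(volume: np.ndarray,
                           z_mid_heart: int,
                           z_proximal_femurs: int) -> np.ndarray:
    """Crop a CT volume with axes (z, y, x) to the abdomen and pelvis,
    reducing file size and the model search space."""
    assert 0 <= z_mid_heart < z_proximal_femurs <= volume.shape[0]
    return volume[z_mid_heart:z_proximal_femurs]
```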

Similar to prior challenges, we aimed to maximize use of the data and ensure high-quality labels while not overburdening annotators. Contributing sites prelabeled submitted scans with information extracted from the clinical report, which allowed annotators to focus on the abnormal scans. We considered a variety of annotation strategies that ranged from study- to pixel-level annotations. Our experience with the cervical spine fracture detection challenge (23) showed that the prizewinning models relied on study-level annotations and segmentations rather than bounding boxes (24). In addition, recent work has shown that strongly supervised models trained on slice-level labels from the RSNA Brain CT Hemorrhage dataset (25) do not outperform weakly supervised models trained on study-level labels (26). Pixel-level annotation of injuries, including bounding boxes, would be a time-consuming task with likely poor reproducibility, as abdominal injuries can be quite complex, with ill-defined borders. We settled on providing segmentations of the relevant abdominal organ systems to assist with localization, along with organ injury labels at the study level. Image-level labels were provided for bowel and mesenteric injuries and active extravasation, as these injuries can be subtle, manifest in variable anatomic locations, and appear on only a limited number of images.

Individual annotators were assigned a single organ system to annotate rather than annotating multiple organ systems on their assigned CT scans. The organizing committee felt this would improve the efficiency of the annotation process and label quality by allowing an annotator to focus on a single task and a single AAST injury grading scale. The annotators provided granular labels using the AAST grading scale for solid organ injuries. Due to the well-documented issues with interrater agreement in the grading of solid organ injuries with AAST (9), and to help model training, AAST injury grades I–III were classified as low-grade injuries, while grades IV and V were classified as high-grade injuries. This grouping of injury grades still provides more information than a binary injury label and reflects many clinical practices, as patients with grade IV and V injuries are more likely to undergo surgery or endovascular treatment (4–6). Rather than assigning a fixed number of CT scans for annotation, we used the crowdsourcing mode on the annotation platform, which allowed annotators to label as many cases as they wanted, with a public scoreboard providing motivation.
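
The grade grouping described above amounts to a simple mapping, sketched here for clarity (the label names are our own convention, not taken from the dataset files):

```python
# Grouping of AAST grades into the three-way target described above:
# grades I-III -> low grade, grades IV-V -> high grade.
from typing import Optional

def bin_aast_grade(grade: Optional[int]) -> str:
    """grade: AAST organ injury grade 1-5, or None for an uninjured organ."""
    if grade is None:
        return "healthy"
    if 1 <= grade <= 3:
        return "low grade"
    if grade in (4, 5):
        return "high grade"
    raise ValueError(f"invalid AAST grade: {grade}")
```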

Each solid organ injury label was annotated independently by three radiologists, and the final reference standard labels were established by majority vote. In scenarios where all three annotators assigned different gradings (ie, no injury, low grade, and high grade), a member of the organizing committee adjudicated the case and assigned the final reference standard label. We felt that this approach would improve annotation quality by generating labels with better interrater agreement and avoiding the risk of poor-quality labels inherent to a single-annotator scheme, in which it can be difficult to detect poor annotators after they have completed a set of training cases.
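
A minimal sketch of this labeling rule, with None signaling that committee adjudication is required:

```python
# Sketch of the reference standard rule described above: majority vote among
# three annotators; if all three disagree, the case goes to an adjudicator.
from collections import Counter
from typing import Optional

def reference_label(votes: list) -> Optional[str]:
    """votes: three labels drawn from {'no injury', 'low grade', 'high grade'}.
    Returns the majority label, or None when adjudication is needed."""
    assert len(votes) == 3, "each solid organ injury was read by three annotators"
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= 2 else None

print(reference_label(["low grade", "low grade", "high grade"]))  # -> 'low grade'
print(reference_label(["no injury", "low grade", "high grade"]))  # -> None (adjudicate)
```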

There are several limitations of this dataset. Reference standard labels for the grading of solid organ injuries were established through best-of-three majority voting. While this represents an improvement over a single annotator, inherent issues with interrater variability in AAST grading remain. We recognize the absence of delayed phase imaging as a limitation, as it forms part of the AAST imaging criteria for collecting system involvement in grade II–IV renal injuries. Delayed phase imaging was not included because it was not part of the routine protocol at most contributing sites, and we were concerned that its inclusion in cases with renal injuries would bias models toward spurious associations rather than true detection of collecting system injuries. Finally, reference standard labels for solid organ injuries, bowel and mesenteric injuries, and active extravasation were made using a web-based annotation platform, which lacks the high-resolution monitors, multiplanar thin-section imaging, clinical information, and prior imaging examinations available in real-world clinical practice.

In summary, the RATIC dataset represents the largest and most geographically diverse, publicly available expert-annotated dataset of abdominal traumatic injury CT studies. With the release of this dataset, we hope to facilitate research and development in ML and abdominal trauma that can lead to improved patient care and outcomes. This dataset is made freely available to all researchers for noncommercial use.

Acknowledgments

The authors would like to thank and acknowledge the contributions of Christopher Carr, MA, Sohier Dane, Maggie Demkin, MBA, and Michelle Riopel.

E.C. supported by the Odette Professorship in Artificial Intelligence for Medical Imaging, St Michael’s Hospital, Unity Health Toronto.

Dataset Annotator Group: Claire K. Sandstrom, Angel Ramon Sosa Fleitas, Joel Kosowan, Christopher J Welman, Sevtap Arslan, Mark Bernstein, Linda C. Chu, Karen S. Lee, Chinmay Kulkarni, Taejin Min, Ludo Beenen, Betsy Jacobs, Scott Steenburg, Sree Harsha Tirumani, Eric Wallace, Shabnam Fidvi, Helen Oliver, Casey Rhodes, Paulo Alberto Flejder, Adnan Sheikh, Muhammad Munshi, Jonathan Revels, Vinu Mathew, Marcela De La Hoz Polo, Apurva Bonde, Ali Babaei Jandaghi, Robert Moreland, M. Zak Rajput, James T. Lee, Nikhil Madhuripan, Ahmed Sobieh, Bruno Nagel Calado, Jeffrey D. Jaskolka, Lee Myers, Laura Kohl, Matthew Wu, Wesley Chan, Facundo Nahuel Diaz.

Dataset Contributor Group: Nitamar Abdala, Jason Adleberg, Waqas Ahmad, Christopher O. Ajala, Emre Altinmakas, Robin Ausman, Miguel Ángel Gómez Bermejo, Deniz Bulja, Jeddi Chaimaa, Lin-Hung Chen, Sheng-Hsuan Chen, Hsiu-Yin Chiang, Rahin Chowdhury, David Dreizin, Zahi Fayad, Yigal Frank, Sirui Jiang, Belma Kadic, Helen Kavnoudias, Alexander Kagen, Felipe C. Kitamura, Nedim Kruscica, Michael Kushdilian, Brian Lee, Jennifer Lee, Robin Lee, Che-Chen Lin, Karun Motupally, Eamonn Navin, Andrew S. Nencka, Christopher Newman, Akdi Khaoula, Shady Osman, William Parker, Jacob J. Peoples, Marco Pereañez, Christopher Rushton, Navee Sidi Mahmoud, Xueyan Mei, Beverly Rosipko, Muhammad Danish Sarfarz, Adnan Sheikh, Maryam Shekarforoush, Amber Simpson, Ashlesha Udare, Victoria Uram, Emily V. Ward, Conor Waters, Min-Yen Wu, Wanat Wudhikulprapan, Adil Zia.

Dataset Curator Group: Matthew Aitken, Patrick Chun-Yin Lai, Priscila Crivellaro, Jayashree Kalpathy-Cramer, Zixuan Hu, Reem Mimish, Aeman Muneeb, Mitra Naseri, Maryam Vazirabad, Rachit Saluja.

Disclosures of conflicts of interest: J.D.R. Consulting fees from Cortechs.ai; payment from Sutton Pierce for expert testimony; member of the Radiological Society of North America (RSNA) AI Committee; associate editor for Radiology: Artificial Intelligence; Radiology: Artificial Intelligence trainee editorial board alum; stock or stock options in Cortechs.ai and Subtle Medical. H.M.L. No relevant relationships. R.L.B. Support for present manuscript from the RSNA; consulting fees from the RSNA; leadership or fiduciary role in the RSNA AI Committee and the American Statistical Association Statistical Consulting Section. S.J. No relevant relationships. L.M.P. RSNA AI Committee member; associate editor for Radiology: Artificial Intelligence. S.N. University of British Columbia Health Innovation Funding Investment Award and the Canadian Institutes for Health Research Pandemic Preparedness and Health Emergencies Research Priority Grant; royalties from FRCPC; consulting fees from Siemens and Amgen. B.S.M. No relevant relationships. A.E.F. Grants or contracts from the Medical Imaging and Data Resource Center; member of the RSNA board of directors. K.M. Mentor for RSNA 2024 Medical Student Grant to Miriam Chisholm (award: $3000, paid to author's institution); principal investigator for RSNA Research Scholar Grant, 2022 to present (award: $150 000, paid to author's institution); principal investigator for Duke Center of Artificial Intelligence in Radiology SPARK award, 2021–2022 (award: 50% research technician support for 1 year, paid to author's institution); support for attending meetings and/or travel from the Society of Abdominal Radiology DEI Professional Development Award, the American Association for Women in Radiology Dr. Shaffer RLI Leadership Summit Award, the North Carolina Radiological Society RLI Summit Stipend Award, the American Association for Women in Radiology AAMC Leadership Seminar Award, and the Society of Abdominal Radiology Travel Scholarship for Trainees; patent for "Methods for non-invasive cancer identification, classification, and grading: machine learning models using mixed exam-, region-, and voxel-wise supervision" (patent no. US20230410301A1, filed November 5, 2021, and issued December 21, 2023); associate editor for Radiology: Artificial Intelligence; advisory panel member for Radiology: Artificial Intelligence trainee editorial board; co-chair of the RSNA Radiology Reimagined: AI, Innovation and Interoperability in Practice Demonstration; co-chair of the Society of Abdominal Radiology Informatics Committee; Radiology: Artificial Intelligence trainee editorial board alum. G.S. Member of the MD.ai board of directors and the Society for Imaging Informatics in Medicine (SIIM) board of directors; shareholder in MD.ai. M.A.D. No relevant relationships. J.M. Grants or contracts from Siemens, paid to author's institution; royalties from GE, made via author's institution; payment from Gibson Dunn for expert testimony (patent litigation expert witness); support for attending meetings from the RSNA; chair of the RSNA AI Committee; stock or stock options in Annexon Biosciences; associate editor for Radiology: Artificial Intelligence. P.D.C. Grants or contracts from Novocure; consulting fees from Canon Medical and Bayer; stock or stock options in Avicenna.ai (co-founder). F.H.B. No relevant relationships. S.H. No relevant relationships. M.L. No relevant relationships. T.R. No relevant relationships. J.P.G. Grants or contracts from the Interdisciplinary Center of Clinical Research Würzburg (Z-3BC/02); speaker honoraria from Siemens Healthineers. A.S.K. No relevant relationships. S.M. No relevant relationships. S.G.S. No relevant relationships. A.D.C. No relevant relationships. S.A. No relevant relationships. C.C.K. Consulting fees from Everfortune.AI. L.A. No relevant relationships. A.V.C. No relevant relationships. A.S. No relevant relationships. F.A.S.T. No relevant relationships. A.J. No relevant relationships. L.K.B. No relevant relationships. M. Brassil No relevant relationships. A.E.H. No relevant relationships. H.D. No relevant relationships. M. Becircic No relevant relationships. A.G.B. No relevant relationships. E.M.J.d.M.F. Consulting fees from MD.ai; speaker payment or honoraria from Sharing Progress in Cancer Care; member of the SIIM subcommittee for ML Education; Radiology: Artificial Intelligence trainee editorial board member. E.C. No relevant relationships.

Abbreviations:

AAST = American Association for the Surgery of Trauma
AI = artificial intelligence
ASER = American Society of Emergency Radiology
ML = machine learning
RATIC = RSNA Abdominal Traumatic Injury CT
RSNA = Radiological Society of North America
SAR = Society of Abdominal Radiology

Contributor Information

Jeffrey D. Rudie, Email: Jeff.rudie@gmail.com.



References

  1. Wiik Larsen J, Søreide K, Søreide JA, Tjosevik K, Kvaløy JT, Thorsen K. Epidemiology of abdominal trauma: an age- and sex-adjusted incidence analysis with mortality patterns. Injury 2022;53(10):3130–3138.
  2. Soto JA, Anderson SW. Multidetector CT of blunt abdominal trauma. Radiology 2012;265(3):678–693.
  3. Federle MP, Goldberg HI, Kaiser JA, Moss AA, Jeffrey RB Jr, Mall JC. Evaluation of abdominal trauma by computed tomography. Radiology 1981;138(3):637–644.
  4. Kozar RA, Crandall M, Shanmuganathan K, et al. Organ injury scaling 2018 update: spleen, liver, and kidney. J Trauma Acute Care Surg 2018;85(6):1119–1122.
  5. Dixe de Oliveira Santo I, Sailer A, Solomon N, et al. Grading abdominal trauma: changes in and implications of the revised 2018 AAST-OIS for the spleen, liver, and kidney. RadioGraphics 2023;43(9):e230040.
  6. Padia SA, Ingraham CR, Moriarty JM, et al. Society of Interventional Radiology position statement on endovascular intervention for trauma. J Vasc Interv Radiol 2020;31(3):363–369.e2.
  7. Patlas MN, Dreizin D, Menias CO, et al. Abdominal and pelvic trauma: misses and misinterpretations at multidetector CT: trauma/emergency radiology. RadioGraphics 2017;37(2):703–704.
  8. Pretorius EJ, Zarrabi AD, Griffith-Richards S, et al. Inter-rater reliability in the radiological classification of renal injuries. World J Urol 2018;36(3):489–496.
  9. Adams-McGavin RC, Tafur M, Vlachou PA, et al. Interrater agreement of CT grading of blunt splenic injuries: does the AAST grading need to be reimagined? Can Assoc Radiol J 2024;75(1):171–177.
  10. Kim SJ, Ahn SJ, Choi SJ, Park DH, Kim HS, Kim JH. Optimal CT protocol for the diagnosis of active bleeding in abdominal trauma patients. Am J Emerg Med 2019;37(7):1331–1335.
  11. Chen H, Unberath M, Dreizin D. Toward automated interpretable AAST grading for blunt splenic injury. Emerg Radiol 2023;30(1):41–50.
  12. Wang J, Wood A, Gao C, Najarian K, Gryak J. Automated spleen injury detection using 3D active contours and machine learning. Entropy (Basel) 2021;23(4):382.
  13. Cheng CT, Lin HS, Hsu CP, et al. The three-dimensional weakly supervised deep learning algorithm for traumatic splenic injury detection and sequential localization: an experimental study. Int J Surg 2023;109(5):1115–1124.
  14. Hamghalam M, Moreland R, Gomez D, et al. Machine learning detection and characterization of splenic injuries on abdominal computed tomography. Can Assoc Radiol J 2024;75(3):534–541.
  15. Farzaneh N, Stein EB, Soroushmehr R, Gryak J, Najarian K. A deep learning framework for automated detection and quantitative assessment of liver trauma. BMC Med Imaging 2022;22(1):39.
  16. Dreizin D, Zhou Y, Fu S, et al. A multiscale deep learning method for quantitative visualization of traumatic hemoperitoneum at CT: assessment of feasibility and comparison with subjective categorical estimation. Radiol Artif Intell 2020;2(6):e190220.
  17. Winkel DJ, Heye T, Weikert TJ, Boll DT, Stieltjes B. Evaluation of an AI-based detection software for acute findings in abdominal computed tomography scans: toward an automated work list prioritization of routine CT examinations. Invest Radiol 2019;54(1):55–59.
  18. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 2021;18(2):203–211.
  19. Wasserthal J, Breit HC, Meyer MT, et al. TotalSegmentator: robust segmentation of 104 anatomic structures in CT images. Radiol Artif Intell 2023;5(5):e230024.
  20. Johnson JM, Khoshgoftaar TM. Survey on deep learning with class imbalance. J Big Data 2019;6(1):27.
  21. Ren M, Zeng W, Yang B, Urtasun R. Learning to reweight examples for robust deep learning. In: Proceedings of the 35th International Conference on Machine Learning. PMLR, 2018;4334–4343. https://proceedings.mlr.press/v80/ren18a.html. Accessed December 25, 2023.
  22. Chang PD. DeepATLAS: one-shot localization for biomedical data. arXiv 2402.09587v1 [preprint]. https://arxiv.org/abs/2402.09587v1. Posted February 14, 2024. Accessed February 14, 2024.
  23. Lin HM, Colak E, Richards T, et al. The RSNA Cervical Spine Fracture CT Dataset. Radiol Artif Intell 2023;5(5):e230034.
  24. Hu Z, Patel M, Ball RL, et al. Assessing the performance of models from the 2022 RSNA Cervical Spine Fracture Detection competition at a level I trauma center. Radiol Artif Intell 2024:e230550.
  25. Flanders AE, Prevedello LM, Shih G, et al. Construction of a machine learning dataset through collaboration: the RSNA 2019 Brain CT Hemorrhage Challenge. Radiol Artif Intell 2020;2(3):e190211.
  26. Teneggi J, Yi PH, Sulam J. Examination-level supervision for deep learning-based intracranial hemorrhage detection on head CT scans. Radiol Artif Intell 2024;6(1):e230159.
