Published in final edited form as: Emerg Radiol. 2023 Mar 13;30(3):267–277. doi: 10.1007/s10140-023-02121-0

Table 1.

Questions and summary of responses

Questions (n = number of respondents) | Responses | Number of respondents, n (%) | Median Likert score [IQR]
Respondent characteristics
  1. How would you describe your practice? (n = 113) Academic 74 (65%)
Community 21 (19%)
Teleradiology 8 (7%)
Mixed 10 (9%)
  2. Are you an attending, fellow, or resident? (n = 112) Attending 101 (90%)
Fellow 5 (4%)
Resident 3 (3%)
Other 3 (3%)
  3. How many years have you practiced radiology? (n = 112) > 20 years 51 (45%)
> 10–20 years 39 (35%)
5–10 years 18 (16%)
< 5 years 4 (4%)
Implementation and governance
  4. Do you use commercial AI tools in your practice? (n = 112) Yes 63 (56%)
No 49 (44%)
  5. Does your practice have streamlined processes in place to perform ongoing local validation/revalidation of implemented tools? (n = 89) Yes 29 (33%)
No 60 (67%)
  6. Who are the primary end-users of AI tools at your institution? (n = 74) Radiologists 49 (66%)
Radiologists and clinicians 25 (34%)
  7. Have AI CAD tools in use at your institution improved quality of care? (n = 66) Yes 42 (64%)
No 24 (36%)
  8. If yes to Question 7, in what way? (n = 42) Improving triage and turnaround 24 (57%)
Providing second reader capability 30 (71%)
Other 0 (0%)
Needs assessment
Rate the level of impact the following AI tools could have on your practice in the future
  9. AI CAD tools that help with workflow prioritization based on detected pathology (n = 113) High impact 69 (61%) 7 [5, 9]
Some impact 31 (27%)
No impact 13 (12%)
  10. AI CAD tools that quantify pathology (n = 113) High impact 65 (58%) 7 [5, 8]
Some impact 32 (28%)
No impact 16 (14%)
  11. AI CAD tools that assist in grading injury or disease severity based on established classification systems (n = 112) High impact 67 (60%) 7 [5, 8]
Some impact 32 (28%)
No impact 13 (12%)
  12. AI CAD tools that provide prognostic information such as probability of poor clinical outcome (n = 113) High impact 34 (30%) 5 [4, 7]
Some impact 52 (46%)
No impact 27 (24%)
  13. AI tools that auto-populate structured reports (n = 113) High impact 69 (61%) 7 [5, 9]
Some impact 28 (25%)
No impact 16 (14%)
  14. List up to 3 pathologies for which you believe AI tools will be helpful in the ER
Top 5 major categories (collated) | Number of free-response mentions
1. Fractures 47
Rib 20
General 19
Spine 5
Pelvis 3
2. Pulmonary embolus 39
3. Ischemic stroke 37
General 32
Large vessel occlusion 4
Perfusion imaging 1
4. Intracranial hemorrhage 31
5. Intracavitary torso hemorrhage-related 21
Solid organ laceration 8
General 3
Active extravasation 3
Gastrointestinal bleed 3
Hemoperitoneum 2
Hemothorax 1
Aortic injury 1
System benevolence and trust
  15. How important is it to you that AI tools provide interpretable/verifiable results that can be rejected when perceived to be incorrect by the end-user? (1–3, not important; 4–6, uncertain; 7–9, very important) (n = 113) Very important 98 (87%) 9 [8, 9]
Uncertain 10 (9%)
Not important 5 (4%)
  16. AI tools are trained using expert annotation, and ground-truth agreement between experts can vary considerably by task. How important is it for you to know the level of expert annotation agreement? (1–3, not important; 4–6, uncertain; 7–9, very important) (n = 108) Very important 86 (80%) 8 [7, 9]
Uncertain 12 (11%)
Not important 10 (9%)
  17. Do you have any concerns that AI tools bias your interpretation of images? (1–3, no; 4–6, uncertain; 7–9, yes) (n = 101) Yes 28 (28%) 5 [3, 7]
No 33 (33%)
Uncertain 40 (39%)
  18. How often do you find yourself disagreeing with diagnostic AI tool results? (n = 65) < 5% of studies 8 (12%)
5–10% of studies 24 (37%)
10–20% of studies 23 (35%)
> 20% of studies 10 (16%)
  19. Do you have any of the following apprehensions/concerns with respect to AI tools in the ER setting? Select all that apply (n = 103): Overdiagnosis 63 (61%)
Reported performance may not generalize to local performance 59 (57%)
Negatively impacts training 45 (44%)
May slow workflow 42 (41%)
Not enough data in literature to support its use 41 (40%)
AI workflow bypasses radiology 36 (35%)
Waste of money 34 (33%)
Not enough knowledge available 26 (25%)
Ethics concerns 26 (25%)
Institution not capable of change 23 (22%)
Expectations
  20. What impact, if any, do you expect AI tools will have on radiologist job satisfaction? (n = 109) Increased 78 (72%)
Decreased 11 (10%)
No impact 20 (18%)
  21. On a scale of 1–9, how likely is AI to reduce the need for 24/7 emergency radiology coverage in the next 20 years? (n = 113) Likely 9 (8%) 2 [1, 4]
Uncertain 22 (19%)
Unlikely 82 (73%)
  22. Will the emergence of AI tools in the ER impact interest in pursuing emergency radiology fellowship among radiology residents? (n = 108) Decreased interest 8 (7%)
Increased interest 35 (33%)
No impact 65 (60%)

Denominator = total number of respondents to each question. For multi-select questions, more than one entry could be provided per respondent, so counts and percentages may sum to more than 100%. A sketch of how the Likert summaries in this table can be computed follows.
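The median Likert scores and [IQR] values above summarize responses on a 1–9 scale, and the categorical percentages come from banding those scores (e.g., 1–3, 4–6, 7–9 for Questions 15–17). As a minimal sketch, assuming quartiles are computed with a standard inclusive method (the paper does not specify its statistical software), such summaries could be reproduced in Python as follows; the function name summarize_likert and the example scores are hypothetical.

```python
# Minimal sketch (an assumption, not the authors' code): reproducing the
# per-question summaries in Table 1 from raw 1-9 Likert responses.
from statistics import median, quantiles

def summarize_likert(scores: list[int]) -> dict:
    """Summarize 1-9 Likert scores as median [IQR] plus banded counts (%)."""
    # quantiles(..., n=4, method="inclusive") returns the three quartile
    # cut points Q1, Q2 (the median), and Q3 of the sample.
    q1, _, q3 = quantiles(scores, n=4, method="inclusive")
    bands = {"1-3": 0, "4-6": 0, "7-9": 0}
    for s in scores:
        if s <= 3:
            bands["1-3"] += 1
        elif s <= 6:
            bands["4-6"] += 1
        else:
            bands["7-9"] += 1
    n = len(scores)
    return {
        "n": n,
        "median [IQR]": f"{median(scores):g} [{q1:g}, {q3:g}]",
        "bands": {k: f"{v} ({v / n:.0%})" for k, v in bands.items()},
    }

# Hypothetical example: ten simulated respondents for a question like Q15.
print(summarize_likert([9, 9, 8, 8, 9, 7, 6, 9, 8, 4]))
# -> {'n': 10, 'median [IQR]': '8 [7.25, 9]',
#     'bands': {'1-3': '0 (0%)', '4-6': '2 (20%)', '7-9': '8 (80%)'}}
```

Note that the exact IQR endpoints depend on the quantile convention used; the inclusive method shown here is one common choice and may differ slightly from the values reported in the table.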