Summary
Background
The aim of this review was to evaluate evidence on the use of Artificial Intelligence (AI) to support diagnostics in radiology, including implementation, experiences, perceptions, quantitative, and cost outcomes.
Methods
We conducted a systematic scoping review (PROSPERO registration: CRD42024537518) and discussed emerging findings with relevant stakeholders (radiology staff, public members) using workshops. We searched four databases and the grey literature for articles published between 1st January 2020 and 31st January 2025. Articles were screened for eligibility (N = 8013), resulting in 140 included studies. Studies evaluated implementation (N = 7), perceptions (N = 74), experiences (N = 14), effectiveness (N = 53), and cost (N = 6).
Findings
Factors influencing AI adoption were identified, including the high technical demand, lack of guidance, training/knowledge, transparency, and expert engagement. Evidence demonstrated improvements in diagnostic accuracy and reductions in interpretation time. However, evidence was mixed regarding experiences of using AI, the risk of increasing false positives, and the wider impact of AI on workflow efficiency and cost-effectiveness.
Interpretation
The potential benefits of AI are evident, but there is a paucity of evidence in real-world settings, supporting cautiousness in how AI is perceived (e.g., as a complementary tool, not a solution). We outline wider implications for policy and practice and summarise evidence gaps.
Funding
This project is funded by the National Institute for Health and Care Research, Health and Social Care Delivery Research programme (Ref: NIHR156380). NJF and AIGR are supported by the National Institute for Health Research (NIHR) Central London Patient Safety Research Collaboration and NJF is an NIHR Senior Investigator. The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.
Keywords: Artificial intelligence, Diagnostics, Radiology, Clinical practice, Implementation
Research in context
Evidence before this study
There is limited existing knowledge about how Artificial Intelligence (AI) is being implemented in radiology settings and the clinical implications of using these technologies. Existing reviews on the use of AI for diagnostics in radiology have not yet evaluated literature which integrates findings on: implementation; perceptions of staff, patients and the public; staff experiences; and impact, including effectiveness and cost.
Added value of this study
This review summarises current evidence on how AI is being implemented in diagnostic imaging, what staff, trainees, patients, and members of the public think about AI, experiences of using AI in practice, the quantitative impact, and cost. Furthermore, we build on existing work by discussing review findings with staff in radiology and members of the public who may experience AI-based care using workshops.
Implications of all the available evidence
Research suggests that current AI implementation is based on experimental learning rather than being informed by rigorous evidence. To understand how to best use AI in the complexities of radiology practice, we highlight the importance of evaluating how AI is implemented and used as a complementary tool in real-world settings. Future research should focus on key evidence gaps, including the process of implementation (including procurement), experiences of using AI in practice, long term cost-effectiveness, the risk of increasing false positives, and the impact on wider patient pathways and hospital systems.
Introduction
The use of Artificial Intelligence (AI) in healthcare is increasing globally due to its potential to address workforce shortages and rising healthcare demands.1, 2, 3 In radiology imaging, AI is being applied to assist with detecting abnormalities, enhance accuracy, reduce routine task time, and support clinical diagnoses.4, 5, 6 However, evidence of these potential impacts is inconsistent, making it challenging to draw conclusions.6 While there is excitement and optimism about the use of AI, there is limited research evaluating the effectiveness of AI in real-world healthcare settings, which goes beyond testing how AI could theoretically work.2,6,7
Recent guidelines for clinical implementation highlight the importance of addressing evidence gaps, emphasising that a strong evidence base is essential for the continued use of AI in radiology.8 With interest in AI on the rise, it is crucial to understand how AI is being adopted globally for diagnostic purposes, evaluate the overarching effectiveness and costs of these technologies, and consider the experiences and perceptions of relevant stakeholders. Existing reviews have not yet evaluated literature integrating all of these topics. To address this gap, we conducted a rapid systematic scoping review and stakeholder workshops. Our review addresses the following research questions:
1. How have AI tools been implemented to support diagnostics in radiology?
2. How have AI tools supporting diagnostics in radiology been experienced and perceived?
3. What evidence exists on effectiveness and cost of AI tools to support diagnostics in radiology?
4. How has evidence on implementation, experiences, perceptions, quantitative outcomes, and cost of AI been measured, collected, and analysed?
5. What do stakeholders (staff and the public) think about the review findings?
Methods
Registration
This review was registered on PROSPERO (CRD42024537518).
Design
As part of a wider rapid evaluation of AI deployment for chest diagnostics in the English National Health Service (NHS) (part of the AI diagnostic fund, AIDF),9 we conducted a rapid systematic scoping review.10 We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement,11 supplemented by two stakeholder workshops (one with radiology staff and one with members of the public) to discuss review findings. Scoping review methodology was chosen to address the varied review topics in a field of emerging evidence12 and involves bringing together evidence and summarising the findings.13 In rapid scoping reviews, the scope/focus and processes are streamlined to enable a quicker evidence synthesis.13
Search strategy and selection criteria
Studies eligible for inclusion:
• Focused on AI being used to support diagnostics in radiology (algorithmic use for image interpretation and decision-making, not image generation).
• Empirical studies (covering implementation, experiences, perceptions, quantitative or cost outcomes). Quantitative studies needed to evaluate AI as a support tool for human decision-making, rather than being used in isolation; in line with current guidance that AI should be used with human supervision.14,15
• Published between 1st January 2020 and 31st January 2025, due to the rapid advancements of technology within healthcare during this period, and the publication of policy documents on the use of AI.15
• Covered United Kingdom (UK)-based and international evidence.
• Written in English.
See Appendix S1 for detailed inclusion and exclusion criteria.
Four databases were searched: Medline-Ovid [PubMed], PsycInfo, CINAHL Plus, and Web of Science Core Collection. Grey literature was identified through policy websites, topic-specific websites, and grey literature databases (e.g., Royal College of Radiologists, The Health Foundation, and Google Scholar). We also searched reference lists of review articles2,16,17 and discussed the identification of further papers with external stakeholders and clinical experts (TOR, FG). One researcher (RL) conducted the search and entered records into EndNote,18 followed by Rayyan,19 to remove duplicates.
The search strategy was developed iteratively using Ovid-Medline, with assistance from a UCL librarian (DM). Search terms were informed by previous research16 and guidance from clinical experts (TOR, FG). These terms covered areas such as AI, radiology imaging, clinical practice implementation, experiences and/or perceptions and quantitative and cost outcomes (Appendix S2).
Screening and study selection
Studies were screened in three phases: i) titles, ii) abstracts/study summaries, and iii) full texts.10 One researcher (RL) screened all titles and abstracts. A 10% sample of excluded papers was reviewed by one of three researchers (ED, EM, CSJ). Full texts were screened by one of four researchers (RL, ED, EM, CSJ), depending on the methodology of the paper. Disagreements were discussed within the team until a consensus was reached.
Data extraction and charting
A data extraction tool was developed. This included study characteristics, setting, implementation context, methods and design, and study findings pertaining to 1) implementation, 2) experiences, 3) perceptions, and 4) quantitative and cost outcomes (Appendix S3). One of four researchers (RL, ED, EM, CSJ) piloted the tool on one study relating to each of the different topics explored in the review. The data extraction tool was used to extract findings from all studies. Disagreements when developing and editing the tool were discussed within the team until a consensus was reached.
Synthesis of results
Narrative synthesis was used to analyse review findings.20 We extracted all data relating to implementation, experiences, and perceptions, including illustrative quotes from qualitative studies. Data were coded line-by-line and findings grouped thematically. Quantitative evidence on effectiveness was synthesised by organising results by diagnostic accuracy, time and workflow, and change in clinical decision making. The narrative synthesis for cost outcomes was supported by abridged data extraction tables, including incremental costs, incremental cost-effectiveness ratio, cost savings, net present value, and quality-adjusted life years (QALYs). Due to the heterogeneity of the studies, a quantitative synthesis was not feasible.
Critical appraisal
The Mixed Methods Appraisal Tool21 was used to evaluate study quality. The tool was applied to quantitative effectiveness studies by two researchers (ED, CSJ), and to qualitative, mixed-methods, and quantitative survey studies by one researcher (RL). The quality of cost studies was evaluated by one researcher (EM) using the Drummond checklist22 and the rating scale developed by Doran.23
Stakeholder workshops
Two online stakeholder workshops were conducted. Workshops were held with members of the public and radiology staff working in an English healthcare setting (Appendix S4). Participants were recruited via public involvement in research channels, or via AIDF colleagues and clinical expert co-authors sharing the advert with radiology staff via email. Participants were selected based upon their experience in radiology and/or their experiences of diagnostic care, to ensure that a range of perspectives were included (Appendix S4). All participants were sent an information sheet and consent form ahead of the workshop. Preliminary findings were sent to participants ahead of the workshop and presented during the workshop (Appendix S5). Participants then discussed the findings. Workshops were audio-recorded and transcribed. Findings were analysed using thematic analysis24 structured around review findings.
Ethics
Ethical approval for the workshops was obtained from the University College London Ethics Committee (27037/001). Written informed consent was obtained from all participants before taking part in the workshops. Ethical approval was not required for the review.
Role of funding source
The funders did not have a role in study design, data collection and analysis, writing of the manuscript or the decision to publish. The views and opinions expressed are those of the authors and do not necessarily reflect those of the NIHR or the Department of Health and Social Care.
PPIE co-authors (RM, JL, AH) were involved in study conceptualisation and design. They also co-designed study materials for the workshops, including the summary document sent to participants and presentation slides.
Results
Study selection and characteristics
In total, 8013 studies were identified and 140 were included (see Fig. 1), of which 7 studied implementation, 14 experiences, 74 perceptions, 53 quantitative impact and effectiveness, and 6 cost-effectiveness. Forty of the included studies were published between January 2024 and January 2025, indicating rapid growth in the AI research field (N = 13 published in 2020, N = 27 in 2021, N = 25 in 2022, N = 35 in 2023, N = 38 in 2024, and N = 2 in January 2025). Some studies covered multiple topics and have been included more than once in Table 1. Included studies were conducted in varied countries, covered different imaging modalities, and applied AI to varied patient pathways (Table 1).
Fig. 1.
Preferred reporting items for systematic reviews and meta-analyses (PRISMA) flow diagram.
Table 1.
Summary of study characteristics.
| Study characteristics | Overall (N = 140) |
|---|---|
| Study focus^a | |
| Implementation | 7 |
| Experience | 14 |
| Perceptions | 74 |
| Effectiveness (quantitative studies) | 53 |
| Cost | 6 |
| Focus across multiple topics^a | |
| Implementation, experiences, and perceptions | 3 |
| Experiences and perceptions | 3 |
| Implementation and perceptions | 1 |
| Effectiveness and cost effectiveness | 2 |
| Effectiveness and experiences | 2 |
| Study characteristics | Overall (N = 140) | Implementation (N = 7) | Experiences (N = 14) | Perceptions (N = 74) | Effectiveness (quantitative) (N = 53) | Cost (N = 6) |
|---|---|---|---|---|---|---|
| Location | ||||||
| United Kingdom (UK) | 17 | 4 | 2 | 13 | 2 | 1 |
| China | 16 | 0 | 1 | 3 | 13 | 0 |
| United States of America (USA) | 15 | 0 | 3 | 5 | 8 | 2 |
| Korea | 11 | 0 | 3 | 1 | 8 | 0 |
| Germany | 9 | 0 | 1 | 5 | 3 | 1 |
| Saudi Arabia | 9 | 0 | 0 | 9 | 0 | 0 |
| Italy | 7 | 0 | 1 | 4 | 3 | 0 |
| The Netherlands | 6 | 3 | 1 | 2 | 2 | 0 |
| Australia | 5 | 0 | 0 | 3 | 2 | 0 |
| India | 3 | 0 | 0 | 2 | 1 | 0 |
| Japan | 3 | 0 | 0 | 0 | 2 | 1 |
| Spain | 3 | 0 | 0 | 3 | 0 | 0 |
| World-wide | 3 | 0 | 0 | 3 | 0 | 0 |
| Africa | 2 | 0 | 0 | 2 | 0 | 0 |
| Europe-wide | 2 | 0 | 1 | 1 | 0 | 0 |
| France | 2 | 0 | 0 | 0 | 2 | 0 |
| Nordic countries | 2 | 0 | 0 | 2 | 0 | 0 |
| Singapore | 2 | 0 | 0 | 1 | 1 | 0 |
| Switzerland | 2 | 0 | 0 | 0 | 2 | 0 |
| United Arab Emirates | 2 | 0 | 0 | 2 | 0 | 0 |
| Argentina | 1 | 0 | 1 | 0 | 0 | 0 |
| Australia & New Zealand | 1 | 0 | 0 | 1 | 0 | 0 |
| Austria | 1 | 0 | 0 | 1 | 0 | 0 |
| Canada | 1 | 0 | 0 | 1 | 0 | 0 |
| China and Germany | 1 | 0 | 0 | 0 | 1 | 0 |
| Egypt | 1 | 0 | 0 | 0 | 1 | 0 |
| Finland | 1 | 0 | 0 | 0 | 0 | 1 |
| Ghana | 1 | 0 | 0 | 1 | 0 | 0 |
| Ireland | 1 | 0 | 0 | 1 | 0 | 0 |
| Jordan | 1 | 0 | 0 | 1 | 0 | 0 |
| Malaysia | 1 | 0 | 0 | 1 | 0 | 0 |
| Malta | 1 | 0 | 0 | 1 | 0 | 0 |
| Middle East and India | 1 | 0 | 0 | 1 | 0 | 0 |
| Nigeria | 1 | 0 | 0 | 1 | 0 | 0 |
| Taiwan | 1 | 0 | 0 | 0 | 1 | 0 |
| Thailand | 1 | 0 | 0 | 1 | 0 | 0 |
| Unclear | 1 | 0 | 0 | 1 | 0 | 0 |
| Vietnam | 1 | 0 | 0 | 0 | 1 | 0 |
| Western Europe | 1 | 0 | 0 | 1 | 0 | 0 |
| Imaging modalities | ||||||
| Any (not specified—focused on AI more broadly) | 67 | 3 | 3 | 65 | 0 | 0 |
| X-ray | 19 | 1 | 2 | 1 | 16 | 1 |
| Computed tomography (CT) | 18 | 1 | 2 | 2 | 16 | 0 |
| Magnetic resonance Imaging (MRI) | 10 | 1 | 1 | 2 | 7 | 1 |
| Colonoscopy | 7 | 0 | 1 | 1 | 5 | 2 |
| Ultrasound | 7 | 0 | 0 | 0 | 7 | 0 |
| Radiographs | 5 | 0 | 5 | 1 | 0 | 0 |
| Mammography | 2 | 0 | 0 | 0 | 2 | 0 |
| MRI/MRI fusion biopsy | 1 | 0 | 0 | 1 | 0 | 0 |
| Computed tomography (CT) or magnetic resonance imaging (MRI) | 1 | 0 | 0 | 1 | 0 | 0 |
| Computed tomography angiography (CTA) | 1 | 0 | 0 | 0 | 0 | 1 |
| Images acquired during endoscopic procedures | 1 | 0 | 0 | 0 | 0 | 1 |
| X-ray, CT, DXA (bone density scan), and MRI | 1 | 1 | 0 | 0 | 0 | 0 |
| Patient pathways (including condition/s) | ||||||
| Any (not specified—focused on AI more broadly) | 66 | 3 | 3 | 63 | 0 | 1 |
| Chest/thoracic/lung | 23 | 1 | 5 | 3 | 17 | 0 |
| Colorectal | 7 | 0 | 1 | 1 | 5 | 2 |
| Prostate | 7 | 1 | 1 | 3 | 4 | 0 |
| Fractures | 5 | 0 | 2 | 0 | 4 | 0 |
| Stroke | 5 | 0 | 0 | 0 | 4 | 1 |
| Breast | 4 | 0 | 0 | 0 | 4 | 0 |
| Coronary artery disease | 3 | 0 | 0 | 0 | 3 | 0 |
| Pulmonary embolism | 3 | 0 | 0 | 0 | 3 | 0 |
| Thyroid | 3 | 0 | 0 | 0 | 3 | 0 |
| Dentistry | 2 | 0 | 0 | 1 | 1 | 1 |
| Ligament ruptures | 1 | 0 | 0 | 0 | 1 | 0 |
| Anaesthesia | 1 | 0 | 0 | 0 | 1 | 0 |
| Bone maturity | 1 | 1 | 0 | 0 | 0 | 0 |
| Covid-19 | 1 | 0 | 0 | 0 | 1 | 0 |
| Lumbar spinal stenosis | 1 | 0 | 0 | 0 | 1 | 0 |
| Musculoskeletal | 1 | 0 | 0 | 1 | 0 | 0 |
| Pneumonia | 1 | 0 | 1 | 0 | 0 | 0 |
| Possible gastric neoplasm | 1 | 0 | 0 | 0 | 0 | 1 |
| Skin | 1 | 0 | 0 | 1 | 0 | 0 |
| Varying: Pulmonary embolism, intercranial haemorrhage, and acute cervical spine fractures | 1 | 0 | 1 | 1 | 0 | 0 |
| Varying: Cardiac, pulmonary, and musculoskeletal | 1 | 0 | 0 | 0 | 1 | 0 |
| Varying: chest/lung nodules, Covid, fractures, scoliosis, prostate, neuro/dementia | 1 | 1 | 0 | 0 | 0 | 0 |
| Participant group | ||||||
| Radiology staff | 5 | 6 | 34 | 0 | 0 | |
| Wider clinical staff | 1 | 4 | 4 | 0 | 0 | |
| Residents/students/trainees | 0 | 2 | 10 | 0 | 0 | |
| Patients | 0 | 0 | 11 | 0 | 0 | |
| Members of the public | 0 | 0 | 2 | 0 | 0 | |
| Radiology staff, wider clinical staff, stakeholders, and patients | 1 | 1 | 1 | 0 | 0 | |
| Radiology and wider clinical staff | 0 | 1 | 0 | 0 | 0 | |
| Radiology staff and radiology students | 0 | 0 | 7 | 0 | 0 | |
| Radiologists and computer scientists/industry and IT staff | 0 | 0 | 3 | 0 | 0 | |
| Staff (radiology and wider) and members of the public | 0 | 0 | 1 | 0 | 0 | |
| Staff (radiology and wider), patients, and members of the public (including some carers) | 0 | 0 | 1 | 0 | 0 | |
| Diagnostic images and readers | 0 | 0 | 0 | 53 | 0 | |
| Diagnostic images | 0 | 0 | 0 | 0 | 6 | |
^a The total will not always add up to 140 as some papers had multiple areas of focus (as shown above), were conducted across countries, covered different imaging modalities and patient pathways and recruited multiple stakeholder groups.
Methods used for data collection and analysis
All implementation studies (N = 7) used qualitative methods (see Appendix S6). Experiences (N = 14) and perceptions (N = 74) were evaluated using quantitative, qualitative, and mixed methods. Of the quantitative studies that measured effectiveness of AI (N = 53), only 23 studies25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47 evaluated AI in a live pathway, of which 7 measured diagnostic accuracy. The remaining studies simulated typical workflow, often using a bespoke tool developed and validated as part of the study or a companion study. Reader experience was the most commonly measured factor influencing findings (25/53 studies).
The cost-related studies (N = 6) used Markov-based models37,38,48, 49, 50 and Monte Carlo forecasting methods51 to demonstrate the costs and health outcomes of AI-aided and non-AI strategies. Analyses were conducted from a societal perspective in the context of the origin country, using observational data and assumptions as well as parameters from randomised trials. Costs were estimated using the origin country's public and private reference costs.
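The comparisons in these models ultimately reduce to incremental costs set against incremental health outcomes (QALYs). As a minimal illustrative sketch of an incremental cost-effectiveness ratio (ICER) calculation, the figures below are entirely hypothetical and are not taken from any included study:

```python
# Hypothetical ICER comparison of an AI-aided vs a standard diagnostic pathway.
# All costs, QALY values, and the willingness-to-pay threshold are invented
# for illustration only.
cost_standard, qaly_standard = 12_000.0, 8.10  # per-patient lifetime cost / QALYs
cost_ai, qaly_ai = 12_500.0, 8.15

incremental_cost = cost_ai - cost_standard   # additional cost of the AI pathway
incremental_qaly = qaly_ai - qaly_standard   # additional health gained (~0.05 QALYs)
icer = incremental_cost / incremental_qaly   # cost per QALY gained

willingness_to_pay = 20_000.0  # hypothetical threshold per QALY
cost_effective = icer <= willingness_to_pay

print(f"ICER: {icer:.0f} per QALY gained; cost-effective: {cost_effective}")
```

A pathway can therefore improve accuracy yet fail on cost-effectiveness if downstream treatment costs inflate the numerator faster than QALYs accrue in the denominator, which is the pattern one included study reported.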
The AI tools used in quantitative studies of effectiveness were either a pre-existing commercially available tool or a bespoke tool designed as part of the study or companion study (Appendix S6 and Supplementary File S1).
Measuring effectiveness and cost effectiveness
Quantitative measures used to evaluate effectiveness covered three categories: (1) diagnostic accuracy, (2) effect on time and workflow, and (3) effect on clinical decision-making. General measures of diagnostic accuracy included sensitivity, specificity, area under the curve (AUC), positive predictive value (PPV), negative predictive value (NPV), and accuracy (proportion of correct results). Cost-effectiveness studies measured health outcomes including tooth retention time (in years) until caries detection for recipients of oral health care and quality-adjusted life years (QALYs). Other outcomes measured included incidence, mortality, scan time, and withdrawal time (Supplementary File S1).
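All of the diagnostic accuracy measures above derive from a standard two-by-two confusion matrix. The following sketch shows how they relate, using hypothetical counts rather than data from any included study:

```python
# Illustrative diagnostic accuracy metrics from a confusion matrix.
# All counts are hypothetical, for demonstration only.
tp, fp, fn, tn = 80, 10, 20, 90  # true/false positives and negatives

sensitivity = tp / (tp + fn)   # proportion of actual positives detected
specificity = tn / (tn + fp)   # proportion of actual negatives correctly ruled out
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
accuracy = (tp + tn) / (tp + fp + fn + tn)  # proportion of all correct results

print(f"Sensitivity: {sensitivity:.2f}")  # 0.80
print(f"Specificity: {specificity:.2f}")  # 0.90
print(f"PPV: {ppv:.2f}, NPV: {npv:.2f}")  # 0.89, 0.82
print(f"Accuracy: {accuracy:.2f}")        # 0.85
```

This also clarifies why the review reports sensitivity and specificity separately: a tool can raise sensitivity (fewer missed findings) while lowering specificity (more false positives), which is the mixed pattern described below.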
Implementation
Out of the seven implementation studies, four explored the process of implementation, including the integration of AI into clinical practice and associated barriers and facilitators.52, 53, 54, 55 Three studies56, 57, 58 explored experiences of implementing AI in practice, with two identifying the main barriers and facilitators.56,58 One study referenced procurement when discussing the high cost of implementing AI,55 but no papers explored key processes preceding implementation (e.g., procurement).
Function of AI tools
AI was used as a tool to assist with diagnosis.52, 53, 54 One study reported 15 different applications being used in clinical practice (8 fully integrated and 7 in exploration or implementation phases) at one medical centre, including lung nodule detection, fracture assessment, measurement, and quantification.54
The process of implementation
The process of implementation involved integrating AI into the existing radiology systems used to store and transmit images and reports.52,54,55 This enabled AI to run automatically in the background.52 One study described a holistic approach to implementing AI, where integration aligned the social and technological aspects of clinical practice.54 In this study, multiple AI algorithms were operating within clinical workflows across one medical centre,54 achieved by having one central workflow engine through which data analysed by AI were sent to the relevant data repositories.54 Reasons for integrating AI into existing systems centred on minimising workflow disruption, although some studies discussed teething problems (e.g., AI causing interruptions as staff adapted to viewing AI outputs).54,55
Experiences of using AI
Fourteen studies explored staff and trainees' experiences of using AI (Table 1). In most studies, AI was viewed positively as a reliable and usable tool.33,52,59, 60, 61, 62, 63, 64, 65 AI was mainly used as a second reader (a support tool/second pair of eyes for clinicians).52,55,60,62,63 However, there were mixed findings about using AI in practice. In some studies, staff felt AI helped to reduce reading times,52,63,66 improve accuracy,33,63,65,66 and improve efficiency.58,65 Despite this, studies reported concerns about limited evidence,55,62,66 the risk of false positives,52,62 and reduced efficiency.33 In the quantitative studies evaluating effectiveness, three studies measured reader confidence in diagnostic decisions. Findings were mixed: some found improved self-reported confidence with the use of AI,67 whereas others found AI did not impact confidence.34,67,68
Perceptions about using AI
Seventy-four studies explored perceptions (Table 1). AI was viewed favourably in radiology52,55,65,69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87 as a diagnostic support tool.57,69,73,76,77,79,88, 89, 90, 91, 92, 93, 94, 95 However, one paper highlighted the importance of balancing positivity towards AI in radiology with an element of scepticism.96 Research indicated a consistent view that AI should not replace humans,52,55,60,65,70,80,83,84,86,89,90,97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108 demonstrating the perceived importance of continued human input. For patients, members of the public, and staff, trust in AI often relied on human oversight,72,108, 109, 110, 111, 112, 113, 114, 115 transparency, and explainability.58,73,114, 115, 116 Some radiology staff felt their profession is safeguarded by their professional autonomy,74 as the use of AI alone is viewed unfavourably, both professionally90 and legally.14 Although it was clear that AI should not replace the human element of diagnostics, studies reported that AI will likely change the radiology profession,58,73,74,80,95,98,102,117 and there were some concerns about job security and reduced demand,118,119 despite the increasing need for medical imaging.1,2 Regarding how AI may change radiology, studies suggested that AI may upskill and enhance roles52,58 but also reported a risk of reduced autonomy.58
Factors influencing AI adoption
Table 2 provides an overview of influencing factors. Barriers included the high technical demand of AI (N = 6), lack of knowledge and training (N = 20), absence of evidence-based guidance (N = 5), complex governance processes (N = 2), and how AI lacks empathy and human connection (N = 8). Facilitators included integrating AI into existing systems (N = 4), transparency (N = 13), willingness to learn (N = 21), and having AI champions/experts (N = 8). The associated risks were staff becoming over-reliant (due to developing algorithmic bias and/or blindness) (N = 5), AI being inaccurate/making errors (N = 6), deskilling staff (N = 5), the cost of AI (N = 11), and the ethical implications (N = 7). The benefits were improving reporting times (N = 7), improving diagnostic accuracy/identifying missed diagnoses (N = 12), improving workflow efficiency (N = 5), and helping with routine tasks (N = 10).
Table 2.
Overview of factors influencing AI adoption.
| | Experienced/actual (either during implementation or clinical use) | Perceived (what people think when asked about AI) |
|---|---|---|
| Barriers | | |
| Facilitators | | |
| Risks | | |
| Benefits | | |
Note: Where factors are experienced, we are referring to actual experiences of implementing or using AI in practice. Where factors are perceived, we are referring to what people think are the key influencing factors.
AI – Artificial Intelligence.
Quantitative evidence on impact and effectiveness
Of the 53 quantitative studies, 33 measured diagnostic accuracy of AI.26,28,31,34,37,40,44,67,137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160 Across these studies, there was consistent evidence that AI reduces the number of incorrect positive or negative results, although some found improvements were more pronounced among less experienced staff, and any reported improvements were likely to be highly dependent on study design, implementation setting, and imaging modality.
Of the 25 studies that measured sensitivity, 19 reported improvements, 5 reported no change, and one study reported a reduction when AI was used as a diagnostic tool to assist readers (Table 3). However, some of the findings were specific to less experienced staff. Sixteen studies assessed sensitivity by reader experience level and nine reported improvements among less experienced staff only (Table 3). Of the 23 studies measuring specificity, findings were varied, with a smaller proportion reporting improvements: 13 studies reported improvements, 7 reported no change, and 3 a reduction in specificity. Ten of the 16 studies that measured the effect of reader experience on specificity reported improvements with AI assistance (Table 3).
Table 3.
Number of studies that measured diagnostic accuracy and influence of reader experience.
| | Improved | No change | Reduction | Total |
|---|---|---|---|---|
| Sensitivity | | | | |
| Total papers that measured sensitivity | 19 (refs 26,31,37,137–139,141–145,149,150,152,153,155,156,158,159) | 5 (refs 28,35,147,148,157) | 1 (ref 67) | 25 |
| Papers that assessed effect of reader experience^a | | | | |
| For all levels of experience | 5 (refs 31,138,141,150,156) | 2 (refs 147,157) | 0 | 7 |
| For less experienced readers only | 9^b (refs 26,139,143–145,149,155,158,159) | 0 | 0 | 9 |
| Specificity | | | | |
| Total papers that measured specificity | 14 (refs 31,67,137–139,141,144,149,151,153,156,158) | 7 (refs 28,35,145,150,155,157,159) | 3 (refs 26,142,148) | 24 |
| Papers that assessed effect of reader experience^a | | | | |
| For all levels of experience | 4 (refs 31,138,151,156) | 5 (refs 145,150,155,157,159) | 1 (ref 26) | 10 |
| For less experienced readers only | 6 (refs 139,141,143,144,149,158) | 0 | 0 | 6 |
^a We have included three studies that measured the effect of clinical specialism and reader self-efficacy here as proxies for reader experience.
^b One paper found improvements for less experienced staff doing more difficult tasks.
Evidence of improvement in time and workflow was variable, with improvements seen primarily in image interpretation time and time to report. Few studies reported on workflow impacts further along the pathway (Table 4). Although there were minor variations in definitions of interpretation time and time to report due to different study designs, interpretation time referred to the time taken for a reader to interpret a single image, and time to report referred to the time between image acquisition and documentation of the report. Of the 18 studies that measured image interpretation time, 13 found a reduction, 2 no change, and 5 an increase. Benefits were not consistently observed across different levels of reader experience (seniority or number of years in post) or urgency of image findings (Table 4). Findings on time to report were more variable. Of the 10 studies that measured this outcome, 5 reported a reduction in reporting time, either for all images,27,46,47 for critical and urgent cases,142 or for less experienced readers.155 However, the latter study155 also found an increase in reporting time for non-urgent/normal images, as these images were deprioritised despite having shorter interpretation times with the aid of AI. Four studies found no change when using AI for all images or less experienced readers (Table 4).
Table 4.
Number of studies by workflow outcome measures and findings.
| | Reduction | No change | Increase | Total |
|---|---|---|---|---|
| Interpretation time | | | | |
| Total papers that measured interpretation time | 13^a,b (refs 34,36,40,45,47,139,142,145,150,152,153,155,160) | 2 (refs 28,138) | 5^a (refs 39,47,142,147,161) | 18^a |
| Papers that assessed effect of reader experience | | | | |
| For all levels of experience | 4 (refs 139,145,150,155) | 1 (ref 138) | 1 (ref 161) | 6 |
| For more experienced staff only | 0 | 0 | 1 (ref 147) | 1 |
| Papers that assessed effect of urgency of image findings | | | | |
| For non-urgent/normal images | 1^a (ref 142) | 1^b (ref 36) | 0 | 2 |
| For critical/urgent/more suspicious images | 1^b (ref 36) | 0 | 2^a (refs 39,142) | 3 |
| Reporting time | | | | |
| Total papers that measured reporting time | 5^c (refs 27,46,47,142,155) | 4 (refs 27,28,36,159) | 2^c (refs 26,142) | 10^c |
| Papers that assessed effect of reader experience/priority of images | | | | |
| For all levels of experience | 0 | 0 | 1 (ref 26) | 1 |
| For less experienced readers only | 1 (ref 155) | 0 | 0 | 1 |
| Papers that assessed effect of urgency of image findings | | | | |
| For non-urgent/normal images | 0 | 0 | 1^c (ref 142) | 1 |
| For critical/urgent/more suspicious images | 1^c (ref 142) | 0 | 0 | 1 |
^a Two papers reported both a reduction and an increase in interpretation time depending on selected confounders, hence the reported total does not equal the sum of studies.
^b One paper reported both a reduction and no change in interpretation time depending on selected confounders.
^c One paper reported both a reduction and an increase in reporting time depending on selected confounders, hence the reported total does not equal the sum of studies.
Six studies focused specifically on the effectiveness of AI in emergency departments (ED).25,28,36,41,42,158 Readers in these studies were emergency department physicians and non-specialists in radiology. Two28,36 found no difference in length of stay, the rate of revisiting the ED within 30 days, or communication times. However, one study25 found shortened ED lengths of stay for patients with a confirmed diagnosis, whilst another found a small increase in median wait time to discharge.41 Two studies assessing the effect of AI on clinical decision-making found that changes to the final report, patient management, and image recommendations were more likely for more critical images, regardless of reader experience.30 One study162 found that incorrect AI results can influence a radiologist to make wrong decisions (Table 4).
Evidence on cost-effectiveness
Findings from cost-effectiveness studies (N = 6) were mixed. One study demonstrated that although accuracy using AI improved, cost-effectiveness did not, as more invasive treatment approaches generated costs over the patient's lifetime and diminished possible effectiveness advantages.37 Five papers demonstrated monetary benefits and cost-effectiveness of AI-aided pathways.38,48,49,50,51
Feedback from stakeholder workshops
Both groups discussed how review findings reflected the NHS being in the early stages of using AI and learning through ongoing implementation. The complexity and varied implementation of AI were described as the ‘wild west’ by staff, with a lack of guidance and structure. When it comes to implementing AI in real-world settings, staff spoke of the importance of integrating AI into existing systems in a way that causes minimal disruption to workflow. Discussions suggested that implementing AI in clinical practice would be challenging unless the existing context is considered. Thus, the integration of AI needs to be managed within the infrastructure of existing radiology systems.
Both groups were unsurprised by the mixed review findings on experiences (e.g., some reporting that AI had benefits and others that it reduced efficiency). The complexity of how AI is being used was noted (e.g., in different ways to achieve a range of objectives), making it difficult to provide a single answer about whether AI is beneficial. Therefore, participants emphasised the importance of returning to the purpose of using AI and how these technologies can help.
Both groups felt the review was relevant because it highlighted current evidence limitations, especially as AI is often viewed as a transformative solution. Stakeholders noted the importance of addressing evidence gaps (e.g., patient, carer and public experiences, health inequalities) and, among public members, of including patient voices as AI advances (example quotes in Appendix S7).
Quality appraisal of included studies
Qualitative (N = 15), quantitative survey, and mixed-methods studies (N = 68) met most of the criteria, but there were some reports of sampling bias and limited description of qualitative analysis in mixed-methods studies (Supplementary File S2). The quantitative studies (N = 53) met most of the criteria for randomised or non-randomised studies, but many did not address potential confounders such as patient characteristics and co-morbidities. Since findings depended on the interpretation of individual readers, the small number of readers in many studies created a risk of individual bias. Cost studies (N = 6) were rated as good (Supplementary File S3) but acknowledged factors that might affect the reliability of their conclusions, including the range of relevant costs and consequences identified for each alternative, the early stage at which costs were estimated, and country- and facility-specific considerations in identifying and measuring costs and consequences.
Discussion
Our review highlights the paucity of research conducted in real-world settings. From the current evidence, we conclude that AI can have positive outcomes in relation to improving diagnostic accuracy and reducing interpretation times, which aligns with some early staff experiences and perceived benefits of using AI. Factors influencing implementation (e.g., high technical demand, lack of guidance, training and knowledge, transparency, and expert engagement) were also identified. However, we do not know enough about the system-wide impact of AI, the process from procurement to implementation, or experiences of using AI and/or receiving AI-based care. Current and future implementation should consider if and how AI can address the needs of healthcare systems, the implementation context, and educational training needs.
The limited number of studies conducted in real-world settings aligns with research gaps highlighted in evidence generation plans.8 Existing research6 and findings from stakeholder workshops suggest this is because services are in the early stages of implementing AI,57 with further work emerging.6 The positive outcomes in relation to improving diagnostic accuracy and reducing interpretation times resonate with previous literature.2,163,164 However, findings illustrated that improvements in diagnostic accuracy were more likely among less experienced staff2 and there was evidence of AI overcalling negative findings, a risk reported in previous studies.163 Furthermore, there was inconsistent evidence regarding experiences, how AI can improve workflow efficiency, and whether these technologies are cost-effective, with few papers studying cost specifically. Variation in the quantitative findings was also likely to depend on study design, imaging modality, and clinical application. From the current evidence, we cannot draw conclusions on how the potential benefits of AI may impact longer-term patient outcomes and the wider healthcare system (e.g., changes in volumes of patients for diagnostic/treatment services). Findings support previous research which shows AI has the potential to positively impact diagnostics in radiology.2 In parallel, the evidence highlights caution in how AI is perceived (e.g., as a complementary tool which can help to navigate current demand, rather than a solution).165
We extend previous work by reviewing literature on real-world implementation, demonstrating that AI has been used to support diagnostics in a complementary role and not as a replacement. This aligns with previous research,15,166 user guidance,14 and stakeholder views,167 which highlight the central role of clinicians in maintaining human continuity (ensuring that humans have oversight and AI is not used with complete autonomy). Although AI has the potential to positively impact radiology, the synthesised literature shows that continued human oversight, and transparency about how AI makes decisions, are needed to foster a sense of trust when using AI for diagnostic purposes. These findings relate to ethical concerns often associated with AI, with existing evidence recommending that AI implementation should promote safety and transparency whilst reducing the risk of harm.168
Furthermore, although few papers have evaluated implementation, our findings highlight the complexity of integrating AI into existing healthcare systems, especially when organisations may not be ready to support such technical advancements,166 with an absence of clear guidance.56 Integrating AI will likely cause initial disruptions; our findings suggest that it is essential to evaluate the implementation context and ensure there is capacity to support the integration process.166 Otherwise, as highlighted in our workshops, attempts to improve workflow efficiency by using AI may have the opposite effect. Variation in how AI is being used, complexity surrounding the ethical landscape, and questions about how AI can be used effectively alongside clinicians emphasise the need for further evidence that can continue to inform clear implementation guidance and/or practice frameworks,58,168 such as the recently developed European Union AI Act.169
We advance previous research by synthesising staff experiences of using AI. Importantly, staff and trainees had limited experience using AI, consistent with survey papers in which only a small proportion of participants had used AI in a clinical setting,59 likely reflecting limited clinical implementation. No studies explored patient, carer, or public experiences of AI-based care, although workshop discussions and current policy guidance3 highlight the importance of patient voice/engagement. Therefore, the evidence base needs to include the experiences of groups whose acceptance and trust are important for the ongoing use of these technologies.17,109
Although evidence shows potential value in using AI, our findings suggest that implementation and use are happening ahead of the development of a robust evidence base.8 This was reflected further in stakeholder workshops, where implementation was described as a continuous learning process rather than being evidence informed. However, AI needs to be implemented in order to build an evidence base that explores real-world implementation. Therefore, to ensure future use can be evidence informed, there needs to be a careful balance between implementing AI safely and conducting robust evaluations, to enable learning from important technological advancements. For evidence users, our review highlights what is already known and what needs to be considered moving forward, as interest continues to grow165 and AI is used in other clinical areas.170
For example, it is essential to be clear on the specific needs of healthcare systems (e.g., improving clinical outcomes and administrative efficiency), whether AI can effectively meet these needs over other solutions, and that these needs are communicated to AI developers, so that implementation is problem-driven rather than product-driven.171 Additionally, considerations including the healthcare pathway, country, and clinical conditions are needed, as there might be differences from setting to setting. Another factor is population size, as AI tools seem more effective in high-prevalence populations.37 Furthermore, we highlight the need for tailored educational programmes, developed with input from experts, that acknowledge current knowledge and the complexity of using AI across different clinical contexts.2,172
Future research should evaluate the process of implementing AI into live clinical pathways or in shadow/testing mode,8 including pre-implementation processes and patient, carer and public experiences of AI-based care (e.g., experiences of the diagnostic pathway featuring demonstrations,2,172 ethical factors like consent and transparency, patient safety, trust in AI and the impact of AI on empathy and human connection/relationships17). Such research may also evaluate how unsupervised AI (used without human supervision) is implemented, used, perceived, and experienced by different stakeholders. Although we did not include these papers, this could be an emerging focus as AI continues to be implemented globally and may be used with greater autonomy. Further research is also needed on the effect of AI on patient outcomes, wider hospital systems (e.g., time to treatment, changes in volumes for other diagnostic or treatment services), diagnostic outcomes, and inequalities.8 The cost-effectiveness of AI solutions in radiology also warrants further study.8,37 Finally, understanding the long-term impact and sustainability of AI in clinical settings is essential.37
The review had a broad and inclusive focus, supported by guidance from clinical experts (FG, TOR). Findings present a summary of how AI is implemented, used, and experienced globally, as well as current evidence on effectiveness and cost, which may be relevant for healthcare systems worldwide. However, it may be difficult to generalise findings across different health systems, and only papers in English were included. As AI is a rapidly evolving field, we may not have captured all evidence, and papers where AI was used autonomously were out of scope. Although we searched four databases and the grey literature, not all relevant databases were covered. There is also a risk of publication bias in AI research, as in other fields. Lastly, stakeholder workshops strengthened findings by illustrating implications, but only in the context of the English NHS.
To conclude, our review suggests potential value in using AI for diagnostics in radiology, mirrored in the ongoing interest in AI. However, to assist with safe and effective procurement, implementation, validation, and evaluation, research must be planned, commissioned, and used to address the current gaps in the evidence base. This will help to draw conclusions about how best to use AI as a complementary tool in the complexities of radiology practice.
Contributors
All authors contributed to the conceptualisation and development of the study.
Data curation: RL, ED, EM, CSJ.
Formal analysis: RL, ED, EM, CSJ.
Funding acquisition: CSJ, AIGR, NC, PLN, RM, SM, NJF.
Investigation: RL, ED, EM, CSJ, AIGR, HW.
Methodology: RL, ED, EM, CSJ, AIGR, HW, TOR, FG, NC, KH, SM, NJF.
Project administration: HE, PLN.
Resources: RL, ED, EM, CSJ, AIGR, HW, TOR, FG, HE, RM, JL, AH.
Supervision: HW, AIGR, CSJ, SM, NJF.
Verification of the underlying data: RL, ED, EM, CSJ, HW, AIGR.
Writing – original draft: RL, ED, EM.
TOR and FG provided clinical expertise and guidance. All authors contributed to the interpretation of findings, visualisation, revising, and finalising the paper. All authors have read and approved the final manuscript.
Data sharing statement
Data will be made available upon request to the corresponding author.
Declaration of interests
AIGR is a trustee at Health Services Research UK. RM is Chair of the Board of Trustees of the Middlesex Association for the Blind; Vice-Chair on the Board of Trustees of the Research Institute for Disabled Consumers; Trustee on the Board of Thomas Pocklington Trust; Non-Executive Director on the Board of Evenbreak; and Co-chair and Director on the Board of Shaping Our Lives. SM is currently (2022-) a member of the Small Business Research Initiative (SBRI) Healthcare panel. His post is funded in part by RAND Europe, a non-profit research organisation. SM is also Deputy Director of the Applied Research Collaboration East of England (NIHR ARC EoE) at Cambridgeshire and Peterborough NHS Foundation Trust. NJF was a Non-Executive Director at Whittington Health NHS Trust until October 2024, a trustee at Health Services Research UK until 2022, and is a Non-Executive Director at Covid-19 Bereaved Families for Justice UK. TOR is part of the AXREM AI Special Focus Group, the British Institute of Radiology AI Special Interest Group, the NHSE AI Deployment Fund Oversight Committee, and the Society of Radiographers AI Advisory Group. FG is a shareholder in Optellum Ltd, is a co-founder and Chairman of RAIQC Ltd, was an advisor to NICE on the use of chest x-ray AI in the NHS, and is a committee member of the RCR Advisory group. All other authors report no declarations of interest.
Acknowledgements
We would like to acknowledge Debora Marletta (DM) from UCL library services for their support with the search strategy. We would like to acknowledge all those who took part in our stakeholder workshops for their contributions to the study. This project is funded by the National Institute for Health and Care Research, Health and Social Care Delivery Research programme (Ref: NIHR156380). NJF and AIGR are supported by the National Institute for Health Research (NIHR) Central London Patient Safety Research Collaboration and NJF is an NIHR Senior Investigator. The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.
Footnotes
Supplementary data related to this article can be found at https://doi.org/10.1016/j.eclinm.2025.103228.
Appendix A. Supplementary data
Overall review summary table.
Quality assessment.
Quality assessment (cost papers only).
References
- 1. Bekbolatova M., Mayer J., Ong C.W., Toma M. Transformative potential of AI in healthcare: definitions, applications, and navigating the ethical landscape and public perspectives. Healthcare (Basel). 2024;12(2). doi: 10.3390/healthcare12020125.
- 2. He C., Liu W., Xu J., et al. Efficiency, accuracy, and health professional's perspectives regarding artificial intelligence in radiology practice: a scoping review. iRadiology. 2024;2(2):156–172.
- 3. Darzi A. Summary letter from Lord Darzi to the Secretary of State for Health and Social Care. 2024. Available from: https://www.gov.uk/government/publications/independent-investigation-of-the-nhs-in-england/summary-letter-from-lord-darzi-to-the-secretary-of-state-for-health-and-social-care
- 4. Geis J.R., Brady A.P., Wu C.C., et al. Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Radiology. 2019;293(2):436–440. doi: 10.1148/radiol.2019191586.
- 5. Brady A.P., Allen B., Chong J., et al. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. J Med Imaging Radiat Oncol. 2024;68(1):7–26. doi: 10.1111/1754-9485.13612.
- 6. Gleeson F., Revel M.P., Biederer J., et al. Implementation of artificial intelligence in thoracic imaging-a what, how, and why guide from the European Society of Thoracic Imaging (ESTI). Eur Radiol. 2023;33(7):5077–5086. doi: 10.1007/s00330-023-09409-2.
- 7. van Leeuwen K.G., de Rooij M., Schalekamp S., van Ginneken B., Rutten M.J.C.M. How does artificial intelligence in radiology improve efficiency and health outcomes? Pediatr Radiol. 2022;52(11):2087–2093. doi: 10.1007/s00247-021-05114-8.
- 8. National Institute for Health and Care Excellence (NICE). Evidence generation plan. 2024. Available from: https://www.nice.org.uk/consultations/2576/4/evidence-gaps
- 9. Rapid Service Evaluation Team. Using artificial intelligence in chest diagnostics for lung disease. 2024. Available from: https://www.nuffieldtrust.org.uk/rset-rapid-evaluations-of-new-ways-of-providing-care/projects/using-artificial-intelligence-in-chest-diagnostics-for-lung-disease
- 10. Tricco A.C., Langlois E.V., Straus S.E. Rapid Reviews to Strengthen Health Policy and Systems: A Practical Guide. World Health Organization; Geneva: 2017.
- 11. Moher D., Liberati A., Tetzlaff J., Altman D.G., PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7). doi: 10.1371/journal.pmed.1000097.
- 12. Peters M.D., Godfrey C.M., Khalil H., McInerney P., Parker D., Soares C.B. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13(3):141–146. doi: 10.1097/XEB.0000000000000050.
- 13. Garritty C., Gartlehner G., Nussbaumer-Streit B., et al. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2021;130:13–22. doi: 10.1016/j.jclinepi.2020.10.007.
- 14. UK Statutory Instruments. The Ionising Radiation (Medical Exposure) (Amendment) Regulations 2024. 2024. Available from: https://www.legislation.gov.uk/uksi/2024/896/made
- 15. National Institute for Health and Care Excellence (NICE). Artificial intelligence-derived software to analyse chest X-rays for suspected lung cancer in primary care referrals: early value assessment. 2023. Available from: https://www.nice.org.uk/guidance/hte12
- 16. Hogg H.D.J., Al-Zubaidy M., Technology Enhanced Macular Services Study Reference Group, et al. Stakeholder perspectives of clinical artificial intelligence implementation: systematic review of qualitative evidence. J Med Internet Res. 2023;25. doi: 10.2196/39742.
- 17. Young A.T., Amara D., Bhattacharya A., Wei M.L. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. Lancet Digit Health. 2021;3(9):e599–e611. doi: 10.1016/S2589-7500(21)00132-1.
- 18. The EndNote Team. EndNote. Clarivate; Philadelphia, PA: 2013.
- 19. Ouzzani M., Hammady H., Fedorowicz Z., Elmagarmid A. Rayyan — a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. doi: 10.1186/s13643-016-0384-4.
- 20. Popay J., Roberts H., Sowden A., et al. Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. Lancaster University; Lancaster: 2006.
- 21. Hong Q.N., Pluye P., Fàbregues S., et al. Improving the content validity of the mixed methods appraisal tool: a modified e-Delphi study. J Clin Epidemiol. 2019;111:49–59.e1. doi: 10.1016/j.jclinepi.2019.03.008.
- 22. Drummond M.F., Sculpher M.J., Claxton K., Stoddart G.L., Torrance G.W. Methods for the Economic Evaluation of Health Care Programmes. Oxford University Press; Oxford: 2015.
- 23. Doran C.M. Economic evaluation of interventions to treat opiate dependence: a review of the evidence. Pharmacoeconomics. 2008;26(5):371–393. doi: 10.2165/00019053-200826050-00003.
- 24. Braun V., Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.
- 25. Chien H.C., Yang T.L., Juang W.C., Chen Y.A., Li Y.J., Chen C.Y. Pilot report for intracranial hemorrhage detection with deep learning implanted head computed tomography images at emergency department. J Med Syst. 2022;46(7):49. doi: 10.1007/s10916-022-01833-z.
- 26. Forookhi A., Laschena L., Pecoraro M., et al. Bridging the experience gap in prostate multiparametric magnetic resonance imaging using artificial intelligence: a prospective multi-reader comparison study on inter-reader agreement in PI-RADS v2.1, image quality and reporting time between novice and expert readers. Eur J Radiol. 2023;161. doi: 10.1016/j.ejrad.2023.110749.
- 27. Hong W., Hwang E.J., Park C.M., Goo J.M. Effects of implementing artificial intelligence-based computer-aided detection for chest radiographs in daily practice on the rate of referral to chest computed tomography in pulmonology outpatient clinic. Korean J Radiol. 2023;24(9):890–902. doi: 10.3348/kjr.2023.0255.
- 28. Hwang E.J., Goo J.M., Nam J.G., Park C.M., Hong K.J., Kim K.H. Conventional versus artificial intelligence-assisted interpretation of chest radiographs in patients with acute respiratory symptoms in emergency department: a pragmatic randomized clinical trial. Korean J Radiol. 2023;24(3):259–270. doi: 10.3348/kjr.2022.0651.
- 29. Ishiyama M., Kudo S.E., Misawa M., et al. Impact of the clinical use of artificial intelligence-assisted neoplasia detection for colonoscopy: a large-scale prospective, propensity score-matched study (with video). Gastrointest Endosc. 2022;95(1):155–163. doi: 10.1016/j.gie.2021.07.022.
- 30. Jones C.M., Danaher L., Milne M.R., et al. Assessment of the effect of a comprehensive chest radiograph deep learning model on radiologist reports and patient outcomes: a real-world observational study. BMJ Open. 2021;11(12). doi: 10.1136/bmjopen-2021-052902.
- 31. Li J., Zhou L., Zhan Y., et al. How does the artificial intelligence-based image-assisted technique help physicians in diagnosis of pulmonary adenocarcinoma? A randomized controlled experiment of multicenter physicians in China. J Am Med Inform Assoc. 2022;29(12):2041–2049. doi: 10.1093/jamia/ocac179.
- 32. Liu P., Wang P., Glissen Brown J.R., et al. The single-monitor trial: an embedded CADe system increased adenoma detection during colonoscopy: a prospective randomized study. Therap Adv Gastroenterol. 2020;13. doi: 10.1177/1756284820979165.
- 33. Nehme F., Coronel E., Barringer D.A., et al. Performance and attitudes toward real-time computer-aided polyp detection during colonoscopy in a large tertiary referral center in the United States. Gastrointest Endosc. 2023;98(1):100–109.e6. doi: 10.1016/j.gie.2023.02.016.
- 34. Nhat P.T.H., Van Hao N., Tho P.V., et al. Clinical benefit of AI-assisted lung ultrasound in a resource-limited intensive care unit. Crit Care. 2023;27(1):257. doi: 10.1186/s13054-023-04548-w.
- 35. Rosa F., Buccicardi D., Romano A., Borda F., D'Auria M.C., Gastaldo A. Artificial intelligence and pelvic fracture diagnosis on X-rays: a preliminary study on performance, workflow integration and radiologists' feedback assessment in a spoke emergency hospital. Eur J Radiol Open. 2023;11. doi: 10.1016/j.ejro.2023.100504.
- 36. Schmuelling L., Franzeck F.C., Nickel C.H., et al. Deep learning-based automated detection of pulmonary embolism on CT pulmonary angiograms: no significant effects on report communication times and patient turnaround in the emergency department nine months after technical implementation. Eur J Radiol. 2021;141. doi: 10.1016/j.ejrad.2021.109816.
- 37. Schwendicke F., Rossi J.G., Göstemeyer G., et al. Cost-effectiveness of artificial intelligence for proximal caries detection. J Dent Res. 2021;100(4):369–376. doi: 10.1177/0022034520972335.
- 38. Thiruvengadam N.R., Solaimani P., Shrestha M., et al. The efficacy of real-time computer-aided detection of colonic neoplasia in community practice: a pragmatic randomized controlled trial. Clin Gastroenterol Hepatol. 2024;22(11):2221–2230.e15. doi: 10.1016/j.cgh.2024.02.021.
- 39. Wenderott K., Krups J., Luetkens J.A., Gambashidze N., Weigl M. Prospective effects of an artificial intelligence-based computer-aided detection system for prostate imaging on routine workflow and radiologists' outcomes. Eur J Radiol. 2024;170. doi: 10.1016/j.ejrad.2023.111252.
- 40. Yacoub B., Varga-Szemes A., Schoepf U.J., et al. Impact of artificial intelligence assistance on chest CT interpretation times: a prospective randomized study. AJR Am J Roentgenol. 2022;219(5):743–751. doi: 10.2214/AJR.22.27598.
- 41. Chiramal J.A., Johnson J., Webster J., et al. Artificial intelligence-based automated CT brain interpretation to accelerate treatment for acute stroke in rural India: an interrupted time series study. PLOS Glob Public Health. 2024;4(7). doi: 10.1371/journal.pgph.0003351.
- 42. Herpe G., Nelken H., Vendeuvre T., et al. Effectiveness of an artificial intelligence software for limb radiographic fracture recognition in an emergency department. J Clin Med. 2024;13(18). doi: 10.3390/jcm13185575.
- 43. Glessgen C., Crowe L.A., Wetzl J., et al. Automated vs manual cardiac MRI planning: a single-center prospective evaluation of reliability and scan times. Eur Radiol. 2025. doi: 10.1007/s00330-025-11364-z.
- 44. Deng R., Liu Y., Wang K., et al. Comparison of MRI artificial intelligence-guided cognitive fusion-targeted biopsy versus routine cognitive fusion-targeted prostate biopsy in prostate cancer diagnosis: a randomized controlled trial. BMC Med. 2024;22(1):530. doi: 10.1186/s12916-024-03742-z.
- 45. Del Gaizo A.J., Osborne T.F., Shahoumian T., Sherrier R. Deep learning to detect intracranial hemorrhage in a national teleradiology program and the impact on interpretation time. Radiol Artif Intell. 2024;6(5). doi: 10.1148/ryai.240067.
- 46. Hunter J.G., Bera K., Shah N., et al. Real-world performance of pneumothorax-detecting artificial intelligence algorithm and its impact on radiologist reporting times. Acad Radiol. 2024;32(3):1165–1174. doi: 10.1016/j.acra.2024.10.012.
- 47. Savage C.H., Elkassem A.A., Hamki O., et al. Prospective evaluation of artificial intelligence triage of incidental pulmonary emboli on contrast-enhanced CT examinations of the chest or abdomen. AJR Am J Roentgenol. 2024;223(3). doi: 10.2214/ajr.24.31067.
- 48. van Leeuwen K.G., Meijer F.J.A., Schalekamp S., et al. Cost-effectiveness of artificial intelligence aided vessel occlusion detection in acute stroke: an early health technology assessment. Insights Imaging. 2021;12(1):133. doi: 10.1186/s13244-021-01077-4.
- 49. Thiruvengadam N.R., Coté G.A., Gupta S., et al. An evaluation of critical factors for the cost-effectiveness of real-time computer-aided detection: sensitivity and threshold analyses using a microsimulation model. Gastroenterology. 2023;164(6):906–920. doi: 10.1053/j.gastro.2023.01.027.
- 50. Yonazu S., Ozawa T., Nakanishi T., et al. Cost-effectiveness analysis of the artificial intelligence diagnosis support system for early gastric cancers. DEN Open. 2024;4(1). doi: 10.1002/deo2.289.
- 51. Brix M.A.K., Järvinen J., Bode M.K., et al. Financial impact of incorporating deep learning reconstruction into magnetic resonance imaging routine. Eur J Radiol. 2024;175. doi: 10.1016/j.ejrad.2024.111434.
- 52. Faric N., Hinder S., Williams R., et al. Early experiences of integrating an artificial intelligence-based diagnostic decision support system into radiology settings: a qualitative study. Stud Health Technol Inform. 2023;309:240–241. doi: 10.3233/SHTI230787.
- 53. Strohm L., Hehakaya C., Ranschaert E.R., Boon W.P.C., Moors E.H.M. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol. 2020;30(10):5525–5532. doi: 10.1007/s00330-020-06946-y.
- 54. Kim B., Romeijn S., van Buchem M., Mehrizi M.H.R., Grootjans W. A holistic approach to implementing artificial intelligence in radiology. Insights Imaging. 2024;15(1):22. doi: 10.1186/s13244-023-01586-4.
- 55. Wenderott K., Krups J., Luetkens J.A., Weigl M. Radiologists' perspectives on the workflow integration of an artificial intelligence-based computer-aided detection system: a qualitative study. Appl Ergon. 2024;117. doi: 10.1016/j.apergo.2024.104243.
- 56. Royal College of Radiologists. Overcoming Barriers to AI Implementation in Imaging: Outcomes of an RCR Expert Stakeholder Day. The Royal College of Radiologists; London: 2023.
- 57. NHS England. Understanding Healthcare Workers' Confidence in AI. NHS England; England: 2022.
- 58. Stogiannos N., O'Regan T., Scurr E., et al. Lessons on AI implementation from senior clinical practitioners: an exploratory qualitative study in medical imaging and radiotherapy in the UK. J Med Imaging Radiat Sci. 2025;56(1). doi: 10.1016/j.jmir.2024.101797.
- 59. European Society of Radiology (ESR). Current practical experience with artificial intelligence in clinical radiology: a survey of the European Society of Radiology. Insights Imaging. 2022;13(1):107. doi: 10.1186/s13244-022-01247-y.
- 60. Choi H., Sunwoo L., Cho S.J., et al. A nationwide web-based survey of neuroradiologists' perceptions of artificial intelligence software for neuro-applications in Korea. Korean J Radiol. 2023;24(5):454–464. doi: 10.3348/kjr.2022.0905.
- 61. Carlile M., Hurt B., Hsiao A., Hogarth M., Longhurst C.A., Dameff C. Deployment of artificial intelligence for radiographic diagnosis of COVID-19 pneumonia in the emergency department. J Am Coll Emerg Physicians Open. 2020;1(6):1459–1464. doi: 10.1002/emp2.12297.
- 62. Rabinovich D., Mosquera C., Torrens P., Aineseder M., Benitez S. User satisfaction with an AI system for chest X-ray analysis implemented in a hospital's emergency setting. Stud Health Technol Inform. 2022;294:8–12. doi: 10.3233/SHTI220386.
- 63. Shin H.J., Lee S., Kim S., Son N.H., Kim E.K. Hospital-wide survey of clinical experience with artificial intelligence applied to daily chest radiographs. PLoS One. 2023;18(3). doi: 10.1371/journal.pone.0282123.
- 64. Yoon D.H., Heo S., Yu J.Y., et al. Effect of an artificial-intelligent chest radiographs reporting system in an emergency department. Signa Vitae. 2023;19(6):144–151. doi: 10.22514/sv.2023.108.
- 65. Liu W., Wu Y., Zheng Z., Yu W., Bittle M.J., Kharrazi H. Evaluating artificial intelligence's role in lung nodule diagnostics: a survey of radiologists in two pilot tertiary hospitals in China. J Clin Imaging Sci. 2024;14:31. doi: 10.25259/jcis_72_2024.
- 66. Hoppe B.F., Rueckel J., Dikhtyar Y., et al. Implementing artificial intelligence for emergency radiology impacts physicians' knowledge and perception: a prospective pre- and post-analysis. Invest Radiol. 2024;59(5):404–412. doi: 10.1097/RLI.0000000000001034.
- 67. Meng F., Kottlors J., Shahzad R., et al. AI support for accurate and fast radiological diagnosis of COVID-19: an international multicenter, multivendor CT study. Eur Radiol. 2023;33(6):4280–4291. doi: 10.1007/s00330-022-09335-9.
- 68. Bowness J.S., Macfarlane A.J.R., Burckett-St Laurent D., et al. Evaluation of the impact of assistive artificial intelligence on ultrasound scanning for regional anaesthesia. Br J Anaesth. 2023;130(2):226–233. doi: 10.1016/j.bja.2022.07.049.
- 69. Agrawal A., Khatri G.D., Khurana B., Sodickson A.D., Liang Y., Dreizin D. A survey of ASER members on artificial intelligence in emergency radiology: trends, perceptions, and expectations. Emerg Radiol. 2023;30(3):267–277. doi: 10.1007/s10140-023-02121-0.
- 70. Akinmoladun J.A., Smart A.E., Atalabi O.M. Knowledge, attitude, and perception of radiologists about artificial intelligence in Nigeria. W Afr J Radiol. 2022;29(2):112–117. doi: 10.4103/wajr.wajr_42_21.
- 71.Al Mohammad B., Aldaradkeh A., Gharaibeh M., Reed W. Assessing radiologists' and radiographers' perceptions on artificial intelligence integration: opportunities and challenges. Br J Radiol. 2024;97(1156):763–769. doi: 10.1093/bjr/tqae022. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 72.Aldhafeeri F.M. Perspectives of radiographers on the emergence of artificial intelligence in diagnostic imaging in Saudi Arabia. Insights Imaging. 2022;13(1):178. doi: 10.1186/s13244-022-01319-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Botwe B.O., Akudjedu T.N., Antwi W.K., et al. The integration of artificial intelligence in medical imaging practice: perspectives of African radiographers. Radiography (Lond) 2021;27(3):861–866. doi: 10.1016/j.radi.2021.01.008. [DOI] [PubMed] [Google Scholar]
- 74.Chen Y., Stavropoulou C., Narasinkan R., Baker A., Scarbrough H. Professionals' responses to the introduction of AI innovations in radiology and their implications for future adoption: a qualitative study. BMC Health Serv Res. 2021;21(1):813. doi: 10.1186/s12913-021-06861-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 75.Coppola F., Faggioni L., Regge D., et al. Artificial intelligence: radiologists' expectations and opinions gleaned from a nationwide online survey. Radiol Med. 2021;126(1):63–71. doi: 10.1007/s11547-020-01205-y. [DOI] [PubMed] [Google Scholar]
- 76.Eschert T., Schwendicke F., Krois J., Bohner L., Vinayahalingam S., Hanisch M. A survey on the use of artificial intelligence by clinicians in dentistry and oral and maxillofacial surgery. Medicina (Kaunas) 2022;58(8):5. doi: 10.3390/medicina58081059. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 77.Pedersen M.R.V., Kusk M.W., Lysdahlgaard S., Mork-Knudsen H., Malamateniou C., Jensen J. Nordic radiographers' and students' perspectives on artificial intelligence - a cross-sectional online survey. Radiography (Lond) 2024;30(3):776–783. doi: 10.1016/j.radi.2024.02.020. [DOI] [PubMed] [Google Scholar]
- 78.Ryan M.L., O'Donovan T., McNulty J.P. Artificial intelligence: the opinions of radiographers and radiation therapists in Ireland. Radiography (Lond) 2021;27(Suppl 1):S74–S82. doi: 10.1016/j.radi.2021.07.022. [DOI] [PubMed] [Google Scholar]
- 79.Scheetz J., Rothschild P., McGuinness M., et al. A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci Rep. 2021;11(1):5193. doi: 10.1038/s41598-021-84698-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 80.Sharip H., Che Zakaria W.F.W., Leong S.S., Ali Masoud M., Mohd Junaidi M.Z.H. Radiographers’ acceptance on the integration of artificial intelligence into medical imaging practice. Environ Behav Proc J. 2023;8(25):255–260. Kuala Terengganu, Malaysia. doi: 10.21834/e-bpj.v8i25.4872. [DOI] [Google Scholar]
- 81.Hardie T., Horton T., Willis M., Warburton W. Switched on: How Do We Get The Best Out of Automation and AI in Health Care? The Health Foundation; United Kingdom (UK): 2021. [Google Scholar]
- 82.The Health Foundation AI in health care: what do the public and NHS staff think? 2024. https://www.health.org.uk/publications/long-reads/ai-in-health-care-what-do-the-public-and-nhs-staff-think Available from:
- 83.Adelsmayr G., Janisch M., Pohl M., Fuchsjäger M., Schöllnast H. Facing the AI challenge in radiology: lessons learned from a regional survey among Austrian radiologists in academic and non-academic settings on perceptions and expectations towards artificial intelligence. Digit Health. 2024;10 doi: 10.1177/20552076241298472. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 84.Cè M., Ibba S., Cellina M., et al. Radiologists' perceptions on AI integration: an in-depth survey study. Eur J Radiol. 2024;177 doi: 10.1016/j.ejrad.2024.111590. [DOI] [PubMed] [Google Scholar]
- 85.Goyal S., Sakhi P., Kalidindi S., Nema D., Pakhare A.P. Knowledge, attitudes, perceptions, and practices related to artificial intelligence in radiology among Indian radiologists and residents: a multicenter nationwide study. Cureus. 2024;16(12) doi: 10.7759/cureus.76667. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 86.Huang W., Li Y., Bao Z., et al. Knowledge, attitude and practice of radiologists regarding artificial intelligence in medical imaging. J Multidiscip Healthc. 2024;17:3109–3119. doi: 10.2147/jmdh.S451301. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 87.Pedersen M.R.V., Kusk M.W., Lysdahlgaard S., Mork-Knudsen H., Malamateniou C., Jensen J. A Nordic survey on artificial intelligence in the radiography profession - is the profession ready for a culture change? Radiography (Lond) 2024;30(4):1106–1115. doi: 10.1016/j.radi.2024.04.020. [DOI] [PubMed] [Google Scholar]
- 88.Currie G., Nelson T., Hewis J., et al. Australian perspectives on artificial intelligence in medical imaging. J Med Radiat Sci. 2022;69(3):282–292. doi: 10.1002/jmrs.581. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 89.Huisman M., Ranschaert E., Parker W., et al. An international survey on AI in radiology in 1041 radiologists and radiology residents part 2: expectations, hurdles to implementation, and education. Eur Radiol. 2021;31(11):8797–8806. doi: 10.1007/s00330-021-07782-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 90.Lombi L., Rossero E. How artificial intelligence is reshaping the autonomy and boundary work of radiologists. A qualitative study. Sociol Health Illn. 2024;46(2):200–218. doi: 10.1111/1467-9566.13702. [DOI] [PubMed] [Google Scholar]
- 91.Ng C.T., Roslan S.N.A., Chng Y.H., et al. Singapore radiographers' perceptions and expectations of artificial intelligence - a qualitative study. J Med Imaging Radiat Sci. 2022;53(4):554–563. doi: 10.1016/j.jmir.2022.08.005. [DOI] [PubMed] [Google Scholar]
- 92.Rainey C., O'Regan T., Matthew J., et al. Beauty is in the AI of the beholder: are we ready for the clinical integration of artificial intelligence in radiography? An exploratory analysis of perceived AI knowledge, skills, confidence, and education perspectives of UK radiographers. Front Digit Health. 2021;3 doi: 10.3389/fdgth.2021.739327. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 93.Wadhwa V., Alagappan M., Gonzalez A., et al. Physician sentiment toward artificial intelligence (AI) in colonoscopic practice: a survey of US gastroenterologists. Endosc Int Open. 2020;8(10):E1379–E1384. doi: 10.1055/a-1223-1926. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 94.Abuzaid M.M., Elshami W., Tekin H., Issa B. Assessment of the willingness of radiologists and radiographers to accept the integration of artificial intelligence into radiology practice. Acad Radiol. 2022;29(1):87–94. doi: 10.1016/j.acra.2020.09.014. [DOI] [PubMed] [Google Scholar]
- 95.Arif W.M. Radiologic technology students' perceptions on adoption of artificial intelligence technology in radiology. Int J Gen Med. 2024;17:3129–3136. doi: 10.2147/ijgm.S465944. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 96.Drogt J., Milota M., Veldhuis W., Vos S., Jongsma K. The promise of AI for image-driven medicine: qualitative interview study of radiologists' and pathologists' perspectives. JMIR Hum Factors. 2024;11 doi: 10.2196/52514. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 97.Abuzaid M.M., Elshami W., McConnell J., Tekin H.O. An extensive survey of radiographers from the Middle East and India on artificial intelligence integration in radiology practice. Health Technol (Berl) 2021;11(5):1045–1050. doi: 10.1007/s12553-021-00583-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 98.Meshari Ali A.A., Ahmad Alhadlaq F., Saleh Alabdullatif G., et al. Medical student's attitudes and perceptions toward artificial intelligence applications. J Educ Teach Train. 2022;13(4):151–157. doi: 10.47750/jett.2022.13.04.021. [DOI] [Google Scholar]
- 99.Alsharif W., Qurashi A., Toonsi F., et al. A qualitative study to explore opinions of Saudi Arabian radiologists concerning AI-based applications and their impact on the future of the radiology. BJR Open. 2022;4(1) doi: 10.1259/bjro.20210029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 100.Coakley S., Young R., Moore N., et al. Radiographers' knowledge, attitudes and expectations of artificial intelligence in medical imaging. Radiography (Lond) 2022;28(4):943–948. doi: 10.1016/j.radi.2022.06.020. [DOI] [PubMed] [Google Scholar]
- 101.Edzie E.K.M., Dzefi-Tettey K., Asemah A.R., et al. Perspectives of radiologists in Ghana about the emerging role of artificial intelligence in radiology. Heliyon. 2023;9(5) doi: 10.1016/j.heliyon.2023.e15558. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 102.Eltorai A.E.M., Bratt A.K., Guo H.H. Thoracic radiologists' versus computer scientists' perspectives on the future of artificial intelligence in radiology. J Thorac Imaging. 2020;35(4):255–259. doi: 10.1097/RTI.0000000000000453. [DOI] [PubMed] [Google Scholar]
- 103.Jungmann F., Jorg T., Hahn F., et al. Attitudes toward artificial intelligence among radiologists, IT specialists, and industry. Acad Radiol. 2021;28(6):834–840. doi: 10.1016/j.acra.2020.04.011. [DOI] [PubMed] [Google Scholar]
- 104.Shiang T., Garwood E., Debenedectis C.M. Artificial intelligence-based decision support system (AI-DSS) implementation in radiology residency: introducing residents to AI in the clinical setting. Clin Imaging. 2022;92:32–37. doi: 10.1016/j.clinimag.2022.09.003. [DOI] [PubMed] [Google Scholar]
- 105.Rainey C., O'Regan T., Matthew J., et al. UK reporting radiographers' perceptions of AI in radiographic image interpretation - current perspectives and future developments. Radiography (Lond) 2022;28(4):881–888. doi: 10.1016/j.radi.2022.06.006. [DOI] [PubMed] [Google Scholar]
- 106.Lim S.S., Phan T.D., Law M., et al. Non-radiologist perception of the use of artificial intelligence (AI) in diagnostic medical imaging reports. J Med Imaging Radiat Oncol. 2022;66(8):1029–1034. doi: 10.1111/1754-9485.13388. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 107.Alelyani M., Alamri S., Alqahtani M.S., et al. Radiology community attitude in Saudi Arabia about the applications of artificial intelligence in radiology. Healthcare (Basel) 2021;9(7) doi: 10.3390/healthcare9070834. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 108.Fransen S.J., Kwee T.C., Rouw D., et al. Patient perspectives on the use of artificial intelligence in prostate cancer diagnosis on MRI. Eur Radiol. 2025;35(2):769–775. doi: 10.1007/s00330-024-11012-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 109.Clements W., Thong L.P., Zia A., Moriarty H.K., Goh G.S. A prospective study assessing patient perception of the use of artificial intelligence in radiology. Asia Pac J Health Manag. 2022;17(1):46–55. doi: 10.24083/apjhm.v17i1.861. [DOI] [Google Scholar]
- 110.Ibba S., Tancredi C., Fantesini A., et al. How do patients perceive the AI-radiologists interaction? Results of a survey on 2119 responders. Eur J Radiol. 2023;165 doi: 10.1016/j.ejrad.2023.110917. [DOI] [PubMed] [Google Scholar]
- 111.Lennartz S., Dratsch T., Zopfs D., et al. Use and control of artificial intelligence in patients across the medical workflow: single-center questionnaire study of patient perspectives. J Med Internet Res. 2021;23(2) doi: 10.2196/24221. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 112.Rodler S., Kopliku R., Ulrich D., et al. Patients' trust in artificial intelligence-based decision-making for localized prostate cancer: results from a prospective trial. Eur Urol Focus. 2024;10(4):654–661. doi: 10.1016/j.euf.2023.10.020. [DOI] [PubMed] [Google Scholar]
- 113.York T., Jenney H., Jones G. Clinician and computer: a study on patient perceptions of artificial intelligence in skeletal radiography. BMJ Health Care Inform. 2020;27(3) doi: 10.1136/bmjhci-2020-100233. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 114.Baghdadi L.R., Mobeirek A.A., Alhudaithi D.R., et al. Patients' attitudes toward the use of artificial intelligence as a diagnostic tool in radiology in Saudi Arabia: cross-sectional study. JMIR Hum Factors. 2024;11 doi: 10.2196/53108. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 115.Xuereb F., Portelli D.J.L. The knowledge and perception of patients in Malta towards artificial intelligence in medical imaging. J Med Imaging Radiat Sci. 2024;55(4) doi: 10.1016/j.jmir.2024.101743. [DOI] [PubMed] [Google Scholar]
- 116.Aldhafeeri F.M. Navigating the ethical landscape of artificial intelligence in radiography: a cross-sectional study of radiographers' perspectives. BMC Med Ethics. 2024;25(1):52. doi: 10.1186/s12910-024-01052-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 117.Rainey C., O'Regan T., Matthew J., et al. An insight into the current perceptions of UK radiographers on the future impact of AI on the profession: a cross-sectional survey. J Med Imaging Radiat Sci. 2022;53(3):347–361. doi: 10.1016/j.jmir.2022.05.010. [DOI] [PubMed] [Google Scholar]
- 118.Antwi W.K., Akudjedu T.N., Botwe B.O. Artificial intelligence in medical imaging practice in Africa: a qualitative content analysis study of radiographers' perspectives. Insights Imaging. 2021;12(1):80. doi: 10.1186/s13244-021-01028-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 119.Chen Y., Wu Z., Wang P., et al. Radiology residents' perceptions of artificial intelligence: nationwide cross-sectional survey study. J Med Internet Res. 2023;25 doi: 10.2196/48249. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 120.Huisman M., Ranschaert E., Parker W., et al. An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: fear of replacement, knowledge, and attitude. Eur Radiol. 2021;31(9):7058–7066. doi: 10.1007/s00330-021-07781-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 121.Stogiannos N., O'Regan T., Scurr E., et al. AI implementation in the UK landscape: knowledge of AI governance, perceived challenges and opportunities, and ways forward for radiographers. Radiography (Lond) 2024;30(2):612–621. doi: 10.1016/j.radi.2024.01.019. [DOI] [PubMed] [Google Scholar]
- 122.Allam A.H., Eltewacy N.K., Alabdallat Y.J., et al. Knowledge, attitude, and perception of Arab medical students towards artificial intelligence in medicine and radiology: a multi-national cross-sectional study. Eur Radiol. 2024;34(7):1–14. doi: 10.1007/s00330-023-10509-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 123.Caparros Galan G., Sendra Portero F. Medical students' perceptions of the impact of artificial intelligence in radiology. Radiologia (Engl Ed) 2022;64(6):516–524. doi: 10.1016/j.rxeng.2021.03.008. [DOI] [PubMed] [Google Scholar]
- 124.Qurashi A.A., Alanazi R.K., Alhazmi Y.M., Almohammadi A.S., Alsharif W.M., Alshamrani K.M. Saudi radiology personnel's perceptions of artificial intelligence implementation: a cross-sectional study. J Multidiscip Healthc. 2021;14:3225–3231. doi: 10.2147/JMDH.S340786. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 125.Hamd Z.Y., Alorainy A.I., Aldhahi M.I., et al. Evaluation of the impact of artificial intelligence on clinical practice of radiology in Saudi Arabia. J Multidiscip Healthc. 2024;17:4745–4756. doi: 10.2147/jmdh.S465508. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 126.Adams S.J., Tang R., Babyn P. Patient perspectives and priorities regarding artificial intelligence in radiology: opportunities for patient-centered radiology. J Am Coll Radiol. 2020;17(8):1034–1036. doi: 10.1016/j.jacr.2020.01.007. [DOI] [PubMed] [Google Scholar]
- 127.Bahakeem B.H., Alobaidi S.F., Alzahrani A.S., et al. The general population's perspectives on implementation of artificial intelligence in radiology in the western region of Saudi Arabia. Cureus. 2023;15(4) doi: 10.7759/cureus.37391. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 128.Akudjedu T.N., Torre S., Khine R., Katsifarakis D., Newman D., Malamateniou C. Knowledge, perceptions, and expectations of Artificial intelligence in radiography practice: a global radiography workforce survey. J Med Imaging Radiat Sci. 2023;54(1):104–116. doi: 10.1016/j.jmir.2022.11.016. [DOI] [PubMed] [Google Scholar]
- 129.Zhang Z., Citardi D., Wang D., Genc Y., Shan J., Fan X. Patients' perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data. Health Informatics J. 2021;27(2) doi: 10.1177/14604582211011215. [DOI] [PubMed] [Google Scholar]
- 130.Jutzi T.B., Krieghoff-Henning E.I., Holland-Letz T., et al. Artificial intelligence in skin cancer diagnostics: the patients' perspective. Front Med (Lausanne) 2020;7 doi: 10.3389/fmed.2020.00233. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 131.Miro Catalina Q., Femenia J., Fuster-Casanovas A., et al. Knowledge and perception of the use of AI and its implementation in the field of radiology: cross-sectional study. J Med Internet Res. 2023;25 doi: 10.2196/50728. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 132.Stogiannos N., Litosseliti L., O'Regan T., et al. Black box no more: a cross-sectional multi-disciplinary survey for exploring governance and guiding adoption of AI in medical imaging and radiotherapy in the UK. Int J Med Inform. 2024;186 doi: 10.1016/j.ijmedinf.2024.105423. [DOI] [PubMed] [Google Scholar]
- 133.Barreiro-Ares A., Morales-Santiago A., Sendra-Portero F., Souto-Bayarri M. Impact of the rise of artificial intelligence in radiology: what do students think? Int J Environ Res Public Health. 2023;20(2) doi: 10.3390/ijerph20021589. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 134.Angkurawaranon S., Inmutto N., Bannangkoon K., et al. Attitudes and perceptions of Thai medical students regarding artificial intelligence in radiology and medicine. BMC Med Educ. 2024;24(1):1188. doi: 10.1186/s12909-024-06150-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 135.Hashmi O.U., Chan N., de Vries C.F., Gangi A., Jehanli L., Lip G. Artificial intelligence in radiology: trainees want more. Clin Radiol. 2023;78(4):e336–e341. doi: 10.1016/j.crad.2022.12.017. [DOI] [PubMed] [Google Scholar]
- 136.Sur J., Bose S., Khan F., Dewangan D., Sawriya E., Roul A. Knowledge, attitudes, and perceptions regarding the future of artificial intelligence in oral radiology in India: a survey. Imaging Sci Dent. 2020;50(3):193–198. doi: 10.5624/isd.2020.50.3.193. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 137.Farouk S., Osman A.M., Awadallah S.M., Abdelrahman A.S. The added value of using artificial intelligence in adult chest X-rays for nodules and masses detection in daily radiology practice. Egypt J Radiol Nucl Med. 2023;54(1):142. doi: 10.1186/s43055-023-01093-y. [DOI] [Google Scholar]
- 138.Fujioka T., Kubota K., Hsu J.F., et al. Examining the effectiveness of a deep learning-based computer-aided breast cancer detection system for breast ultrasound. J Med Ultrason (2001) 2023;50(4):511–520. doi: 10.1007/s10396-023-01332-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 139.Han X., He Y., Luo N., et al. The influence of artificial intelligence assistance on the diagnostic performance of CCTA for coronary stenosis for radiologists with different levels of experience. Acta Radiol. 2023;64(2):496–507. doi: 10.1177/02841851221089263. [DOI] [PubMed] [Google Scholar]
- 140.Kim T., Goh T.S., Lee J.S., Lee J.H., Kim H., Jung I.D. Transfer learning-based ensemble convolutional neural network for accelerated diagnosis of foot fractures. Phys Eng Sci Med. 2023;46(1):265–277. doi: 10.1007/s13246-023-01215-w. [DOI] [PubMed] [Google Scholar]
- 141.Kim J.H., Han S.G., Cho A., Shin H.J., Baek S.E. Effect of deep learning-based assistive technology use on chest radiograph interpretation by emergency department physicians: a prospective interventional simulation-based study. BMC Med Inform Decis Mak. 2021;21(1):311. doi: 10.1186/s12911-021-01679-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 142.Nam J.G., Kim M., Park J., et al. Development and validation of a deep learning algorithm detecting 10 common abnormalities on chest radiographs. Eur Respir J. 2021;57(5) doi: 10.1183/13993003.03061-2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 143.Liu Y., Chen C., Wang K., et al. The auxiliary diagnosis of thyroid echogenic foci based on a deep learning segmentation model: a two-center study. Eur J Radiol. 2023;167 doi: 10.1016/j.ejrad.2023.111033. [DOI] [PubMed] [Google Scholar]
- 144.Wang D.Y., Liu S.G., Ding J., et al. A deep learning model enhances clinicians' diagnostic accuracy to more than 96% for anterior cruciate ligament ruptures on magnetic resonance imaging. Arthroscopy. 2024;40(4):1197–1205. doi: 10.1016/j.arthro.2023.08.010. [DOI] [PubMed] [Google Scholar]
- 145.Yang W., Chen C., Yang Y., et al. Diagnostic performance of deep learning-based vessel extraction and stenosis detection on coronary computed tomography angiography for coronary artery disease: a multi-reader multi-case study. Radiol Med. 2023;128(3):307–315. doi: 10.1007/s11547-023-01606-9. [DOI] [PubMed] [Google Scholar]
- 146.Liu J., Zhao L., Han X., Ji H., Liu L., He W. Estimation of malignancy of pulmonary nodules at CT scans: effect of computer-aided diagnosis on diagnostic performance of radiologists. Asia Pac J Clin Oncol. 2021;17(3):216–221. doi: 10.1111/ajco.13362. [DOI] [PubMed] [Google Scholar]
- 147.Mehralivand S., Harmon S.A., Shih J.H., et al. Multicenter multireader evaluation of an artificial intelligence-based attention mapping system for the detection of prostate cancer with multiparametric MRI. AJR Am J Roentgenol. 2020;215(4):903–912. doi: 10.2214/AJR.19.22573. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 148.Hwang E.J., Kim H., Yoon S.H., Goo J.M., Park C.M. Implementation of a deep learning-based computer-aided detection system for the interpretation of chest radiographs in patients suspected for COVID-19. Korean J Radiol. 2020;21(10):1150–1160. doi: 10.3348/kjr.2020.0536. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 149.Li C., Li J., Tan T., Chen K., Xu Y., Wu R. Application of ultrasonic dual-mode artificially intelligent architecture in assisting radiologists with different diagnostic levels on breast masses classification. Diagn Interv Radiol. 2021;27(3):315–322. doi: 10.5152/dir.2021.20018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 150.van Winkel S.L., Rodríguez-Ruiz A., Appelman L., et al. Impact of artificial intelligence support on accuracy and reading time in breast tomosynthesis image interpretation: a multi-reader multi-case study. Eur Radiol. 2021;31(11):8682–8691. doi: 10.1007/s00330-021-07992-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 151.Sim Y., Chung M.J., Kotter E., et al. Deep convolutional neural network-based software improves radiologist detection of malignant lung nodules on chest radiographs. Radiology. 2020;294(1):199–209. doi: 10.1148/radiol.2019182465. [DOI] [PubMed] [Google Scholar]
- 152.Zhang B., Jia C., Wu R., et al. Improving rib fracture detection accuracy and reading efficiency with deep learning-based detection software: a clinical evaluation. Br J Radiol. 2021;94(1118) doi: 10.1259/bjr.20200870. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 153.Bennani S., Regnard N.E., Ventre J., et al. Using AI to improve radiologist performance in detection of abnormalities on chest radiographs. Radiology. 2023;309(3) doi: 10.1148/radiol.230860. [DOI] [PubMed] [Google Scholar]
- 154.Seah J.C.Y., Tang C.H.M., Buchlak Q.D., et al. Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study. Lancet Digit Health. 2021;3(8):e496–e506. doi: 10.1016/S2589-7500(21)00106-0. [DOI] [PubMed] [Google Scholar]
- 155.Ahn J.S., Ebrahimian S., McDermott S., et al. Association of artificial intelligence-aided chest radiograph interpretation with reader performance and efficiency. JAMA Netw Open. 2022;5(8) doi: 10.1001/jamanetworkopen.2022.29289. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 156.Ha E.J., Lee J.H., Lee D.H., et al. Artificial intelligence model assisting thyroid nodule diagnosis and management: a multicenter diagnostic study. J Clin Endocrinol Metab. 2024;109(2):527–535. doi: 10.1210/clinem/dgad503. [DOI] [PubMed] [Google Scholar]
- 157.Tong W.J., Wu S.H., Cheng M.Q., et al. Integration of artificial intelligence decision aids to reduce workload and enhance efficiency in thyroid nodule management. JAMA Netw Open. 2023;6(5) doi: 10.1001/jamanetworkopen.2023.13674. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 158.Choi S.Y., Kim J.H., Chung H.S., Lim S., Kim E.H., Choi A. Impact of a deep learning-based brain CT interpretation algorithm on clinical decision-making for intracranial hemorrhage in the emergency department. Sci Rep. 2024;14(1) doi: 10.1038/s41598-024-73589-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 159.Novak A., Ather S., Gill A., et al. Evaluation of the impact of artificial intelligence-assisted image interpretation on the diagnostic performance of clinicians in identifying pneumothoraces on plain chest X-ray: a multi-case multi-reader study. Emerg Med J. 2024;41(10):602–609. doi: 10.1136/emermed-2023-213620. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 160.Liang F., Song Y., Huang X., et al. Assessing breast disease with deep learning model using bimodal bi-view ultrasound images and clinical information. iScience. 2024;27(7) doi: 10.1016/j.isci.2024.110279. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 161.Lim D.S.W., Makmur A., Lei Z., et al. Improved productivity using deep learning-assisted reporting for lumbar spine MRI. Radiology. 2022;305(1):160–166. doi: 10.1148/radiol.220076. [DOI] [PubMed] [Google Scholar]
- 162.Bernstein M.H., Atalay M.K., Dibble E.H., et al. Can incorrect artificial intelligence (AI) results impact radiologists, and if so, what can we do about it? A multi-reader pilot study of lung cancer detection with chest radiography. Eur Radiol. 2023;33(11):8263–8269. doi: 10.1007/s00330-023-09747-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 163.Hosny A., Parmar C., Quackenbush J., Schwartz L.H., Aerts H.J.W.L. Artificial intelligence in radiology. Nat Rev Cancer. 2018;18(8):500–510. doi: 10.1038/s41568-018-0016-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 164.Khalifa M., Albadawy M. AI in diagnostic imaging: revolutionising accuracy and efficiency. Comput Methods Progr Biomed Update. 2024;5 doi: 10.1016/j.cmpbup.2024.100146. [DOI] [Google Scholar]
- 165.Morley J. AI and the NHS: is it the silver bullet that will improve the health service’s productivity? Guest blog. Nuffield Trust; United Kingdom (UK): 2024. [Google Scholar]
- 166.Recht M.P., Dewey M., Dreyer K., et al. Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations. Eur Radiol. 2020;30(6):3576–3584. doi: 10.1007/s00330-020-06672-5. [DOI] [PubMed] [Google Scholar]
- 167.Vo V., Chen G., Aquino Y.S.J., Carter S.M., Do Q.N., Woode M.E. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: a systematic review and thematic analysis. Soc Sci Med. 2023;338 doi: 10.1016/j.socscimed.2023.116357. [DOI] [PubMed] [Google Scholar]
- 168.Geis J.R., Brady A., Wu C.C., et al. Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Insights Imaging. 2019;10(1):101. doi: 10.1186/s13244-019-0785-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 169.European Union . Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations. European Union; Europe: 2024. [Google Scholar]
- 170.National Institute for Health and Care Excellence (NICE) Artificial intelligence (AI)-derived software to help clinical decision making in stroke. 2024. https://www.nice.org.uk/guidance/dg57/resources/artificial-intelligence-aiderived-software-to-help-clinical-decision-making-in-stroke-pdf-1053876693445 Available from:
- 171.Karpathakis K., Morley J., Floridi L. A justifiable investment in AI for healthcare: aligning ambition with reality. Minds Mach. 2024;34:38. [Google Scholar]
- 172.Doherty G., McLaughlin L., Hughes C., McConnell J., Bond R., McFadden S. A scoping review of educational programmes on artificial intelligence (AI) available to medical imaging staff. Radiography (Lond) 2024;30(2):474–482. doi: 10.1016/j.radi.2023.12.019. [DOI] [PubMed] [Google Scholar]
Associated Data
Supplementary Materials
Overall review summary table.
Quality assessment.
Quality assessment (cost papers only).

