
Table 1. Characteristics of the included studies.

Study | Region | Study design | Content | Participants | Main findings
Graziani et al [13], 2023 | Worldwide | Qualitative | Explainability | Health care professionals, industry practitioners, and academic researchers
  • Explainability in AI^a refers to the ability to understand and interpret how models make decisions. It is typically categorized into 3 types: intrinsic explainability, where models are inherently interpretable or designed to be transparent; global explainability, which focuses on understanding the overall behavior of a model through methods such as feature importance and visualization; and local explainability, which aims to explain individual predictions using techniques such as local approximations, counterfactual explanations, or adversarial examples.
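
The distinction between global and local explainability drawn above can be made concrete in code. The following is a minimal sketch, assuming scikit-learn, the shap package, and a toy dataset that are not part of the reviewed study: it ranks features by overall importance (global) and then attributes one individual prediction to its input features (local).

```python
# Minimal sketch: global vs local explainability on a toy model.
# The dataset, model, and shap usage are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
import shap

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global explainability: overall model behavior via a feature importance ranking.
global_ranking = sorted(
    zip(X.columns, model.feature_importances_), key=lambda t: t[1], reverse=True
)
print("Top global features:", global_ranking[:5])

# Local explainability: per-feature SHAP contributions for a single prediction.
explainer = shap.TreeExplainer(model)
local_contributions = explainer.shap_values(X.iloc[[0]])
print("SHAP values for the first record:", local_contributions)
```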

Marco-Ruiz et al [29], 2024 | Europe and America | Qualitative | Integrability | AI technology developers in hospitals, clinicians using AI, and clinical managers involved in adopting AI, among others
  • Interviewees highlighted that varying protocols across health care organizations can affect AI system effectiveness, emphasizing the need for local validation to assess performance and workflow impact. They also stressed the importance of tools such as process mining and visualizations to analyze patient pathways and optimize complex workflows using real-world data.

Liaw et al [23], 2023 | Multicountry | Mixed methods | Integrability | Clinicians managing diabetes
  • Interviewees expressed concerns that AI tools might affect patient outcomes and clinical workflows, leading to overdiagnosis, increased costs, exacerbated health disparities, and alert fatigue. The tool’s utility is limited for doctors who are already familiar with their patients. Inaccurate data can cause false alarms or missed diagnoses, and evidence supporting the tool’s accuracy is lacking. A user-centered design is recommended to improve the system.

Panagoulias et al [30], 2023 | Greece | Quantitative | Explainability | Medical personnel (including medical students and medical practitioners)
  • Clinicians rated diagnostic information, certainty, and related reasoning as very important, particularly when their diagnoses conflicted with AI recommendations.

Wang et al [22], 2021 | China | Qualitative | Integrability | Clinicians in rural clinics
  • Heavy workload: rural doctors handle many patients daily with frequent interruptions, leaving little time for detailed communication or documentation, hindering AI-CDSS^b use.

  • Resource limitations: lack of necessary equipment and medications in clinics makes many AI-CDSS recommendations impractical, reducing their utility.

  • Design mismatch: AI-CDSS are designed around time-consuming, standardized processes that do not fit fast-paced rural practice, and poor integration with other systems leads to data and recommendation issues.

Zheng et al [31], 2024 | America | Qualitative | Explainability and integrability | Pediatric asthma clinicians
  • User needs and challenges in using ML^c systems fell into 3 main areas: how well the system fits into daily workflows (eg, avoiding alert fatigue), the need for clear explanations behind system decisions, and difficulties adapting the tool to real-world settings.

Wolf and Ringland [32], 2020 | America | Qualitative | Explainability | Users and developers involved in the design and use of XAI^d systems
  • Nonexpert users preferred simple, intuitive explanations of AI decisions, using clear language, visual tools, and real-life examples. They focused on fairness, transparency, and the impact on daily life, needing straightforward charts and simplified explanations to build trust.

Morais et al [33], 2023 | Brazil | Qualitative | Explainability | Oncologists
  • Visualization helps: experts found visual elements useful for identifying major and minor influencing features.

  • They also expressed a need for more detail: participants wanted more traceability to see how results are generated, enhancing confidence in decisions.

Helman et al [34], 2023 | America | Qualitative | Explainability and integrability | Doctors, nurse practitioners, and physician assistants
  • Doctors primarily focused on several key aspects when using AI tools: analytic transparency, graphical explainability, the impact on clinical practice, the value of integrating dynamic patient data trends, decision weighting (how much to trust and balance AI outputs in real decisions), and display location—including usability and how the interface is viewed by patients and families.

Ghanvatkar and Rajan [11], 2024 | Singapore | Quantitative | Explainability | Clinicians
  • The integration of XGB^e with SHAP^f, as well as the combination of LR^g with SHAP, showed high usefulness because of their strong conceptual explanations, with the XGB and SHAP combination performing best in prediction but lowest in fidelity. Usefulness scores also improved during neural network training, indicating better alignment between explanation importance and predictive power over time.
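
The model-explainer pairings named in this entry (XGB with SHAP and LR with SHAP) can be sketched as follows. This is a minimal illustration under assumptions not taken from Ghanvatkar and Rajan: a synthetic dataset and the xgboost, scikit-learn, and shap packages. It only shows how the pairings are wired up, not the study's usefulness or fidelity scoring.

```python
# Minimal sketch of the XGB + SHAP and LR + SHAP pairings; the data and
# packages are illustrative assumptions, not the study's setup.
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# XGB + SHAP: gradient-boosted trees explained with TreeExplainer.
xgb_model = xgb.XGBClassifier(n_estimators=50).fit(X, y)
xgb_shap = shap.TreeExplainer(xgb_model).shap_values(X)

# LR + SHAP: logistic regression explained with LinearExplainer.
lr_model = LogisticRegression(max_iter=1000).fit(X, y)
lr_shap = shap.LinearExplainer(lr_model, X).shap_values(X)

print("XGB SHAP attribution matrix:", xgb_shap.shape)
print("LR SHAP attribution matrix:", lr_shap.shape)
```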

Kinney et al [35], 2024 | Portugal | Qualitative | Explainability and integrability | Doctors, educators, and students
  • Transparency in sources: users need to know where AI obtains its information to trust it, similar to how students cite sources.

  • Impact on doctor-patient relationships: doctors fear AI could reduce personal interaction, mirroring issues with EHRs^h.

  • Increased burnout: additional AI-driven tasks may increase physician stress and lead to uncritical reliance on AI suggestions.

Burgess et al [36], 2023 | America | Qualitative | Integrability | Endocrinology clinicians
  • The study proposes these design principles: (1) ensure algorithms are practical for clinical settings to avoid unrealistic insights; (2) allow clinicians to consider patient-specific factors and maintain control over model outputs; (3) avoid adding “research” tasks to patient visits; and (4) focus on aiding complex decisions in the workflow, not repeating known information.

Yoo et al [37], 2023 | South Korea | Qualitative | Integrability | Medical and nursing staff in emergency departments and intensive care units of tertiary care hospitals
  • Anticipated benefits: most participants believe medical AI can reduce decision-making time and handle repetitive tasks, easing workloads and improving efficiency.

  • Main concerns: worries include workflow disruptions, added tasks, reduced clinical autonomy, overreliance on algorithms, skill decline, alert fatigue, and the inability to integrate information beyond electronic records.

Schoonderwoerd et al [38], 2021 | Netherlands | Quantitative | Explainability | Pediatric clinicians
  • Diagnosis explanations are essential: clinicians agree that understanding how the CDSS^i arrives at a diagnosis is important for trust and decision-making.

  • Need for personalized explanations: while most information elements are seen as valuable, preferences vary, suggesting that explanations should be tailored to individual needs.

  • Balance detail and overreliance: key explanation elements include evidence used, supporting or contradicting data, certainty level, missing information, alternative diagnoses, and past performance—but too much detail may lead to blind trust in the system.
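
The explanation elements listed in the preceding bullet lend themselves to a simple data structure. The sketch below is a hypothetical schema, not one proposed by Schoonderwoerd et al; the field names and types are assumptions that simply mirror the elements enumerated above.

```python
# Hypothetical container for the explanation elements listed above; field
# names and types are illustrative assumptions, not the study's schema.
from dataclasses import dataclass
from typing import List

@dataclass
class DiagnosisExplanation:
    diagnosis: str
    supporting_evidence: List[str]     # data that supports the diagnosis
    contradicting_evidence: List[str]  # data that argues against it
    certainty: float                   # confidence level, 0.0 to 1.0
    missing_information: List[str]     # inputs that were unavailable
    alternative_diagnoses: List[str]   # other candidates considered
    past_performance: str = ""         # note on the system's historical accuracy

example = DiagnosisExplanation(
    diagnosis="asthma exacerbation",
    supporting_evidence=["wheezing on exam", "reduced peak flow"],
    contradicting_evidence=["no fever"],
    certainty=0.78,
    missing_information=["recent spirometry"],
    alternative_diagnoses=["viral bronchitis"],
)
print(example.diagnosis, example.certainty)
```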

Hong et al [39], 2020 | America | Qualitative | Explainability | Practitioners in various industries, such as health care, software companies, and social media
  • Understanding model behavior: during validation, builders need to know why a model produces a specific output for a given case, especially when it performs unexpectedly. Methods such as LIME^j and SHAP help provide these insights.

  • Feature importance analysis: builders assess model logic by examining feature importance, focusing not only on key features but also on less important ones to gain a complete understanding of decision-making.
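
Both bullets for this study center on inspecting why a model produced a particular output. As a minimal sketch of that debugging step, assuming the lime and scikit-learn packages and a toy model that are not drawn from Hong et al, a single prediction can be decomposed into local feature contributions:

```python
# Minimal sketch: explaining one specific prediction with LIME.
# The dataset, model, and lime usage are illustrative assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Ask why the model produced its output for one case that behaved unexpectedly.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top local feature contributions for this case
```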

Gu et al [16], 2023 | America | Qualitative | Integrability | Medical professionals in pathology
  • Enhanced accuracy and efficiency: the xPath system improved diagnostic accuracy and efficiency, reducing workload and boosting confidence.

  • Traceable evidence for transparency: it provides a layered evidence chain (eg, heat maps and confidence scores), making AI diagnoses transparent and verifiable.

  • User-friendly design: the system aligns with pathologists’ workflow, supporting easy verification and adjustment of AI recommendations, thus enhancing usability and adoption.

Wenderott et al [40], 2024 | Germany | Qualitative | Integrability | Radiologists
  • The key barriers to AI adoption are (1) workflow delays, (2) extra steps, and (3) inconsistent AI-CAD^k performance. The key facilitators are (1) good self-organization and (2) software usability.

Verma et al [41], 2023 | Switzerland | Qualitative | Explainability and integrability | Clinicians involved in cancer care (large health care organizations)
  • Integration challenges: integrating AI into clinical practice is difficult because of issues with data integration, ontologies, and generating actionable insights.

  • Trust and generalization: clinicians distrust “black-box” models, and AI performance varies across different populations, limiting widespread use.

Tonekaboni et al [42], 2019 | Canada | Qualitative | Explainability | Clinicians in intensive care units and emergency departments
  • Transparency: doctors need to know the model’s context and limitations, such as missing patient information, to trust it even if accuracy is not perfect.

  • Feature explanation: clearly explaining the features used in decisions helps build trust and guides appropriate use in different patient groups.

  • Visualization: well-designed visualizations enhance understanding and support clinical reasoning.

Brennen [43], 2020 | America | Qualitative | Explainability | End users and policy makers
  • Model debugging and understanding: XAI tools should help users understand model behavior (eg, using LIME or SHAP), but they often require advanced knowledge of ML.

  • Bias detection: tools should identify and explain systemic bias in models and provide context to assess fairness and reliability.

  • Building trust: clearly presenting the data and logic behind decisions helps users understand and trust AI systems.
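
The bias-detection point above can be illustrated with one very simple group-level check. The sketch below is a hypothetical example under assumed data; it computes the difference in positive-prediction rates between two groups (demographic parity difference), one of many possible fairness measures.

```python
# Minimal sketch of a simple bias check: difference in positive-prediction
# rates across a sensitive attribute. Data values are illustrative assumptions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between group 0 and group 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Hypothetical binary predictions and a binary sensitive attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print("Demographic parity difference:", demographic_parity_difference(preds, groups))
```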

Fogliato et al [44], 2022 | America | Quantitative | Integrability | Radiologists
  • One-stage workflow boosts AI reliance: participants more closely followed AI suggestions, especially on noncritical points.

  • AI outperforms but risks overtrust: AI performed better, but improvements were largely because of reliance on AI, even when incorrect.

  • Workflow impacts experience: 1-stage users felt increased confidence and speed; 2-stage users found it more complex and burdensome.

Salwei et al [45], 2021 | America | Qualitative | Integrability | Emergency physicians
  • The study identified 25 components for integrating a human factors–based CDSS^l into emergency departments, organized into 4 dimensions: time (when the CDSS is used), flow (how it integrates into workflows), patient journey scope (which care stages it covers), and level (integration at individual, team, and organizational levels).

^a AI: artificial intelligence.

^b AI-CDSS: artificial intelligence clinical decision support systems.

^c ML: machine learning.

^d XAI: explainable artificial intelligence.

^e XGB: extreme gradient boosting.

^f SHAP: Shapley additive explanations.

^g LR: logistic regression.

^h EHR: electronic health record.

^i CDSS: clinical decision support systems.

^j LIME: local interpretable model-agnostic explanations.

^k AI-CAD: artificial intelligence–based computer-aided detection.

^l CDSS: clinical decision support system.