Table 2. Contextual factors for the 5 case studies.
| Case study | Environment | Actors | Framing factors | Causes of trust | Effects of trust |
| --- | --- | --- | --- | --- | --- |
| Diagnostic AI^a (chest x-rays) | Image-driven diagnostics (radiology) | Medical professionals and AI system; patients to a limited extent | Discourse regarding job security and potential AI replacement | Accuracy, design transparency, and human competencies and virtues | Acceptance of systems by physicians, potentially at the cost of deskilling |
| Predictive AI (ICU^b setting) | Clinical setting of an ICU | Physicians, nurses, and AI system; patients to a limited extent; potentially caregivers and family members | Stressful situations, potentially requiring action under time pressure; risk of severe consequences; the need to synthesize too much information; and alert fatigue | Accuracy, transparency, and explainability; fairness; exclusion of harm; and rigorous testing (eg, in the form of an RCT^c) | Acceptance and use of the system, potentially at the risk of erroneous clinical decisions following misleading predictions |
| Public health AI (disease outbreak model) | Nonclinical setting: publicly accessible web-based tool for the analysis of heterogeneous data | Developers, public health practitioners, policy makers, and the public | Stage and severity of the disease outbreak; usability aspects (eg, intuitive interface or data visualization); and, potentially, antiscience sentiments and conspiracy theories regarding the disease and health service providers | Historical accuracy and endorsement by authorities | Acceptance and use of the system by public decision makers (public health experts and policy makers) |
| Assistive AI (neurorehabilitation) | Clinical neurorehabilitation; elective use of different technologies for different activities, potentially every day | Patients and their caregivers and social circle, potentially including employers, engineers, and regulators | Clinical setting; science fiction literature and cinema; and public attitudes and policies on related technologies | Accuracy, privacy, lack of conflicts of interest, independence, long-term technical support, and user understanding of the underlying technology | Technology acceptance by users, health care professionals, and health care providers; potentially facilitated reimbursement and increased affordability and accessibility |
| Resource-allocating AI (predicting costs and needs) | Health service providers and health care system | The developing company providing the algorithm, the health system implementing it, the clinicians interacting with it, the patients whose care is influenced by the algorithm, and regulatory bodies and algorithmic auditors | Media reporting on algorithms and their impact, and theories of institutional trust | Reliability, accuracy, transparency, design, and model-centric explanations | Acceptance and use in health care systems |
^a AI: artificial intelligence.
^b ICU: intensive care unit.
^c RCT: randomized controlled trial.