
Table 2.

Main findings from the three themes of empirical studies on medical AI ethics.

Three types of studies, with main findings listed under each:
Stakeholders’ knowledge of and attitudes toward medical AI and ethics (n = 21)
  • Patients and family members expressed concern that AI technologies would disengage physicians from the healthcare process.

  • Clinicians shared concerns about responsibility, data security, privacy, bias, and human interaction/effective clinician–patient communication.

  • Autonomy was another common ethical concern identified by clinicians, encompassing both patient autonomy and clinician autonomy.

  • Different stakeholders held diverse views toward medical AI and had different ethical concerns.

Creating theoretical models of medical AI adoption (n = 5)
  • Ethical considerations had a small effect on medical AI adoption.

  • Ethical concerns included: privacy, mistrust of AI, transparency, responsibility, fairness, autonomy/shared decision-making, and explainability.

Identifying and correcting bias in medical AI (n = 8)
  • Most studies identified AI underperformance for underserved populations, such as women, racial and ethnic minorities, and patients on public insurance, but the results were inconsistent overall.

  • These studies highlighted the importance of testing AI performance across different demographic groups at each step of algorithm development in future projects.