Radiology: Artificial Intelligence
Editorial
2019 Mar 27;1(2):e194001. doi: 10.1148/ryai.2019194001

Do the Right Thing

Charles E Kahn Jr
PMCID: PMC8017388  PMID: 33937790

Introduction

Ethics is knowing the difference between what you have a right to do and what is right to do.

--Potter Stewart

As researchers, developers, physicians, and scientists, it’s our job to ensure that we use proper methods so that the results we achieve are valid. In other words, we try to do things right.

But it’s even more important that we do the right thing.

Recent advances in artificial intelligence (AI) have engendered much hope that this technology can be harnessed to improve medical care. Yet many in the field have raised concerns about issues such as bias, lack of transparency, and loss of privacy.

The foundation of medical ethics is primum non nocere—first, do no harm. Developing and deploying AI systems in medicine entails the same obligation: we must strive to identify and mitigate any way in which the use of AI could harm patients or health care workers.

To help ensure that AI realizes the benefits it promises for patients, physicians, and the health care community, the American Medical Association has adopted new policy statements to provide a broad framework for the use of AI in health care (1). Ethicists have begun to explore whether an AI program that appears to have a better success rate than humans might be used to replace or augment humans. Concerns include AI’s “black-box problem” (the inability to understand how output is derived from input) and automation bias (overreliance on AI) (2). As Jalal et al. write, “[t]he challenge will come from not only the development of these systems and how to make them accurate, but also how to ensure they are utilized ethically” (3).

Ethical principles should guide all of us in planning, conducting, reporting, and publishing work on AI. To that end, I’m pleased to write about the recent release of two statements on ethics of AI in radiology.

A consortium of European and North American radiology societies has released its draft guidelines on the ethics of AI in radiology (4). The contributing organizations, in alphabetical order, are: American Association of Physicists in Medicine (AAPM), American College of Radiology (ACR), Canadian Association of Radiologists (CAR), European Society of Medical Imaging Informatics (EuSoMII), European Society of Radiology (ESR), Radiological Society of North America (RSNA), and Society for Imaging Informatics in Medicine (SIIM).

The statement is available online and is open for comments from the community until April 15, 2019, at https://www.acrdsi.org/News-and-Events/Call-for-Comments.

The 34-page document discusses in detail the ethical issues arising from three main AI topics: data, algorithms and trained models, and practice. The document’s structure follows that outlined in an earlier report on the subject (5).

The multisociety report states: “Everyone involved with radiology AI has a duty to understand it deeply, to appreciate when and how hazards may manifest, to be transparent about them, and to do all they can to mitigate any harm they might cause. In particular, radiologists have a duty to understand both the rewards and risks of AI agents they use, to alert patients and stakeholders to risks, and to monitor AI products to guard against harm.”

In addition, the Royal Australian and New Zealand College of Radiologists (RANZCR) recently released a draft statement entitled “Ethical Principles for AI in Medicine” (6). Their statement sets out eight overarching principles—such as safety, transparency, explainability, and avoidance of bias—to guide considerations of AI in radiology. The statement also seeks to address privacy and protection of data, diagnostic and therapeutic decision making, and liability for decisions made. They link these principles to broader ethical frameworks for the practice of medicine in Australia and New Zealand and to RANZCR’s Code of Ethics for clinical radiologists and radiation oncologists.

The RANZCR statement is available for review online (6); comments can be submitted to fcr@ranzcr.edu.au until April 26, 2019.

I encourage everyone interested in radiology AI to read through these statements and to provide feedback to the sponsoring organizations. Putting ethical principles first will help define a way forward that ensures AI achieves the goals we’ve set for it.

As we seek to advance the use of AI in medicine, ethical guidelines such as these will help us do the right thing.

References

