Table 2. Risks and ethical principles.

| Risks | Beneficence | Nonmaleficence | Autonomy | Justice |
| --- | --- | --- | --- | --- |
| Errors | —ᵃ | The chatbot makes a wrong recommendation to patients because of a bug in the system. | — | — |
| Discrimination | — | The chatbot has a bias that prevents it from understanding requests related to women’s health. | — | The chatbot provides more appropriate recommendations for men than for women. |
| Stereotyping | The chatbot responds to the patient in derogatory terms. | The chatbot’s recommendations based on stereotypes harm the patient. | — | The chatbot gives unfair and derogatory responses to patients. |
| Exclusion | — | The chatbot excludes certain users because of language and literacy skills, withholding medical support. | — | The chatbot excludes certain patients, and no alternative is provided. |
| Stigma | — | Use of the chatbot is not anonymous and leads to stigmatization of certain patients. | — | — |
| Lack of privacy | — | — | Data leaks from the chatbot system lead to a breach of confidentiality. | — |
| Poor data governance | — | — | Patients do not consent to have their data collected by the chatbot, and mechanisms for data governance are unclear. | — |
| Overconfidence and trust decay | — | The chatbot harms the relationship between the patient and their physician by providing contradictory recommendations. | — | — |
| Technological solutionism | A chatbot is not the best option for providing medical recommendations to certain patients. | — | — | — |
ᵃNot applicable.