2023 Jul 26;25:e43068. doi: 10.2196/43068

Table 1. Known risks in conversational chatbots.

Human rights

Discrimination: The chatbot makes different recommendations or has a higher error rate depending on the patient's group (eg, gender, ethnicity, race, or religion); see the auditing sketch after the table.

Stereotyping: The chatbot interprets or uses language that propagates harmful prejudices, such as the inferiority of certain groups, their sexualization, or their lack of credibility.

Exclusion: The development, governance, or use of the chatbot does not include certain already marginalized groups.

Data protection

Lack of privacy: The data generated by the chatbot are not protected.

Poor data governance: The data generated by the chatbot are governed improperly or without the patient's involvement.

Stigma: The data generated by the chatbot can lead to the stereotyping or marginalization of certain individuals.

Technical

Error tolerance: Errors, even when they are not discriminatory, cause harm to patients.

Overconfidence and trust decay: Patients place excessive trust in chatbots, resulting in overconfidence in the technology and a relative decay of trust in human health professionals.

Technological solutionism: Investment in chatbot technology diverts attention and resources from the actual societal problem.
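
The discrimination row describes a measurable property: a per-group difference in error rates. The sketch below shows one minimal way such an audit could be run. It is illustrative only and makes several assumptions not in the article: `triage_chatbot` is a hypothetical stand-in for the system under audit, and the test records are invented examples, not real patient data.

```python
# Minimal sketch of auditing a chatbot for group-dependent error rates
# (the "discrimination" risk in Table 1). All names are hypothetical.

from collections import defaultdict

def triage_chatbot(message: str) -> str:
    """Placeholder for the chatbot under audit (hypothetical)."""
    return "urgent" if "chest pain" in message.lower() else "routine"

# Each record: (patient group, message, clinically correct label).
test_cases = [
    ("group_a", "I have chest pain", "urgent"),
    ("group_a", "Mild headache since morning", "routine"),
    ("group_b", "Crushing chest pain radiating to my arm", "urgent"),
    ("group_b", "Pressure in my chest when climbing stairs", "urgent"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, message, expected in test_cases:
    totals[group] += 1
    if triage_chatbot(message) != expected:
        errors[group] += 1

# Report the error rate per group; a large gap between groups is the
# signal the "discrimination" risk category describes.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

In this toy run, group_a has a 0% error rate while group_b has 50%, because group_b's second message phrases the same symptom differently and the placeholder model misses it. This illustrates how equal treatment of inputs can still yield unequal error rates across groups, which is why audits compare performance per group rather than in aggregate.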