Healthcare. 2023 May 17;11(10):1454. doi: 10.3390/healthcare11101454

Table 2. List of recommendations.

Human agency and oversight
  1. To prevent end-users from placing full trust in AI systems.

  2. To avoid the system inadvertently affecting human autonomy.

  3. To provide training to exercise oversight (human-in-the-loop, human-on-the-loop, human-in-command); see the routing sketch after this list.

  4. To clarify all potential negative consequences for end-users or targets (e.g., developing attachments).

  5. To provide means for end-users to control the interactions and preserve their autonomy.

  6. To provide means to reduce the risk of manipulation (e.g., clear information about ownership and aims).

  7. To establish detection and response mechanisms in case of undesirable effects on end-users.

  8. To establish control measures that reflect the self-learning/autonomous nature of the system.

  9. To involve experts from other disciplines, such as psychology and social work.
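
Recommendations 3 and 5 can be made concrete in code. Below is a minimal human-in-the-loop sketch, assuming a model that returns a (label, confidence) pair and an agreed confidence threshold; all names and the threshold value are illustrative, not part of the original recommendations.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(features, model: Callable, review_queue: list,
           threshold: float = 0.85) -> Decision:
    """Act on the model's output only when it is confident enough;
    otherwise enqueue the case for a human reviewer (human-in-the-loop)."""
    label, confidence = model(features)   # model returns (label, probability)
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="model")
    review_queue.append(features)         # a person takes over the decision
    return Decision("pending-review", confidence, decided_by="human")
```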

Technical robustness and safety
  1. To assess risks of attacks to which the AI system could be vulnerable.

  2. To assess AI system threats and their consequences (design, technical, environmental, human).

  3. To assess the risk of possible malicious use, misuse, or inappropriate use of the AI system (a basic input-sanity guard is sketched after this list).

  4. To assess how dependent critical decisions are on the AI system’s stable and reliable behavior.

  5. To continuously monitor changes to the AI system and its technical robustness and safety.
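
As one hedged illustration of recommendations 3 and 5, the sketch below flags incoming records whose features fall far outside the ranges observed at training time, a basic guard against misuse and silent distribution shift. The tabular data layout and the z_max threshold are assumptions.

```python
import statistics

def fit_ranges(training_rows):
    """Record per-feature mean and standard deviation from the training data."""
    columns = list(zip(*training_rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def is_suspicious(row, ranges, z_max=4.0):
    """True if any feature lies more than z_max standard deviations from its
    training mean; such inputs should be reviewed rather than scored blindly."""
    return any(abs(x - mu) > z_max * sd
               for x, (mu, sd) in zip(row, ranges) if sd > 0)
```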

Privacy and data governance
  1. To adopt mechanisms that flag privacy and data protection issues.

  2. To implement the rights to withdraw consent, to object, and to be forgotten within AI systems (see the sketch after this list).

  3. To protect privacy and personal data during the lifecycle of an AI system (data processing).

  4. To protect non-personal data during the lifecycle of an AI system (data processing).

  5. To align the AI system with widely accepted standards (e.g., ISO) and protocols.
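
Recommendation 2 implies concrete record-keeping. The following is a minimal sketch of a consent registry supporting withdrawal and erasure; a real deployment would also need to purge derived artifacts (logs, backups, trained models), which this sketch does not cover.

```python
class ConsentRegistry:
    def __init__(self):
        self._records = {}   # subject_id -> personal data
        self._consent = {}   # subject_id -> bool

    def store(self, subject_id, data, consent_given: bool):
        self._consent[subject_id] = consent_given
        if consent_given:
            self._records[subject_id] = data

    def withdraw_consent(self, subject_id):
        """Withdraw consent and erase the subject's personal data
        (right to withdraw consent / right to be forgotten)."""
        self._consent[subject_id] = False
        self._records.pop(subject_id, None)

    def may_process(self, subject_id) -> bool:
        """Processing is allowed only while consent is on record."""
        return self._consent.get(subject_id, False)
```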

Transparency
  1. To continuously survey users about their decisions and understanding of AI systems.

  2. To continuously assess the quality of the input data to the AI system.

  3. To explain AI system decisions or suggestions (answers) to end-users; a minimal sketch follows this list.

  4. To explain to end-users that the AI system is an interactive machine, i.e., that they are communicating with a machine rather than a person.
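
One possible way to approach recommendation 3, assuming a simple linear scoring model: per-feature contributions (weight × value) can be rendered as plain-language reasons. The feature names and weights below are purely illustrative.

```python
def explain(weights: dict, features: dict, top_k: int = 3):
    """Return plain-language reasons: the top_k features that pushed a
    linear model's score up or down (contribution = weight * value)."""
    contributions = {name: weights[name] * value
                     for name, value in features.items() if name in weights}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    return [f"{name} {'raises' if c > 0 else 'lowers'} the score by {abs(c):.2f}"
            for name, c in ranked[:top_k]]

# Illustrative call with made-up weights:
# explain({"age": 0.04, "bmi": 0.10}, {"age": 62, "bmi": 31})
# -> ["bmi raises the score by 3.10", "age raises the score by 2.48"]
```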

Diversity, non-discrimination, and fairness
  1. To teach/educate the AI system developers about potential system bias.

  2. To implement fair AI systems and be sensitive to the variety of preferences/abilities in society.

  3. To build accessible AI systems and interfaces for all people (Universal Design principles).

  4. To assess the AI systems’ disproportionate impacts on individuals and groups.

  5. To continuously assess the AI systems’ bias related to algorithm design (data inputs).

  6. To build algorithms that include diversity and representativeness of individuals and groups.

  7. To continuously assess the AI systems’ bias related to discrimination (e.g., race, gender, age).

  8. To adopt mechanisms to identify subjects directly or indirectly affected by the AI system.

  9. To adopt mechanisms that flag diversity, non-discrimination, and fairness issues.

  10. To adopt mechanisms to continuously measure the risk of bias (one such metric is sketched after this list).

  11. To ground AI systems in widely accepted definitions, concepts, and frameworks.

  12. To involve or consult the end-users in all phases of AI system development.

  13. To provide publicly available educational materials based on research and the state of the art.

  14. To assess “Conflicts of Interest” of the team/individuals involved in building the AI system.
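
Recommendation 10 can be operationalized with standard fairness metrics. The sketch below computes the demographic parity gap, i.e., the spread in positive-prediction rates across groups; the alert threshold in the closing comment is an assumption each deployment would have to agree on.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """outcomes: iterable of (group_label, prediction in {0, 1}).
    Returns the largest difference in positive-prediction rates between
    any two groups; 0.0 means all groups receive positives at equal rates."""
    positives, totals = defaultdict(int), defaultdict(int)
    for group, prediction in outcomes:
        totals[group] += 1
        positives[group] += prediction
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# e.g., raise an alert when the gap exceeds an agreed threshold such as 0.1
```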

Societal and environmental well-being
  1. To adopt mechanisms to identify AI systems’ positive/negative impacts on the environment.

  2. To define measures to reduce the environmental impact of the AI system’s lifecycle (a rough estimator is sketched after this list).

  3. To apply AI systems to tackle societal, environmental, and well-being problems.

  4. To reduce the AI systems’ negative impacts on work and workers.

  5. To provide people with re-skilling educational tools to counteract de-skilling caused by AI systems.

  6. To ensure people clearly understand the AI systems’ positive and negative impacts.
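
For recommendations 1 and 2, even a back-of-the-envelope estimate is useful. The sketch below converts average power draw, runtime, and grid carbon intensity into kilograms of CO2; every numeric value is a placeholder assumption.

```python
def training_co2_kg(avg_power_watts: float, hours: float,
                    grid_kg_co2_per_kwh: float = 0.4) -> float:
    """Energy consumed (kWh) times the grid's carbon intensity (kg CO2/kWh)."""
    kwh = avg_power_watts * hours / 1000.0
    return kwh * grid_kg_co2_per_kwh

# e.g., a 300 W GPU running for 48 h on a 0.4 kg/kWh grid:
# training_co2_kg(300, 48) -> about 5.8 kg of CO2
```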

Accountability
  1. To ensure AI systems’ auditability, modularity, and traceability (also by third parties); a tamper-evident logging sketch follows this list.

  2. To follow the best available and acknowledged practices and industry standards.

  3. To ensure that all conflicts of values or trade-offs are well documented and explained.

  4. To include a non-technical method to assess the trustworthiness of AI (e.g., “ethical review board”).

  5. To consistently provide multisectoral and multidisciplinary auditing or guidance.

  6. To maintain and update the legal framework, considering a wide range of impacts.

  7. To assess vulnerabilities and risks to identify and mitigate potential pitfalls.
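
Recommendation 1 (auditability and traceability, verifiable by third parties) can be supported with tamper-evident logging. The sketch below hash-chains decision records so that any retroactive edit breaks the chain during verification; the entry format is an assumption, not a prescribed standard.

```python
import hashlib, json, time

class AuditLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, event: dict):
        """Append an event, chaining it to the previous entry's hash."""
        entry = {"ts": time.time(), "event": event, "prev": self._prev_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append({**entry, "hash": digest})
        self._prev_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; any mutated or reordered entry breaks a link."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            if e["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True
```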