2021 Sep 13:1–25. Online ahead of print. doi: 10.1007/s10796-021-10191-z

Table 1.

 A summary of three important responsible AI principle frameworks

Justice/fairness
  Clarke (2019): Process and procedural fairness and transparency should be fulfilled.
  Microsoft AI (2020): AI systems should treat all people fairly, producing fairness rather than reinforcing bias and stereotypes in society.
  Floridi et al. (2018): AI should be used to correct past mistakes, such as eliminating unfair discrimination, and to create shared or sharable benefits without creating new harm.

Inclusiveness
  Clarke (2019): -
  Microsoft AI (2020): AI systems should intentionally engage communities.
  Floridi et al. (2018): -

Reliability and safety
  Clarke (2019): Embedded quality assurance.
  Microsoft AI (2020): AI systems should be consistent with designers' intentions, organizational values, and principles; this applies to all of the company's products.
  Floridi et al. (2018): -

Transparency
  Clarke (2019): Ensure accountability (i.e., each entity is discoverable) for legal and moral obligations.
  Microsoft AI (2020): AI systems should be understandable: people should be able to understand the behavior of AI, and designers should be open with users about why and how they created the system.
  Floridi et al. (2018): The relationship between humans and this transformative technology should be readily understandable to the average person.

Privacy and security
  Clarke (2019): -
  Microsoft AI (2020): AI systems should be secure and respect privacy by considering data origin and lineage, as well as internal and external data use.
  Floridi et al. (2018): -

Beneficence
  Clarke (2019): Consistency with human values and human rights.
  Microsoft AI (2020): -
  Floridi et al. (2018): The original motivation for creating AI technology is to promote the benefits and well-being of humans and the planet, with dignity and sustainability.

Non-maleficence
  Clarke (2019): Safeguards should be provided for human stakeholders at risk, replacing inhumane machine decision-making.
  Microsoft AI (2020): -
  Floridi et al. (2018): Be cautious about the potentially negative consequences of overusing or misusing AI technologies, for example by preventing infringement of personal privacy or, worse, an AI arms race. Accidental or deliberate harm should be taken seriously, whether it arises from human intent or from the unpredicted behavior of machines.

Autonomy
  Clarke (2019): Humans cede power to machines, but all stakeholders have legal and moral obligations to assess the impacts of AI.
  Microsoft AI (2020): -
  Floridi et al. (2018): The autonomy of humans to make decisions should be protected rather than delegated excessively to machines.

Sources: Clarke (2019). Principles for Responsible AI. https://arxiv.org/abs/2101.02032. Accessed 1 November 2020

Floridi et al. (2018). AI4People-An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707

Microsoft AI (2020). Responsible AI. https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1:primaryr6. Accessed 4 October 2020