| Factor | Corresponding best practice | Related ethical principle(s) |
| --- | --- | --- |
| Falsifiability and incremental deployment | Identify falsifiable requirements and test them in incremental steps from the lab to the “outside world” (sketched below) | Nonmaleficence |
| Safeguards against the manipulation of predictors | Adopt safeguards which (i) ensure that non-causal indicators do not inappropriately skew interventions, and (ii) limit, when appropriate, knowledge of how inputs affect outputs from AI4SG systems, to prevent manipulation (sketched below) | Nonmaleficence |
| Receiver-contextualised intervention | Build decision-making systems in consultation with the users who interact with, and are impacted by, these systems; with an understanding of users’ characteristics, of the methods of coordination, and of the purposes and effects of an intervention; and with respect for users’ right to ignore or modify interventions | Autonomy |
| Receiver-contextualised explanation and transparent purposes | Choose a Level of Abstraction for AI explanation that fulfils the desired explanatory purpose and is appropriate to the system and the receivers; then deliver the explanation through arguments that are rationally and suitably persuasive for the receiver; and ensure that the goal (the system’s purpose) for which an AI4SG system is developed and deployed is knowable by default to the receivers of its outputs | Explicability |
| Privacy protection and data subject consent | Respect the threshold of consent established for the processing of datasets of personal data (sketched below) | Nonmaleficence; autonomy |
| Situational fairness | Remove from relevant datasets variables and proxies that are irrelevant to an outcome, except when their inclusion supports inclusivity, safety, or other ethical imperatives (sketched below) | Justice |
| Human-friendly semanticisation | Do not hinder people’s ability to semanticise (that is, to give meaning to, and make sense of) something | Autonomy |
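
To make the first practice concrete, the following is a minimal sketch of a falsifiable requirement checked stage by stage before deployment is allowed to widen. The staged datasets, the recall requirement, the 0.9 threshold, and every name in the snippet (`Stage`, `deploy_incrementally`) are illustrative assumptions, not part of the original formulation.

```python
# Minimal sketch: a falsifiable requirement ("recall >= 0.9 on held-out cases")
# checked at each deployment stage before the system is allowed to advance.
# All names, stages, and thresholds are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Stage:
    name: str         # e.g. "lab simulation", "small field pilot", "full deployment"
    cases: Sequence   # held-out cases available at this stage
    labels: Sequence  # ground-truth outcomes for those cases


def recall(predict: Callable, cases, labels) -> float:
    """Fraction of true-positive cases the model actually flags."""
    positives = [(c, y) for c, y in zip(cases, labels) if y == 1]
    if not positives:
        return 1.0
    hits = sum(1 for c, _ in positives if predict(c) == 1)
    return hits / len(positives)


def deploy_incrementally(predict: Callable, stages: list[Stage], min_recall: float = 0.9):
    """Advance stage by stage; stop (i.e. falsify) as soon as the requirement fails."""
    for stage in stages:
        score = recall(predict, stage.cases, stage.labels)
        if score < min_recall:
            raise RuntimeError(
                f"Requirement falsified at stage '{stage.name}': recall {score:.2f} < {min_recall}"
            )
        print(f"Stage '{stage.name}' passed (recall {score:.2f}); proceeding.")
```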
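Clause (ii) of the manipulation safeguard can be read as a disclosure decision. The sketch below assumes a hypothetical explanation step that reports exact attributions only for features an actor cannot easily game; the feature names and the `GAMEABLE` set are invented for illustration.

```python
# Minimal sketch of clause (ii): coarsen or withhold attribution detail for
# inputs an actor could easily game, while keeping other detail visible.
# Feature names, attributions, and the gameable set are illustrative only.

GAMEABLE = {"reported_income", "self_reported_attendance"}  # hypothetical


def receiver_safe_explanation(attributions: dict[str, float]) -> dict[str, str]:
    """Map exact per-feature attributions to a disclosure-safe summary."""
    safe = {}
    for feature, weight in attributions.items():
        if feature in GAMEABLE:
            safe[feature] = "considered"      # acknowledge use, hide direction and magnitude
        else:
            safe[feature] = f"{weight:+.2f}"  # full detail is harmless to disclose
    return safe


print(receiver_safe_explanation({"reported_income": 0.41, "school_distance_km": -0.12}))
# {'reported_income': 'considered', 'school_distance_km': '-0.12'}
```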
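The consent-threshold practice can likewise be illustrated as a simple gate on data processing. The record structure, the `consented_purposes` field, and the purposes themselves are hypothetical; the point is only that records are released solely for purposes their subjects agreed to.

```python
# Minimal sketch: only records whose subjects consented to the intended purpose
# are released for processing. Record fields and purposes are hypothetical.

RECORDS = [
    {"id": 1, "consented_purposes": {"service_delivery"}},
    {"id": 2, "consented_purposes": {"service_delivery", "research"}},
]


def records_for(purpose: str, records: list[dict]) -> list[dict]:
    """Return only the records whose data subjects consented to this purpose."""
    return [r for r in records if purpose in r["consented_purposes"]]


assert [r["id"] for r in records_for("research", RECORDS)] == [2]
```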
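Situational fairness, finally, is in part a data-handling step. The sketch below assumes a pandas DataFrame and two hand-maintained sets: variables judged irrelevant to the outcome, and variables retained anyway for an ethical reason such as auditing coverage. All column names are hypothetical.

```python
# Minimal sketch: drop variables and obvious proxies that are irrelevant to the
# outcome, unless they are explicitly retained for an ethical reason such as
# inclusivity or safety. Column names and the two sets below are hypothetical.

import pandas as pd

IRRELEVANT_TO_OUTCOME = {"ethnicity", "surname", "postcode"}  # candidate drops
RETAIN_FOR_ETHICAL_REASON = {"postcode"}                      # e.g. needed to audit coverage


def situationally_fair(df: pd.DataFrame) -> pd.DataFrame:
    """Remove irrelevant variables/proxies unless retention serves an ethical imperative."""
    to_drop = (IRRELEVANT_TO_OUTCOME - RETAIN_FOR_ETHICAL_REASON) & set(df.columns)
    return df.drop(columns=sorted(to_drop))
```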