2023 Jul 24;29(4):29. doi: 10.1007/s11948-023-00448-y

Table 1.

Challenges and risks of the SAF process and proposed mitigating actions

Identified challenge / risk: Lack of representation of all agents affected
Description: Martin and Phillips (2022) explain that a collection of social, legal and economic forces drive firms to prioritize and reinvest in current stakeholders.
Mitigating action: We have explicitly stated, at stage 0 of the SAF process, that all legitimate stakeholders should be represented, i.e., all agents affected by the AI system, both internal and external to the company or institution that is developing it.

Identified challenge / risk: Power imbalances among participating stakeholders
Description: Opinions expressed by vulnerable salient groups (VSG) might be undervalued by powerful AI stakeholders (Birhane et al., 2022). This can aggravate rather than improve existing power dynamics.
Mitigating action: The MLMDT must discuss how to deal with power imbalances before inviting stakeholders to the project. If necessary, the team will seek expert advice to ensure that participants feel free and empowered to express their opinions (Hollis & Whittlestone, 2021). Tensions among stakeholders are bound to arise, since fairness is not achieved effortlessly (Sloane et al., 2020); the MLMDT aims to accompany stakeholders throughout the process to reach agreements acceptable to all.

Identified challenge / risk: Participation washing
Description: Participatory processes can be a mechanism for big corporations to mask or even facilitate an illegitimate exercise of power, focusing only on corporate benefits. In this case, the disempowerment of VSG continues and corporations emerge as the legitimate arbiters (Sloane et al., 2020).
Mitigating action: Objectives in terms of bias management and participation will be agreed among legitimate stakeholders, openly published and assessed periodically. An assessment of stakeholders' satisfaction with their participation is also required in the SAF process. Participation will only be considered effective if the agreed objectives on bias management are met and legitimate stakeholders report a positive outcome in the participation assessment.

Identified challenge / risk: Lack of clarity on what meaningful participation entails
Description: If specific steps and objectives are not clearly defined within the design, development and deployment stages, the good intentions of participation can be diluted and fail to bear fruit.
Mitigating action: The SAF proposes a step-by-step process that identifies the participation of stakeholders and the agreement on objectives, which are evaluated periodically.

Identified challenge / risk: A cover-up for unethical or illegal activities
Description: Without an evaluation of the SAF process, there is no assurance that it complies with the Trustworthy AI requirements (HLEGAI, 2019) and the legal framework. The SAF process can question current legislation and become a trigger to modify it, but it should not justify going against the law.
Mitigating action: Guidelines to perform an impact assessment of the SAF process implementation should be proposed; these would give organizations a way to anticipate the potential effects of SAF in terms of outputs. Guidelines to evaluate compliance with the SAF process should also be developed, building on the vast literature on AI assurance methodologies, especially on auditing AI systems (Ahamat et al., 2021; Asplund et al., 2020; Metaxa et al., 2021; Raji et al., 2020). In turn, the SAF process is expected to facilitate existing processes of AI auditing, certification and accreditation because it provides explainability of fairness decision-making.

Identified challenge / risk: Conflation of the process
Description: The participation of legitimate stakeholders does not mean that there are no discriminatory outcomes.
Mitigating action: At a minimum, the outcome of the SAF process should comply with the criteria of demographic parity and negative dominance (Wachter et al., 2020); for example, black applicants should not make up the majority of rejected applicants in job recruitment (see the sketch after this table). In addition, the inevitably imperfect trade-offs involved in fairness decision-making are to be disclosed.

Identified challenge / risk: Difficulty in measuring the benefits of participation
Description: Because the benefits of participation are difficult to measure, they can be questioned in terms of profitability.
Mitigating action: Participation should not be considered only a means to a goal, but also a goal in itself (Birhane et al., 2022), where stakeholders feel empowered to evaluate the impact of the technical solution and even to conclude that the solution is not required at all. A participation satisfaction assessment is foreseen in the SAF process to measure the benefits of the process.

Identified challenge / risk: Burden on all stakeholders to gather information, reflect and decide
Description: Reflecting and making decisions on bias can become an enormous task.
Mitigating action: Reflection and decision-making in the SAF process are limited to a specific scenario, and stakeholders will be assisted by the project MLMDT to focus on the specifics of that context.

Identified challenge / risk: Lack of incentives for engaging in the process
Description: The SAF process might be seen as an overhead by the companies and institutions that develop and implement ML systems.
Mitigating action: There is increasing pressure from employees and users to implement fairer and more transparent ML systems: 63% of technology workers declare that they need more time and resources to think about the impact of their work (Miller & Coldicott, 2019). The principle of AI justice and fairness is therefore increasingly a business need.

Identified challenge / risk: Lack of representation of stakeholders from the Global South
Description: An analysis of two major machine learning venues, NeurIPS 2020 and ICML 2020, found that none of the top 10 countries by publication index are located in Latin America, Africa or Southeast Asia (Chuvpilo, 2020). There is a risk, therefore, that stakeholders from the Global South, where ML systems are also applied, are not invited to participate in the process.
Mitigating action: The SAF process focuses precisely on involving the legitimate stakeholders of the context where the ML system is to be implemented.

Identified challenge / risk: Design justice as an oxymoron
Description: Given the performance- and profit-oriented logic within which ML systems are developed and implemented, obtaining ML products that are genuinely fair and equitable can be considered an impossible task not worth pursuing (Sloane et al., 2020).
Mitigating action: The SAF process does not aim to provide universally fair outputs, but to reach acceptable agreements among legitimate stakeholders who, feeling empowered to do so, define and assess fairness objectives for a particular scope and ML development process.
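
To make the minimum outcome criteria in the "Conflation of the process" row concrete, the sketch below checks demographic parity and negative dominance over a set of binary decisions. It is a minimal illustration only, assuming a single protected attribute and binary favourable/unfavourable outcomes; the function names, the 0.5 majority threshold for negative dominance and the toy recruitment data are illustrative choices, not definitions taken from the SAF process or from Wachter et al. (2020).

```python
from collections import Counter

def group_positive_rates(outcomes, groups, positive=1):
    """Per-group share of favourable outcomes (e.g. share of applicants hired)."""
    totals, favourable = Counter(), Counter()
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        if y == positive:
            favourable[g] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes, groups, positive=1):
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = group_positive_rates(outcomes, groups, positive)
    return max(rates.values()) - min(rates.values())

def violates_negative_dominance(outcomes, groups, protected, negative=0):
    """True if the protected group makes up the majority of unfavourable outcomes."""
    rejected = [g for y, g in zip(outcomes, groups) if y == negative]
    return bool(rejected) and rejected.count(protected) / len(rejected) > 0.5

# Hypothetical recruitment data: 1 = job offer, 0 = rejection.
decisions = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]
ethnicity = ["black", "black", "black", "white", "black",
             "white", "white", "white", "black", "white"]

print(group_positive_rates(decisions, ethnicity))                  # {'black': 0.2, 'white': 0.6}
print(demographic_parity_gap(decisions, ethnicity))                # ~0.4
print(violates_negative_dominance(decisions, ethnicity, "black"))  # True: 4 of 6 rejections
```

In this toy example the second check fails, which under the negative-dominance criterion cited in the table would already disqualify the outcome, whatever trade-off the stakeholders eventually agree on for the parity gap.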