Table 2. Three pillars for fairness
| Fairness pillar | Source of unfairness | Challenge | Attribute | Key questions |
| --- | --- | --- | --- | --- |
| Transparency: A range of methods designed to see, understand and hold complex algorithmic systems accountable in a timely fashion. | ‘Like Gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists’ (O’Neil: 3) | How can we foster democratic and sustained debate on the role of AI/ML in healthcare with a range of stakeholders, including patients experiencing complex and serious mental illness and/or addiction? | Interpretable | Are biases from predictive care models carried over across samples and settings? |
| | | | Explainable | Which model features are contributing to bias and what kinds of assumptions do they amplify? How does an understanding of these features by stakeholders impact clinical care? |
| | | | Accountable | How does predictive care impact stakeholders (patients, families, nurses, social workers)? What governance structures are in place to ensure fair development and deployment? Who is responsible for identifying and reporting potential harms? |
| Impartiality: Health care should be free from unfair bias and systemic discrimination. | ‘AI can help reduce bias, but it can also bake in and scale bias’ (Silberg and Manyika: 2) | How are complex social realities transformed into algorithmic systems, and what kinds of normative assumptions drive these processes? | Provenance | Do predictive care model features reflect socio-economic and political inequities? Might these features contribute to biased performance? |
| | | | Implementation | What harms might result from the implementation of predictive care models? Do they disproportionately affect certain groups? |
| Inclusion: The process of improving the ability, opportunity, and dignity of people, disadvantaged on the basis of their identity, to access health services, receive compassionate care and achieve equitable treatment outcomes. | ‘Randomised trials estimate average treatment effects for a trial population, but participants in clinical trials often aren’t representative of the patient population that ultimately receives the treatment’ (Chen: 167) | How can we ensure that the benefits of advances in clinical AI accrue to the most structurally disadvantaged? | Completeness | Is information required to detect bias missing? Is there sufficient data to evaluate predictive care models for intersectional bias? Are marginalised groups involved in the collection and use of their data? |
| | | | Patient and Family Engagement | Have stakeholders been involved in the development and implementation of predictive care? Do patients perceive models as being fair or positively impacting their care? |
AI, artificial intelligence; ML, machine learning.
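The audit-oriented questions in table 2 (Interpretable, Provenance, Completeness) can be made concrete as a subgroup performance check. What follows is a minimal sketch rather than a method proposed in this paper: it assumes a fitted scikit-learn binary classifier `model` and a held-out test set `test_df` with a hypothetical outcome column `outcome` and sensitive-attribute column `group`; these names, the 0.5 decision threshold, and the choice of metrics are all illustrative assumptions.

```python
# Minimal sketch of a subgroup fairness audit for a predictive care model.
# Assumptions (not from the paper): a fitted scikit-learn binary classifier
# `model`, and a test DataFrame with feature columns, a binary outcome column
# ("outcome"), and a sensitive-attribute column ("group").
import pandas as pd
from sklearn.metrics import roc_auc_score, recall_score

def subgroup_audit(model, df, feature_cols, outcome_col="outcome", group_col="group"):
    """Report per-group AUC and sensitivity (true-positive rate).

    Large gaps between groups flag the kind of biased performance the
    Provenance and Completeness questions in table 2 ask about.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        y_true = sub[outcome_col]
        if y_true.nunique() < 2:
            continue  # AUC is undefined for single-class subgroups
        y_score = model.predict_proba(sub[feature_cols])[:, 1]
        y_pred = (y_score >= 0.5).astype(int)  # illustrative threshold
        rows.append({
            "group": group,
            "n": len(sub),
            "auc": roc_auc_score(y_true, y_score),
            "sensitivity": recall_score(y_true, y_pred),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: audit = subgroup_audit(model, test_df, feature_cols)
# Repeating the audit on data from a second site or later time period speaks
# to whether biases "carry over across samples and settings" (Interpretable).
```

Such an audit addresses only the measurable face of the table: the Accountable and Patient and Family Engagement questions still require governance structures and stakeholder involvement that no metric can substitute for.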