
Table 2.

Recommendations for Assessing and Mitigating Bias in Mental Health AI Applications

Model Building
1. Recruit a diverse team to build the algorithms (e.g., Cowgill et al., 2020).
2. Recruit stakeholders and representatives from the target population to inform all stages of development (e.g., Lee et al., 2019).
3. Create a standardized system for eliciting feedback and revising models, including standard questions (e.g., Mulligan et al., 2019).
4. Elicit feedback from stakeholders/target population on problem/solution conceptualization, revise (e.g., Lee et al., 2019; Smith & Rustagi, 2020).
5. Elicit feedback from stakeholders/target population on features and labels to be used, revise (e.g., Lee et al., 2019; Smith & Rustagi, 2020).
6. Collect representative data that matches the target population and application to be implemented (e.g., Buolamwini & Gebru, 2018; Hupont & Fernández, 2019; Kärkkäinen & Joo, 2019; Merler et al., 2019).
7. If possible, avoid using sensitive attributes as features in model development (e.g., Kilbertus et al., 2018; Yan et al., 2020).
8. After collecting data, conduct a pre-processing bias assessment on features and labels (e.g., Celis et al., 2016; Celis et al., 2018; Zhang et al., 2018); a minimal sketch of such an assessment follows this list.
9. Elicit feedback from stakeholders/target population on data pre-processing assessment, revise (e.g., Lee et al., 2019; Smith & Rustagi, 2020).
10. If possible, choose interpretable and intuitive models.
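
As a concrete illustration of Step 8, the following is a minimal sketch of a pre-processing check, assuming a pandas DataFrame with a binary label column and a single sensitive-attribute column. All column names are hypothetical, and a full assessment would cover every feature, label, and attribute combination (e.g., Celis et al., 2018).

    import pandas as pd
    from scipy.stats import chi2_contingency

    def label_base_rates(df, label, attribute):
        """Base rate of the positive label within each level of a sensitive attribute."""
        return df.groupby(attribute)[label].mean()

    def label_independence_test(df, label, attribute):
        """Chi-square test of independence between the label and a sensitive attribute."""
        table = pd.crosstab(df[attribute], df[label])
        chi2, p, dof, _ = chi2_contingency(table)
        return chi2, p, dof

    # Hypothetical usage:
    # df = pd.read_csv("training_data.csv")
    # print(label_base_rates(df, "diagnosis", "ethnicity"))
    # print(label_independence_test(df, "diagnosis", "ethnicity"))

Large gaps in base rates, or a significant dependence between label and attribute, do not by themselves establish bias, but they flag where the stakeholder review in Step 9 should focus.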
Model Evaluation
1. Examine ratios of predictions (e.g., ratio of diagnoses versus non-diagnoses) across sensitive attributes and combinations thereof (e.g., Hardt et al., 2016).
2. Examine model performance (e.g., accuracy, kappa) across sensitive attributes and combinations thereof (e.g., Hardt et al., 2016); a sketch covering Steps 1 and 2 follows this list.
3. Elicit feedback from stakeholders/target population on decision post-processing assessment, revise (e.g., Hardt et al., 2016).
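
A minimal sketch of Evaluation Steps 1 and 2, assuming binary predictions; y_true, y_pred, and groups are illustrative names. To evaluate combinations of attributes, each record's group can be encoded as a single string (e.g., "female|Black").

    import numpy as np
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    def metrics_by_group(y_true, y_pred, groups):
        """Positive-prediction ratio, accuracy, and kappa within each group."""
        y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
        results = {}
        for g in np.unique(groups):
            mask = groups == g
            results[g] = {
                "positive_rate": y_pred[mask].mean(),  # e.g., ratio of diagnoses
                "accuracy": accuracy_score(y_true[mask], y_pred[mask]),
                "kappa": cohen_kappa_score(y_true[mask], y_pred[mask]),
            }
        return results

Comparing positive_rate across groups addresses Step 1; comparing accuracy and kappa addresses Step 2. Small subgroups will yield unstable estimates, so sample sizes per group should be reported alongside the metrics.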
Bias Mitigation
1. If bias is detected, apply model in-processing and decision post-processing methods (e.g., Feng, 2022; Kamishima et al., 2012; Mehrabi et al., 2022; Oneto et al., 2019; Pfohl et al., 2021; Ustun, 2019; Zemel et al., 2013; Zhao et al., 2018); an illustrative post-processing sketch follows this list.
2. Repeat Model Evaluation Steps 1–3 until bias is removed from the model.
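
One simple decision post-processing approach of the kind cited in Step 1 is to choose group-specific decision thresholds over model scores. The sketch below equalizes positive-prediction rates across groups; it is an illustration under stated assumptions (continuous scores, binary decisions), not the specific method of any cited paper.

    import numpy as np

    def group_thresholds(scores, groups, target_rate):
        """Per-group threshold so each group's positive-prediction rate
        approximates target_rate (a simple post-processing heuristic)."""
        scores, groups = np.asarray(scores), np.asarray(groups)
        # The (1 - target_rate) quantile of a group's scores yields
        # roughly target_rate positive decisions in that group.
        return {g: np.quantile(scores[groups == g], 1.0 - target_rate)
                for g in np.unique(groups)}

    def apply_thresholds(scores, groups, thresholds):
        """Binary decisions using each record's group-specific threshold."""
        return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

Equalizing selection rates in this way targets demographic parity; Hardt et al. (2016) instead match error rates across groups (equalized odds), which requires choosing thresholds against held-out labeled data.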
Model Implementation
1. Identify and plan for appropriate use cases for applying the algorithm.
2. Identify and plan for worst-case scenarios and outline remediation plans for these scenarios (e.g., High-Level Expert Group on Artificial Intelligence, 2020).
3. Delineate and implement safeguards and model monitoring parameters (e.g., High-Level Expert Group on Artificial Intelligence, 2020).
4. Delineate and implement opt-out and appeal processes that are easy and straightforward (e.g., Schwartz et al., 2020).
5. Elicit feedback from stakeholders/target population on model results and implementation plan, revise.
6. Publish the algorithm, de-identified dataset, documentation, and Bias and Fairness Assessment results (e.g., Shin, 2021).
7. Maintain regular monitoring and assessment of algorithm impact and update the model as needed (e.g., Schwartz et al., 2020); a monitoring sketch follows this list.
8. Elicit feedback from stakeholders/target population on monitoring and impact assessments, revise.
9. Repeat process for any model adaptations to new target populations or use cases.
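
For the ongoing monitoring called for in Implementation Steps 3 and 7, the following is a minimal sketch that compares live per-group positive-prediction rates against rates recorded at deployment. The names baseline_rates and tolerance, and the alert structure, are hypothetical.

    import numpy as np

    def check_group_drift(baseline_rates, preds, groups, tolerance=0.05):
        """Flag groups whose current positive-prediction rate differs from the
        deployment baseline by more than `tolerance` (absolute difference)."""
        preds, groups = np.asarray(preds), np.asarray(groups)
        alerts = {}
        for g, base in baseline_rates.items():
            mask = groups == g
            if not mask.any():
                continue  # no recent cases from this group
            rate = preds[mask].mean()
            if abs(rate - base) > tolerance:
                alerts[g] = {"baseline": base, "current": rate}
        return alerts

Any alerts would feed the remediation plans of Step 2 and the stakeholder feedback loop of Step 8.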