Author manuscript; available in PMC: 2023 Jul 27.
Published in final edited form as: Nat Med. 2023 Jan 1;29(1):10–11. doi: 10.1038/s41591-022-02137-y

A participatory initiative to include LGBTQ+ voices in AI for mental health

Andrey Kormilitzin 1, Nenad Tomasev 2, Kevin R McKee 2, Dan W Joyce 1,3

Artificial intelligence (AI) can help clinicians improve healthcare decision-making by integrating data from high-volume, heterogeneous electronic health records (EHRs) and surfacing salient information. However, there is growing and disturbing evidence that AI solutions in healthcare carry significant risk of harm for people belonging to racial, ethnic, sexual and gender minority communities, which can exacerbate inequalities.1,2 In particular, the safety and fairness of AI algorithms for LGBTQ+ communities have to date been largely unaddressed and under-discussed.2 These considerations are especially important when developing tools to support the delivery of mental healthcare, given that mental health difficulties such as self-harm and suicidal distress are more prevalent among LGBTQ+ people, in part because of stressors including victimisation and isolation.3

Clinical decision support tools require high-fidelity, granular, and representative patient data. Many LGBTQ+ organisations endorse collecting data on sexual orientation and gender identity to advance population health.4 Nonetheless, the collection and use of data comes with privacy concerns, especially when including sensitive and sometimes stigmatised characteristics. EHR systems should be adapted both to respect privacy and to reflect the diverse ways in which LGBTQ+ people express their identities.5

Incompleteness of sexual orientation and gender identity information in healthcare data is a global challenge for LGBTQ+ communities. In the US, only four states have LGBTQ+ inclusive data-collection regulations.6 EU member states vary in terms of the sexual orientation and gender identity data they collect,7 whilst the UK government concluded that this information is inadequately captured and harmonised across public and health sector datasets.8 The team from the University of Oxford (AK and DWJ) analysed the UK’s NHS secondary mental healthcare data and observed that gender identity is typically recorded with limited granularity (i.e. as mutually exclusive categories of male, female, indeterminate and non-binary), and that sexual orientation is recorded for only around 2.5% of patients.9 Some people from LGBTQ+ communities may seek healthcare outside of their local area, or may access support from charities or non-governmental organisations. There may also be a lack of trust, rooted in a troubling history of pathologisation of queerness within the field of psychiatry, links between medical professionals and conversion therapy, and the continued discrimination against LGBTQ+ people in many countries today.10 Perceived gaps in healthcare are often filled by non-governmental LGBTQ+ mental health organisations, demonstrating unmet need among these communities.
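To make the kind of data-completeness audit described above concrete, the minimal sketch below (in Python, assuming the pandas library) computes per-field recording rates in a tabular EHR extract. The column names and the placeholder values treated as unrecorded are hypothetical illustrations, not the schema of the NHS dataset analysed by AK and DWJ.

```python
# Hypothetical sketch: auditing how completely sexual orientation and
# gender identity are recorded in a tabular EHR extract. The column
# names and the set of "not recorded" placeholders are illustrative
# assumptions, not the schema of the NHS dataset discussed in the text.
import pandas as pd

NOT_RECORDED = {"", "unknown", "not stated", "not recorded"}

def field_completeness(df: pd.DataFrame, column: str) -> float:
    """Fraction of rows with a usable (recorded) value in `column`."""
    values = df[column].fillna("").astype(str).str.strip().str.lower()
    return float((~values.isin(NOT_RECORDED)).mean())

# Toy data with the kind of limited granularity noted in the text.
patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "gender_identity": ["female", "male", "not recorded", "non-binary"],
    "sexual_orientation": ["not stated", "", "gay", None],
})

for col in ("gender_identity", "sexual_orientation"):
    print(f"{col}: {field_completeness(patients, col):.0%} recorded")
```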

Inadequacies in health provision for LGBTQ+ people raise two sets of concerns. Firstly, there is an increased risk of disparate impact and iatrogenic harm from AI-supported clinical decision-making across LGBTQ+ groups. The systematic exclusion of gender identity data, for example, renders transgender patients invisible to decision support tools, which in turn causes those tools to inappropriately recommend services on the basis of heteronormative computational assumptions. Secondly, the lack of reliable data and insufficient engagement with the wider queer community make harms from AI hard to identify and even harder to mitigate. These potential harms must be minimised if AI is to be safely integrated into healthcare, and into mental healthcare in particular.

To ensure that AI tools are fair and safe, AI for health should be more transparent, ethical, and participatory. There is a pressing need to involve underrepresented and marginalised groups in the design and development of solutions and practices aimed at improving their wellbeing and mental health. Such involvement should build on the history of patient and public involvement, incorporate the diversity of queer communities, and accommodate the multitude of considerations needed to design inclusive and equitable AI clinical decision support.

We announce PARQAIR-MH (PARticipatory Queer AI Research for Mental Health), a new multi-stage participatory initiative aimed at informing policy on privacy, data collection, and data use for the development of safe and fair mental health AI. We will achieve this by engaging with the LGBTQ+ community and other stakeholders, including clinical scientists, biomedical ethicists, and mental health specialists from non-governmental organisations. The initiative will rely on established Delphi methodology, bringing together diverse perspectives to develop a comprehensive and inclusive surveying methodology, which we will subsequently implement to help reach policy recommendations.
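Delphi rounds are commonly summarised quantitatively; one widely used convention, assumed here purely for illustration rather than drawn from the PARQAIR-MH protocol, is to report each statement’s median rating and interquartile range (IQR), treating a narrow IQR as consensus. A minimal sketch:

```python
# Minimal sketch of a common Delphi consensus summary: per-statement
# median rating and interquartile range (IQR), with a narrow IQR read
# as consensus. The 1-9 rating scale and the IQR <= 2 threshold are
# illustrative assumptions, not the PARQAIR-MH protocol.
import statistics

def summarise_statement(ratings, iqr_threshold=2.0):
    """Summarise one statement's panel ratings from a single round."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # quartile cut points
    return {
        "median": statistics.median(ratings),
        "iqr": q3 - q1,
        "consensus": (q3 - q1) <= iqr_threshold,
    }

# Toy round: ratings on a 1-9 agreement scale from a ten-person panel.
round_one = {
    "Record self-described gender identity as free text": [8, 9, 7, 8, 9, 8, 7, 9, 8, 8],
    "Share identity data with third-party developers": [2, 9, 5, 7, 1, 8, 3, 6, 9, 2],
}
for statement, ratings in round_one.items():
    print(statement, "->", summarise_statement(ratings))
```

Statements that fail such a consensus check would typically be revised and returned to the panel in a subsequent round, in line with standard Delphi practice.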

We aim to repeat this process in different geographic locations to identify common themes, ensuring the generalisability of findings and recommendations to a wide range of contexts, while still remaining sensitive to local views on privacy, data, and technology. To ensure that the results of our initiative will inform follow-up research on mental healthcare, the findings will be openly and widely disseminated. Community participation is key, and we hope that this effort will help demonstrate best practices towards the inclusion of LGBTQ+ voices in future AI development.

Acknowledgements

AK and DWJ were supported in part by the NIHR AI Award for Health and Social Care (AI_AWARD02183). DWJ is also supported by the NIHR Oxford Health Biomedical Research Centre (grant BRC-1215-20005). The views expressed are those of the authors and not necessarily those of the UK National Health Service, the NIHR, the UK Department of Health, or DeepMind.

This version of the article has been accepted for publication, after peer review (when applicable) but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1038/s41591-022-02137-y. Use of this Accepted Version is subject to the publisher’s Accepted Manuscript terms of use https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms.

Footnotes

Author Contributions:

All authors contributed equally to the conception of the work and reviewed the final manuscript.

Competing Interests:

AK declares a research grant from GlaxoSmithKline. All other authors declare no conflicts of interest.

References
