Abstract
There is growing attention to, and evidence of, racial bias in healthcare AI. Despite renewed attention to racism in the United States, racism is often disconnected from the literature on ethical AI. Addressing racism as an ethical issue will facilitate the development of trustworthy and responsible healthcare AI.
Main text
Artificial intelligence (AI) tools are becoming more widely used in healthcare, for administrative tasks such as predicting appointment no-shows and for clinical tasks such as identifying cancerous tissue in medical images or predicting the onset of sepsis. Although AI has many beneficial uses, there is growing attention to, and evidence of, healthcare AI’s vulnerability to problems observed in other fields, such as racial bias and gender bias.1,2 Bias in AI can emerge at different stages of development and implementation, including conceptualization, design, data collection and processing, training, and deployment, and can have harmful effects, often on already marginalized groups.3
In response to these issues, there has been a focus not only on identifying problematic scenarios to which AI contributes but also on defining what ethical AI in healthcare could be. The attention to ethical AI, both in healthcare and in other sectors, has developed contemporaneously with increasing attention in the United States to racism, particularly anti-Black racism. However, despite this parallel focus on ethical AI and renewed attention to racism in the United States, addressing racism in healthcare and developing ethical AI are largely disconnected efforts. We assert, first, that this disconnect exists in part because racism is not often framed as an ethical issue in bioethics and other ethics discourses.4 Second, we assert that greater attention to and recognition of racism as an ethical issue in healthcare would help connect work on racism to the development of ethical, responsible, and trustworthy healthcare AI. Third, we believe that viewing racism as an ethical issue can make efforts to minimize the harms and increase the benefits of healthcare AI more effective throughout the development life cycle.
Considerations of racism should be central to health AI ethics because racism is an injustice, and justice is a central concern of moral philosophy in general and of bioethics in particular. Recently, in response to increased attention to the killing of Black individuals by law enforcement in the United States, bioethics scholars have called for greater focus on racism as a justice and bioethics issue, since racism has profound effects on the health and well-being of Black people and other racially oppressed people.5 The disproportionate COVID-19 incidence, complications, and mortality in Black and Brown communities in the United States prompted the Centers for Disease Control and Prevention to name racism a threat to public health, though scholars have argued this for many years.6 Obermeyer and colleagues’ widely cited example of a healthcare algorithm favoring healthier White patients over sicker Black patients for additional healthcare resources has been held up as an instance of algorithmic bias.7 When racism is seen as an ethical issue, however, this same example is more than “bias”; it is a case of racial injustice. The data underlying this algorithm show that Black patients suffer the injustice of lower healthcare expenditures despite worse health, and the algorithm compounds this injustice by deprioritizing their healthcare resource needs.
In addition, when racism is considered alongside more familiar ethical issues like privacy and consent, it adds dimensions to discussions about trustworthy or responsible AI. For example, in the growing attention to how to make healthcare AI more trustworthy and responsible, issues like transparency, privacy, and security are important to consider. Considering racism alongside trustworthiness, however, would prompt additional inquiries into how structural and/or interpersonal racism might affect efforts to make algorithms more trustworthy. For example, there is some literature on the importance of bringing together a diverse range of stakeholders to help shape the development of ethical, trustworthy, and responsible AI.8 However, without explicit attention to racism, these efforts to give stakeholders, particularly racialized minority groups, a “seat at the table” could result in what Creary calls “bounded justice.”9 In such cases, efforts at inclusion do not fully recognize the historical and ongoing marginalizations that can limit full participation and just outcomes, even when racialized minority groups are invited to participate in stakeholder engagement.
Attention to the harms of AI, including when used in healthcare, is important. However, the dangers of biased healthcare AI are often framed as a largely technical problem that can be solved by getting better data. AI bias is the result of sociotechnical processes, and purely technical fixes will not be sufficient.10 It is important to recognize the “ordinariness of racism”:11 the ubiquity of racism, not its absence, characterizes society’s normal state, and it is not always perceptible. This ordinariness is baked into technologies like AI.12 Recently, the White House’s Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights, which names algorithmic discrimination, rather than simply AI bias, as a key problem to address. This is notable because the term algorithmic discrimination directs our attention to the social harms that result from algorithmic bias. Often these harms parallel existing forms of social exclusion and marginalization, such as racism. Many publications on ethics in healthcare AI focus on bias but make sparse mention of racism specifically; this framing privileges technical expertise and implies that mathematical fairness solutions are all that is needed. We believe that racism should be treated as an ethical issue in healthcare AI because specifically calling out and focusing on racism shows how racism can affect other aspects of ethical healthcare AI, not just algorithmic bias. Calling out racism in AI brings focus to historical and current racial health disparities, such as differences in healthcare access, screening, testing, and treatment, all of which shape biased data. Framing racism in AI as an ethical issue brings additional, important factors into consideration and opens opportunities to bring providers, funders, and other nonanalytic experts with an interest in advancing medical AI into the conversation.
We do not suggest that writing on AI healthcare ethics should merely start including the term “racism.” Instead, we would like to see scholarship on healthcare AI and ethics include explanations of how training data become biased. In other words, scholarship should describe how racism happens in our social world, and in healthcare specifically, and how those practices become imprinted onto data that are used for healthcare AI. For example, racism is embedded in healthcare and public health systems in different ways—from race-based clinical algorithms to clinical interactions—that affect data recorded in healthcare settings.13 Such scholarship would require expertise from a diversity of scholars, including social scientists, bioethicists, and historians, who are not always included in AI development decisions. Developing such scholarship might also require novel empirical approaches and praxes based on antiracism, equity, and justice frameworks. Funders interested in building more equitable healthcare AI algorithms should also invest in interdisciplinary initiatives focused on understanding and redressing racial bias in the AI development pipeline.
Fuller discussions of racism in the literature on the ethics of healthcare AI would also include more specificity on the ways that racism impacts clinical care. We also advocate for healthcare AI ethics to draw clearer distinctions between structural racism and individual racist practices and/or beliefs, and to examine how each might affect healthcare AI across the life cycle, from dataset creation to design and development to implementation. In addition, we advocate for more in-depth discussion in the healthcare AI literature of how racism operates differently across groups, such as how anti-Black racism might differ from other kinds of racism, and of the nuances of intersectionality.14
Finally, in a recent commentary, colleagues have considered the different ways the informatics field should recognize institutional, systemic, and structural racism and proposed the use of the Public Health Critical Race Praxis (PHCRP) to mitigate and dismantle racism in digital forms.15 We expand on this argument by noting that the field of healthcare AI ethics would also be augmented by discussions of antiracism. Intentional antiracist efforts can catalyze new sociotechnical practices for healthcare AI development and governance. Antiracism is defined as “[a] commitment to dismantling racism, which has dimensions that are institutional and social as well as attitudinal and behavioral.”11 By centering antiracism, literature addressing AI in healthcare can help contextualize and provide a deeper understanding of the policies and practices that perpetuate racist ideas and actions in medical care. The fast-moving practice of AI offers informaticists, scholars, and researchers an opportunity to create frameworks that meaningfully engage an intersectional approach to ethics and antiracism and that are critically reflective of field norms and standards. These conversations provide directions forward for questions raised by the recognition of racist diagnoses or clinical interactions and may catalyze conversation about structural changes to improve equitable use of AI in medicine. Additionally, they can provide the groundwork for procedural standards that ensure developers, funders, and others committed to advancing medical AI are transparent in addressing how racism may shape the resources and outcomes of their work.
We recommend that future AI health ethics frameworks (1) explicitly discuss how systemic and individual racism create biased data and algorithms, (2) propose solutions to racial bias that are grounded in approaches proven effective, (3) explain how proposed ethical frameworks can benefit communities and individuals affected by racial inequities, and (4) make ethical recommendations that are intentionally antiracist.
Acknowledgments
Author contributions
K.F. and E.O.N. contributed to the conceptualization of the manuscript. N.C., K.F., M.C., and E.O.N. contributed to manuscript development, literature review, analysis, drafting, and editing.
Declaration of interests
K.F. is a member of the All of Us Research Program’s institutional review board and a member of the digital ethics advisory board of Merck KGaA. E.O.N. is part of the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Program team at the National Institutes of Health through the Intergovernmental Personnel Act (IPA) Mobility Program.
References
- 1. Benjamin R. Race after technology: Abolitionist tools for the new Jim code. Polity; 2019.
- 2. McKay C. Predicting risk in criminal procedure: Actuarial tools, algorithms, AI and judicial decision-making. Curr. Issues Crim. Justice. 2019;32:22–39. doi: 10.1080/10345329.2019.1658694.
- 3. Chen I.Y., Pierson E., Rose S., Joshi S., Ferryman K., Ghassemi M. Ethical machine learning in healthcare. Annu. Rev. Biomed. Data Sci. 2021;4:123–144. doi: 10.1146/annurev-biodatasci-092820-114757.
- 4. Ray K. Black bioethics and how the failures of the profession paved the way for its existence. Bioethics Today. August 6, 2020. https://bioethicstoday.org/blog/black-bioethics-and-how-the-failures-of-the-profession-paved-the-way-for-its-existence/
- 5. Mithani Z., Cooper J., Boyd J.W. Race, power, and COVID-19: A call for advocacy within bioethics. Am. J. Bioeth. 2021;21:11–18. doi: 10.1080/15265161.2020.1851810.
- 6. Jones C.P. Invited commentary: “race,” racism, and the practice of epidemiology. Am. J. Epidemiol. 2001;154:299–306. doi: 10.1093/aje/154.4.299.
- 7. Obermeyer Z., Powers B., Vogeli C., Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366:447–453. doi: 10.1126/science.aax2342.
- 8. Siala H., Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Soc. Sci. Med. 2022;296. doi: 10.1016/j.socscimed.2022.114782.
- 9. Creary M.S. Bounded justice and the limits of health equity. J. Law Med. Ethics. 2021;49:241–256. doi: 10.1017/jme.2021.34.
- 10. Ferryman K., Mackintosh M., Ghassemi M. Considering biased data as informative artifacts in AI-assisted health care. N. Engl. J. Med. 2023;389:833–838. doi: 10.1056/NEJMra2214964.
- 11. Ford C.L., Griffith D.M., Bruce M.A., Gilbert K.L. Racism: Science & Tools for the Public Health Professional. American Public Health Association; 2019.
- 12. Ford C.L., Airhihenbuwa C.O. The public health critical race methodology: praxis for antiracism research. Soc. Sci. Med. 2010;71:1390–1398. doi: 10.1016/j.socscimed.2010.07.030.
- 13. Vyas D.A., Eisenstein L.G., Jones D.S. Hidden in plain sight — reconsidering the use of race correction in clinical algorithms. N. Engl. J. Med. 2020;383:874–882. doi: 10.1056/nejmms2004740.
- 14. Crenshaw K.W. Mapping the margins: Intersectionality, identity politics, and violence against women of color. In: Fineman M.A., Mykitiuk R., editors. The Public Nature of Private Violence. Routledge; 2013:93–118.
- 15. Platt J., Nong P., Merid B., Raj M., Cope E., Kardia S., Creary M. Applying anti-racist approaches to informatics: a new lens on traditional frames. J. Am. Med. Inform. Assoc. 2023;30:1747–1753. doi: 10.1093/jamia/ocad123.
