Clinical and Translational Science. 2025 Sep 25;18(10):e70364. doi: 10.1111/cts.70364

Racing Against the Algorithm: Leveraging Inclusive AI as an Antiracist Tool for Brain Health

Victor Ekuta
PMCID: PMC12461116  PMID: 40994240

ABSTRACT

Artificial intelligence (AI) is transforming medicine, including neurology and mental health. Yet without equity‐centered design, AI risks reinforcing systemic racism. This article explores how algorithmic bias and phenotypic exclusion disproportionately affect marginalized communities in brain health. Drawing on lived experience and scientific evidence, the essay outlines five design principles—centered on inclusion, transparency, and accountability—to ensure AI promotes equity. By reimagining AI as a tool for justice, we can reshape translational science to serve all populations.

Keywords: artificial intelligence, brain health, clinical algorithms, health equity, inclusive design, machine learning, racial disparities, translational science


Study Highlights.

  • What is the current knowledge on the topic?
    • Artificial intelligence (AI) is rapidly being integrated into medicine, including neurology and mental health.
    • Despite its promise, AI systems often inherit and amplify systemic inequities through biased data, exclusionary design, and structural disparities.
    • Racial and phenotypic biases in neuroscience tools (e.g., EEG, fNIRS, neuroimaging hardware) are increasingly recognized as barriers to equity in brain health.
  • What question did this study address?
    • This article asked: how can AI be redesigned as an antiracist tool in brain health rather than a mechanism that perpetuates disparities?
    • Specifically, it examined algorithmic bias, phenotypic exclusion, and systemic neglect in neurotechnology, and proposed equity‐centered strategies for inclusive AI design.
  • What does this study add to our knowledge?
    • It synthesizes lived experience, empirical evidence, and translational science to show how racial inequities manifest across the AI lifecycle from data collection to deployment.
    • It identifies five design principles (diverse teams, inclusive datasets, community feedback loops, bias auditing, and patient‐centered inputs) that can mitigate bias and enhance fairness in AI.
    • It highlights practical innovations (e.g., redesigned EEG caps, debiasing algorithms like D3M, patient‐reported outcomes) as proof‐of‐concept strategies for inclusive AI.
  • How might this change clinical pharmacology or translational science?
    • By centering inclusion and accountability, AI can evolve into a tool that advances equity rather than replicates structural racism.
    • Translational science can adopt equity‐centered frameworks not only to improve fairness in brain health research but also to guide broader biomedical innovation.
    • These principles may help reshape clinical trial design, biomarker validation, and implementation of AI‐driven diagnostics to ensure they serve diverse populations equitably.

1. Introduction

Zeros and ones, if we are not careful, could deepen the divides between haves and have‐nots, between the deserving and the undeserving—rusty value judgments embedded in shiny new systems.

Ruha Benjamin, Race After Technology

Artificial intelligence (AI) promises to revolutionize medicine, including neurology and mental health. Yet without thoughtful design, AI can encode and automate the same systemic biases it aims to overcome [1]. As AI and machine learning become central tools in clinical decision‐making, diagnosis, and resource allocation, a critical question arises: Will these technologies narrow the gap in brain health outcomes—or widen it?

Technology is often imagined as neutral, objective, and fair. But as we've seen with medical devices like pulse oximeters, even the most seemingly impartial tools can reproduce racial bias [2]. In 2021, I participated in the MIT linQ Catalyst Healthcare Innovation Fellowship and investigated pulse oximeter inaccuracy among patients with darker skin tones—a “systemic racism in miniature” that has led to delayed diagnoses, undertreatment, and inequitable care [2]. These tools, which assess oxygen saturation, were never designed with diverse users in mind.

As a neurologist, I know just how essential oxygen is for brain health—and I can't help but wonder: if something as fundamental as the pulse oximeter can encode racial bias, what does that mean for the more complex technologies we increasingly rely on in neurology and mental health?

The question is not whether technology has the capacity to help—but whether we will design it to do so. As we race forward with AI‐powered tools, especially in the brain health space, we must confront a central truth: technology is not immune to racism because racism is embedded in the systems that create it [1]. The solution lies in designing inclusive AI that doesn't just reflect our biases but actively counters them.

2. Algorithms as Amplifiers: How AI Reinforces Health Inequities

AI's power stems from its ability to learn from data—but what happens when that data is incomplete, biased, or reflective of an unjust world?

One widely cited case involved an algorithm that used healthcare spending as a proxy for need. Because Black patients historically had less access to care, they incurred fewer costs, leading the algorithm to falsely conclude they were healthier than white patients [3]. Similar race‐based “corrections” have been baked into algorithms used in nephrology, cardiology, and pulmonary medicine, further embedding systemic bias into clinical care [4]. For instance, in nephrology, commonly used equations for estimated glomerular filtration rate (eGFR), a measure of kidney function, include race‐based adjustments that assign higher eGFR values (implying better kidney function) to Black patients, potentially delaying specialist referral or eligibility for kidney transplantation [4]. In cardiology, the American Heart Association Heart Failure Risk Score assigns lower risk estimates to Black patients, which may inadvertently raise the threshold for intervention and reduce access to necessary clinical resources [4].
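To make the mechanics concrete, the short sketch below applies the 2009 CKD‐EPI creatinine equation, one of the race‐adjusted eGFR formulas at issue in [4], to a hypothetical patient with and without its Black race coefficient. The patient values and the referral threshold shown are illustrative assumptions, not data from the cited studies.

```python
# Illustrative only: the 2009 CKD-EPI creatinine equation with and without its
# race coefficient, for a hypothetical patient. It shows how the 1.159 multiplier
# applied to Black patients can move an eGFR estimate across a clinical cutoff
# (here, eGFR < 30 mL/min/1.73 m^2, commonly associated with nephrology referral).

def ckd_epi_2009_egfr(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """2009 CKD-EPI creatinine eGFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    return (
        141
        * min(ratio, 1.0) ** alpha
        * max(ratio, 1.0) ** -1.209
        * 0.993 ** age
        * (1.018 if female else 1.0)
        * (1.159 if black else 1.0)  # race coefficient critiqued in [4]
    )

# Hypothetical 60-year-old man with serum creatinine 2.4 mg/dL.
without_race = ckd_epi_2009_egfr(2.4, age=60, female=False, black=False)  # ~28
with_race = ckd_epi_2009_egfr(2.4, age=60, female=False, black=True)      # ~33

# The identical laboratory value lands on opposite sides of the eGFR-30 cutoff:
# roughly 28 (CKD stage 4, referral triggered) versus 33 (stage 3b, referral deferred).
print(f"eGFR without race coefficient: {without_race:.0f}")
print(f"eGFR with race coefficient:    {with_race:.0f}")
```

Newer race‐free refits of this equation (the 2021 CKD‐EPI formula) drop the coefficient entirely, an example of correcting a biased design rather than the patients it mislabels.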

In neuroscience and mental health, these patterns are alarmingly present. AI models trained on neuroimaging or psychophysiological data may perform poorly for Black individuals due to systemic exclusion and phenotypic biases. As Webb et al. [5] highlighted in Nature Neuroscience, these biases include:

  • Hair type bias: EEG and fNIRS technologies often fail to register accurate signals through coarse, curly hair, leading to disproportionate exclusion of Black participants from datasets [5, 6].

  • Skin pigmentation bias: Optical devices like fNIRS and even wearable technologies such as fitness trackers or smartwatches perform less accurately in individuals with darker skin, due to melanin's impact on light absorption [5, 6].

  • Lived experience bias: Electrodermal sensors may misinterpret the neural effects of chronic racism—labeling Black participants as “non‐responders” due to altered physiological baselines [5].

These phenomena are not isolated technical glitches—they reflect deeper patterns of phenotypic exclusion, normative bias, and structural neglect embedded in the design and deployment of neuro AI. Table 1 offers a framework for understanding how these biases operate across domains.

TABLE 1.

Manifestations of racial bias in neuro AI.

| Step in AI lifecycle | Type/subtype of bias | Example | Impact on equity and brain health |
|---|---|---|---|
| Data collection | Phenotypic bias: hair‐related barriers | EEG caps, MRI head coils, TMS, tDCS, and fNIRS optodes restrict Afro‐textured hairstyles (e.g., braids, locs) and fail to register accurate signals [5, 6]; styling products or accessories (e.g., beads, clips, gels) commonly used in Afro‐textured hairstyles can distort MRI signal [6] | Exclusion of participants with Afro‐textured hair from studies; loss of usable MRI/EEG/fNIRS/TMS/tDCS data, reducing representativeness |
| Data collection | Phenotypic bias: skin pigmentation | fNIRS, cerebral oximeters, and pulse oximeters perform poorly in patients with darker skin due to melanin's impact on light absorption [2, 5, 6] | Misinterpretation of brain signals or delayed recognition of low cerebral or tissue oxygenation in dark‐skinned patients [2] |
| Data collection | Electrophysiologic bias: lived experience | Electrodermal sensors misclassify altered physiological baselines from racism‐related stress as “non‐responsive” [6] | Discarding of valid data from Black participants; reinforces exclusion from AI model training |
| Algorithm development | Training data bias | Alzheimer's prediction models trained on non‐diverse datasets underperform in Black populations [7] | Reduced diagnostic accuracy in underrepresented groups |
| Algorithm development | Normative data bias: behavioral data | Resting‐state fMRI is less predictive of behavior in Black participants [6]; suicide prediction models based on health records data accurately predict suicide risk in white but not Black patients [6] | Brain‐behavior associations captured by models represent the dominant or most represented group (i.e., White) rather than general relationships across populations |
| Algorithm development/evaluation | Normative data bias: neurocognitive testing | Older Black Americans perform more poorly on a wide range of neuropsychological tests (e.g., executive function, visuospatial ability) despite intact cognition [8] | AI inherits flawed thresholds, leading to false impairment labels |
| Algorithm development | Benchmarking bias: biomarkers | Tau levels vary by race (studies have found lower levels of phosphorylated tau 181 and total tau in Black Americans compared with White Americans), yet models use uniform thresholds [8] | Misclassification of dementia risk in Black patients |
| Deployment | Structural access disparity | Black Americans receive disproportionately fewer deep brain stimulation (DBS) procedures than their non‐African American counterparts (per national trend data) | Advanced technologies such as AI may be underutilized or inaccessible for minoritized individuals, reinforcing disparities in neurological care |

Note: This table outlines key examples of how racial bias appears throughout the AI development lifecycle, from data collection and algorithm development to evaluation and, ultimately, deployment. Bracketed citations reference the key studies supporting these claims.

These technical failings aren't merely bugs—they are reflections of a society that has not prioritized inclusive design. Left unchecked, AI models trained on these flawed inputs risk embedding inequality into the core of brain health algorithms and technology.

3. Accountability in the Age of AI

We often think of technology as neutral—an objective artifact, free from the flaws of human prejudice. Who then is responsible when an algorithm discriminates? Deborah Raji, Mozilla Fellow in Trustworthy AI, put it best:

What is the difference between overpolicing in minority neighborhoods and the bias of the algorithm that sent officers there? What is the difference between a segregated school system and a discriminatory grading algorithm? Between a doctor who doesn't listen and an algorithm that denies you a hospital bed? There is no systematic racism separate from our algorithmic contributions, from the hidden network of algorithmic deployments that regularly collapse on those who are already most vulnerable. [1]

Therefore, accountability begins with rejecting the myth of algorithmic neutrality. We must recognize that AI is not created in a vacuum. Its design reflects the assumptions, priorities, and blind spots of its developers—and the society in which it operates. If we want AI to serve as a tool for health equity rather than oppression, we must embed justice into every layer of its architecture.

4. Equity‐Centered AI: Design Principles for Inclusive Brain Health Technology

To turn AI into an antiracist tool for brain health, we need equity‐centered design. This means building systems that don't just avoid harm but actively dismantle structural barriers (Table 2). Key strategies include:

  1. Assemble Diverse Development Teams: Diversity in development leads to diversity in design. Building inclusive AI begins with inclusive teams. Teams must include people of color, patients, advocates, and individuals with lived experience of marginalization to catch blind spots and bias early.

  2. Invest in Inclusive, Representative Datasets: AI is only as fair as the data it learns from. Yet many neuroimaging and psychophysiological datasets underrepresent Black participants. We need targeted recruitment, culturally tailored protocols, and a rethinking of inclusion criteria to ensure AI learns from everyone—not just the privileged few.

  3. Build Feedback Loops with Affected Communities: Designers must engage communities historically excluded from technological innovation. These communities must become co‐creators, not just subjects of innovation. Co‐design sessions, community‐based participatory research, and open channels for critique can ensure AI reflects the needs and values of diverse populations. This is not just good ethics—it is good design.

  4. Audit AI for Bias and Disparate Impact: AI models should be continuously stress‐tested across racial and socioeconomic groups to uncover potential harms. Where proprietary models make this difficult, we should develop culturally competent auditing algorithms that can evaluate disparate impact—even with black‐box systems—potentially detecting disparities before they scale. One promising example is Data Debiasing with Datamodels (D3M), a dataset reweighting technique developed by MIT researchers [9]. Unlike traditional methods that adjust weights based solely on group representation, D3M identifies individual training examples that most negatively affect group‐level performance or fairness [9]. These high‐impact points are then down‐weighted or removed, enabling the model to generalize more equitably—particularly for underrepresented groups. D3M can be combined with other fairness‐enhancing strategies, including preprocessing adjustments, fairness‐aware optimization, and post‐processing corrections, to form a multilayered approach to debiasing. Importantly, such methods can also be applied retroactively to retrain existing models to meet predefined equity benchmarks. To ensure effectiveness, comparative fairness evaluation frameworks should be used to measure and validate reductions in bias prior to real‐world deployment. A minimal illustrative sketch of this data‐selection approach appears after this list.

  5. Revalue Inputs: Learn from Patients, Not Just Providers: Traditional AI systems often mirror the implicit biases of their training sources—especially when learning from provider‐driven judgments. One promising study trained an AI to learn from Black patients' self‐reported pain rather than physician assessments [10]. The result? A dramatic reduction in racial disparities in pain prediction. This approach flips the power dynamic by centering patient voices instead of physician bias—enhancing model performance while advancing justice. A simple sketch of a disaggregated fairness audit, which could be used to compare provider‐labeled and patient‐labeled models, also follows this list.
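As a minimal illustration of the data‐selection idea behind D3M [9], the sketch below uses a toy logistic‐regression pipeline: it scores each training example with a crude gradient‐alignment proxy for how much it appears to hurt the worst‐performing group on held‐out, group‐labeled data, drops the most harmful examples, and retrains. The published method relies on datamodel‐based attributions rather than this proxy, and the function names, data splits, and drop fraction here are hypothetical.

```python
# A minimal sketch (not the published implementation) of data-selection debiasing
# in the spirit of D3M [9]: estimate which training examples most hurt the
# worst-performing group, drop them, and retrain. Inputs are NumPy arrays; the
# gradient-alignment score is a crude stand-in for datamodel-based attributions.
import numpy as np
from sklearn.linear_model import LogisticRegression


def per_example_gradients(model, X, y):
    """Per-example gradient of the logistic loss with respect to the coefficients."""
    p = model.predict_proba(X)[:, 1]
    return (p - y)[:, None] * X  # shape: (n_examples, n_features)


def debias_by_data_selection(X_tr, y_tr, X_val, y_val, g_val, drop_frac=0.05):
    """Drop training examples estimated to most harm the worst group, then retrain."""
    base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # 1. Identify the worst-performing group on a held-out, group-labeled set.
    groups = np.unique(g_val)
    accuracies = [base.score(X_val[g_val == g], y_val[g_val == g]) for g in groups]
    worst = groups[int(np.argmin(accuracies))]

    # 2. Score each training example: negative alignment between its gradient and the
    #    worst group's average validation gradient suggests that training on it pushes
    #    the model in a direction that increases that group's loss.
    val_grad = per_example_gradients(
        base, X_val[g_val == worst], y_val[g_val == worst]
    ).mean(axis=0)
    harm = -(per_example_gradients(base, X_tr, y_tr) @ val_grad)

    # 3. Remove the highest-harm examples and retrain on the remainder.
    n_drop = int(drop_frac * len(y_tr))
    keep = np.argsort(harm)[: len(y_tr) - n_drop]
    debiased = LogisticRegression(max_iter=1000).fit(X_tr[keep], y_tr[keep])
    return base, debiased, worst
```

Comparing worst‐group accuracy before and after such a retraining step is one simple instance of the comparative fairness evaluation described above.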
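The disparate‐impact audits in principle 4 and the relabeling strategy in principle 5 both depend on disaggregated evaluation. The helper below is a hypothetical sketch, not an established auditing framework: it reports each group's false‐negative rate and the largest between‐group gap, and could, for instance, compare a model trained on clinician‐assigned labels with one retrained on patient‐reported outcomes before deployment.

```python
# Hypothetical audit helper: disaggregate a binary classifier's errors by group
# and report the largest between-group gap in false-negative rate. Names and
# variables are illustrative; this sketches the kind of comparative fairness
# check suggested above, not a reference implementation.
import numpy as np


def disparate_impact_report(y_true, y_pred, group):
    """Return per-group false-negative rates and the maximum between-group gap."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    fnr = {}
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)      # true positives in this group
        if positives.sum() == 0:
            continue                                   # no positives to evaluate
        fnr[g] = float(np.mean(y_pred[positives] == 0))  # share of missed positives
    gap = max(fnr.values()) - min(fnr.values()) if len(fnr) > 1 else 0.0
    return fnr, gap


# Example usage (predictions are placeholders): audit a clinician-label model and a
# patient-report model side by side and favor the one with the smaller FNR gap.
# fnr_a, gap_a = disparate_impact_report(y_test, clinician_model_preds, race)
# fnr_b, gap_b = disparate_impact_report(y_test, patient_report_model_preds, race)
```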

TABLE 2.

Equity‐Centered Strategies for Inclusive AI: This table outlines key strategies (both proposed and already implemented) for counteracting racial bias and promoting inclusivity, accuracy, and fairness in AI models and technologies that influence brain health.

| Strategy | Action | Example | Intended impact |
|---|---|---|---|
| Assemble Diverse Development Teams | Recruit and retain researchers, patients, developers, and testers with lived experience from marginalized communities | Implemented: diverse neuroengineering teams redesigned EEG caps to accommodate coarse hair types [5, 6] | Detect design flaws early, enhance data quality, and improve inclusivity in device development |
| Invest in Inclusive, Representative Datasets | Design culturally sensitive recruitment protocols and actively include underrepresented groups | Implemented: the Alzheimer's Disease Neuroimaging Initiative (ADNI) revised recruitment protocols to better include underrepresented participants in its Alzheimer's dataset | Enhance generalizability and reduce bias in training data |
| Build Feedback Loops with Affected Communities | Engage marginalized populations through co‐design, participatory research, and qualitative input | Proposed: (1) adapt Stanford's Our Voice citizen science and participatory research model to gather community‐driven data on cognitive and mental health concerns; (2) partner with Black and Hispanic barbershops and hair stylists to co‐design neuroimaging hardware that accommodates Afro‐textured hair | Ensure tools reflect lived experience and improve community trust |
| Audit AI for Bias and Disparate Impact | Apply fairness‐aware auditing tools to stress‐test model outputs and retrain with equity constraints | Proposed: apply MIT's Data Debiasing with Datamodels (D3M) method to audit and improve machine learning performance for minority groups in already deployed models | Improve subgroup performance and ensure responsible deployment |
| Revalue Inputs: Learn from Patients, Not Just Providers | Prioritize patient‐reported outcomes over clinical assumptions to reduce labeling bias | Proposed: retrain pain prediction algorithms on self‐reported data from Black patients | Reduce provider‐driven diagnostic bias and improve validity |

5. Conclusion: A Race We Must Win

AI is not inherently liberatory or oppressive—it is a mirror. It reflects who we are, what we value, and how we think. If we build it without intention, it will replicate the inequalities we already live with. But if we build it with care, with equity at the center, it can become a transformative tool for justice.

In brain health—where disparities in diagnosis, treatment, and outcomes are well documented—we cannot afford to wait. We must design AI systems that work for all brains, not just some.

The race against the machine is not about outpacing technology. It is about shaping it before it shapes us. If we succeed, the same algorithms once feared as tools of oppression may become our most powerful allies in building a more just, inclusive future for brain health.

Conflicts of Interest

The author declares no conflicts of interest.

Acknowledgments

AI‐assisted tools (i.e., Grammarly, Microsoft Word) were used to support editing and formatting during manuscript preparation. I assume full responsibility for the content.

Glossary of Bias Types

Phenotypic bias: Bias arising from physical traits such as hair texture or skin pigmentation that can interfere with data acquisition in neurotechnology.

Normative bias: The assumption that majority group data represents the norm, potentially skewing AI performance for underrepresented groups.

Benchmarking bias: Using fixed clinical or biomarker thresholds that do not account for population‐level differences, risking misclassification or unequal treatment.

Structural access disparity: Systemic barriers that prevent equitable access to healthcare technologies—such as limited deployment of AI tools in minority‐serving institutions.

Ekuta V., “Racing Against the Algorithm: Leveraging Inclusive AI as an Antiracist Tool for Brain Health,” Clinical and Translational Science 18, no. 10 (2025): e70364, 10.1111/cts.70364.

Funding: The author received no specific funding for this work.

Previous Presentation: This work expands on themes initially explored in a blog post published by the Boston Congress of Public Health (“Race Against the Machine: Leveraging Inclusive Technology as an Antiracist Tool for Brain Health”).

References

  • 1. Raji D., “How Our Data Encodes Systematic Racism,” MIT Technology Review, December 10, 2020, https://www.technologyreview.com/2020/12/10/1013617/racism‐data‐science‐artificial‐intelligence‐ai‐opinion/.
  • 2. Fawzy A., Wu T. D., Wang K., et al., “Racial and Ethnic Discrepancy in Pulse Oximetry and Delayed Identification of Treatment Eligibility Among Patients With COVID‐19,” JAMA Internal Medicine 182, no. 7 (2022): 730–738, 10.1001/jamainternmed.2022.1906.
  • 3. Obermeyer Z., Powers B., Vogeli C., and Mullainathan S., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” Science 366 (2019): 447–453.
  • 4. Vyas D. A., Eisenstein L. G., and Jones D. S., “Hidden in Plain Sight—Reconsidering the Use of Race Correction in Clinical Algorithms,” New England Journal of Medicine 383, no. 9 (2020): 874–882.
  • 5. Webb E. K., Etter J. A., and Kwasa J. A., “Addressing Racial and Phenotypic Bias in Human Neuroscience Methods,” Nature Neuroscience 25 (2022): 410–414.
  • 6. Ricard J. A., Parker T. C., Dhamala E., Kwasa J., Allsop A., and Holmes A. J., “Confronting Racially Exclusionary Practices in the Acquisition and Analyses of Neuroimaging Data,” Nature Neuroscience 26 (2023): 4–11, 10.1038/s41593-022-01218-y.
  • 7. Yuan C., Linn K. A., and Hubbard R. A., “Algorithmic Fairness of Machine Learning Models for Alzheimer Disease Progression,” JAMA Network Open 6, no. 11 (2023): e234220.
  • 8. Barnes L. L., “Alzheimer Disease in African American Individuals: Increased Incidence or Not Enough Data?,” Nature Reviews Neurology 18 (2022): 56–62, 10.1038/s41582-021-00589-3.
  • 9. Jain S., Hamidieh K., Georgiev K., Ilyas A., Ghassemi M., and Mądry A., “Improving Subgroup Robustness via Data Selection,” Advances in Neural Information Processing Systems 37 (2024): 94490–94511.
  • 10. “AI Could Make Health Care Fairer—by Helping Us Believe What Patients Say,” MIT Technology Review, January 22, 2021, https://www.technologyreview.com/2021/01/22/1016577/ai‐fairer‐healthcare‐patient‐outcomes/.
