NPJ Digital Medicine. 2022 Sep 30;5:152. doi: 10.1038/s41746-022-00695-6

Addressing racial disparities in surgical care with machine learning

John Halamka 1, Mohamad Bydon 1, Paul Cerrato 1, Anjali Bhagra 1
PMCID: PMC9525720  PMID: 36180724

Abstract

There is ample evidence to demonstrate that discrimination against several population subgroups interferes with their ability to receive optimal surgical care. This bias can take many forms, including limited access to medical services, poor quality of care, and inadequate insurance coverage. While such inequalities will require numerous cultural, ethical, and sociological solutions, artificial intelligence-based algorithms may help address the problem by detecting bias in the data sets currently being used to make medical decisions. However, such AI-based solutions are only in early development. The purpose of this commentary is to serve as a call to action to encourage investigators and funding agencies to invest in the development of these digital tools.

Subject terms: Health sciences, Health policy


Racial disparities in surgical care are a well-documented reality across the United States. Black patients receiving cardiac surgery have 17% and 26% higher odds of mortality and major postoperative complications, respectively, compared to White patients1. In spine surgery, the risk of postoperative complications has been estimated to be as much as 61% higher for Black patients. Notably, these estimates were risk-adjusted for comorbidities, hospital characteristics, baseline patient status, and other factors; the unadjusted discrepancies are even wider. That Black patients receive a lower quality of care has been documented for over four decades2–4, but despite several efforts to address the problem5–7, the gap is far from closed8.

Access to health care and barriers

One of the main reasons for the inequality in surgical outcomes is unequal access to health care. This lack of access has three main components: decreased exposure to preventive practices, lower rates of health care utilization, and delayed presentation. Black patients, for instance, are less likely to receive routine cancer screening and tend to present later for the management of preventable or early-detectable cancers, such as cervical or colorectal cancer9,10. They are also less likely to receive hip and knee arthroplasty, lumbar surgery, carotid endarterectomy, and other procedures11.

Although timely surgical intervention often results in better clinical outcomes, pathologies that are allowed to progress are less likely to respond to management. Several of the aforementioned factors (insurance, health literacy, economic status) may result in Black patients presenting later in the natural course of their disease, restricting the benefit they can obtain from surgery. Among patients with stage I–IIIA lung cancer, for instance, Neroda et al. found that Black individuals were almost 70% more likely to receive delayed surgery, defined in that study as a time from diagnosis to surgery of more than 6 weeks. Similar findings have been reported in spine, benign brain tumor, and hip replacement surgery, with delayed presentation acting as a mediator of worse postoperative outcomes in Black patients12–14. Overall, hindered access to health care makes Black patients worse surgical candidates upon presentation.

It is no surprise that poor insurance coverage often contributes to the under-utilization of health care among Black individuals. Most studies supporting this observation have compared private insurance, Medicare and Medicaid coverage, and lack of any insurance, and have found the differences to be significant. Black individuals are more likely to lose health care coverage at some point in their lives, recording a proportion of uninsured person-years of 0.20, compared with 0.12 among White individuals. Private insurance is associated with easier appointment scheduling15, delivery of more patient-friendly care practices, such as minimally invasive and outpatient surgery16,17, and superior surgical outcomes when compared to government payors and a lack of insurance coverage18,19. However, comparing private-payer programs, government-issued programs, and lack of insurance illuminates only part of the story, as there is large heterogeneity among private insurance programs that can affect access to health care. More specifically, higher-deductible plans discourage patients from pursuing contact with a provider and are more prevalent among Black individuals20.

Health literacy, prior individual experiences, and cultural traits may also contribute to the health disparities between Black and White individuals. Ibrahim et al. surveyed patients with hip or knee osteoarthritis to assess their decision-making heuristics and expectations of care. They found that Black patients were more inclined towards complementary or self-administered therapeutic options and less likely to consider joint replacement surgery21. Research also suggests that less education contributes to this phenomenon22. In addition, Black patients are more likely to overestimate the length of hospitalization, procedure-related pain, and disability; overall, they are more skeptical about joint replacement surgery than White patients23.

Comparing delivered quality of care

Another potential component of the racial disparities in surgical care is the discrepancy in the quality of care delivered. In a study by Rangrass et al. utilizing a national claims database, hospital quality was shown to explain as much as 35% of the observed discrepancy in mortality after coronary artery bypass graft surgery between Black and White patients24. However, this conclusion is challenged by Silber et al., who utilized the same database to test the same hypothesis on general surgical procedures. The investigators of the latter study found that discrepancies in outcomes were eliminated after matching Black and White patients on preoperative status; hence, they suggested that racial disparities should be attributed to delayed access to care rather than to heterogeneity of care quality among providers and institutions25.

Addressing surgical disparities

Several federal programs and scientific community initiatives have been launched over the past two decades to address racial disparities in health care5–7. In 2011, the Department of Health and Human Services announced a multidimensional plan to address the racial gap in healthcare; this plan included policy modifications, funding redistribution, and rewards for the care of socially disadvantaged populations, among other measures. Subsequently, the Affordable Care Act provided poorer individuals with enhanced insurance options. Buchmueller et al. found that the Affordable Care Act lowered the uninsured rate among Black individuals by almost 35%. Nevertheless, these measures have not been as impactful as desired, and the landscape remains essentially unchanged. In a study using the National Inpatient Sample, Best et al.8 investigated the utilization of nine common procedures by race relative to each race's proportion in the population and found that the gap had narrowed in some cases and widened in others, but at no point reached equality between Black and White individuals. Racial disparities clearly remain an unresolved problem in society, and the high-level policies employed so far have not proven sufficient to eradicate them.

Can surgical bias yield to AI-based algorithms?

It is unrealistic to imagine that discrimination against various population subgroups can be resolved with artificial intelligence alone. The cultural, ethical, and sociological issues are far too complex to solve with digital tools, regardless of how sophisticated they may be. Nonetheless, AI and machine learning applied across a variety of technological touch points can improve the profession's ability to detect bias and, in turn, improve patients' access to surgical services and their outcomes.

To ensure that all patients have equal access to high-quality medical care, including surgical services, it is first necessary to determine whether patients of color, women, and those in lower socioeconomic groups are accurately represented in the data sets and algorithms used to determine the need for those services. As we have pointed out in a previous publication26, this has not always been the case. Obermeyer et al.'s27 analysis of a commercial database demonstrated that, although Black patients were considerably sicker than White patients based on signs and symptoms, the algorithm did not recognize their greater disease burden because it assigned risk scores based on total healthcare costs accrued. It is unrealistic to assume that such costs accurately measured patients' needs; the lower costs among Black patients may have been due to less access to care, which in turn resulted from distrust of the healthcare system and direct racial discrimination from providers28. Similar discrimination against women has been documented in medical imaging datasets used to train and test AI systems for computer-assisted diagnosis29. There is also evidence to suggest that some machine learning-enhanced algorithms that rely on electronic health record data under-represent patients in lower socioeconomic groups30.
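The kind of label bias Obermeyer et al. uncovered can be probed with a simple audit: at equal algorithmic risk scores, compare a direct measure of illness burden across racial groups. The Python sketch below is illustrative only; the column names (risk_score, n_chronic_conditions, race) and group labels are hypothetical assumptions, not the study's actual data or code.

```python
import pandas as pd

# Hypothetical audit in the spirit of Obermeyer et al.: at equal algorithmic
# risk scores, do Black and White patients carry the same illness burden?
# Column names (risk_score, n_chronic_conditions, race) are illustrative.
def audit_label_bias(df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    df = df.copy()
    # Bin patients by predicted risk percentile
    df["risk_bin"] = pd.qcut(df["risk_score"], q=n_bins, labels=False, duplicates="drop")
    # Within each risk bin, compare mean illness burden by race
    summary = (
        df.groupby(["risk_bin", "race"])["n_chronic_conditions"]
          .mean()
          .unstack("race")
    )
    # Assumes the race column contains "Black" and "White" labels. A systematic
    # gap (Black > White at the same score) suggests the training label (cost)
    # understates need for one group.
    summary["gap_black_minus_white"] = summary["Black"] - summary["White"]
    return summary

# Example usage with a hypothetical data set:
# report = audit_label_bias(pd.read_csv("risk_scores_with_outcomes.csv"))
# print(report)
```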

Commercially available AI bias detection tools that have been used to help identify discrimination include testing with concept activation vectors (TCAV), which Google uses to measure bias by race, gender, and location31, and Audit-AI, a Python library from Pymetrics that can detect discrimination by locating specific patterns in the training data26,32.
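To give a concrete sense of what such group-fairness tools check, the following minimal Python sketch computes adverse impact ratios (the "four-fifths rule") across demographic groups, the type of disparity Audit-AI is designed to flag. It is written with plain pandas rather than the Audit-AI API, and the column names are hypothetical.

```python
import pandas as pd

# Minimal sketch of an adverse-impact ("four-fifths rule") check of the kind
# performed by group-fairness tools such as Audit-AI. This does not use the
# Audit-AI API itself; the column names (group, recommended_for_surgery) are
# hypothetical.
def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "group",
                          outcome_col: str = "recommended_for_surgery") -> pd.Series:
    # Rate at which each group receives the favorable outcome
    rates = df.groupby(group_col)[outcome_col].mean()
    # Ratio of each group's rate to the highest-rate group; values below 0.8
    # are commonly flagged as potential adverse impact.
    return rates / rates.max()

# Example usage:
# ratios = adverse_impact_ratios(predictions_df)
# flagged = ratios[ratios < 0.8]
```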

Devising a comprehensive bias detection toolkit

While the aforementioned bias detection programs have merit, solving the problem of surgical bias will require a more comprehensive approach. That approach begins with a set of guidelines that set forth standards on how to conduct AI-related research and how to report it in the professional literature, including the Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence (SPIRIT-AI) extension, a set of guidelines designed to help researchers develop AI-related clinical trials33, and the Consolidated Standards of Reporting Trials-Artificial Intelligence (CONSORT-AI) extension34. Unfortunately, despite the recommendations from thought leaders regarding the importance of adhering to standards that would make algorithms more equitable, Lu et al. have found that these guidelines are often ignored35.

They looked at 15 model reporting guidelines and reviewed 12 deployed Epic models. They found that the median completion rate was 39% and stated: “…information on usefulness, reliability, transparency, and fairness was missing from at least half of documentation.”

Mayo Clinic is taking a more direct approach to algorithmic bias. Mayo Clinic Platform (MCP) has developed _Validate, a digital solution that helps measure model sensitivity, specificity, area under the curve (AUC), and bias, which in turn makes it possible to break down racial, gender, and socio-economic disparities in the delivery of care. Using the tool can lend credibility to models, accelerate adoption into clinical practice, and enable developers to more readily meet regulatory requirements for approval. It provides users with a series of descriptive statistics of model performance and data to demonstrate that the model was run against each demographic group.

To illustrate _Validate's performance, imagine that a developer wants to create a clinical solution that predicts whether a patient with signs and symptoms of appendicitis will need surgery or can be managed with antibiotics. Inputs fed into the algorithm might include all historical patient data, including demographics, prior diagnoses, a history of abdominal abnormalities, and a family history of the same. _Validate would provide testability that has been missing from many commercially available products. It enables health care stakeholders to test an AI model against an extensive data set and evaluate the reasonableness and usefulness of the result. In addition to its ability to evaluate and certify the quality and accuracy of an AI model, _Validate protects the intellectual property of the model and its data, using state-of-the-art de-identification protocols. With the assistance of Diagnostic Robotics, a validation services provider, _Validate analyzes the model's performance, generating a table that includes true negatives, false negatives, true positives, and false positives, from which sensitivity, specificity, AUC, and positive and negative predictive values can be derived. It can also perform a bias evaluation that takes into account race, ethnicity, age, obesity, behavioral health, genetic history, gender, and socioeconomic status markers.
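The following Python sketch illustrates the kind of subgroup report described above: confusion-matrix counts and the metrics derived from them, computed separately for each demographic stratum. It is not _Validate's implementation; the data frame columns (needs_surgery, predicted_surgery, risk_score) and the grouping column are assumptions made for illustration.

```python
import pandas as pd
from sklearn.metrics import confusion_matrix, roc_auc_score

# Illustrative sketch (not _Validate's actual implementation) of a subgroup
# performance report: confusion-matrix counts and derived metrics computed
# separately for each demographic stratum. Column names are hypothetical.
def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        tn, fp, fn, tp = confusion_matrix(sub["needs_surgery"],
                                          sub["predicted_surgery"],
                                          labels=[0, 1]).ravel()
        rows.append({
            group_col: group,
            "TP": tp, "FP": fp, "TN": tn, "FN": fn,
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            "PPV": tp / (tp + fp) if (tp + fp) else float("nan"),
            "NPV": tn / (tn + fn) if (tn + fn) else float("nan"),
            # AUC uses the model's continuous risk score, not the binary label
            "AUC": roc_auc_score(sub["needs_surgery"], sub["risk_score"])
                   if sub["needs_surgery"].nunique() > 1 else float("nan"),
        })
    return pd.DataFrame(rows)

# Example usage:
# report = subgroup_performance(test_predictions, group_col="race")
```

Comparing these rows side by side makes it immediately visible when a model that looks acceptable in aggregate performs poorly for a particular racial, gender, or socioeconomic subgroup.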

Johns Hopkins University investigators are also taking measures to address the bias problem. Wang et al. have developed an 11-question checklist to help assess the validation of predictive models36. Among the issues that the checklist asks algorithm developers to consider (a sketch of one such check appears after the list):

  • “Is the prediction target an appropriate proxy for patient health care outcomes or needs?”

  • “Are there any modeling choices made that could lead to bias? For example, are there any dependencies between inputs and outcomes that could lead to discriminatory performance across groups?”

  • “Was the data used to train the model representative of the population in the deployment environment?”

  • “Do validation studies report and address performance differences between groups?”
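As a minimal illustration of the representativeness question in the checklist, a developer might compare the demographic composition of the training cohort with the expected composition of the deployment population. The proportions, column names, and group labels in the Python sketch below are hypothetical.

```python
import pandas as pd

# Hypothetical check of training-set representativeness: compare the demographic
# mix of the training cohort with the expected mix in the deployment population.
def representativeness_report(train_df: pd.DataFrame,
                              deployment_mix: dict,
                              group_col: str = "race") -> pd.DataFrame:
    train_mix = train_df[group_col].value_counts(normalize=True)
    report = pd.DataFrame({
        "train_proportion": train_mix,
        "deployment_proportion": pd.Series(deployment_mix),
    }).fillna(0.0)
    # Groups under-represented in training relative to deployment stand out here
    report["ratio_train_to_deployment"] = (
        report["train_proportion"] / report["deployment_proportion"]
    )
    return report.sort_values("ratio_train_to_deployment")

# Example usage with illustrative deployment proportions:
# print(representativeness_report(train_df, {"Black": 0.13, "White": 0.60,
#                                            "Hispanic": 0.19, "Other": 0.08}))
```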

Innovation depends upon a perfect storm of technology, policy, and culture. Machine learning techniques, including deep learning systems, are mathematically robust in 2022 and commercially supported by all major cloud providers, so it is fair to say that technology is not the rate-limiting step. Policies providing guardrails and guidelines across the machine learning life cycle to reduce bias and to monitor ongoing fairness and usefulness, on the other hand, are still a work in progress; several of us have assembled a multi-stakeholder coalition (coalitionforhealthai.org) to provide the foundational implementation guides that may evolve into policy. Culture likewise will require additional focus. We must set a cultural expectation that machine learning in healthcare should be deployed in production only when equity is a design principle. Finally, we believe that machine learning is only one tool in our quiver for reducing racial disparities in surgery, but it can be rapidly deployed, locally optimized, and monitored for impact over time.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Supplementary information

Reporting Summary (828.7KB, pdf)

Acknowledgements

We would like to thank Kira Radinsky, Ph.D., the CEO of Diagnostic Robotics, for her insights on bias detection software.

Author contributions

J.H., M.B., P.C., and A.B. collected the data; conceived, designed, and performed the analysis; reviewed the literature; and wrote the paper.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

The online version contains supplementary material available at 10.1038/s41746-022-00695-6.

References

  • 1.Mehta RH, et al. Association of hospital and physician characteristics and care processes with racial disparities in procedural outcomes among contemporary patients undergoing coronary artery bypass grafting surgery. Circulation. 2016;133:124–130. doi: 10.1161/CIRCULATIONAHA.115.015957. [DOI] [PubMed] [Google Scholar]
  • 2.Carlisle DM, Leake BD, Shapiro MF. Racial and ethnic disparities in the use of cardiovascular procedures: associations with type of health insurance. Am. J. Public Health. 1997;87:263–267. doi: 10.2105/AJPH.87.2.263. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Lucas FL, Stukel TA, Morris AM, Siewers AE, Birkmeyer JD. Race and surgical mortality in the United States. Ann. Surg. 2006;243:281–286. doi: 10.1097/01.sla.0000197560.92456.32. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Bombardier C, Fuchs VR, Lillard LA, Warner KE. Socioeconomic factors affecting the utilization of surgical operations. N. Engl. J. Med. 1977;297:699–705. doi: 10.1056/NEJM197709292971305. [DOI] [PubMed] [Google Scholar]
  • 5.Movement Is Life Caucus. Movement Is Life: a Catalyst for Change: Addressing Musculoskeletal Health Disparities (Movement Is Life Caucus, accessed May 2022); https://www.movementislifecaucus.com/wp-content/uploads/Movement-Is-Life-A-Catalyst-For-Change-Proceedings-Report.pdf (2011).
  • 6.US Department of Health and Human Services. HHS Action Plan to Reduce Racial and Ethnic Disparities: a Nation Free of Disparities in Health and Health Care (US Department of Health and Human Services, accessed May 2022); https://www.minorityhealth.hhs.gov/assets/PDF/Update_HHS_Disparities_Dept-FY2020.pdf (2011).
  • 7.O’Connor MI, Lavernia CJ, Nelson CL. AAOS/ORS/ABJS Musculoskeletal Healthcare Disparities Research Symposium: Editorial comment: a call to arms: eliminating musculoskeletal healthcare disparities. Clin. Orthop. Relat. Res. 2011;469:1805–1808. doi: 10.1007/s11999-011-1884-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Best MJ, McFarland EG, Thakkar SC, Srikumaran U. Racial disparities in the use of surgical procedures in the US. JAMA Surg. 2021;156:274–281. doi: 10.1001/jamasurg.2020.6257. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Johnson NL, Head KJ, Scott SF, Zimet GD. Persistent disparities in cervical cancer screening uptake: knowledge and sociodemographic determinants of papanicolaou and human papillomavirus testing among women in the United States. Public Health Rep. 2020;135:483–491. doi: 10.1177/0033354920925094. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Burgess DJ, et al. Presence and correlates of racial disparities in adherence to colorectal cancer screening guidelines. J. Gen. Intern. Med. 2011;26:251–258. doi: 10.1007/s11606-010-1575-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Jha AK, Fisher ES, Li Z, Orav EJ, Epstein AM. Racial trends in the use of major procedures among the elderly. N. Engl. J. Med. 2005;353:683–691. doi: 10.1056/NEJMsa050672. [DOI] [PubMed] [Google Scholar]
  • 12.Elsamadicy AA, et al. Influence of racial disparities on patient-reported satisfaction and short- and long-term perception of health status after elective lumbar spine surgery. J. Neurosurg.: Spine SPI. 2018;29:40–45. doi: 10.3171/2017.12.SPINE171079. [DOI] [PubMed] [Google Scholar]
  • 13.Anzalone CL, Glasgow AE, Van Gompel JJ, Carlson ML. Racial differences in disease presentation and management of intracranial meningioma. J. Neurolog. Surg. Part B Skull Base. 2019;80:555–561. doi: 10.1055/s-0038-1676788. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Nayar SK, et al. Racial disparity in time to surgery and complications for hip fracture patients. Clin. Orthop. Surg. 2020;12:430–434. doi: 10.4055/cios20019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Hsiang WR, et al. Medicaid patients have greater difficulty scheduling health care appointments compared with private insurance patients: a meta-analysis. Inquiry. 2019;56:46958019838118. doi: 10.1177/0046958019838118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Mooney J, et al. Minimally invasive versus open lumbar spinal fusion: a matched study investigating patient-reported and surgical outcomes. J. Neurosurg. Spine. 2021;36:1–14. doi: 10.3171/2021.10.SPINE211128. [DOI] [PubMed] [Google Scholar]
  • 17.Mooney, J. et al. Outpatient versus inpatient lumbar decompression surgery: a matched noninferiority study investigating clinical and patient-reported outcomes. J. Neurosurg. Spine 1–13. 10.3171/2022.3.SPINE211558 (2022). [DOI] [PubMed]
  • 18.Curry WT, Jr, Carter BS, Barker FG., 2nd Racial, ethnic, and socioeconomic disparities in patient outcomes after craniotomy for tumor in adult patients in the United States, 1988–2004. Neurosurgery. 2010;66:427–437. doi: 10.1227/01.NEU.0000365265.10141.8E. [DOI] [PubMed] [Google Scholar]
  • 19.LaPar DJ, et al. Primary payer status affects mortality for major surgical operations. Ann. Surg. 2010;252:544–550. doi: 10.1097/SLA.0b013e3181e8fd75. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Cole MB, Ellison JE, Trivedi AN. Association between high-deductible health plans and disparities in access to care among cancer survivors. JAMA Netw. Open. 2020;3:e208965–e208965. doi: 10.1001/jamanetworkopen.2020.8965. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Ibrahim SA, Siminoff LA, Burant CJ, Kwoh CK. Variation in perceptions of treatment and self-care practices in elderly with osteoarthritis: a comparison between African American and white patients. Arthritis Rheum. 2001;45:340–345. doi: 10.1002/1529-0131(200108)45:4<340::AID-ART346>3.0.CO;2-5. [DOI] [PubMed] [Google Scholar]
  • 22.Chaudhry SI, et al. Racial disparities in health literacy and access to care among patients with heart failure. J. Card. Fail. 2011;17:122–127. doi: 10.1016/j.cardfail.2010.09.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Ibrahim SA, Siminoff LA, Burant CJ, Kwoh CK. Differences in expectations of outcome mediate African American/white patient differences in “willingness” to consider joint replacement. Arthritis Rheum. 2002;46:2429–2435. doi: 10.1002/art.10494. [DOI] [PubMed] [Google Scholar]
  • 24.Rangrass G, Ghaferi AA, Dimick JB. Explaining racial disparities in outcomes after cardiac surgery: the role of hospital quality. JAMA Surg. 2014;149:223–227. doi: 10.1001/jamasurg.2013.4041. [DOI] [PubMed] [Google Scholar]
  • 25.Silber JH, et al. Examining causes of racial disparities in general surgical mortality: hospital quality versus patient risk. Med. Care. 2015;53:619–629. doi: 10.1097/MLR.0000000000000377. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Cerrato P, Halamka J, Pencina M. A proposal for developing a platform that evaluates algorithmic equity and accuracy. BMJ Health Care Inf. 2022;29:e100423. doi: 10.1136/bmjhci-2021-100423. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Obermeyer Z, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366:447–53. doi: 10.1126/science.aax2342. [DOI] [PubMed] [Google Scholar]
  • 28.Ledford H. Millions of black people affected by racial bias in health-care algorithms. Nature. 2019;574:608–609. doi: 10.1038/d41586-019-03228-6. [DOI] [PubMed] [Google Scholar]
  • 29.Larrazabal AJ, et al. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc. Natl Acad. Sci. USA. 2020;117:12592–12594. doi: 10.1073/pnas.1919012117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Gianfrancesco MA, et al. Potential biases in machine learning algorithms using electronic health record data. JAMA Intern. Med. 2018;178:1544–1547. doi: 10.1001/jamainternmed.2018.3763. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Kim, B., Wattenberg, M. & Gilmer, J. Interpretability beyond feature attribution: quantitative testing with concept activation vectors (TCAV). In Proc. 35th International Conference on Machine Learning, (ed. Lawrence, N.) (Stockholm, Sweden, PMLR 80, MLR Press, 2018).
  • 32.Pymetrics. audit-AI (GitHub repository, accessed May 2022); https://github.com/pymetrics/audit-ai (2020).
  • 33.Cruz Rivera S, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat. Med. 2020;26:1351–1363. doi: 10.1038/s41591-020-1037-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Liu X, et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat. Med. 2020;26:1364–1374. doi: 10.1038/s41591-020-1034-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Lu J, et al. Assessment of adherence to reporting guidelines by commonly used clinical prediction models from a single vendor: a systematic review. JAMA Netw. Open. 2022;5:e2227779. doi: 10.1001/jamanetworkopen.2022.27779. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Wang HE, et al. A bias evaluation checklist for predictive models and its pilot application for 30-day hospital readmission models. J. Am. Med. Inform. Assoc. 2022;29:1323–1333. doi: 10.1093/jamia/ocac065. [DOI] [PMC free article] [PubMed] [Google Scholar]
