2023 Feb 7;2:e42936. doi: 10.2196/42936

Table 1.

Issues related to AI^a and health equity that were abstracted from the literature.

Category and issue: Description
Background Context

Biased or nonrepresentative developers: Development team composition may be biased or poorly representative of the population, leading to mismatched priorities and blind spots.
Diminished accountability: Lack of developer accountability makes it difficult for individuals harmed by AI applications to obtain compensation.
Enabling discrimination: Developers may use AI algorithms to deliberately discriminate, whether out of malice or for economic gain.
Data Characteristics

Limited information on population characteristics: Insufficiently granular data on population characteristics may lead to inappropriately aggregating dissimilar groups, such as classifying race into only White and non-White.
Unrepresentative data or small sample sizes: Inadequate representation of groups in training data can lead to worse model performance in those groups, especially when training and deployment populations are poorly matched.
Bias ingrained in data: When data reflect past disparities or discrimination, algorithms may incorporate and perpetuate these patterns.
Inclusion of sensitive variables: Inclusion of sensitive information, such as race or income, may cause algorithms to inappropriately discriminate based on these factors.
Exclusion of sensitive variables: Exclusion of sensitive information may reduce accuracy in some groups and lead to systematic bias due to a lack of explanatory power.
Limited reporting of information on protected groups: Lack of reporting on the composition of training data or on model performance by group makes it difficult to know where models can appropriately be used and whether they have disparate impacts (see the per-group reporting sketch after the table).
Model Design

Algorithms are not interpretable: When we do not understand why a model makes the decisions it does, it is difficult to evaluate whether its decision-making approach is fair or equitable.
Optimizing algorithm accuracy and fairness may conflict: Imposing a fairness constraint during optimization may introduce a trade-off, meaning that gains in equity can come at the expense of model accuracy.
Ambiguity in and conflict among conceptions of equity: There are many conceptions of fairness and equity, which may be mutually exclusive or may require sensitive data to evaluate (see the fairness-metric sketch after the table).
Deployment Practices

Proprietary algorithms or data unavailable for evaluation: When training data, model design, or the outputs of algorithms are proprietary, regulators and other independent evaluators may not be able to effectively assess risk of bias.
Overreliance on AI applications: Users may blindly trust algorithmic outputs, implementing decisions despite contrary evidence and perpetuating biases if the algorithm is discriminatory.
Underreliance on AI applications: People may be dismissive of algorithm outputs that challenge their own biases, thereby perpetuating discrimination.
Repurposing existing AI applications outside original scope: Models may be repurposed for use with new populations or to perform new functions without sufficient evaluation, bypassing safeguards on appropriate use.
Application development or implementation is rushed: Time constraints may exacerbate equity issues if they push developers to inappropriately repurpose existing models, use low-quality data, or skip validation.
Unequal access to AI: AI applications may be deployed more commonly in high-income areas, potentially amplifying preexisting disparities.

^a AI: artificial intelligence.
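To make the row on conflicting conceptions of equity concrete, the following is a minimal Python sketch, not drawn from the article and using synthetic data, that scores the same classifier against two common fairness criteria: demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates, one component of equalized odds). When groups have different base rates of the outcome, a classifier can satisfy one criterion while violating the other.

```python
import numpy as np

# Synthetic example: two groups with different base rates of the outcome.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)              # protected attribute (0 or 1)
base_rate = np.where(group == 1, 0.3, 0.5)         # group 1 has a lower base rate
y_true = rng.binomial(1, base_rate)
# A classifier with the same error profile in both groups:
# 80% true-positive rate, 20% false-positive rate.
y_pred = rng.binomial(1, np.where(y_true == 1, 0.8, 0.2))

def selection_rate(pred, mask):
    """Fraction of the group receiving a positive prediction."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Fraction of the group's actual positives predicted positive."""
    positives = mask & (true == 1)
    return pred[positives].mean()

# Demographic parity compares selection rates across groups.
dp_gap = abs(selection_rate(y_pred, group == 0)
             - selection_rate(y_pred, group == 1))
# Equal opportunity compares true-positive rates across groups.
tpr_gap = abs(true_positive_rate(y_true, y_pred, group == 0)
              - true_positive_rate(y_true, y_pred, group == 1))

print(f"demographic parity gap:  {dp_gap:.3f}")   # sizable: base rates differ
print(f"true-positive-rate gap:  {tpr_gap:.3f}")  # near zero by construction
```

Because the groups' base rates differ, the same predictions show a sizable demographic parity gap alongside a near-zero true-positive-rate gap, which is why these conceptions of fairness generally cannot all be satisfied at once.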
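Likewise, for the rows on unrepresentative data and limited reporting on protected groups, a second synthetic sketch shows the kind of per-group reporting the table calls for: computing sample size and accuracy separately for each group. The group labels and accuracy figures here are assumptions chosen purely for illustration.

```python
import numpy as np

# Synthetic example: group "B" is underrepresented, and the hypothetical
# model is assumed (for illustration) to be less accurate for that group.
rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])
y_true = rng.integers(0, 2, size=1000)
accuracy_by_group = np.where(group == "A", 0.90, 0.70)  # assumed accuracies
# Predictions are correct with the group's assumed accuracy, else flipped.
y_pred = np.where(rng.random(1000) < accuracy_by_group, y_true, 1 - y_true)

overall = (y_pred == y_true).mean()
print(f"overall: n={len(y_true):4d}, accuracy={overall:.3f}")
for g in ("A", "B"):
    mask = group == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"group {g}: n={mask.sum():4d}, accuracy={acc:.3f}")
```

With these assumed numbers, overall accuracy is about 0.88, which hides a 20-point gap between groups; reporting n and accuracy per group makes both the disparity and the small sample behind it visible.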