2025 Mar 3;31(2):75–88. doi: 10.4274/dir.2024.242854

Table 3. Recommendations for addressing bias in artificial intelligence (AI) for medical imaging.

Stage of AI / Recommendations

Design

• Ensure that the project team represents a range of perspectives, including radiologists, clinicians, data scientists, engineers, and department administrators, preferably from different demographic backgrounds.

• Encourage transparency across the entire team in detecting and reporting potential biases.

• Scrutinize research questions to identify any inherent biases or inequalities and address them proactively in the study design.

• Consider adhering to established reporting and methodological quality guidelines to ensure transparency and reproducibility.

Data

• Collect data from a wide range of sources to capture diverse patient populations.

• Conduct in-depth exploratory data analysis to identify any potential systematic errors that may exist, informing subsequent modeling and mitigation strategies.

• Standardize data with effective harmonization techniques to ensure consistency across datasets.

• Implement rigorous quality control measures to maintain the accuracy and reliability of labels and annotations, following established protocols and guidelines.

• Continuously monitor data quality and update annotations as needed to reflect any changes or improvements.
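The exploratory-data-analysis and diverse-sourcing recommendations above can be sketched as a simple representation audit. This is an illustrative example, not from the article: the attribute names, the toy cohort, and the 10% cutoff are all hypothetical choices a team would replace with its own.

```python
from collections import Counter

def audit_representation(records, group_key, min_fraction=0.10):
    """Summarize how each demographic group is represented in a dataset
    and flag groups whose share falls below `min_fraction`.

    `records` is a list of dicts; `group_key` names the attribute to
    audit (e.g. "sex" or "age_band"). The 10% cutoff is illustrative,
    not a clinical standard.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {
            "n": n,
            "fraction": n / total,
            "under_represented": (n / total) < min_fraction,
        }
        for group, n in counts.items()
    }

# Hypothetical toy cohort with a skewed sex balance.
cohort = [{"sex": "M"}] * 92 + [{"sex": "F"}] * 8
report = audit_representation(cohort, "sex")
# report["F"]["under_represented"] is True (8% of the cohort)
```

A flagged group would then inform the mitigation strategies mentioned above, such as targeted additional data collection or reweighting.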

Modeling and Evaluation

• Divide the dataset into training, validation, and test sets before any modeling begins, ensuring that each subset is representative of the overall population.

• Select evaluation metrics that account for disparities in outcomes across different demographic groups, avoiding metrics that may mask underlying systematic errors.

• Consider techniques such as fairness-aware machine learning algorithms and model interpretability methods to mitigate bias and enhance transparency.

• Evaluate model fairness using a variety of methods to capture different aspects of bias.

• Assess model performance separately for different demographic subgroups to identify any disparities in predictive accuracy or bias.

• Continuously retrain and update models to account for evolving datasets and mitigate the perpetuation of historical biases.
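Two of the steps above, building representative train/validation/test subsets and evaluating performance per demographic subgroup, can be sketched with the standard library. This is a minimal illustration under assumed names (`group`, `label`, `pred` are hypothetical record fields), not the article's implementation; in practice a team would typically use library routines such as scikit-learn's stratified splitters.

```python
import random
from collections import defaultdict

def stratified_split(records, stratum_key, fractions=(0.7, 0.15, 0.15), seed=0):
    """Split records into train/validation/test sets so that each stratum
    (e.g. a demographic group) keeps roughly its overall proportion in
    every subset, approximating the 'representative subsets' goal."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for r in records:
        by_stratum[r[stratum_key]].append(r)
    train, val, test = [], [], []
    for items in by_stratum.values():
        rng.shuffle(items)
        n_train = round(len(items) * fractions[0])
        n_val = round(len(items) * fractions[1])
        train.extend(items[:n_train])
        val.extend(items[n_train:n_train + n_val])
        test.extend(items[n_train + n_val:])
    return train, val, test

def subgroup_accuracy(records, group_key, label_key="label", pred_key="pred"):
    """Report accuracy separately per subgroup so that disparities are
    not masked by a single aggregate metric."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        hits[r[group_key]] += int(r[pred_key] == r[label_key])
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical predictions: accurate for group A, poor for group B.
results = (
    [{"group": "A", "label": 1, "pred": 1}] * 45
    + [{"group": "A", "label": 0, "pred": 0}] * 45
    + [{"group": "B", "label": 1, "pred": 0}] * 6
    + [{"group": "B", "label": 0, "pred": 0}] * 4
)
per_group = subgroup_accuracy(results, "group")
# per_group is {"A": 1.0, "B": 0.4}: aggregate accuracy (94%) hides group B
```

The design choice here is deliberate: stratifying the split prevents a rare subgroup from vanishing from the test set, and reporting per-group accuracy surfaces exactly the disparity that an overall score would conceal.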

Deployment

• Continuously monitor model performance in real-world settings, paying particular attention to disparities in outcomes among different demographic groups.

• Conduct thorough evaluation of model performance after any updates or modifications to ensure that biases have not been inadvertently introduced or amplified.

• Engage with regulatory bodies to ensure compliance with relevant standards and guidelines and seek periodic audits to validate the fairness and effectiveness of the deployed models.

• Collect feedback from end users to identify potential biases or shortcomings in the deployed system and address them promptly.
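The post-deployment monitoring recommended above can be sketched as a comparison of per-subgroup performance against a validation-time baseline. This is an assumed scheme for illustration: the age bands, metric values, and the 0.05 degradation tolerance are hypothetical, and a real deployment would set its own thresholds and alerting.

```python
def flag_subgroup_drift(baseline, current, tolerance=0.05):
    """Compare deployed per-subgroup performance against a baseline
    snapshot and flag groups that degraded beyond `tolerance`.

    `baseline` and `current` map subgroup names to a metric such as
    accuracy or AUC; the tolerance is an illustrative threshold the
    deployment team would choose for itself.
    """
    flags = {}
    for group, base in baseline.items():
        now = current.get(group)
        flags[group] = {
            "baseline": base,
            "current": now,
            "degraded": now is not None and now < base - tolerance,
            "missing": now is None,  # subgroup absent from recent data
        }
    return flags

# Hypothetical monitoring snapshot: performance dropped for one age band.
baseline = {"18-40": 0.91, "41-65": 0.89, "65+": 0.88}
current = {"18-40": 0.90, "41-65": 0.82, "65+": 0.87}
alerts = flag_subgroup_drift(baseline, current)
# alerts["41-65"]["degraded"] is True; the other groups are within tolerance
```

Running such a check on every batch of real-world predictions, and after every model update, operationalizes the recommendations to watch for subgroup disparities and to re-evaluate after modifications.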