Abstract
As artificial intelligence (AI) becomes further embedded in healthcare, healthcare delivery organizations (HDOs) must navigate a complex regulatory landscape. Health AI Partnership (HAIP) has created 31 best practice guides to inform the development, validation, and implementation of AI products. Here, we map the most common principles found in 8 key AI regulatory frameworks to HAIP recommended best practices to provide practical insights for compliance with expanding AI regulations.
Subject terms: Health care, Health policy
Introduction
Artificial intelligence (AI) has made remarkable inroads in healthcare, offering opportunities to enhance patient care and operational efficiency. However, the integration of AI raises many ethical, legal, and social challenges. To mitigate these risks, government agencies1,2, corporations3,4, and global groups like the World Health Organization5 have introduced principles to govern the use of AI.
Healthcare delivery organizations (HDOs) can be explicitly or implicitly required to comply with high-level principles. For example, as a federal agency, the Veterans Health Administration is required to comply with principles for AI use listed in Executive Order 139606. HDOs also seek to comply with the White House Blueprint for an AI Bill of Rights2 and have gone so far as to make public commitments to the safe, secure, and trustworthy use of AI in healthcare7. While the current state of AI regulation is based largely on voluntary compliance with existing principles, as the regulatory landscape expands HDOs will inevitably have to satisfy an increasing number of mandatory requirements.
But practically navigating these commitments is hard. As it stands, HDOs must wade through many different principles that are not always aligned with one another. The principles are also high-level, failing to account for the nuanced realities and experiences healthcare professionals encounter. Nonetheless, the process of considering and implementing these principles is critical both to ensure ethical and legally compliant use of AI in healthcare now and to prepare for compliance with future mandatory regulations. HDOs must grapple with moving from principles to practices.
To help bridge this gap, Health AI Partnership (HAIP) conducted interviews with diverse stakeholders across the United States (US) and released an initial set of best practices for AI adoption in 20238. Shortly after releasing the best practices, HDOs began asking how the best practices mapped to commonly cited principles.
This comment describes the first formal effort to help HDOs translate between high-level AI principles and on-the-ground best practices. We present four novel contributions. First, we evaluate the extent of alignment between a small number of key AI regulatory frameworks and guiding principles. Second, we assess the extent to which the adoption of on-the-ground best practices can ensure that an HDO will consider and address all key guiding AI principles. Third, we provide practical strategies to empower HDOs to navigate the rapidly evolving AI regulatory landscape. Fourth, we highlight areas of overlap and gaps between on-the-ground practices and principles to inform opportunities to strengthen future principles and best practices.
Mapping AI guidelines and HDO best practices
Sourcing AI principles
Our analysis of AI principles focuses on 8 key frameworks that were recently published and are particularly relevant to HDOs in the US. Five frameworks were put forth by the US government between 2020 and 2023: three by the US executive branch2,6,9, one by the US National Institute of Standards and Technology (NIST)1, and one draft guidance document by the US Food and Drug Administration (FDA)10. One framework was put forth by the World Health Organization (WHO) in 20215, which was included in the analysis to promote a global perspective in contrast to the other primarily US-centric sources, as health system leaders in many countries reference the WHO for guidance. We supplement these government and global frameworks with the two most highly cited systematic reviews of AI principles11,12. We focus on these 8 key frameworks because they are either from entities that HDOs seek to align with or are foundational academic contributions to the field of responsible AI. Table 1 summarizes the included frameworks.
Table 1.
Overview of the frameworks
| AI guideline framework | Year | Issuing actor | Target audience | Relation to HDOs and health AI | Enforcement/Recourse | Raw principles | Number of principles |
|---|---|---|---|---|---|---|---|
| Blueprint for an AI Bill of Rights | 2022 | White House Office of Science and Technology Policy | 1. American citizens; 2. Technology developers and researchers; 3. Policymakers; 4. Industry stakeholders; 5. Members of the international community | Oversight and advisory | Not legally binding; does not constitute US government policy | Safe and effective systems; Algorithmic discrimination protections; Data privacy; Notice and explanation; Human alternatives, consideration, and fallback | 5 |
| Executive Order (EO) 13960 | 2020 | The President of the United States (US federal government, executive branch) | 1. Federal agencies; 2. Chief Information Officer (CIO) Council; 3. General public and other government agencies | Advisory and awareness | Enforceable within the scope of the executive branch of the government | Lawful (3a); Performance driven (3b); Accurate, reliable, and effective (3c); Safe, secure, and resilient (3d); Understandable (3e); Responsible and traceable (3f); Monitored (3g); Transparent (3h); Accountable (3i) | 9 |
| Executive Order (EO) 14110 | 2023 | The President of the United States (US federal government, executive branch) | 1. Federal agencies; 2. Chief Information Officer (CIO) Council; 3. General public and other government agencies | Advisory and awareness | Enforceable within the scope of the executive branch of the government (the executive order was revoked and is no longer enforceable as of January 20, 2025) | Safety and security; Responsible competition; Commitment to workforce; Equity and civil rights; Consumer protection; Privacy and civil liberties; Government infrastructure; Global leadership | 8 |
| FDA PCCP Appendix A | 2023 | US Department of Health and Human Services, Food and Drug Administration (Center for Devices and Radiological Health; Center for Biologics Evaluation and Research; Center for Drug Evaluation and Research; Office of Combination Products in the Office of the Commissioner) | 1. Industry; 2. Food and Drug Administration (FDA) staff | Draft guidance | Not enforceable | Data management; Re-training; Performance evaluation; Update procedures | 4 |
| NIST Risk Management Framework | 2023 | National Institute of Standards and Technology (NIST), a non-regulatory federal agency within the US Department of Commerce | 1. AI actors who design, develop, or deploy AI systems and applications or AI-based products; 2. Policymakers and governance bodies | Advisory organization for guidance, standards, and best practices for HDOs | Not enforceable | Safe; Secure and resilient; Explainable and interpretable; Privacy-enhanced; Fair, with harmful bias managed; Accountable and transparent; Valid and reliable | 7 |
| Principled AI Framework | 2020 | Berkman Klein Center for Internet & Society | 1. Policymakers; 2. Advocacy groups and civil society organizations; 3. Academics/scholars; 4. Technical experts; 5. Companies/private sector; 6. Healthcare workers; 7. Communities and individuals | Advisory document for guidance, standards, and best practices for HDOs | Not enforceable | Privacy; Accountability; Safety and security; Transparency and explainability; Fairness and non-discrimination; Human control of technology; Professional responsibility; Promotion of human values | 8 |
| The Global Landscape of AI Ethics Guidelines | 2019 | Members of the Health Ethics & Policy Lab at ETH Zurich, Switzerland (Anna Jobin, Marcello Ienca, Effy Vayena) | 1. Scientists and research institutions; 2. Funding agencies; 3. Governmental and intergovernmental organizations; 4. Other relevant stakeholders | Advisory document for guidance, standards, and best practices for HDOs | Not enforceable | Transparency; Justice and fairness; Non-maleficence; Responsibility and accountability; Privacy; Beneficence; Freedom and autonomy; Trust; Sustainability; Dignity; Solidarity | 11 |
| WHO Framework | 2021 | Compiled by the WHO's Health Ethics and Governance unit in the Department of Research for Health and the Department of Digital Health and Innovation, based on the collective views of 20 experts in public health, medicine, law, human rights, technology, and ethics | 1. Governments; 2. Technology developers; 3. Companies/private sector; 4. Healthcare workers; 5. Communities and individuals; 6. Experts in ethics, digital technology, law, and human rights | Advisory organization for guidance, standards, and best practices for HDOs | Not enforceable | Model purpose and suitability; Algorithmic validation; Clinical validation; Deployment and ongoing monitoring; Economic evaluation; Communication of results | 6 |
We do not seek to account for all published AI principles but rather highlight the principles most relevant to US HDO leaders. We recommend the two most highly cited systematic reviews of AI principles for readers seeking comprehensive analyses11,12.
Sourcing best practices
Our analysis relies on the 31 best practice guides across 8 key decision points developed by HAIP to empower healthcare professionals to use AI safely, effectively, and equitably. The HAIP best practices and key decision points were derived through a rigorous process that combined a review of published literature and nearly 90 interviews with clinical, technical, and operational leaders from HDOs across the US. The qualitative analysis is presented in detail elsewhere and is the most extensive study to date of AI governance practices in US HDOs8. The HAIP best practices are publicly available at healthaipartnership.org and listed in Table 2.
Table 2.
HAIP best practice guides
| HAIP key decision point | HAIP best practice guide | Guide description |
|---|---|---|
| Identify and prioritize a problem | 1.1 Identify problems across the organization | This guide outlines practical steps to involve diverse stakeholders in identifying and addressing organizational issues related to AI integration, ultimately fostering trust and accessibility. |
| | 1.2 Prioritize problems | Step-by-step guide to align problem selection with organizational strategy and develop a structured prioritization process that encourages inclusive decision-making and sustainable growth. |
| | 1.3 Identify potential downstream impacts | This guide emphasizes the significance of anticipating unintended consequences and provides a systematic approach to mitigate risks and engage stakeholders early in the innovation process. |
| | 1.4 Determine the dimensions of the problem | This guide emphasizes understanding healthcare issues before using AI solutions. It outlines a strategic approach to empower teams to identify root causes and solve problems comprehensively. |
| | 1.5 Determine the suitability of technical approaches | This guide provides a systematic approach to balance technical and non-technical factors for effective problem resolution. |
| Define AI product scope and intended use | 2.1 Define the role of AI | This guide offers a framework to assess AI's scope and provides actionable insights for seamless implementation while adhering to ethical and clinical standards. |
| | 2.2 Evaluate internal resources for adoption | This guide helps healthcare organizations assess and allocate the resources needed for AI adoption, ensuring a cost-effective and well-prepared approach. |
| | 2.3 Assess the viability of buying or building AI | This guide helps organizations decide whether to buy or build AI solutions, considering factors like clinician engagement, infrastructure, customization, costs, and workforce impact. |
| | 2.4 Assess the quality of external AI product options | This guide helps organizations evaluate external AI product options by assessing regulatory compliance, assembling a cross-functional team, conducting a quality assessment, and making informed investment decisions. |
| | 2.5 Assess legal risks | This guide helps organizations assess legal risks associated with AI products in healthcare. Steps include identifying relevant laws, evaluating potential risks, creating action plans, determining responsibility, and sharing findings with the project team. |
| | 2.6 Audit investment decisions | This guide covers auditing AI investment decisions in healthcare. Steps include defining scope, reviewing documentation, evaluating outcomes, and ensuring compliance with regulations. It aims to improve efficiency, compliance, and trust in AI procurement. |
| Develop success measures | 3.1 Define performance targets | This guide outlines auditing AI investment decisions in healthcare, including scope definition, documentation review, outcome evaluation, and regulatory compliance. Its goal is to enhance efficiency, compliance, and trust in AI procurement. |
| | 3.2 Define successful use | Defining success for healthcare AI involves engaging stakeholders, contextualizing metrics, assessing resources, and seeking consensus. Customized metrics and stakeholder involvement are key to a successful integration. |
| Design AI solution workflow | 4.1 Design and test workflow for clinicians | Designing and testing healthcare workflows for AI integration is vital. It involves understanding care processes, mapping patient journeys, creating user-friendly interfaces, and visualizing workflows for seamless integration. |
| | 4.2 Adapt to enable AI implementation | Prepare for organizational change when implementing AI in healthcare. Evaluate existing conditions, gain senior leadership support, align incentives for end-users, compensate for labor, and reward clinicians for improving patient outcomes. |
| Generate evidence of safety, efficacy and equity | 5.1 Locally validate prior to integration | Local validation is essential before integrating AI in healthcare. Set performance standards, involve clinicians and end-users, adapt workflows, test on diverse patient groups, and take your time to ensure accuracy and trust. |
| | 5.2 Identify and mitigate risks | Identifying and mitigating risks in AI integration is vital for patient safety and trust. Engage diverse stakeholders, define risk management plans, perform thorough risk analysis, and continually monitor and control risks throughout the AI product's lifecycle. |
| | 5.3 Determine if AI should be integrated | Evaluate ROI, both financial and non-financial, when deciding to integrate AI in healthcare. Seek consensus among stakeholders, assess the intended use, and consider long-term maintenance costs before making a decision. |
| Execute AI solution rollout | 6.1 Disseminate information to end users | To ensure successful AI adoption in healthcare, designate leaders, create communication channels, and provide information to clinicians. Establish a governance committee for oversight, share regular updates, and set up a process for future updates. |
| | 6.2 Manage changes to the work environment | Implementing AI in healthcare requires careful planning, incremental rollout, responsive feedback mechanisms, updates as needed, and transparency with senior leaders to ensure successful adoption. |
| | 6.3 Prevent inappropriate use of AI | Prevent inappropriate AI use in healthcare by developing clear guidelines, constraining AI product scope, providing comprehensive training, considering phased rollouts, and monitoring usage continuously. |
| Monitor the AI solution | 7.1 Monitor AI performance | Regularly monitor AI tools in healthcare. Develop a plan, assign a team, stay updated on guidelines, execute the process, and provide feedback for updates. |
| | 7.2 Monitor work environment | Track usage, gather feedback from end users, and watch for biases in AI tool use, ensuring data represents the work environment's impact. |
| | 7.3 Audit AI solutions and work environment | Make sure the system is auditable, identify independent auditing services, facilitate the audit process, review audit reports, and plan for follow-up assessments and audits. |
| | 7.4 Identify potential risks | Identify potential risks of AI products in healthcare by maintaining communication with stakeholders, establishing a multidisciplinary committee, monitoring regulatory databases, and assessing alternative AI products regularly. |
| | 7.5 Sustain improved outcomes | To sustain positive outcomes in healthcare AI, continuously monitor performance, set targets, and address shifts, while fostering collaboration and communication among stakeholders. |
| Update or decommission the AI solution | 8.1 Determine if updating or decommissioning is necessary | Assess AI tools regularly, monitor metrics, engage cross-functional teams, and follow vendor processes to maintain their effectiveness and ensure optimal patient care. |
| | 8.2 Determine if the work environment requires adaptation | Ensure prerequisites are met to intervene in the work environment, monitor performance targets, specify problems, and communicate changes effectively to optimize the use of health AI tools and improve patient care. |
| | 8.3 Evaluate expansion to new settings | Expanding a health AI product to new settings involves assessing its performance, considering regulations, evaluating feasibility, testing, engaging stakeholders, and updating documentation and training. |
| | 8.4 Minimize disruptions from decommissioning | To minimize disruptions from AI product decommissioning, follow these steps: assess preparedness, keep stable components running, empower clinical leads for new workflows, communicate the decommissioning process, and closely monitor the care delivery setting. |
| | 8.5 Disseminate information about updates to end users | To disseminate AI product updates effectively, empower key stakeholders, maintain clear documentation and plans, and communicate updates or decommissioning to front-line workers while ensuring accessible documentation repositories. |
Mapping between AI principles and HAIP best practices
Different frameworks use different terms to refer to the same concept. For example, the AI Bill of Rights framework describes the concept of mitigating bias with the term "algorithmic discrimination protections," while the Principled AI framework describes the same concept with the term "fairness and non-discrimination." To minimize the redundancy of principles while maintaining conceptual meaning, we grouped similar principles together. After grouping similar terms, the number of distinct principles across the 8 frameworks decreased from 58 to 13. Details regarding redundant principles are included in Supplementary Table 1. The 13 distinct principles, which we hereafter refer to as synthesized principles, are listed in Table 3.
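To make this grouping step concrete, the sketch below collapses framework-specific raw principle names into canonical synthesized principles via a lookup table. The raw-to-synthesized pairs shown are a small hypothetical subset for illustration only; the full mapping is given in Supplementary Table 1.

```python
# Minimal sketch of the principle-grouping step. The pairs below are a
# hypothetical subset; the complete mapping is in Supplementary Table 1.
RAW_TO_SYNTHESIZED = {
    "algorithmic discrimination protections": "Prevention of bias/discrimination",
    "fairness and non-discrimination": "Prevention of bias/discrimination",
    "justice and fairness": "Prevention of bias/discrimination",
    "data privacy": "Data privacy",
    "privacy-enhanced": "Data privacy",
    "transparent": "Transparency and explainability",
    "transparency and explainability": "Transparency and explainability",
}

def synthesize(raw_principles: list[str]) -> set[str]:
    """Collapse a framework's raw principle names into synthesized principles."""
    return {RAW_TO_SYNTHESIZED[p.lower()] for p in raw_principles}

# Two differently worded principle lists reduce to the same synthesized set.
print(synthesize(["Fairness and non-discrimination", "Privacy-enhanced"]))
print(synthesize(["Justice and fairness", "Data privacy"]))
```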
Table 3.
Synthesized principles definitions
Synthesized principle name | Definition |
---|---|
Responsibility and accountability | Careful consideration and oversight of the impact AI systems have on their given field. Organizations should take responsibility for predicting adverse outcomes, developing rigorous monitoring practices, and addressing unintended consequences of AI. |
Prevention of bias/discrimination | Monitoring for and addressing potential biases in AI systems to prevent unfair or discriminatory outcomes. Actively promoting fairness, equity, and inclusivity involves carefully considering all aspects of the AI product including training data, model design, and decision-making processes. |
Respect for humanity/autonomy | Ensuring that AI technologies enhance human well-being and dignity. AI should be used to empower individuals rather than diminish their autonomy or decision-making capabilities. |
Safety and security | Ensuring that AI systems are designed, deployed, and maintained in a way that minimizes risks to their users and society as a whole. This includes developing defenses against any attacks targeting AI systems, protecting against misuse, and adhering to standard safety and security protocols. |
Transparency and explainability | Clearly conveying all aspects of the design and implementation process of AI systems to users and to the public. Efforts should be made to make AI systems easily understandable and stakeholders should have insights into how AI models make decisions. |
Lawfulness | AI systems should be fully compliant with established legal and regulatory frameworks. |
Data privacy | Respecting individuals’ right to privacy by handling personal data in a secure manner. Organizations should collect and store data in compliance with privacy laws and regulations, as well as inform users about ways that their data may be used. |
Validity and reliability | AI systems should be accurate, reliable, and validated through multiple forms of testing and monitoring. Users and stakeholders should have confidence in the results produced by a given AI model. |
Workforce consideration | Anticipating and addressing the impact AI might have on jobs and the workforce. Organizations should invest in training and re-training to prepare employees for AI-related changes. |
Beneficence | AI should be used for the benefit of humanity, promoting positive, sustainable outcomes. |
Economic regulation | The development process should balance innovation with economic considerations. Regulations should foster responsible AI development where consumers/users are protected without slowing down technological progress. |
Sustainability | Developing AI systems that are environmentally sustainable, with emphasis on energy-efficiency and responsible use of resources. |
Government infrastructure | Developing AI tools to increase the efficiency and impact of government systems as well as recruiting new talent to drive government innovation and AI policy. |
We then mapped the 31 HAIP best practice guides to the 13 synthesized principles. If a practice guide provided actionable recommendations related to a synthesized principle, it was mapped to that principle. A single guide could be mapped to multiple synthesized principles. Three of our team members (N.P., A.H., M.L.) separately reviewed and adjudicated the mapping process to ensure consistency.
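The counting that underlies the analyses below reduces to simple set operations over this mapping. The following minimal sketch uses a hypothetical two-guide excerpt of the full 31-guide mapping to show how per-principle guide counts and coverage checks against the synthesized principles can be computed.

```python
from collections import Counter

# Hypothetical excerpt of the guide-to-principle mapping; the real mapping
# covers all 31 HAIP guides and 13 synthesized principles.
GUIDE_TO_PRINCIPLES = {
    "3.1 Define performance targets": {
        "Validity and reliability",
        "Responsibility and accountability",
        "Transparency and explainability",
    },
    "6.3 Prevent inappropriate use of AI": {
        "Safety and security",
        "Lawfulness",
        "Responsibility and accountability",
    },
}

# Count how many guides address each synthesized principle.
counts = Counter(p for ps in GUIDE_TO_PRINCIPLES.values() for p in ps)
for principle, n in counts.most_common():
    print(f"{principle}: {n} guide(s)")

# Check whether the adopted guides jointly cover a required principle set.
required = {"Safety and security", "Data privacy"}
covered = set().union(*GUIDE_TO_PRINCIPLES.values())
print("Uncovered principles:", required - covered)  # -> {'Data privacy'}
```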
Findings
Alignment of principles across frameworks
From the mapping process, we determined the extent of alignment between each framework and the synthesized principles. Table 4 summarizes these findings. Of the frameworks, the Global Landscape of AI Ethics Guidelines (n = 8), EO 14110 (n = 7), Principled AI (n = 7), and NIST RMF (n = 6) included the highest number of synthesized principles. The Blueprint for an AI Bill of Rights (n = 5) and FDA PCCP guidance Appendix A (n = 4) included the fewest synthesized principles. We also determined the frequency with which synthesized principles appeared in the frameworks, visualized in Fig. 1. Data privacy, Transparency and explainability, and Responsibility and accountability appeared in the largest number of frameworks, while Sustainability and Government infrastructure appeared in the fewest.
Table 4.
Framework to synthesized principles mapping
Fig. 1. Synthesized principle occurrence in framework.
Bar chart displaying the number of frameworks that reference each synthesized principle in the mapping exercise.
Alignment of HAIP best practices to synthesized principles
We also determined the extent to which the HAIP best practice guides incorporated actionable steps to fulfill the synthesized principles. Table 5 summarizes these findings. The guides '(3.1) Define performance targets' (n = 8, 61.54%) and '(6.3) Prevent inappropriate use of AI' (n = 8, 61.54%) aligned with the most synthesized principles. The guides '(1.1) Identify problems across the organization' (n = 1, 7.69%), '(1.2) Prioritize problems' (n = 1, 7.69%), and '(8.4) Minimize disruptions from decommissioning' (n = 1, 7.69%) aligned with the fewest.
Table 5.
Synthesized principles to HAIP best practice guides mapping
Additionally, we determined the synthesized principles that appeared most frequently across all 31 HAIP best practice guides. These results are also displayed in Table 5. The synthesized principle of Responsibility and accountability was addressed by the most guides (n = 17, 54.84%), followed by Respect for humanity/autonomy (n = 16, 51.61%) and Prevention of bias/discrimination (n = 16, 51.61%). The synthesized principles addressed by the fewest best practice guides were Government infrastructure (n = 0, 0%) and Sustainability (n = 4, 12.90%).
Use of HAIP best practices to address principles in regulatory frameworks
HAIP best practice guides varied in their relevance to the different frameworks: 71% of the guides applied to all 8 frameworks, while only 3 guides ('[1.1] Identify problems across the organization', '[1.2] Prioritize problems', and '[8.4] Minimize disruptions from decommissioning') applied to fewer than 6 frameworks. These results are shown in Fig. 2. With the results of the mapping, we can determine which HAIP best practice guides support compliance with each of the 8 frameworks. These results are displayed in Table 6.
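A guide-to-framework mapping of this kind can be derived by composing the guide-to-principle mapping with the framework-to-principle mapping. The sketch below continues the hypothetical excerpts above and assumes one plausible decision rule, namely that a guide applies to a framework whenever it addresses at least one synthesized principle the framework includes; the published mapping in Table 6 is the authoritative source.

```python
# Hypothetical excerpts; see Tables 4-6 for the full mappings.
GUIDE_TO_PRINCIPLES = {
    "3.1 Define performance targets": {"Validity and reliability",
                                       "Transparency and explainability"},
    "6.3 Prevent inappropriate use of AI": {"Safety and security", "Lawfulness"},
}
FRAMEWORK_TO_PRINCIPLES = {
    "NIST RMF": {"Safety and security", "Validity and reliability", "Data privacy"},
    "Blueprint for an AI Bill of Rights": {"Data privacy", "Safety and security"},
}

def frameworks_covered(guide_principles: set[str]) -> set[str]:
    # Assumed rule: a shared synthesized principle means the guide applies.
    return {fw for fw, fw_ps in FRAMEWORK_TO_PRINCIPLES.items()
            if guide_principles & fw_ps}

for guide, principles in GUIDE_TO_PRINCIPLES.items():
    print(guide, "->", sorted(frameworks_covered(principles)))
```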
Fig. 2. Framework coverage by HAIP best practice guides.
Bar chart displaying the number of frameworks represented by HAIP best practice guides.
Table 6.
HAIP best practice guides to framework mapping
Identifying overlaps and gaps between principles and practices
Our final analysis maps the inclusion of synthesized principles in the 8 frameworks against the inclusion of synthesized principles in the 31 best practice guides. The findings are visualized in Fig. 3. The synthesized principle that is most poorly represented in frameworks (n = 1) and best practices guides (n = 0) is Government infrastructure. On the other hand, the synthesized principle that is most prominently represented in frameworks (n = 6) and best practice guides (n = 17) is Responsibility and accountability.
Fig. 3. Synthesized principle alignment.
Chart displaying the number of guides vs. the number of frameworks covering a synthesized principle, with synthesized principles plotted in quadrants.
Discussion
Our analysis highlights the complex challenges that HDOs in the US face in navigating the rapidly evolving regulatory landscape of AI. As highlighted in Table 4, no two AI frameworks are the same: frameworks include different numbers and different sets of AI principles. HDOs must either aggregate and prioritize government documents and published literature on their own, or address the full breadth of principles covered across all the guidance. Fortunately, our analysis demonstrates that the many variations of raw principles presented in different frameworks can be distilled into 13 synthesized principles. Rather than addressing each framework individually, HDOs can prioritize among the 13 synthesized principles to put into practice.
Our findings suggest a practical workflow for HDOs to adopt and implement HAIP best practice guides. For example, referencing Table 5, an HDO concerned with the Transparency and explainability of a specific AI product (a principle highlighted in many frameworks) may choose to reference the HAIP best practice guide '1.3 Identify potential downstream impacts'. This guide outlines strategies such as process mapping, conducting focus groups with affected parties, and creating standardized assessment rubrics that enable the HDO to thoroughly address that synthesized principle. Our findings show that, by adopting the core set of 31 best practices across the AI product lifecycle, an HDO will address all 13 synthesized principles that appear in key regulatory and peer-reviewed AI frameworks. On one hand, this demonstrates the extent to which AI principles are already implicitly embedded within the operational practices of HDOs. On the other hand, it highlights the strong position HDOs are in, compared with other industries, to use AI safely, effectively, and equitably. Many of the synthesized principles, such as Data privacy, Transparency and explainability, and Responsibility and accountability, have rich legal and regulatory precedents in healthcare.
We do find several gaps between the synthesized principles captured by key AI frameworks and the HAIP best practice guides, which may provide insight into the differences between high-level AI principles and practical "on-the-ground" considerations. First, key AI frameworks do not cover the tail ends of the AI product lifecycle as thoroughly as the HAIP best practice guides. This is most apparent in the content of the first two HAIP best practice guides, '1.1 Identify problems across the organization' and '1.2 Prioritize problems', which cover the beginning of the AI product lifecycle, and the last two, '8.4 Minimize disruptions from decommissioning' and '8.5 Disseminate information about updates to end users', which cover the end. This content is a key focus of HAIP's best practice guides yet is not addressed as thoroughly in the synthesized principles, as seen in Tables 5 and 6. This highlights the need for AI frameworks to consider more rigorously whether AI solutions are being applied to appropriate problems and the conditions under which AI solutions are updated or decommissioned. Similarly, considerations from the HAIP best practice guide '4.1 Design and test workflow for clinicians' were not covered to the same extent in the synthesized principles. This finding suggests that AI frameworks are not capturing the importance of sociotechnical challenges that occur at the interface of human users and AI technologies.
Lastly, our findings highlight an opportunity for several synthesized principles to be more prominently featured in both AI frameworks and HAIP best practice guides. In Fig. 3, five synthesized principles are featured in two or fewer AI frameworks and 10 or fewer HAIP best practice guides (the bottom-left quadrant separated by the red lines): Government infrastructure, Sustainability, Economic regulation, Beneficence, and Workforce consideration. Many of these poorly represented principles share a common theme: properly addressing them will require increased coordination between regulatory bodies and individual HDOs. For example, a single HDO has little control over government infrastructure or economic regulation; collaboration on a greater scale is required. We discuss considerations for each of these principles in turn.
First, there is an urgent need for government investment in infrastructure to improve the use of AI in healthcare. While this is not directly addressed in any HAIP best practice guide, there is a significant opportunity for government to play a more prominent role in supporting the safe, effective, and equitable use of AI within HDOs. Many courses of action have been suggested in the literature; one prominently discussed strategy is for the FDA and other government bodies to regulate AI tools as "medical devices"13,14. In addition to central regulation, experts have suggested that local regulation will be necessary to account for differences in care, patients, and system performance15. Second, there is an urgent need for AI frameworks and HDOs to prioritize sustainability in the use of AI. HDOs are increasingly called upon to reduce greenhouse gas emissions, and decarbonization strategies can include careful selection of AI technologies16. There is an opportunity for both HAIP and government agencies to develop best practices in this domain. Third, there is a gap in the economic regulation of AI in HDOs, which speaks to the urgent need for more mature reimbursement mechanisms to support the responsible use of AI in healthcare. Experts have proposed many models, including reimbursement tied to value and outcomes to prevent overuse of AI, advance market commitments and time-limited reimbursements for new products, and financial incentives for bias mitigation17,18. The literature also suggests that establishing clearer regulation may incentivize innovation and increase reimbursement for developers of AI products13. Fourth, and perhaps most surprisingly, the use of AI could more prominently prioritize beneficence to positively impact the well-being of people. AI is increasingly framed as a solution to inefficiency through automation, but in healthcare and other industries its impact on people deserves more prominent consideration. Lastly, AI frameworks and HAIP best practice guides poorly capture workforce considerations. In contrast to the traditional model in which medical professionals are responsible for the tools they use, as AI products become more advanced the burden of responsibility may shift toward the vendor19. The expanding use of AI will also affect the labor force in a multitude of ways, replacing some tasks currently carried out by humans and creating new roles that require different skillsets20. Despite concerns about the future of work and the potential displacement of skilled labor, there is an opportunity to strengthen best practices to improve the experience of workers who will increasingly interact with AI.
Conclusion
As AI products become more deeply ingrained in healthcare, regulatory considerations expand with them. HDOs will be expected to comply with regulatory frameworks that are constantly evolving and difficult to translate into practice. HAIP bridges this gap by creating best practice guides for HDO leaders that translate regulatory principles into practice. This enables HDOs to align their AI governance efforts with regulatory priorities. This process is widely applicable and adaptable as new frameworks emerge. We hope that our analysis and findings serve as a blueprint for healthcare AI regulatory compliance as the field continues to mature.
Supplementary information
Supplemental Table 1: Synthesized Principles Generation
Acknowledgements
We thank the Gordon and Betty Moore Foundation for their funding and support of the project. The foundation played no role in the study design; the collection, analysis, or interpretation of data; or the writing of this manuscript. We thank all interview participants for sharing their insights and time with us. We thank Duke Heart Center for helping administer and manage the grant.
Author contributions
N.P., A.H., M.S., and M.L. designed the study concept, analyzed the data collected, and surfaced the findings; S.R. and N.P. reviewed various sources and documents to surface data needed for the study; S.Z. helped with background and literature review for the study; J.Y.K., D.V., K.S., D.T., A.V., and S.B. helped with review and editing of the manuscript critically for important intellectual content; M.P., M.S., and S.B. helped secure the funding for the project; M.S. and S.B. conceived and designed the analysis. All authors were involved in writing and editing the manuscript. All authors approved the final manuscript.
Competing interests
M.S. and S.B. reported co-inventing software at Duke University licensed by Duke University to external commercial entities including Clinetic, Cohere Med, Kela Health, and Fullsteam Health. M.S. and S.B. own equity in Clinetic. No other disclosures were reported.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: Alifia Hasan, Noah Prizant.
Supplementary information
The online version contains supplementary material available at 10.1038/s41746-025-01605-2.
References
- 1. Artificial Intelligence Risk Management Framework (National Institute of Standards and Technology, 2023).
- 2. Blueprint for an AI Bill of Rights (The White House, Washington, DC, 2022).
- 3. Responsibility: Our Principles (Google, 2024).
- 4. Roche Data Ethics Principles: 13 Principles to Guide Ethical Data Use (F. Hoffmann-La Roche Ltd, 2022).
- 5. Ethics and Governance of Artificial Intelligence for Health. WHO Guidance (World Health Organization, 2021).
- 6. Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. Executive Order 13960 (The White House, 2021).
- 7. Brainard, L., Tanden, N. & Prabhakar, A. Delivering on the Promise of AI to Improve Health Outcomes (The White House, 2023).
- 8. Kim, J. Y. et al. Organizational governance of emerging technologies: AI adoption in healthcare. In Proc. 2023 ACM Conference on Fairness, Accountability, and Transparency 1396–1417 (Association for Computing Machinery (ACM), New York, 2023).
- 9. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Executive Order 14110 (The White House, 2023).
- 10. Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions. Draft Guidance for Industry and Food and Drug Administration Staff (US Food and Drug Administration, 2023).
- 11. Jobin, A., Ienca, M. & Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399 (2019).
- 12. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. & Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI (Berkman Klein Center Research Publication, 2020).
- 13. Stern, A. D. The Regulation of Medical AI: Policy Approaches, Data, and Innovation Incentives (National Bureau of Economic Research, 2022).
- 14. Penteado, B. E., Fornazin, M., Castro, L. & Rachid, R. The regulation of artificial intelligence in healthcare: an exploratory study. In Proc. 2022 Computers and People Research Conference 1–5 (Association for Computing Machinery (ACM), New York, 2022).
- 15. Price, W. N., Sendak, M., Balu, S. & Singh, K. Enabling collaborative governance of medical AI. Nat. Mach. Intell. 5, 821–823 (2023).
- 16. Singh, H., Vernon, W., Scannell, T. & Gerwig, K. Crossing the decarbonization chasm: a call to action for hospital and health system leaders to reduce their greenhouse gas emissions. NAM Perspect. 10.31478/202311g (2023).
- 17. Parikh, R. B. & Helmchen, L. A. Paying for artificial intelligence in medicine. npj Digit. Med. 5, 63 (2022).
- 18. Abràmoff, M. D. et al. A reimbursement framework for artificial intelligence in healthcare. npj Digit. Med. 5, 72 (2022).
- 19. Pasquale, F. A. The price of autonomy: liability standards for complementary and substitutive medical robotics and artificial intelligence. Ius et Praxis 28, 1 (2022).
- 20. Hazarika, I. Artificial intelligence: opportunities and implications for the health workforce. Int. Health 12, 241–245 (2020).