Abstract
This research applies the People, Process, Technology, and Operations (PPTO) framework to develop AI governance within a large hospital system in Canada that is early in AI adoption. Stakeholder interviews identified the organization’s strengths, gaps, and priorities for AI governance, providing foundational insights into the organization’s readiness and needs. Co-design workshops then adapted the PPTO framework to the organization’s specific context. Together, these efforts led to the creation of policies and the formation of an AI governance committee within the organization. This work demonstrates that the PPTO framework is a practical and adaptable tool for developing AI governance in real-world healthcare settings. It also addresses a critical gap in the field by generating empirical evidence of how a conceptual AI governance framework can be implemented within healthcare delivery organizations to drive organizational change.
Subject terms: Health policy, Medical ethics
Introduction
Artificial Intelligence (AI) has the potential to transform healthcare delivery by enhancing both clinical and operational outcomes. AI solutions empower healthcare providers with data-driven insights to support clinical decision-making and enhance patient care. For example, they can predict diseases for earlier intervention, analyze medical images to detect conditions in their early stages, personalize treatments, and enable more efficient follow-ups1–4. By automating administrative tasks—including scheduling, billing, and documentation—AI enhances operational efficiency, allowing healthcare professionals to prioritize patient care while minimizing inefficiencies5,6. These advancements can reduce unnecessary hospitalizations, lessen the need for extensive treatments, and create a more patient-centered healthcare experience. AI adoption is projected to generate significant cost savings for healthcare systems, with estimates reaching up to USD 150 billion by 20263.
Building on these benefits, AI adoption in healthcare is expected to expand, with growing interest in its use for diagnosis, treatment, and healthcare operations7. This enthusiasm is reflected in the rapid rise in Food and Drug Administration (FDA)-approved AI systems and the growing body of AI-related research. The number of FDA-approved AI systems has grown steadily since 2010, with a sharp acceleration beginning in 2018 and a peak of 221 approvals in 2023. As of August 2024, the FDA had authorized a total of 950 AI-enabled medical devices8. The National Science and Technology Council’s Committee on Technology estimated that the U.S. government’s investment in AI-related research and development was ~$1.1 billion in 2015, with expectations for substantial increases in subsequent years9. The number of AI research publications in healthcare surged from 3569 in 2010 to over 50,000 by 202210. AI-related health startups have attracted growing funding and investor interest, with the healthcare AI market projected to reach $6.6 billion by 202111. Healthcare-focused AI deals rose from under 20 in 2012 to nearly 70 by mid-201612.
While AI adoption is expected to grow, its actual implementation currently remains limited due to a variety of challenges. Beyond developing AI algorithms, productizing them for clinical use is an exceptionally complex task4,13. It entails overcoming implementation challenges such as securing significant funding, building infrastructure, employing skilled personnel, ensuring regulatory compliance, and integrating solutions into clinical workflows14–16. More critically, it involves addressing safety and ethical concerns, including algorithmic bias, data privacy and security, and transparency16,17. For example, algorithmic bias can result in patient harm through misdiagnosis, underdiagnosis, or unequal access to resources for certain demographic groups18–20. The collection and use of large data sets raise concerns about privacy violations18. Additionally, the black-box nature of AI solutions makes it difficult to detect and address these errors when they occur18.
To ensure that AI solutions are successfully integrated into clinical use and deliver maximum benefits and minimal harm, healthcare organizations must proactively govern AI. AI governance is a system of rules, practices, processes, and technological tools designed to ensure the responsible development, deployment, and use of AI technologies21. Robust AI governance ensures that AI solutions are transparent, equitable, and aligned with ethical and regulatory standards. It mitigates risks, safeguards patient safety, and supports the responsible integration of AI into healthcare16.
Several AI governance frameworks have been proposed to guide the responsible adoption of AI in healthcare. The Technology-Organization-Environment (TOE) framework explains how organizations adopt new technologies by considering both internal factors—such as technological capabilities and organizational resources—and external factors, including the availability of technologies in the market and the regulatory environment22. While the TOE framework provides a holistic approach to understanding the multifaceted factors influencing technology adoption at the organizational level, it has limitations when applied to establishing AI governance. First, its primary focus is on adoption—such as deciding to implement a technology and integrating it into workflows—rather than governance, which involves establishing the policies, oversight structures, and accountability mechanisms needed to manage the technology responsibly over time. The framework emphasizes identifying facilitators and barriers to the adoption of a specific technology, rather than providing guidance on governing the broader range of technologies implemented within organizations23–26. Second, although the framework has been applied in the healthcare sector, it was not specifically designed for this context, where regulatory complexity, patient safety, and equity are critical concerns. Its broad, high-level guidance—intended for general applicability across fields—lacks the specificity necessary to support concrete, actionable steps for implementing governance structures in healthcare settings.
Risk assessment frameworks have been developed through federal AI governance initiatives, as understanding and managing the risks posed by AI are fundamental elements of AI governance27. For example, the AI Risk Management Framework (AI RMF) was developed by the U.S. National Institute of Standards and Technology (NIST)28, and the Algorithmic Impact Assessment (AIA) tool was introduced by the Government of Canada29. These frameworks have gained international recognition and have been applied across various sectors, including healthcare. However, their utility within healthcare delivery organizations remains limited, as they were not specifically designed for the healthcare context and do not offer actionable guidance for establishing AI governance structures at the organizational level.
In addition to general frameworks, AI governance frameworks specifically designed for healthcare have been developed to address its unique needs. However, these frameworks largely emphasize high-level ethical principles30–32 or focus on technical considerations related to algorithm development, implementation, and maintenance33,34. They offer limited guidance on how to establish and operationalize AI governance in real-world settings27. Few studies offer practical insights into the process of establishing AI governance within healthcare organizations.
To address these limitations in existing frameworks, this research applies the newly developed People, Process, Technology, and Operations (PPTO) framework to establish AI governance within a healthcare organization35. Grounded in real-world practices and implementation experiences, the framework identifies key capabilities for establishing AI governance within a healthcare delivery organization across four core domains: People, Process, Technology, and Operations. The People domain specifies the personnel needed for AI governance, outlining the structure of the AI governance committee, required areas of expertise, defined roles and responsibilities, and strategies for managing membership over time. The Process domain outlines a governance process that balances innovation with risk, detailing key decision points across the AI lifecycle and the associated documentation required. The Technology domain details the infrastructure and technical capabilities necessary to oversee AI tools effectively throughout their lifecycles. The Operations domain describes the organizational capabilities required to operationalize AI governance, including executive sponsorship, accountability for committee activities, budget planning, and metrics to evaluate governance effectiveness.
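To make the four domains concrete, the sketch below shows one hypothetical way an organization might encode PPTO capabilities as a gap-assessment checklist. This is our illustration, not an artifact of the framework publication, and the capability names are placeholders rather than the framework’s canonical list.

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str           # e.g., "Defined committee roles and responsibilities"
    in_place: bool = False
    notes: str = ""

@dataclass
class PPTOAssessment:
    """Minimal gap-assessment record across the four PPTO domains.
    Capability names are illustrative, not the framework's canonical list."""
    people: list[Capability] = field(default_factory=list)
    process: list[Capability] = field(default_factory=list)
    technology: list[Capability] = field(default_factory=list)
    operations: list[Capability] = field(default_factory=list)

    def gaps(self) -> dict[str, list[str]]:
        # Report capabilities not yet in place, grouped by domain.
        return {
            domain: [c.name for c in caps if not c.in_place]
            for domain, caps in vars(self).items()
        }

assessment = PPTOAssessment(
    people=[Capability("AI governance committee structure", in_place=True),
            Capability("Ethics and legal expertise")],
    process=[Capability("Lifecycle decision points with documentation")],
    technology=[Capability("Centralized algorithm inventory")],
    operations=[Capability("Executive sponsorship", in_place=True),
                Capability("Governance effectiveness metrics")],
)
print(assessment.gaps())
```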
The PPTO framework was selected for its practicality, comprehensiveness, and relevance within healthcare. It enables a systematic and comprehensive assessment of the capabilities and resources required to establish AI governance. By addressing not only AI technology but also the people, process, and operations that support it, the framework ensures a holistic approach to managing the complexities of AI governance. This study was guided by two research questions: (1) Is the PPTO framework effective for establishing AI governance in a healthcare delivery organization in practice? and (2) What strategies and processes support the successful implementation of the framework?
While numerous AI governance frameworks have been proposed, there is limited empirical evidence describing their real-world implementation within healthcare delivery organizations—a critical gap in the literature as these organizations increasingly adopt AI technologies and face growing demand to govern their use effectively. This study addresses that gap by detailing the implementation of the PPTO framework within one of the largest university-affiliated community health hospital systems in Canada. At the outset, the hospital system—still in the early stages of AI adoption—lacked a formal governance structure tailored to AI. By building an AI governance system from the ground up using a potentially scalable framework, this work translates abstract concepts into actionable strategies and offers a replicable model for other organizations. It contributes to the limited body of practical guidance on how to establish AI governance in real-world healthcare settings.
Results
Current state of AI adoption
The interviews revealed an increasing demand for the use of AI in clinical practice within the organization. Although AI adoption was relatively new to the organization, five AI products developed by external vendors had already been implemented in either the electronic health record (EHR) or the picture archiving and communication system (PACS), with one additional vendor-developed product undergoing local evaluation.
Existing capabilities for AI governance
The interviews revealed the organization’s existing capabilities in the People, Process, and Technology domains for establishing an AI governance system. The Operations domain was not addressed due to the absence of a formal AI governance system.
Regarding People capabilities, the interviews identified several critical stakeholders who could provide a strong foundation for the organization’s AI governance. A dedicated team of senior leaders from the digital health committee (DHC), with strategic, operational, clinical, and informatics expertise, was identified as the primary body responsible for overseeing AI initiatives in clinical practice until a formal AI governance committee is established. Though team members had limited experience with AI technologies, they were open to expanding their expertise in AI governance and demonstrated a commitment to adapt to the organization’s evolving digital health needs. An operational stakeholder from DHC commented, “We’re totally open if we need to augment it [what’s available], or if we need to build something externally, then we need to do that.”
The interviews also revealed additional key stakeholders essential to AI implementation, including a business committee managing the software budget, departmental leaders overseeing budget approvals, quality requirements, and technology use, clinical champions representing end users, IT teams responsible for technical infrastructure and data management, data scientists assessing AI solutions, and external vendors developing AI solutions. Together, these groups formed a network of People capabilities for AI governance, covering budget oversight, quality assurance, technical support, and stakeholder engagement.
Regarding Process capabilities, the interviews revealed inconsistencies in AI governance practices within the organization. While both effective and ineffective practices were identified throughout the entire AI lifecycle (Table 1), practices were applied inconsistently. A clinical stakeholder noted that “It is completely ad hoc and whoever is running the project gets to decide if they want to do anything at all.” For instance, while certain best practices, such as feasibility assessment, clinician engagement in solution design, and the establishment of communication channels, were occasionally implemented, they were not consistently applied across projects. Similarly, ineffective practices, including inadequate time and process for procurement and minimal monitoring, also occurred at times, resulting in gaps in governance. This lack of uniformity highlighted the need for a more structured approach to ensure reliable and effective AI governance across the organization.
Table 1.
Examples of effective and ineffective AI governance practices identified across the AI lifecycle
| AI lifecycle stage | Effective practices | Ineffective practices |
|---|---|---|
| Problem identification and procurement | • Problems are identified across the organization using top-down and bottom-up approaches.<br>• Problems are prioritized based on organizational concerns and strategic priorities.<br>• Feasibility of adoption is assessed. | • No clear evidence-based reason for AI adoption<br>• Insufficient time to assess a problem and a solution<br>• Lack of visibility and involvement in the procurement process for relevant key stakeholders |
| Development and adaptation | • Engagement of clinicians in designing a new workflow for AI implementation with vendors and conducting usability testing<br>• Pilot testing is conducted on a small area before fully integrating the solution. | • Heavily dependent on the vendor for solution design and validation<br>• Ad hoc sessions with end users for solution development<br>• Limited discussion of success measures for projects<br>• Lack of internal validation and evaluation<br>• Limited assessment for health equity and bias<br>• No systematic decision process for rollout |
| Clinical integration | • Established communication channels for change management | • Heavily dependent on the vendor for creating and providing training and education materials<br>• Insufficient education and explanation about the AI solution provided to those affected by the solution adoption |
| Lifecycle management | • Conducts technical monitoring<br>• Established end user support channels and a feedback loop to gather user feedback | • No systematic approach to gather feedback from end users<br>• Insufficient monitoring, especially outcome monitoring<br>• No systematic approach for updating or decommissioning the AI solution or its ecosystem |
Regarding Technology capabilities, the interviews revealed that the organization possesses a robust data infrastructure and strong capabilities to support AI initiatives, with several vendor-based AI products already in place. Technical stakeholders reported that IT teams collaborated with vendors to “build the data set, the proper infrastructure behind it, and the servers that are needed” for data extraction and model integration. IT teams successfully extracted data from the EHR, shared it with vendors to facilitate model adaptation and evaluation under appropriate agreements, and reintegrated it into the EHR system. Additionally, the teams oversaw data de-identification and re-identification processes, ensuring secure vendor access to data while maintaining its confidentiality.
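As a hedged illustration of the de-identification and re-identification round trip described above, the sketch below pseudonymizes record identifiers with a keyed hash before vendor export and retains an internal lookup table for later re-identification. The field names and the hashing scheme are our assumptions, not the organization’s actual pipeline.

```python
import hmac
import hashlib
import secrets

# Secret key held only inside the organization; never shared with vendors.
SITE_KEY = secrets.token_bytes(32)

def pseudonymize(mrn: str) -> str:
    # Keyed hash: vendors cannot reverse the mapping without SITE_KEY.
    return hmac.new(SITE_KEY, mrn.encode(), hashlib.sha256).hexdigest()[:16]

records = [{"mrn": "A1001", "age": 67, "finding": "hip fracture risk: high"}]

lookup = {}          # internal pseudonym -> MRN table for re-identification
vendor_export = []   # de-identified payload that leaves the organization
for rec in records:
    pid = pseudonymize(rec["mrn"])
    lookup[pid] = rec["mrn"]
    vendor_export.append({"patient_id": pid,
                          "age": rec["age"],
                          "finding": rec["finding"]})

# Vendor returns model output keyed by pseudonym; re-identify internally
# before writing results back into the EHR.
vendor_result = {"patient_id": vendor_export[0]["patient_id"], "score": 0.82}
original_mrn = lookup[vendor_result["patient_id"]]
print(original_mrn, vendor_result["score"])
```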
Needs and requirements for AI governance
In the absence of established AI governance, the interviews highlighted a strong desire for a unified approach to AI governance. Participants unanimously expressed the need for a centralized and standardized governance process to ensure the safe, effective, and ethical adoption of AI technologies. An operational stakeholder noted that different programs “tend to focus autonomously, function autonomously, and they feel like they are essentially their own institution within an institution, as opposed to having this centralized process.” A technical stakeholder also echoed this need, stating, “if we actually had a standard operating procedure to do these for these all products that come in that everybody knows about. And that this goes to kind of one committee to evaluate. That is something that I think we can definitely get better at doing.” Participants identified several key elements essential to this governance structure, including clearly defined stakeholder roles and responsibilities, established monitoring processes with metrics and frequency, and specific criteria for decommissioning AI products.
The importance of ethics consultation was also highlighted, particularly in monitoring health equity dimensions. Currently, ethics consultations are required only for clinical research projects, not for AI tools intended for integration into clinical practice, due to organizational structures and the novelty of AI adoption. This underscored a gap in the implementation of AI solutions. An operational stakeholder with an ethics role noted, “It was surprising to me that there is no ethics consultation when it comes to AI deployment. I think that inequities are going to be expected because we don’t routinely collect race-based data, we don’t routinely collect gender data. If you don’t have the data of the people that you serve, the tool itself could be biased against groups of people, depending on where you got the algorithm from.” Participants advocated for collaboration between AI ethics experts and computer scientists to identify potential risks and downstream impacts associated with AI adoption. Another operational stakeholder emphasized the need for “somebody with the computer science background side of things, a clinician and someone with EHR side of things” to work together. Similarly, a clinical stakeholder echoed the importance of “a joint responsibility because there are things the technical team would know the clinical team doesn’t, and vice versa.”
While participants felt that the general AI governance framework could mirror existing non-AI technology governance, they sought further clarification on elements unique to AI oversight. Table 2 presents selected quotes from the interviews.
Table 2.
Selected quotes from the interviews about the needs and requirements for AI governance
| Sample Quotes |
|---|
| • “If we actually had a standard operating procedure for all products that come in that everybody knows about, and that this goes to kind of one committee to evaluate…” – Technical Stakeholder |
| • “We want to know what does that mature state look like, where ultimately does AI live? For now, we’ve put it with this digital health committee.” – Technical Stakeholder |
| • “We work in silos.” – Operational Stakeholder |
| • “We don’t have rules or standards [to validate algorithms], at this point in time.” – Operational Stakeholder |
| • “There is no ethics consultation when it comes to AI deployment.” – Operational Stakeholder |
| • “I think it’s a joint responsibility because there are things that the technical team would know the clinical team doesn’t, and vice versa.” – Clinical Stakeholder |
Developing people capabilities for AI governance
Co-design workshops led to the creation of AI governance specifically tailored to the organization’s needs and requirements. Participants unanimously reported that the “AI governance committee should be a subcommittee reporting up to DHC,” leveraging its members’ expertise and facilitating smoother implementation, rather than a newly formed entity. They also unanimously suggested that the membership of the AI governance committee should be reviewed and updated “annually per organizational standard governance process.” Participants found the stakeholders outlined in the PPTO framework35 to be relevant, with clearly defined roles and responsibilities. They particularly appreciated the inclusion of ethics consultation through the ethics and legal subcommittee, which addressed the needs identified in the interviews. Participants recommended adding a responsibility to the AI governance committee in general—“ensuring return on investment”—which was not originally included in the framework.
Participants emphasized that all stakeholders, regardless of role, should possess basic technical knowledge—particularly related to EHRs. They also noted that while technical and informatics experts would focus primarily on technical domains, they should also have a foundational understanding of clinical workflows to support socio-technical alignment.
Given the skills, expertise, roles, and responsibilities of AI governance committee members described in the PPTO framework35, participants nominated various personnel to be part of the AI governance committee. Participants also suggested that before recruiting nominees, it is essential to establish and document expectations for AI governance committee members. Key considerations should include the general time commitment expected from committee members, the availability of full-time staff to oversee the operations of AI governance, and whether committee members will be compensated for their service, along with the structure of that compensation.
Developing process capabilities for AI governance
The Process domain was the most critical area to address, as it structures the AI governance system and underscores the need for a centralized, standardized governance process, which was strongly emphasized in the interviews. Participants stressed the importance of balancing structure and innovation to ensure that the governance process supports AI implementation without undue delays. They responded positively to the process outlined in the PPTO framework35, finding it practical and aligned with organizational needs. For example, participants appreciated the inclusion of a centralized inventory of algorithms and a lifecycle management process, both of which are currently lacking within the organization.
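A centralized algorithm inventory, one of the capabilities participants singled out, is essentially a registry keyed by product that tracks each tool’s lifecycle state. The sketch below is a minimal illustration; the fields and the example entry are our assumptions, not the organization’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    # Stages mirror the AI lifecycle used in this study (see Methods).
    PROCUREMENT = "problem identification and procurement"
    DEVELOPMENT = "development and adaptation"
    INTEGRATION = "clinical integration"
    MANAGEMENT = "lifecycle management"
    DECOMMISSIONED = "decommissioned"

@dataclass
class InventoryEntry:
    product: str
    vendor: str
    use_case: str
    stage: LifecycleStage
    risk_tier: str        # e.g., "high" or "low", per committee stratification
    last_reviewed: str    # ISO date of the last committee review

inventory: dict[str, InventoryEntry] = {}

def register(entry: InventoryEntry) -> None:
    # Every deployed or piloted algorithm gets exactly one registry entry.
    inventory[entry.product] = entry

register(InventoryEntry("chest-xray-triage", "VendorX",
                        "flag suspected pneumothorax in PACS",
                        LifecycleStage.INTEGRATION, "high", "2024-10-01"))
```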
Additional key takeaways from the workshops included clarifying the scope of AI governance and defining the enforcement boundaries for the AI governance committee. While participants supported stratifying oversight levels based on the risk associated with each product—advocating for more rigorous examination of high-risk AI compared to low-risk AI—developing consensus around the purview of AI governance proved challenging. For example, participants debated whether AI products used for non-clinical purposes—such as clinical research, billing, and scheduling—as well as federally approved products, should fall under governance. Views varied considerably. Ultimately, participants agreed that the risk level of an AI product is contingent upon the specific use case. Even if an AI product complies with regulations and standards set by health authorities or regulatory bodies, if it directly interfaces with patients or informs patient care decisions, it should be fully governed.
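The consensus rule, that risk depends on the use case and that any tool interfacing with patients or informing care decisions is fully governed regardless of regulatory approval, can be expressed as a small decision function. The sketch below is our illustrative reading of that rule; the tier names for the non-clinical branches are assumptions, not policy artifacts.

```python
def governance_tier(patient_facing: bool,
                    informs_care_decisions: bool,
                    regulator_approved: bool) -> str:
    """Assign an oversight tier from the use case. Per the workshop
    consensus, regulatory approval alone does not exempt a tool from
    full governance."""
    if patient_facing or informs_care_decisions:
        return "full governance"     # high-risk: rigorous committee review
    if not regulator_approved:
        return "standard review"     # non-clinical and unvetted (assumed tier)
    return "light-touch review"      # non-clinical, regulator-approved (assumed tier)

# A regulator-approved imaging tool that informs care is still fully governed.
print(governance_tier(patient_facing=False,
                      informs_care_decisions=True,
                      regulator_approved=True))   # -> full governance
```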
Participants also worked to distinguish the responsibilities of the AI governance committee from those of individual departments engaged in AI adoption. This activity proved challenging, as many initially believed the governance committee should oversee all AI-related activities, including adoption efforts themselves. However, they were reminded that the committee’s role is to provide oversight—ensuring AI initiatives are ethical and responsible—rather than directly leading implementation. Without clear boundaries, the committee risked becoming overwhelmed and inefficient. Consensus emerged that individual departments should be responsible for addressing specific operational concerns, such as clinical needs, alignment with business priorities, feasibility, budgeting, interoperability, workflow and workforce impacts, clinician satisfaction, and product ownership. In contrast, the AI governance committee is responsible for aligning AI use with overarching organizational principles, including safety, efficacy, equity, security, privacy, regulatory compliance, and integration with the organizational IT roadmap.
Developing technology capabilities for AI governance
Participants found the technology infrastructure described in the PPTO framework35 relevant and agreed to use it to examine gaps in their existing system. They nominated teams outside the AI governance committee to take responsibility for finalizing the specifications of the technical capabilities and infrastructure. For instance, participants proposed collaboration between a clinical systems and informatics team and a business team. They reported that this collaboration, bringing together clinical, technical, and business expertise, would ensure the selection of high-quality, appropriate data elements for AI models, thorough documentation of data usage processes, proper approvals for data utilization, and alignment with business objectives. Additionally, participants recommended that these external teams take responsibility for owning and operationalizing the technical capabilities and infrastructure throughout the entire AI lifecycle. Regarding costs, participants noted the absence of an explicit operational plan for covering the ongoing costs of the technical capabilities and infrastructure required to support the AI lifecycle. Participants indicated that cost coverage would likely depend on the specific AI use case and the availability of funding sources.
Developing operations capabilities for AI governance
Participants found the operational strategies described in the PPTO framework35 relevant. They recognized the need for a dedicated budget to operationalize the AI governance system. They then brainstormed key measures of success for evaluating the effectiveness of AI governance and prioritized trust, user satisfaction, efficiency, compliance, and risk mitigation as foundational measures.
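Purely as an illustration, the five foundational measures could seed a simple governance scorecard. The fields, scales, and example values below are hypothetical, not measures the organization has adopted.

```python
from dataclasses import dataclass

@dataclass
class GovernanceScorecard:
    """Hypothetical scorecard for the five measures participants
    prioritized; all scales and fields are illustrative assumptions."""
    period: str
    trust: float              # e.g., mean clinician trust rating, 1-5
    user_satisfaction: float  # e.g., mean end-user satisfaction, 1-5
    efficiency_days: float    # median days from intake to committee decision
    compliance_rate: float    # share of deployed tools meeting policy, 0-1
    risks_mitigated: int      # identified risks with documented mitigations

q1 = GovernanceScorecard("2025-Q1", trust=4.1, user_satisfaction=3.8,
                         efficiency_days=21.0, compliance_rate=0.92,
                         risks_mitigated=5)
print(f"{q1.period}: compliance {q1.compliance_rate:.0%}")
```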
For successful communication and implementation of the AI governance system, participants unanimously emphasized the importance of creating comprehensive documentation of the AI governance system, along with educational materials. They proposed disseminating them throughout the organization using existing standard processes and communication channels.
To effectively integrate patient perspectives into AI development and address patient needs, participants proposed engaging patient advisors on the AI governance committee, collaborating with the Patient Experience Team, and leveraging insights from the Patient Family Advisory Council. This approach also aims to establish a transparent system for sharing information with patients and communities, reflecting their needs in AI governance.
Workshop feedback
At the conclusion of the workshops, participants shared what they found most valuable and offered suggestions for future improvement. They appreciated “learning about various steps we need to think about at [the organization]” and valued the “open dialogue with experts.” One participant highlighted “the fulsome engagement approach and the discussions rooted in practicality,” noting that these were particularly helpful in advancing the conversation. As a suggestion for future workshops, participants expressed interest in seeing “more best practice or existing hospital policies and procedures,” especially “from health systems with less advanced governance systems.”
Policy implementation and organizational change
As the ultimate outcome of this study, workshop participants formulated organizational policies around AI governance, which were subsequently adopted formally by the institution. Using the PPTO framework as a roadmap, the policies defined the people, process, technology, and operations needed for AI governance and were structured according to the organization’s standard policy framework. The policy development process encountered several challenges. One key challenge was balancing foundational principles with the need for practical and feasible policies. The policies had to remain simple enough to be implementable while still providing adequate oversight of AI-related risks. Participants aimed to avoid hindering AI adoption or appearing to stifle innovation and instead achieve a balance between enabling progress and ensuring safety. Limited AI expertise among key stakeholders added another layer of difficulty, requiring efforts to ensure the policies were accessible and actionable for those responsible for implementation. In response to this challenge, participants held detailed discussions about feasibility and strategies for building a more robust governance system over time. As a first step, they refined some of the capabilities identified during the workshops into a simplified version with fewer steps and reduced complexity. As of October 2024, the organization has approved the policies, and an AI governance committee has been successfully established in alignment with the policy recommendations.
Organizational leaders agreed to establish the committee as a subcommittee within the existing DHC, with which all workshop participants were already affiliated. The AI governance committee was structured under the leadership of a single executive and clinician leader and included subcommittees representing a diverse range of stakeholders. These subcommittees included stakeholders with expertise in clinical, technical, ethical, and research domains, as well as patient representatives, ensuring a comprehensive and inclusive approach to AI governance. Following its establishment, the AI governance committee undertook the task of refining the policies further. This process involved iterative testing of the policies with various AI use cases to ensure the applicability and effectiveness of the policies in real-world scenarios. The established AI governance committee will lead and guide AI adoption within the organization moving forward.
Discussion
This study demonstrates how a conceptual AI governance framework—the PPTO framework—can be translated into actionable strategies within a real-world healthcare delivery setting. Using an action case research approach, we collaborated with stakeholders from a Canadian hospital system through interviews and co-design workshops. Stakeholder interviews identified existing strengths, gaps, and priorities for AI governance, providing foundational insights into organizational readiness and needs. The subsequent co-design workshops adapted and implemented the PPTO framework35 to reflect the hospital system’s unique structure and resource environment. Together, these qualitative methods played a pivotal role in tailoring and operationalizing AI governance in a contextually relevant and practical manner.
This research offers both theoretical and practical contributions, aligning with calls in the AI governance literature to move beyond aspirational principles toward practical implementation36. Theoretically, it addresses a critical gap in the literature by generating empirical evidence of real-world implementation of AI governance frameworks—an area where most scholarship remains conceptual. Specifically, it offers the first evidence of implementing the PPTO framework in practice. Practically, the study illustrates how high-level governance principles can be systematically translated into concrete structures, processes, technologies, and operational actions. In response to the first research question, the PPTO framework was effective in guiding the development of an AI governance structure within a complex healthcare environment. This research supports and extends the PPTO framework by validating its practical usability and relevance, while also demonstrating its adaptability across diverse organizational contexts and varying levels of AI maturity. With respect to the second research question, the study underscores the value of various qualitative approaches—particularly surveys, interviews, and co-design workshops—in driving organizational change and shaping governance systems. Qualitative methods are widely used to explore stakeholders’ experiences, challenges, and needs in the adoption of AI37–39. These methods have also been shown to effectively support organizational change in healthcare. For example, a hospital system co-designed a team-based leadership model in collaboration with healthcare teams and patients, using a range of qualitative methods, including in-depth interviews and online surveys. The model, which emphasizes the shared distribution of leadership roles and responsibilities across team members, was successfully implemented within the hospital system40.
Several key insights and implications emerged from this work. First, the findings confirm that the PPTO framework35 is a useful and adaptable tool for establishing AI governance in healthcare organizations in real-world settings. The framework supports organizations in identifying required capabilities across the domains of people, process, technology, and operations, as well as in assessing gaps, planning investments, and building toward sustainable implementation. The findings also demonstrate the framework’s potential for broader transferability across diverse contexts. The PPTO framework, initially developed in a large U.S. healthcare delivery organization, was successfully applied in a large Canadian hospital system. Organizations seeking to understand and implement AI governance may find these results valuable and consider the shared experience with the PPTO framework a useful model for their own initiatives.
Second, establishing robust AI governance should begin with a clear understanding of local needs and a deliberate effort to define organization-specific requirements. This process should be inclusive and participatory, engaging stakeholders with diverse expertise and experience across all organizational levels and functions, including those responsible for governance execution. Such engagement not only ensures the governance system is aligned with organizational realities but also fosters broader organizational buy-in. By incorporating diverse stakeholder voices, governance systems are more likely to be effective, relevant, and ethically grounded.
Third, the development of AI governance is inherently collaborative. It requires shared decision-making and open dialogue, especially in settings where AI knowledge is still emerging and where organizational hierarchies may constrain participation. Creating safe spaces for engagement, and valuing the perspectives of all stakeholders, enables organizations to build more thoughtful, inclusive, and responsive governance systems. It is especially important because initial interactions among stakeholders from diverse backgrounds and areas of expertise can often be met with apprehension41. Various qualitative research methods, such as surveys, interviews, and co-design workshops, can be instrumental in facilitating this collaborative process40. These methods provide structured yet flexible formats for sharing, gathering, and synthesizing diverse insights.
Lastly, building AI governance capacity requires investments in education on AI in healthcare. As AI technologies become more prevalent in healthcare, stakeholders should develop the foundational knowledge to make informed decisions, identify and mitigate risks, and navigate implementation challenges successfully. Education about AI’s technical underpinnings, clinical applications, risks, and ethical considerations equips stakeholders to participate meaningfully in governance efforts and fosters long-term accountability, transparency, and trust.
While this study focused on a single health system, future work could apply the PPTO framework35 across multiple healthcare organizations of varying sizes and structures to test for broader applicability. Comparative studies could help determine whether the framework’s applicability and effectiveness remain consistent across different contexts, including rural versus urban settings or large academic medical centers versus smaller clinics. Additionally, studies could explore the framework’s relevance in organizations located in different geographical regions and operating under varying payment models, such as Canada’s public healthcare system versus the U.S. private insurance system.
Future research could examine the short- and long-term effectiveness and sustainability of AI governance systems. Studies could investigate how these systems evolve over time, adapt to changes in AI technology, and maintain their ability to ensure the safe and ethical use of AI within healthcare. Such studies could also involve developing methods to evaluate the effectiveness of AI governance in ensuring ethical, equitable, and safe AI deployment over both the short and long term.
Methods
Overview
The study utilized an action case research method, combining elements of both action research and case study research. Through stakeholder interviews and a series of co-design workshops, we collaborated with participants to implement the PPTO framework and drive organizational change, embodying the principles of action research. At the same time, the study focused on a specific organization and its AI governance efforts, reflecting the case study approach.
The primary objectives of the interviews were to gain a comprehensive understanding of the current state of AI adoption within the organization and assess current capabilities and future needs for AI governance. The interviews examined the existing processes for adopting AI solutions, identified key stakeholders or committees involved in AI adoption, and explored the current technical infrastructure supporting these initiatives. These objectives served as foundational steps toward developing a robust AI governance system within the organization.
The primary objective of the co-design workshops was to adapt the PPTO framework35 to the organization’s specific characteristics and needs. A committee of senior leadership, with roles potentially relevant to the future AI governance system, played a central role in co-designing the adapted framework to ensure it would be both relevant and actionable within the organizational context.
This study was considered a quality improvement project that did not involve the collection of identifiable data from human subjects. Interviews and workshops were informational and focused on processes and activities within the organization. As such, ethics board approval was not required. All participants provided verbal consent to participate in the study and to have anonymized data used in qualitative analyses.
Part 1: Stakeholder interviews
Thirteen stakeholders were recruited from across the organization. On average, participants had 15.85 years of experience and brought technical (n = 6), operational (n = 4), and clinical (n = 3) expertise (Table 3). Technical stakeholders comprised technology developers and IT infrastructure managers; operational stakeholders represented roles in operations management and research ethics; and clinical stakeholders included frontline clinicians. We used purposive sampling to gather insights from key stakeholders involved in adopting existing AI solutions. To broaden representation, we also employed snowball sampling by inviting participants to recommend others with valuable and relevant perspectives. To mitigate potential bias, we ensured diverse representation across clinical, technical, and operational domains, organizational levels, and years of experience. Recruitment continued until thematic saturation was reached.
Table 3.
Demographics of interview participants
| Characteristic | Category | Frequency (n) | Percentage (%) |
|---|---|---|---|
| Years of experience | 1–5 years | 1 | 7.7 |
| | 6–10 years | 2 | 15.4 |
| | 11–19 years | 7 | 53.8 |
| | 20 or more years | 3 | 23.1 |
| Primary domains of expertise | Technical | 6 | 46.2 |
| | Operational | 4 | 30.8 |
| | Clinical | 3 | 23.1 |
| Education levels | Bachelor’s | 1 | 7.7 |
| | Master’s | 4 | 30.8 |
| | Professional or doctorate | 8 | 61.5 |
| Organizational roles | Junior | 2 | 15.4 |
| | Mid-senior | 1 | 7.7 |
| | Director | 6 | 46.2 |
| | Executive | 4 | 30.8 |
Participants took part in one-on-one semi-structured, in-depth interviews. We created an interview guide and asked participants about current governance practices for AI solutions, as well as future governance needs, across four stages of the AI lifecycle: (1) procurement, (2) development and adaptation, (3) clinical integration, and (4) lifecycle management15. The questions in the guide examined existing processes for AI adoption, identified key stakeholders or committees involved in AI adoption, and explored the current technical infrastructure supporting AI implementation (Supplementary Note 1).
All interviews were conducted via Zoom and lasted 30 min to an hour. The interviews were recorded and later transcribed for analysis via Otter.ai, an automated transcription software. All participants provided verbal informed consent to participate in the interviews, to have the interviews recorded, and to allow anonymized data to be used in qualitative analysis. To analyze the transcripts and extract key insights, we used a coding process and conducted a thematic analysis in NVivo, a qualitative data analysis software. A hybrid approach combining inductive and deductive thematic analysis was employed. The inductive approach helped explore various aspects of the existing governance system and captured emergent themes, while the deductive approach organized these themes using the PPTO framework to assess governance system capabilities across the People, Process, Technology, and Operations domains.
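To illustrate the deductive half of the hybrid analysis, the toy sketch below bins inductively derived codes into the four PPTO domains. The code names are invented for illustration and are not the study’s actual codebook.

```python
# Toy deductive step: map inductively derived codes (illustrative only)
# onto the four PPTO domains; anything unmapped stays an emergent theme.
PPTO_DOMAINS = {
    "People": ["stakeholder roles", "committee expertise"],
    "Process": ["ad hoc governance", "procurement gaps", "monitoring"],
    "Technology": ["data infrastructure", "de-identification"],
    "Operations": ["budget ownership", "success metrics"],
}

def classify(code: str) -> str:
    for domain, codes in PPTO_DOMAINS.items():
        if code in codes:
            return domain
    return "Emergent (inductive only)"

for code in ["procurement gaps", "ethics consultation"]:
    print(code, "->", classify(code))
```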
Part 2: Co-design workshops
We recruited a team of five senior leadership members expected to oversee the AI governance system to participate in a series of co-design workshops. They were members of the existing DHC, the body managing projects with digital components in clinical practice within the organization. They had 19.60 years of experience on average and brought expertise across operational (n = 3), technical (n = 1), and clinical (n = 1) domains.
To prepare participants for the workshops and ensure all of their voices were represented, we conducted an online survey and semi-structured interviews. The survey contained open-ended questions about aspirational practices and key decisions for AI governance across the People, Process, Technology, and Operations domains (Supplementary Note 2). Next, we reviewed participants’ survey responses and conducted one-on-one follow-up interviews to better understand the needs and priorities for the AI governance system described in their survey responses. We asked clarifying questions and explored any concerns that were raised. All interviews were conducted via Zoom and lasted 30 min. While the interviews were not recorded, detailed notes were taken throughout.
After gathering data from surveys and interviews, we organized and visualized the data on a virtual collaborative whiteboard. All survey responses were anonymized and displayed as sticky notes, organized by question. Interview insights were synthesized through content analysis, where each response was reviewed and grouped by recurring themes, highlighting key patterns across participants’ perspectives. This synthesized information was instrumental in refining the framework to align with the organization’s specific characteristics and requirements. It was then added to the virtual whiteboard to ensure that participants could view and reflect on each other’s responses later during the workshop, promoting a comprehensive and inclusive discussion.
The primary activity in this study was a series of three design thinking workshops, each focused on different domains of AI governance. The first workshop addressed the Process domain, the second explored the People domain, and the third covered the Technology and Operations domains. In each workshop, participants accessed the virtual whiteboard, which displayed the sticky notes organized by question. They discussed each question, asking clarifying questions, expanding on initial responses, and addressing any conflicting views. Participants also reviewed and provided feedback on the PPTO framework. Throughout the workshops, participants modified existing sticky notes or added new ones to capture insights from the discussion.
After each workshop, we conducted a content analysis of the sticky notes to identify key themes. Each note was reviewed and categorized into broader themes based on its content to highlight recurring patterns. This thematic analysis enabled a systematic summary of participants’ prominent perspectives on each question, which was then used to adapt the PPTO framework to better align with the organization’s characteristics, needs, and priorities.
As with the interviews, all participants provided verbal informed consent to participate in the workshops and to allow anonymized data to be used in qualitative analysis.
Author contributions
J.Y.K. conducted interviews and workshops, analyzed the findings, and wrote and edited the manuscript. A.H. managed the project and contributed to the investigation. J.K. managed the project and contributed to the investigation. T.T. contributed to the investigation and edited the manuscript. C.H. contributed to the investigation and the drafting of organizational policies. B.F. contributed to the investigation, drafted organizational policies, edited the manuscript, and provided oversight. S.B. contributed to the investigation and provided oversight. M.S. contributed to the investigation, edited the manuscript, and provided oversight. All authors read and approved the final manuscript.
Data availability
The qualitative data generated and analyzed during this study are not publicly available due to ethical and confidentiality considerations. Participants did not consent to public data sharing. However, select anonymized quotes are included in the manuscript to support key findings.
Competing interests
S.B. is named co-inventor of products licensed by Duke University to Cohere-Med Inc., FullSteam Health Inc., and Clinetic Inc.; he holds equity in Clinetic Inc. M.S. is a co-inventor of intellectual property licensed by Duke University to Clinetic Inc., KelaHealth Inc., and Cohere-Med Inc.; he holds equity in Clinetic Inc.; and he has received speaking engagement honoraria from Roche and the American Medical Association. All other authors declare no financial or non-financial competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: Benjamin Fine, Suresh Balu, Mark Sendak.
Supplementary information
The online version contains supplementary material available at 10.1038/s41746-025-01909-3.
References
- 1. Badgeley, M. A. et al. Deep learning predicts hip fracture using confounding patient and healthcare variables. npj Digit. Med. 2, 31 (2019).
- 2. Zech, J. R. et al. Confounding variables can degrade generalization performance of radiological deep learning models. arXiv preprint arXiv:1807.00431 (2018).
- 3. Bohr, A. & Memarzadeh, K. The rise of artificial intelligence in healthcare applications. In Artificial Intelligence in Healthcare (eds Bohr, A. & Memarzadeh, K.) 25–60 (Academic Press, 2020).
- 4. He, J. et al. The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 25, 30–36 (2019).
- 5. Rumsfeld, J. S., Joynt, K. E. & Maddox, T. M. Big data analytics to improve cardiovascular care: promise and challenges. Nat. Rev. Cardiol. 13, 350–359 (2016).
- 6. Clement, J. & Maldonado, A. Q. Augmenting the transplant team with artificial intelligence: toward meaningful AI use in solid organ transplant. Front. Immunol. 12, 694222 (2021).
- 7. Bajwa, J., Munir, U., Nori, A. & Williams, B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc. J. 8, e188–e194 (2021).
- 8. U.S. Food and Drug Administration. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices (2024).
- 9. Executive Office of the President, National Science and Technology Council. Preparing for the Future of Artificial Intelligence (2016).
- 10. de Hond, A. A. H. et al. Guidelines and quality criteria for artificial intelligence-based prediction models in healthcare: a scoping review. npj Digit. Med. 5, 2 (2022).
- 11. Accenture. Artificial Intelligence: Healthcare’s New Nervous System (Accenture, 2017).
- 12. CB Insights. From virtual nurses to drug discovery: 106 artificial intelligence startups in healthcare. https://www.cbinsights.com/research/artificial-intelligence-startups-healthcare/ (2017).
- 13. Dreyer, K. J. & Geis, J. R. When machines think: radiology’s next frontier. Radiology 285, 713–718 (2017).
- 14. Boag, W. et al. The algorithm journey map: a tangible approach to implementing AI solutions in healthcare. npj Digit. Med. 7, 87 (2024).
- 15. Kim, J. Y. et al. Organizational governance of emerging technologies: AI adoption in healthcare. In Proc. 2023 ACM Conference on Fairness, Accountability, and Transparency 1396–1417 (ACM, 2023).
- 16. Reddy, S., Allan, S., Coghlan, S. & Cooper, P. A governance model for the application of AI in health care. J. Am. Med. Inform. Assoc. 26, 1034–1037 (2019).
- 17. Suresh, H. & Guttag, J. A framework for understanding sources of harm throughout the machine learning life cycle. In Proc. 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization 1–9 (ACM, 2021).
- 18. Chustecki, M. Benefits and risks of AI in health care: narrative review. Interact. J. Med. Res. 13, e53616 (2024).
- 19. Obermeyer, Z., Powers, B., Vogeli, C. & Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447–453 (2019).
- 20. Adam, H. et al. Write it like you see it: detectable differences in clinical notes by race lead to differential model recommendations. In Proc. 2022 AAAI/ACM Conference on AI, Ethics, and Society 7–21 (ACM, 2022).
- 21. Mäntymäki, M., Minkkinen, M., Birkstedt, T. & Viljanen, M. Defining organizational AI governance. AI Ethics 2, 603–609 (2022).
- 22. Tornatzky, L. G. & Fleischer, M. The Processes of Technological Innovation (Lexington Books, 1990).
- 23. Yang, J., Luo, B., Zhao, C. & Zhang, H. Artificial intelligence healthcare service resources adoption by medical institutions based on TOE framework. Digit. Health 8, 20552076221126034 (2022).
- 24. Bin Naeem, S., Azam, M., Kamel Boulos, M. N. & Bhatti, R. Leveraging the TOE framework: examining the potential of mobile health (mHealth) to mitigate health inequalities. Information 15, 176 (2024).
- 25. Jeilani, A. & Hussein, A. Impact of digital health technologies adoption on healthcare workers’ performance and workload: perspective with DOI and TOE models. BMC Health Serv. Res. 25, 271 (2025).
- 26. Beier, M. & Früh, S. Technological, organizational, and environmental factors influencing social media adoption by hospitals in Switzerland: cross-sectional study. J. Med. Internet Res. 22, e16995 (2020).
- 27. Taeihagh, A. Governance of artificial intelligence. Policy Soc. 40, 137–157 (2021).
- 28. National Institute of Standards and Technology (NIST). AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework (accessed 2025).
- 29. Government of Canada. Algorithmic Impact Assessment tool. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (accessed 2025).
- 30. Abràmoff, M. D. et al. Considerations for addressing bias in artificial intelligence for health equity. npj Digit. Med. 6, 170 (2023).
- 31. Chen, I. Y. et al. Ethical machine learning in healthcare. Annu. Rev. Biomed. Data Sci. 4, 123–144 (2021).
- 32. Chin, M. H. et al. Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Netw. Open 6, e2345050 (2023).
- 33. Coombs, L. et al. A machine learning framework supporting prospective clinical decisions applied to risk prediction in oncology. npj Digit. Med. 5, 117 (2022).
- 34. Collins, G. S. et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ 385, q902 (2024).
- 35. Kim, J. Y., Hasan, A., Balu, S. & Sendak, M. People, process, technology, and operations (PPTO) framework for organizational AI governance in healthcare. npj Digit. Med.
- 36. Reddy, S., Allan, S., Coghlan, S. & Cooper, P. A governance model for the application of AI in health care. J. Am. Med. Inform. Assoc. 27, 491–497 (2020).
- 37. Leenen, J. P. L. et al. Exploring the complex nature of implementation of artificial intelligence in clinical practice: an interview study with healthcare professionals, researchers and policy and governance experts. PLOS Digit. Health 4, e0000847 (2025).
- 38. Siira, E., Tyskbo, D. & Nygren, J. Healthcare leaders’ experiences of implementing artificial intelligence for medical history-taking and triage in Swedish primary care: an interview study. BMC Prim. Care 25, 268 (2024).
- 39. Sendak, M. et al. Building models, building capacity: a review of participatory machine learning for HIV prevention. PLOS Glob. Public Health 5, e0003862 (2025).
- 40. McAuliffe, E. et al. Collective leadership and safety cultures (Co-Lead): protocol for a mixed-methods pilot evaluation of the impact of a co-designed collective leadership intervention on team performance and safety culture in a hospital group in Ireland. BMJ Open 7, e017569 (2017).
- 41. Pallesen, K. S., Rogers, L., Anjara, S., De Brún, A. & McAuliffe, E. A qualitative evaluation of participants’ experiences of using co-design to develop a collective leadership educational intervention for health-care teams. Health Expect. 23, 358–367 (2020).