Table 1.
Guiding Principle | Key Questions: Boards of Directors | Key Questions: Health Leaders | Key Questions: Risk Managers |
---|---|---|---|
Ethical risks: Clearly define the value proposition of AI systems | 1. What services, care pathways, or client populations have the highest burden and need? Can AI help create fair, equitable access? 2. Could an AI system introduce new biases or inequities in the healthcare system? 3. What boundaries should be placed on AI systems? What types of decisions should they be allowed to make? | 1. How can AI help address potential gaps at our organization? Have these applications been proven, or is this a novel application? 2. What boundaries should be placed on AI systems? What decisions should they be allowed to make? 3. Does the application of AI lead to improvements that are only possible at infeasible costs? | 1. Would an AI solution that makes errors be tolerable to deploy in practice? What if an AI system makes errors, but does so less frequently than humans? 2. Does our organization have appropriate datasets with reliable means of capturing inequities and diversity? 3. Have the proposed AI solution and objective undergone an independent ethics review? |
Governance risks: Establish comprehensive governance and oversight | 1. Who in the organization has ultimate decision-making authority in relation to AI initiatives, and what is the framework for such decisions? Are patients and families included? 2. If vendors or other external parties are involved in developing a new solution, who owns the solution? Who is accountable? 3. Any AI solution must be designed carefully around the workflows in which it will be used. Are there unintended consequences of its use? | 1. How are AI-related decisions made, and have these processes and required approvals been communicated and enforced in policies? 2. What consideration is given to the future of work, including how AI may impact workflows and functions? 3. If vendors or other external parties are involved in developing a new solution, who owns the solution? Who is accountable? | 1. How are AI-related decisions made, and have these processes and required approvals been communicated and enforced in policies? 2. Who is accountable for recommendations made by AI systems in our organization? 3. Are all stakeholders impacted by the outcomes of an AI project included in governance and oversight? Have any stakeholders been missed? |
Performance risks: Apply rigorous methods in building AI systems | 1. No AI model will produce perfect results. What performance threshold is acceptable? How is this determined, and who is accountable? 2. Is a solution developed with in-house resources and expertise, or are external partners involved? How are the external partners chosen? How are their skills and services validated? 3. What support do any external providers offer post-deployment, and what is done to ensure accountability? | 1. Is a solution developed with in-house resources and expertise, or are external partners involved? How are they chosen and validated? 2. What support do any external providers offer post-deployment, and what is done to ensure accountability? 3. AI systems often require vast amounts of data to train and build. Are there scenarios where the organization considers partnerships with other institutions to augment datasets? | 1. AI systems may produce false positive or false negative predictions. Is one error type worse than the other? What error rate is tolerable, and how is this determined? 2. Is the data used to build and train the system representative of what it will see in real life? 3. How does an AI system continue to learn post-deployment? What is the ongoing process to collect data for continuous improvement? |
Implementation risks: Apply change management tools and processes | 1. Who is ultimately accountable for recommendations made by an AI system? 2. If an AI system provides a recommendation that conflicts with the advice of a clinician, who makes the final decision? 3. Have patients provided consent for their data to be used for development of an AI system? Have clients provided consent to have their treatment informed or guided by an AI system? | 1. Does interacting with AI systems change role responsibilities and job descriptions? 2. How does the AI system provide reasoning for the recommendations it provides in a way that a diverse user population can understand? 3. What is the disclosure process if errors occur based on the recommendations of an AI system? Who has access to AI datasets when investigating potential failures or adverse events? | 1. How is the AI system making recommendations or acting? What inputs does the system consider, and how are they evaluated? 2. How do users provide feedback if they think the output of an AI model is incorrect? How is this feedback used to update the model? 3. During incident reviews and investigations, are the datasets used to train an AI system also disclosed? If so, how? |
Security risks: Create strict privacy and security protocols | 1. What controls are in place to manage the risk of privacy breaches or other cybersecurity incidents? How are data integrity, privacy, and security maintained? 2. What measures have been put in place to prevent hardware and software faults that could result in data being compromised? 3. What is the business continuity plan in the event of a service interruption of an AI system? | 1. What controls are in place to manage the risk of privacy breaches or other cybersecurity incidents? How are data integrity, privacy, and security maintained? 2. What measures have been put in place to prevent hardware and software faults that could result in data being compromised? 3. What is the business continuity plan in the event of a service interruption of an AI system? | 1. AI models may also have been built using data from many sources. Where are these data stored, and how is access to them managed? 2. What measures have been put in place to ensure that data are secure? Is there a way to know if data have been corrupted, and how? 3. Do additional protocols or mitigation strategies need to be put in place to protect the privacy of data and the integrity of AI systems? |
Abbreviation: AI, artificial intelligence.