Introduction
The potential uses of artificial intelligence and machine learning (AI/ML) within the healthcare field of pharmacovigilance are significant and possibly limitless.1–5 But while AI/ML holds promise, there are also limitations and challenges to successful implementation. Within pharmacovigilance, the use of technologies such as Robotic Process Automation and AI/ML is not new1 and promises to dramatically affect all aspects of pharmacovigilance.4,5 Possible benefits range from reducing the cost of current pharmacovigilance activities and improving the “as-is” to broader-ranging activities with the potential to revolutionize the pharmacovigilance field.2,3 However, amid all the anticipated promise and hype of AI/ML, we must remember that pharmacovigilance remains a highly regulated space; the thalidomide tragedy is one reminder of why there must be controls and regulations regarding the safety of medicines and vaccines so that no patient suffers avoidable harm. Healthcare professionals prescribe medicines and vaccines that patients consume, trusting that their safety has been adequately assessed and well described, continues to be monitored, and that safety issues, should they arise, are rapidly and transparently communicated. Behind the information relied on by healthcare professionals and patients lies a diverse set of legal regulations that mandate the scientific evaluation and communication of benefits and risks for medicines and vaccines. This framework is complex and is further complicated by the regulatory variations that exist worldwide.6 Aligned to these varied regulations are pharmaceutical company processes, including governance frameworks, that ensure the integrity of the final outputs arising from pharmacovigilance activities; these processes are essential to ensuring patient safety and maintaining trust in medicines and vaccines.
A systematic analysis7 of articles published from 2000 to 2021 demonstrated that the uptake of AI/ML in pharmacovigilance has been slow; moreover, only 42 of 393 articles discussed adopting solutions reflecting current best practices in this area. One reason may be that regulatory requirements for pharmacovigilance activities that use AI/ML are currently only partially formed,8 and one challenge to the wider adoption of AI/ML in pharmacovigilance is the need for a harmonized global regulatory environment.1,4,9 In addition, there is very limited thought, opinion, or scientific commentary on how a pharmaceutical company should govern AI/ML within the current highly regulated pharmacovigilance framework, and existing literature in the public domain suggests regulatory alignment is still some way off.10,11 It is this gap in the scientific commentary that this article aims to fill.
We suggest that existing robust processes that govern and control the implementation of computerized systems within pharmacovigilance are directly applicable and can be leveraged and expanded under a new pharmacovigilance paradigm that utilizes AI/ML.
Governance of AI/ML in pharmacovigilance
Pharmaceutical company responsibility
When AI/ML is utilized to support the responsibilities of a pharmacovigilance department,12–20 it must be used in an ethical,21 risk-based manner, ensuring any change in, or impact to, business processes is fully understood and can be successfully managed by the pharmacovigilance department. Ensuring AI/ML is managed through a risk-based approach with a focus on audit readiness is paramount (Figure 1).
Figure 1.
Responsibilities of pharmacovigilance departments using AI/ML.
AI, artificial intelligence; ML, machine learning.
Roles and responsibilities within a pharmaceutical company pharmacovigilance department
Establishing and maintaining roles and responsibilities within a pharmacovigilance department for governing AI/ML can be accomplished by defining a decision-making matrix, such as the proposed RACI (Responsible, Accountable, Consulted, or Informed) matrix (Table 1); an illustrative, machine-readable encoding of such a matrix is sketched after the table. Defining the necessary training, education, and work experience parameters of these roles is of critical importance22 and must be tailored carefully to each pharmacovigilance department. Once AI/ML has been tested, validated, and deployed for use by a pharmacovigilance department, accountability for its governance must lie with the pharmacovigilance process owner and not a technologist (e.g., a data scientist or AI/ML engineer). Accountability is placed with a single decision-maker who can pull together a team of individuals who collectively understand the technology, pharmacovigilance processes, pharmacovigilance regulations, and the benefit/risk perspective of the patient.23
Table 1.
RACI (Responsible, Accountable, Consulted, or Informed) structure for a pharmacovigilance department using AI/ML.
Column groupings: technical implementation (algorithm understanding; product architecture and deployment), management (risk management; human involvement/monitoring ramp-down plan), and operations (metrics; data integrity/privacy/security; data quality).

| Organizational persona or role (description) | Algorithm understanding | Product architecture and deployment | Risk management | Human involvement/monitoring ramp-down plan | Metrics | Data integrity/privacy/security | Data quality |
|---|---|---|---|---|---|---|---|
| Process owner, business: an individual from the PV team responsible for the business process using specific AI/ML software; expected to be non-technical and focused on the business process | A | A | A | A | R/A | A | A |
| Data owner, business: an individual from the PV team accountable for the classification, protection, use, and quality of the data used as inputs to the specific AI/ML software; expected to be non-technical and focused on business data | I | I | — | I | — | I | R |
| Product owner, technology: a technical expert from the PV team who focuses on closing the gap between the technical and business sides of AI/ML software product development | R | R | C | C | I | R | I |
| Risk management, business: an individual from the PV team responsible for coordinating and managing risks associated with AI/ML software | — | — | R | C | — | I | — |
| Oversight board, business: a group of individuals from the PV team or the extended enterprise, combining technical, business, and risk skills, that provides oversight and governance for AI/ML software; this group need not be standalone, and the function may be incorporated into other business or technical governance groups | I | I | I/C | I/C | I | I | I |
| Head of safety, business: an individual from the PV team who bears the responsibility that the AI/ML software is designed, tested, validated, implemented, managed, and monitored correctly according to internal policies and external regulations | I | I | I | I | I | I | I |
AI, artificial intelligence; ML, machine learning; PV, pharmacovigilance.
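To make the governance structure auditable, the assignments in Table 1 can also be captured in machine-readable form and checked for internal consistency, for example that every activity has exactly one accountable role. The sketch below is illustrative only: the role and activity names mirror Table 1, and the single-Accountable rule is an assumed internal convention a department might choose to enforce, not a regulatory requirement.

```python
# Minimal, illustrative encoding of the Table 1 RACI matrix.
# The single-Accountable check is an assumed internal convention.
ACTIVITIES = [
    "algorithm_understanding", "product_architecture", "risk_management",
    "monitoring_ramp_down", "metrics", "data_integrity", "data_quality",
]

RACI = {
    "process_owner":   ["A", "A", "A",   "A",   "R/A", "A", "A"],
    "data_owner":      ["I", "I", "-",   "I",   "-",   "I", "R"],
    "product_owner":   ["R", "R", "C",   "C",   "I",   "R", "I"],
    "risk_management": ["-", "-", "R",   "C",   "-",   "I", "-"],
    "oversight_board": ["I", "I", "I/C", "I/C", "I",   "I", "I"],
    "head_of_safety":  ["I", "I", "I",   "I",   "I",   "I", "I"],
}

def check_single_accountable(raci: dict[str, list[str]]) -> None:
    """Verify each activity has exactly one role holding accountability."""
    for i, activity in enumerate(ACTIVITIES):
        accountable = [role for role, row in raci.items()
                       if "A" in row[i].split("/")]
        assert len(accountable) == 1, f"{activity}: found {accountable}"

check_single_accountable(RACI)  # passes for the assignments in Table 1
```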
Technology understanding and implementation
Master list
It is imperative that the safety department maintain a central listing of all AI/ML in use within the department for audit purposes. One potential location is the Pharmacovigilance System Master File (PSMF), for pharmaceutical companies operating in the European Union or other regions where a PSMF is required, or a similar managed document.
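As one illustration, each entry in such a master list might capture the facts an auditor or inspector would first ask about. The record structure below is a hypothetical sketch; the field names and example values are assumptions, not prescribed PSMF content.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIMLInventoryEntry:
    """One row of a department's AI/ML master list (illustrative fields)."""
    name: str                  # what the AI/ML is
    business_process: str      # the PV process it supports
    process_owner: str         # accountable PV process owner (see Table 1)
    supplier: str              # internal team or external vendor
    validation_ref: str        # pointer to validation documentation
    control_plan_ref: str      # pointer to the control plan
    model_version: str
    go_live: date
    last_periodic_review: date

# Hypothetical example entry.
entry = AIMLInventoryEntry(
    name="Adverse event case triage classifier",
    business_process="ICSR intake and prioritization",
    process_owner="J. Doe",
    supplier="External vendor (under data use agreement)",
    validation_ref="VAL-2024-017",
    control_plan_ref="CP-2024-009",
    model_version="2.3.1",
    go_live=date(2024, 3, 1),
    last_periodic_review=date(2025, 1, 15),
)
```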
AI/ML understanding and transparency
As with existing pharmacovigilance information technology systems, it is imperative that the pharmacovigilance process owner possesses a comprehensive understanding of the AI/ML at the pharmacovigilance process level, can effectively communicate its operation as it relates to patient safety and risks, and is partnered with other individuals who can bridge knowledge gaps between technical understanding of AI/ML and business process implications.23 The pharmacovigilance process owner must also have a clear understanding of the training and production datasets, bias testing, and relevant performance metrics, as these are paramount in understanding the production performance of the AI/ML implementation. This understanding must be appropriately documented and open to audit.24
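For instance, performance metrics documented at validation can serve as the reference against which production performance is compared. The sketch below assumes a binary classification task and an illustrative tolerance value; the choice of metrics, the counts, and the threshold are assumptions that would be defined in each department's control plan.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Reference values documented at validation (illustrative numbers).
val_precision, val_recall = precision_recall(tp=850, fp=60, fn=90)

# Counts from a quality-checked production sample (illustrative numbers).
prod_precision, prod_recall = precision_recall(tp=410, fp=55, fn=70)

TOLERANCE = 0.05  # assumed acceptable absolute drop, per the control plan
degraded = (val_precision - prod_precision > TOLERANCE or
            val_recall - prod_recall > TOLERANCE)
print(f"validation: precision={val_precision:.3f}, recall={val_recall:.3f}")
print(f"production: precision={prod_precision:.3f}, recall={prod_recall:.3f}")
print("escalate to process owner" if degraded else "within tolerance")
```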
AI/ML algorithm details may be examined by an inspector, and pharmaceutical pharmacovigilance departments should be prepared to explain what the AI/ML does; they should also consider how they would explain the AI/ML to non-experts to give assurance to regulators. While there is limited value in reviewing algorithms for assurance purposes,25 the pharmacovigilance process owner must consider having an agreement in place with the AI/ML provider (whether internal to the pharmacovigilance department, internal to the wider pharmaceutical company, or an external supplier) to provide support in the event of an inspection request. Even though regulatory inspections are confidential in nature, such an agreement is likely to remain restrictive, particularly with an external supplier, to prevent potential patent or proprietary trade secrets from entering the public domain.
AI/ML characteristics considered for documentation and audit readiness should follow Good Machine Learning Practice26 and Good Practice (GxP) regulations, and should align with a pharmaceutical company’s Certified Software Quality Analyst certification processes.
AI/ML implementation management
Establishing a framework for trustworthy AI/ML is important when implementing and leveraging the power of AI/ML within any system or process.27 This can be realized through existing pharmacovigilance system principles, including validation, production monitoring, and risk planning.4 Overlaying traditional pharmacovigilance system management principles with guidance from the US National Institute of Standards and Technology (NIST)27 results in validation, accountability/transparency, and reliability emerging as the major themes for managing AI/ML within pharmacovigilance.
AI/ML validation
All computerized systems within the pharmacovigilance department that support processes bound by GxP regulations are validated in proportion to the potential risk to patient safety. AI/ML is a computer system component and must also be validated. Validation, following procedures approved by a supporting quality/compliance department, involves demonstrating through documented evidence that an AI/ML implementation is reliable, fit for its specific purpose, and compliant with regulatory and corporate requirements.28,29 AI/ML must be assessed to identify potential risks, which are documented, monitored, and included in quality management documents, inspection readiness documents, and a control plan. AI/ML provided by third parties must also be evaluated, and audits of the third party conducted, in line with current pharmacovigilance regulations. Pharmacovigilance process owners must prepare for inspections by regulatory agencies and must maintain system registers, overviews, and procedures that document the use of the GxP system. The compliance status must be reviewed and periodically updated to include the cumulative effects of changes or revisions to the deployed AI/ML.
AI/ML monitoring
While validation documentation requirements will already exist in a pharmacovigilance department, necessitating that training datasets, validated AI/ML code, and test results be retained and managed, we suggest additional documentation is required when implementing AI/ML systems for accountability and transparency purposes. A control plan is one mechanism for achieving this: it provides accountability and transparency, documents the AI/ML risk plan, and defines the performance parameters for both the AI/ML and the operating infrastructure, enabling decisions on whether the AI/ML is operating as defined and on when the AI/ML or the operating infrastructure should be modified or updated. Detecting deviations caused by varying input data, such as outliers and data drift, is critical.30 Monitoring an AI/ML’s input and output data, with care given to data volumes and AI/ML-to-AI/ML interactions, is analogous to the quality check procedures in place to verify that human workers are performing tasks within defined performance parameters. A robust incident and event management process for time-critical notifications needing human involvement is important to alert the necessary individuals to sensitive production issues.
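One widely used statistic for the kind of input-data monitoring described above is the population stability index (PSI), which compares the distribution of a feature in production against the distribution observed at validation. The sketch below is a minimal illustration using numpy and synthetic data; the binning choice, the thresholds (around 0.1 for moderate and 0.25 for major shift are commonly cited heuristics), and the escalation actions are assumptions that would live in the control plan.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               production: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a validation-time baseline sample and production data."""
    # Bin edges from baseline quantiles so each bin holds ~equal baseline mass.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside baseline range
    p = np.histogram(baseline, edges)[0] / len(baseline)
    q = np.histogram(production, edges)[0] / len(production)
    p, q = np.clip(p, 1e-6, None), np.clip(q, 1e-6, None)  # avoid log(0)
    return float(np.sum((q - p) * np.log(q / p)))

# Synthetic example: production data has drifted from the baseline.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)    # validation-time feature values
production = rng.normal(0.4, 1.2, 10_000)  # drifted production values
psi = population_stability_index(baseline, production)
# Heuristic bands often quoted in practice: <0.1 stable, 0.1-0.25 moderate,
# >0.25 major shift warranting escalation per the control plan.
print(f"PSI = {psi:.3f}")
```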
A pharmacovigilance department may find it beneficial to maintain a closed platform (a so-called “walled garden”) for each AI/ML implementation, where access is restricted and regulated under a data use agreement.31 The walled garden, containing training data, AI/ML code, and test data, is used both for information sharing with regulators and for continued AI/ML refinement. The walled garden must mirror the applicable production environment such that results from the walled garden can be generalized to production. In the current regulatory environment, incremental AI/ML updates and the training and test datasets must be versioned and retained.
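In practice, the versioning requirement can be met with content-addressed manifests: each dataset and model artifact is fingerprinted with a cryptographic hash so that any walled-garden result can later be tied to the exact inputs that produced it. The sketch below is a minimal illustration; the file paths and manifest layout are assumptions, and a production setup would more likely use a dedicated data/model versioning tool.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content fingerprint of one artifact (dataset file or model binary)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(artifacts: dict[str, Path], version: str, out: Path) -> None:
    """Record an immutable snapshot of an AI/ML release and its datasets."""
    manifest = {
        "version": version,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "artifacts": {name: sha256_of(p) for name, p in artifacts.items()},
    }
    out.write_text(json.dumps(manifest, indent=2))

# Hypothetical artifact paths for one incremental AI/ML update.
write_manifest(
    {"training_data": Path("train_v7.parquet"),
     "test_data": Path("test_v7.parquet"),
     "model": Path("triage_model_v7.bin")},
    version="7.0.0",
    out=Path("release_manifest_v7.json"),
)
```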
AI/ML reliability
A reliable AI/ML implementation must offer benefits that outweigh its negative effects and ensure that unacceptable effects can be monitored for resolution.27,32 When the reliability of the AI/ML is reduced, for example as production input data drifts away from the test data, the AI/ML control plan must capture a clear understanding of the AI/ML’s reliability, the monitoring conditions, and the necessary actions.
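The “monitoring conditions and necessary actions” can be made concrete as an explicit condition-to-action table in the control plan, so that reduced reliability triggers a predefined response rather than an ad hoc one. The bands and actions below are illustrative assumptions only.

```python
# Illustrative control-plan excerpt: reliability bands mapped to actions.
# Band boundaries and actions are assumptions a department would define.
CONTROL_PLAN_ACTIONS = [
    # (minimum acceptable recall, action)
    (0.90, "continue: AI/ML operating within defined parameters"),
    (0.85, "alert: notify process owner; increase human QC sampling"),
    (0.00, "halt: route all cases to human review; open incident record"),
]

def action_for(recall: float) -> str:
    """Return the control-plan action for an observed recall value."""
    for threshold, action in CONTROL_PLAN_ACTIONS:
        if recall >= threshold:
            return action
    return CONTROL_PLAN_ACTIONS[-1][1]

print(action_for(0.93))  # continue
print(action_for(0.87))  # alert
print(action_for(0.62))  # halt
```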
Risk management
The documentation of risk management strategies describes the risks and mitigations associated with AI/ML involved in a pharmacovigilance workflow. Existing experience with risk management frameworks33 in pharmacovigilance must be incorporated into the approach for AI/ML, with risks identified, assessed, and prioritized in terms of their importance.
All risks must have mitigation plans developed, and a quality management approach should be taken that includes actions, timeframes, allocated responsible persons, and effectiveness checks. Risk mitigations are managed within defined timeframes and reviewed routinely. When risks have been suitably mitigated, or potential risks have not been observed upon implementation, these risks are removed from the plan to ensure that focus, attention, and effort remain on mitigating the identified risks. We suggest that the removal of any risk from the control plan must be agreed by a quorum, led by the pharmacovigilance process owner.
Risk management strategies are structured to last the lifecycle of the AI/ML implementation and are reviewed routinely as identified risks change. We suggest that risk management strategies be documented within the control plan.
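A risk register entry supporting this approach might record the fields named above: mitigation action, timeframe, responsible person, and an effectiveness check, along with a status that can only be closed by quorum agreement. The structure below is a hypothetical sketch with illustrative field names and values.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    """One entry in an AI/ML risk register (illustrative fields only)."""
    risk_id: str
    description: str
    severity: str            # e.g., "high" / "medium" / "low"
    mitigation: str
    responsible: str
    due: date
    effectiveness_check: str
    status: str = "open"     # removal requires quorum agreement, per the text

register = [
    RiskEntry(
        risk_id="AIML-R-001",
        description="Input data drift degrades case-triage recall",
        severity="high",
        mitigation="Weekly PSI monitoring with control-plan escalation",
        responsible="Risk management, business (see Table 1)",
        due=date(2025, 6, 30),
        effectiveness_check="PSI < 0.1 over three consecutive review cycles",
    ),
]
open_risks = [r for r in register if r.status == "open"]
print(f"{len(open_risks)} open risk(s) under active mitigation")
```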
AI/ML risks
Risks may be specific to an individual AI/ML implementation or apply to the use of AI/ML more generally. NIST has developed a framework highlighting the risks surrounding the use of these systems in general.27 General risks must always consider the impact on the wider pharmacovigilance system and should balance the level of transparency available against AI/ML performance.34
Specific risks associated with the system in question must be identified; they may be linked to technical details, to the implementation of the system in an already established process, or to a human component, such as training. Where a pharmacovigilance system has multiple AI/ML implementations within it, the potential for cross-interference at different process points must be considered. Specific risks must also be considered within the wider goals of pharmacovigilance and the processes these tools are intended to perform; for example, the detection of black swan events in signal detection remains a relevant risk whether a human or an AI/ML tool performs the task.35
As trust in an AI/ML implementation grows, a pharmacovigilance department may wish to reduce human monitoring to gain additional scale and efficiency. It is important to keep in mind that the AI/ML is not required to perform a defined task “better than or equal to” a human; rather, it must be monitored against the defined performance parameters outlined in the control plan.
In alignment with the identified risk tolerance, human monitoring may be reduced stepwise, and the approach taken should be documented in the control plan. The ramp-down plan must be reviewed against the transparency, accountability, and risk sections of the control plan, and must use the defined performance parameters documented there to measure acceptable performance.
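A stepwise ramp-down can be expressed as a schedule of human QC sampling rates, where each reduction is gated on sustained in-tolerance performance and any out-of-tolerance cycle reverts to full human review. The stages, rates, and gating rule below are illustrative assumptions that a department would define and approve in its control plan.

```python
# Illustrative ramp-down schedule: fraction of AI/ML outputs double-checked
# by humans at each stage. Stages, rates, and the gating rule are assumptions.
RAMP_DOWN_STAGES = [1.00, 0.50, 0.25, 0.10, 0.05]
REQUIRED_CLEAN_CYCLES = 3  # consecutive in-tolerance cycles to advance a stage

stage, clean = 0, 0
# Each element: whether one review cycle met the control-plan parameters.
for cycle_ok in [True, True, True, False, True, True, True]:
    if cycle_ok:
        clean += 1
    else:
        stage, clean = 0, 0  # out-of-tolerance: revert to full human review
    if clean >= REQUIRED_CLEAN_CYCLES and stage < len(RAMP_DOWN_STAGES) - 1:
        stage, clean = stage + 1, 0  # advance one step, restart the count
    print(f"cycle ok={cycle_ok}: human QC sampling = "
          f"{RAMP_DOWN_STAGES[stage]:.0%}")
```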
Quality management
Quality management is mandated in pharmacovigilance through government and regulatory legislation and guidance; the framework of a quality management system is outlined in global guidance and is made pharmacovigilance-specific through European regulations.22,36–38 Pharmacovigilance departments have extensive experience of quality management, are well placed to ensure a quality approach is adopted, and should draw on that experience when setting up these systems. Activities incorporated into quality management include process and technical documentation, vendor contracts, issue management, training, record management and archiving, and oversight and assurance activities. Additional considerations not already discussed include vendor management, and oversight and assurance activities.
Vendor management
The setup of the relationship with a vendor must consider the increased scrutiny on pharmacovigilance systems utilizing AI/ML. Additional clauses are needed in the contract for the vendor to support the pharmacovigilance department when there is scrutiny of the system via internal (audit, oversight) or external (inspections) mechanisms. There must be consideration of the interactivity between the vendor and the pharmacovigilance department for all elements outlined in the control plan. While the third party may have developed the AI/ML solution in use, it is the pharmacovigilance department that bears the legal responsibility for implementation in its pharmacovigilance system. The contract must support the pharmaceutical company’s procedures governing any AI/ML that is adopted.
Consideration should be given to allowing regulators visibility of, or access to, data and information that would not routinely be available for review by auditors or pharmacovigilance departments through routine business activity, including AI/ML algorithms and test datasets.
Oversight and assurance
Pharmacovigilance departments must have oversight mechanisms in place before AI/ML goes live in production. In addition, these pharmacovigilance systems must be included in audit programs. An audit is recommended prior to go-live to ensure that the validation documentation, control plan, and risk management activities are appropriate and aligned with the framework set by the pharmacovigilance department; this also allows preventative actions to be implemented before the system goes live.
A new assurance paradigm is required
The implementation of AI/ML in pharmacovigilance represents a challenge to conventional audit and inspection methodology. Current assurance processes rely on “snapshots” and documentation that are used to reconstruct a true representation of a point in history, to determine the level of compliance or performance, or to scrutinize decision-making processes.39 The pharmacovigilance framework requires exhaustive record-keeping and archiving procedures covering all pharmacovigilance data and documentation for the full life cycle of all medicinal products,36,38,40 including the systems being utilized for pharmacovigilance activities. Currently, these processes and systems are static and can be faithfully restored using archives and audit trails. Some regulators may assume these practices will remain a valid way of gaining assurance for AI/ML; a recent article from the European Medicines Agency stated an expectation that, when AI/ML is used to support benefit-risk assessments, algorithms and datasets should be made available for review.10
This way of thinking must be challenged: for AI/ML, a new paradigm of assurance is required because the current assurance methodology is impractical, if not impossible. The current expectation to keep an audit history and a detailed record of every change, for example a full copy of a safety database or safety data test set each time either is up-versioned, does nothing to support the implementation of AI/ML in pharmacovigilance; rather, it creates a data storage problem. The challenge for pharmaceutical companies is then to demonstrate that, without a typical audit trail, other controls are in place that give assurance the AI/ML is working as intended; indeed, alternative methodologies can be proposed.41 Additional complications arise where the AI/ML benefiting pharmacovigilance activities raises data privacy, ethical, and consent considerations.
In addition, the quality assurance departments within pharmaceutical companies and regulatory authorities must adopt different approaches when it comes to the review of AI/ML in either audit or inspection scenarios. It is imperative that industry and regulators work together to ensure that assurance activities are robust and that expectations are aligned so that the benefits that AI/ML can offer to patient safety can be realized.
Conclusion
AI/ML offers great promise within pharmacovigilance for improving how the benefit–risk of medicines and vaccines is monitored; however, increased scrutiny of pharmacovigilance systems incorporating AI/ML can be expected and is welcomed. This presents an opportunity for pharmacovigilance departments to leverage their extensive experience in the governance of computerized systems to form the basis of AI/ML governance. Organizing around a RACI matrix, appropriately governing the implemented AI/ML, developing and utilizing both a control plan and a risk management plan, and being transparent with internal auditors and external regulators all leverage existing experience and help to build a high level of confidence that the pharmacovigilance department is performing appropriate risk-based management of its AI/ML implementations. None of these activities is novel. All reflect existing processes within well-functioning pharmacovigilance departments that can be tailored and expanded to address the requirements associated with AI/ML. As AI/ML expands into pharmacovigilance to help ensure patient safety worldwide, it is important that regulators and the pharmaceutical industry maintain an open dialogue and agree on internationally aligned performance indicators and verification processes, to prevent unnecessary added complexity and to continue to ensure data integrity and patient safety.
Acknowledgments
The authors thank the Akkodis Belgium platform for editorial assistance and manuscript coordination, on behalf of GSK, and Dr Joanne Wolter (independent, on behalf of GSK) for providing writing support.
Footnotes
ORCID iD: Michael Glaser https://orcid.org/0000-0001-8843-1662
Contributor Information
Michael Glaser, GSK, Development Global Medical, Global Safety and Pharmacovigilance Systems, 1250 South Collegeville Road, Upper Providence, PA 19464, USA.
Rory Littlebury, GSK, Development Global Medical, Global Safety and Safety Governance, Stevenage, UK.
Declarations
Ethics approval and consent to participate: Not applicable.
Consent for publication: Not applicable.
Author contributions: Michael Glaser: Conceptualization; Writing – original draft; Writing – review & editing.
Rory Littlebury: Conceptualization; Writing – original draft; Writing – review & editing.
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: GlaxoSmithKline Biologicals SA took responsibility for all costs associated with the development and publishing of the present manuscript.
Competing interests: All authors are employees of GSK and hold financial equities in GSK.
Availability of data and materials: Not applicable.
References
- 1. Painter JL, Kassekert R, Bate A. An industry perspective on the use of machine learning in drug and vaccine safety. Front Drug Saf Regul 2023; 3: 1110498.
- 2. Bate A, Hobbiger SF. Artificial intelligence, real-world automation and the safety of medicines. Drug Saf 2021; 44: 125–132.
- 3. Bate A, Stegmann J. Artificial intelligence and pharmacovigilance: what is happening, what could happen and what should happen? Health Policy Technol 2023; 12: 100743.
- 4. Kassekert R, Grabowski N, Lorenz D, et al. Industry perspective on artificial intelligence/machine learning in pharmacovigilance. Drug Saf 2022; 45: 439–448.
- 5. Lewis DJ, McCallum JF. Utilizing advanced technologies to augment pharmacovigilance systems: challenges and opportunities. Ther Innov Regul Sci 2020; 54: 888–899.
- 6. Zatovkaňuková P, Slíva J. Diverse pharmacovigilance jurisdiction—the right way for global drug safety? Eur J Clin Pharmacol 2024; 80: 305–315.
- 7. Kompa B, Hakim JB, Palepu A, et al. Artificial intelligence based on machine learning in pharmacovigilance: a scoping review. Drug Saf 2022; 45: 477–491.
- 8. Ball R, Dal Pan G. “Artificial intelligence” for pharmacovigilance: ready for prime time? Drug Saf 2022; 45: 429–438.
- 9. Patient safety needs innovation [editorial]. Nat Med 2022; 28: 1725.
- 10. Hines PA, Herold R, Pinheiro L, et al. Artificial intelligence in European medicines regulation. Nat Rev Drug Discov 2023; 22: 81–82.
- 11. Nong P, Hamasha R, Singh K, et al. How academic medical centers govern AI prediction tools in the context of uncertainty and evolving regulation. NEJM AI 2024; 1: 20240131.
- 12. World Health Organization. The importance of pharmacovigilance. Safety monitoring of medicinal products, https://apps.who.int/iris/bitstream/handle/10665/42493/a75646.pdf (2002, accessed 25 May 2023).
- 13. Beninger P. Pharmacovigilance: an overview. Clin Ther 2018; 40: 1991–2004.
- 14. U.S. Department of Health and Human Services, Food and Drug Administration. Code of Federal Regulations: Title 21—Food and drugs, Chapter I—Food and Drug Administration, Department of Health and Human Services, Subchapter D—Drugs for human use, https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D (2023, accessed 25 May 2023).
- 15. European Medicines Agency. Pharmacovigilance: overview, https://www.ema.europa.eu/en/human-regulatory/overview/pharmacovigilance-overview (2022, accessed 25 May 2023).
- 16. U.S. Food and Drug Administration. Data mining, https://www.fda.gov/science-research/data-mining (2019, accessed 25 May 2023).
- 17. U.S. Food and Drug Administration. FDA’s role in managing medication risks, https://www.fda.gov/drugs/risk-evaluation-and-mitigation-strategies-rems/fdas-role-managing-medication-risks (2018, accessed 25 May 2023).
- 18. European Medicines Agency. Human Regulatory, Risk management plans, https://www.ema.europa.eu/en/human-regulatory/marketing-authorisation/pharmacovigilance/risk-management/risk-management-plans (2022, accessed 25 May 2023).
- 19. European Medicines Agency, EudraVigilance Expert Working Group (EV-EWG). Guideline on the use of statistical signal detection methods in the EudraVigilance data analysis system. EMEA/106464/2006, https://www.ema.europa.eu/en/documents/regulatory-procedural-guideline/draft-guideline-use-statistical-signal-detection-methods-eudravigilance-data-analysis-system_en.pdf (2006, accessed 25 May 2023).
- 20. CIOMS. Practical aspects of signal detection in pharmacovigilance. Report of CIOMS Working Group VIII, https://cioms.ch/working_groups/working-group-viii/ (2010, accessed 25 May 2023).
- 21. Greene D, Hoffmann AL, Stark L. Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii International Conference on System Sciences, Hawaii, 8–11 January 2019.
- 22. European Medicines Agency. Guideline on good pharmacovigilance practices (GVP). Module VIII—Post-authorisation safety studies (Rev. 3). EMA/813938/2011, https://www.ema.europa.eu/en/human-regulatory-overview/post-authorisation/pharmacovigilance-post-authorisation/good-pharmacovigilance-practices (2017, accessed 25 May 2023).
- 23. Upadhyay U, Gradisek A, Iqbal U, et al. Call for the responsible artificial intelligence in the healthcare. BMJ Health Care Inform 2023; 30: e100920.
- 24. Felzmann H, Fosch-Villaronga E, Lutz C, et al. Towards transparency by design for artificial intelligence. Sci Eng Ethics 2020; 26: 3333–3361.
- 25. Busuioc M. Accountable artificial intelligence: holding algorithms to account. Public Adm Rev 2021; 81: 825–836.
- 26. U.S. Food and Drug Administration. Good machine learning practice for medical device development: guiding principles, https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles (2021, accessed 25 May 2023).
- 27. U.S. Department of Commerce, National Institute of Standards and Technology. AI risk management framework (2nd draft), https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf (2022, accessed 25 May 2023).
- 28. U.S. Food and Drug Administration. Guidance for Industry—Part 11, Electronic records; electronic signatures—scope and application, https://www.fda.gov/regulatory-information/search-fda-guidance-documents/part-11-electronic-records-electronic-signatures-scope-and-application (2018, accessed 25 May 2023).
- 29. Huysentruyt K, Kjoersvik O, Dobracki P, et al. Validating intelligent automation systems in pharmacovigilance: insights from good manufacturing practices. Drug Saf 2021; 44: 261–272.
- 30. Klaise J, Van Looveren A, Cox C, et al. Monitoring and explainability of models in production. arXiv preprint 2007.06299 [stat.ML], 2020.
- 31. Beam AL, Manrai AK, Ghassemi M. Challenges to the reproducibility of machine learning models in health care. JAMA 2020; 323: 305–306.
- 32. Balagurunathan Y, Mitchell R, El Naqa I. Requirements and reliability of AI in the medical context. Phys Med 2021; 83: 72–78.
- 33. Vermeer NS, Duijnhoven RG, Straus SMJM, et al. Risk management plans as a tool for proactive pharmacovigilance: a cohort study of newly approved drugs in Europe. Clin Pharmacol Ther 2014; 96: 723–731.
- 34. Bate A, Luo Y. Artificial intelligence and machine learning for safe medicines. Drug Saf 2022; 45: 403–405.
- 35. Kjoersvik O, Bate A. Black swan events and intelligent automation for routine safety surveillance. Drug Saf 2022; 45: 419–427.
- 36. European Parliament and Council of the European Union. Directive 2001/83/EC of the European Parliament and of the Council of 6 November 2001 on the Community code relating to medicinal products for human use, https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CONSLEG:2001L0083:20121116:EN:PDF (2001, accessed 25 May 2023).
- 37. European Medicines Agency. ICH Q10 Pharmaceutical quality system—scientific guideline, https://www.ema.europa.eu/en/ich-q10-pharmaceutical-quality-system-scientific-guideline (2014, accessed 6 May 2023).
- 38. European Commission. Commission implementing regulation (EU) No. 520/2012, https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2012:159:0005:0025:EN:PDF (2012, accessed 25 May 2023).
- 39. European Medicines Agency. Union procedure on the preparation, conduct and reporting of EU pharmacovigilance inspections. EMA/INS/PhV/192230/2014, https://www.ema.europa.eu/en/documents/regulatory-procedural-guideline/union-procedure-preparation-conduct-reporting-eu-pharmacovigilance-inspections_en.pdf (2014, accessed 13 July 2023).
- 40. European Medicines Agency. Guideline on good pharmacovigilance practices (GVP). Module I—Pharmacovigilance systems and their quality systems. EMA/541760/2011, https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-good-pharmacovigilance-practices-module-i-pharmacovigilance-systems-their-quality-systems_en.pdf (2012, accessed 13 July 2023).
- 41. Stegmann JU, Littlebury R, Trengove M, et al. Trustworthy AI for safe medicines. Nat Rev Drug Discov 2023; 22: 855–856.

