Therapeutic Advances in Drug Safety

Editorial

2024 Oct 31;15:20420986241293303. doi: 10.1177/20420986241293303

Governance of artificial intelligence and machine learning in pharmacovigilance: what works today and what more is needed?

Michael Glaser 1, Rory Littlebury 2
PMCID: PMC11528645  PMID: 39493927

Introduction

The potential uses of artificial intelligence and machine learning (AI/ML) within the healthcare field of pharmacovigilance are significant and possibly limitless.1–5 But while AI/ML holds promise, there are also limitations and challenges to successful implementation. Within pharmacovigilance, the use of technologies such as Robotic Process Automation and AI/ML is not new 1 and promises to dramatically impact all aspects of pharmacovigilance.4,5 Possible benefits range from reducing the cost of current pharmacovigilance activities and improving the “as-is” to broader-ranging applications with the potential to revolutionize the pharmacovigilance field.2,3 However, amid all the anticipated promise and hype of AI/ML, we must remember that pharmacovigilance remains a highly regulated space; the thalidomide tragedy is one reminder of why there must be controls and regulations around the safety of medicines and vaccines so that no patient suffers avoidable harm. Healthcare professionals prescribe medicines and vaccines that patients consume, trusting that their safety has been adequately assessed and well described, that it continues to be monitored, and that safety issues, should they arise, are rapidly and transparently communicated. Behind the information relied on by healthcare professionals and patients is a diverse set of legal regulations that mandate the scientific evaluation and communication of the benefits and risks of medicines and vaccines. This framework is complex and is further complicated by the regulatory variations that exist worldwide. 6 Aligned to these varied regulations are pharmaceutical company processes, including governance frameworks, that ensure the integrity of the final outputs arising from pharmacovigilance activities; these are essential to ensuring patient safety and maintaining trust in medicines and vaccines.

A systematic analysis of articles from 2000 to 2021 7 demonstrated that the uptake of AI/ML in pharmacovigilance has been slow; moreover, only 42 of 393 articles discussed adopting solutions reflecting current best practices in this area. One reason may be that regulatory requirements for pharmacovigilance activities that use AI/ML are currently only partially formed, 8 and one challenge to the wider adoption of AI/ML in pharmacovigilance is the need for a harmonized global regulatory environment.1,4,9 In addition, there is very limited thought, opinion, or scientific commentary on how a pharmaceutical company should govern AI/ML within the current highly regulated pharmacovigilance framework, and the existing literature in the public domain suggests regulatory alignment is still some way off.10,11 It is this gap in the scientific commentary that this article aims to fill.

We suggest that existing robust processes that govern and control the implementation of computerized systems within pharmacovigilance are directly applicable and can be leveraged and expanded under a new pharmacovigilance paradigm that utilizes AI/ML.

Governance of AI/ML in pharmacovigilance

Pharmaceutical company responsibility

When AI/ML is utilized to support the responsibilities of a pharmacovigilance department,12–20 it must be done in an ethical, 21 risk-based manner, ensuring any change in, or impact to, business processes is fully understood and can be successfully managed by the pharmacovigilance department. Ensuring AI/ML is managed through a risk-based approach with a focus on audit readiness is paramount (Figure 1).

Figure 1. Responsibilities of pharmacovigilance departments using AI/ML. AI, artificial intelligence; ML, machine learning.

Roles and responsibilities within a pharmaceutical company pharmacovigilance department

Establishing and maintaining roles and responsibilities within a pharmacovigilance department for governing AI/ML can be accomplished by defining a decision-making matrix, such as the proposed RACI (Responsible, Accountable, Consulted, or Informed) matrix (Table 1). Defining the necessary training, education, and work experience parameters of these roles is of critical importance, 22 and must be tailored carefully to each pharmacovigilance department. Once AI/ML has been tested, validated, and deployed for use by a pharmacovigilance department, accountability for its governance must lie with the pharmacovigilance process owner and not a technologist (e.g., a data scientist or AI/ML engineer). Accountability is placed with a single decision-maker who can pull together a team whose combined expertise covers the technology, pharmacovigilance processes, pharmacovigilance regulations, and the benefit/risk perspective of the patient. 23

Table 1. RACI (Responsible, Accountable, Consulted, or Informed) structure for a pharmacovigilance department using AI/ML. Columns are grouped as Technical implementation (algorithm understanding; product architecture and deployment), Management (risk management; human involvement/monitoring ramp-down plan; metrics), and Operations (data integrity/privacy/security; data quality).

| Organizational persona or role | Algorithm understanding | Product architecture and deployment | Risk management | Human involvement/monitoring ramp-down plan | Metrics | Data integrity/privacy/security | Data quality |
|---|---|---|---|---|---|---|---|
| Process owner, business | A | A | A | A | R/A | A | A |
| Data owner, business | I | I | I | I | | R | |
| Product owner, technology | R | R | C | C | I | R | I |
| Risk management, business | | | R | C | I | | |
| Oversight board, business | I | I | I/C | I/C | I | I | I |
| Head of safety, business | I | I | I | I | I | I | I |

Process owner, business: an individual from the PV team responsible for the business process using specific AI/ML software. This individual is expected to be non-technical and focused on the business process.

Data owner, business: an individual from the PV team accountable for the classification, protection, use, and quality of the data used as inputs to the specific AI/ML software. This individual is expected to be non-technical and focused on business data.

Product owner, technology: an individual from the PV team who is a technical expert and focuses on closing the gap between the technical and business sides of AI/ML software product development.

Risk management, business: an individual from the PV team responsible for coordinating and managing risks associated with AI/ML software.

Oversight board, business: a group of individuals from the PV team or the extended enterprise, comprising technical, business, and risk skills, that provides oversight and governance for AI/ML software. This group does not need to be standalone; the function may be incorporated into other business or technical governance groups.

Head of safety, business: an individual from the PV team who bears the responsibility that the AI/ML software is designed, tested, validated, implemented, managed, and monitored correctly according to internal policies and external regulations.

AI, artificial intelligence; ML, machine learning; PV, pharmacovigilance.
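The matrix above lends itself to a machine-checkable representation. Below is a minimal sketch, assuming a department chooses to encode its RACI assignments as structured data; the activity and role names are illustrative, and the rule enforced (exactly one Accountable party per activity) is a common RACI convention rather than a regulatory requirement.

```python
# Hypothetical encoding of a RACI matrix; names and codes are illustrative.
RACI = {
    "algorithm_understanding": {"process_owner": "A", "product_owner": "R",
                                "oversight_board": "I", "head_of_safety": "I"},
    "risk_management": {"process_owner": "A", "product_owner": "C",
                        "risk_manager": "R", "head_of_safety": "I"},
    "metrics": {"process_owner": "R/A", "product_owner": "I",
                "oversight_board": "I", "head_of_safety": "I"},
}

def check_single_accountability(raci: dict) -> list[str]:
    """Flag activities that do not have exactly one Accountable role."""
    issues = []
    for activity, assignments in raci.items():
        accountable = [role for role, code in assignments.items() if "A" in code]
        if len(accountable) != 1:
            issues.append(f"{activity}: {len(accountable)} accountable role(s)")
    return issues

print(check_single_accountability(RACI) or "RACI matrix is well-formed")
```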

Technology understanding and implementation

Master list

It is imperative that the safety department keeps a central listing of all AI/ML in use within the department for audit purposes. One potential location is the Pharmacovigilance System Master File (PSMF), for pharmaceutical companies operating in the European Union or other regions where a PSMF is required, or a similar managed document.
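For departments that prefer structured data over a free-text listing, such a master list could be kept as a simple machine-readable register. The sketch below is illustrative only; the field names and the example entry are assumptions, not a prescribed schema.

```python
# Minimal sketch of a central AI/ML register, assuming a department keeps it
# as structured data alongside (or referenced from) the PSMF.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIMLRegisterEntry:
    name: str                # the AI/ML implementation
    business_process: str    # the PV process it supports
    process_owner: str       # accountable PV process owner
    supplier: str            # internal team or external vendor
    model_version: str       # deployed, validated version
    validation_ref: str      # pointer to validation documentation
    go_live: date
    control_plan_ref: str    # pointer to the control plan

register: list[AIMLRegisterEntry] = [
    AIMLRegisterEntry(
        name="Adverse event case triage classifier",  # hypothetical example
        business_process="ICSR intake and prioritization",
        process_owner="PV Process Owner, Case Management",
        supplier="Internal data science team",
        model_version="2.1.0",
        validation_ref="VAL-2024-013",
        go_live=date(2024, 3, 1),
        control_plan_ref="CP-2024-013",
    )
]
```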

AI/ML understanding and transparency

As with existing pharmacovigilance information technology systems, it is imperative that the pharmacovigilance process owner possesses a comprehensive understanding of the AI/ML at a pharmacovigilance process level, can effectively communicate its operation as it relates to patient safety and risks, and is partnered with other individuals who can bridge knowledge gaps between the technical understanding of AI/ML and its business process implications. 23 The pharmacovigilance process owner must also have a clear understanding of the training and production datasets, bias testing, and relevant performance metrics, as these are paramount to understanding the production performance of the AI/ML implementation. This understanding must be appropriately documented and open to audit. 24
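To make this concrete, the sketch below shows the kind of performance and subgroup-bias summary a process owner might document for audit, assuming a binary classification task. The metrics chosen, the toy labels, and the subgroup definition are illustrative assumptions; real acceptance criteria belong in the validated control plan.

```python
# Illustrative performance and bias summary for a hypothetical classifier.
from sklearn.metrics import precision_score, recall_score

# Toy labels: 1 = case is serious, 0 = non-serious (hypothetical data)
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
# Hypothetical subgroup, e.g., report source (0 = HCP, 1 = consumer)
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

print(f"precision={precision_score(y_true, y_pred):.2f}")
print(f"recall={recall_score(y_true, y_pred):.2f}")

# Simple bias check: recall per subgroup should not diverge materially
for g in sorted(set(group)):
    yt = [t for t, gg in zip(y_true, group) if gg == g]
    yp = [p for p, gg in zip(y_pred, group) if gg == g]
    print(f"subgroup {g}: recall={recall_score(yt, yp):.2f}")
```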

AI/ML algorithm details may be examined by an inspector, and pharmaceutical pharmacovigilance departments should be prepared to explain what the AI/ML does, and should consider how they would explain it to non-experts to give assurance to regulators. While there is limited value in reviewing algorithms for assurance purposes, 25 the pharmacovigilance process owner must consider having an agreement in place with the AI/ML provider (whether internal to the pharmacovigilance department, internal to the wider pharmaceutical company, or an external supplier) to provide support in the event of an inspection request. Even though regulatory inspections are confidential in nature, such an agreement is likely to remain restrictive, particularly with an external supplier, to protect potential patents or proprietary trade secrets from entering the public domain.

AI/ML characteristics considered for documentation and audit readiness should follow Good Machine Learning Practice 26 and Good Practice (GxP) regulations, and align with a pharmaceutical company’s Certified Software Quality Analyst certification processes.

AI/ML implementation management

Establishing a framework for trustworthy AI/ML is important when implementing and leveraging the power of AI/ML within any system or process. 27 This can be realized through existing pharmacovigilance system principles, including validation, production monitoring, and risk planning. 4 Overlaying traditional pharmacovigilance system management principles with guidance from the US National Institute of Standards and Technology (NIST) 27 results in validation, accountability/transparency, and reliability emerging as the major themes for managing AI/ML within pharmacovigilance.

AI/ML validation

All computerized systems within the pharmacovigilance department that support processes bound by GxP regulations are validated in proportion to the potential risk to patient safety. AI/ML is a computer system component and must also be validated. Validation, following procedures approved by a supporting quality/compliance department, involves demonstrating through documented evidence that an AI/ML implementation is reliable, fit for its specific purpose, and compliant with regulatory and corporate requirements.28,29 AI/ML must be assessed to identify potential risks, which are documented, monitored, and included in quality management documents, inspection readiness documents, and a control plan. AI/ML provided by third parties must also be evaluated and audited, in line with current pharmacovigilance regulations. Pharmacovigilance process owners must prepare for inspections by regulatory agencies and must maintain system registers, overviews, and procedures that document the use of the GxP system. The compliance status must be reviewed and periodically updated to reflect the cumulative effects of changes or revisions to the deployed AI/ML.
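As a sketch of what such documented evidence might look like in practice, the following shows a repeatable acceptance check that records its result for audit. The recall metric, the 0.90 threshold, and the evidence format are assumptions for illustration; actual criteria would come from the approved validation protocol.

```python
# Hypothetical acceptance check producing retained validation evidence.
import json
from datetime import datetime, timezone

ACCEPTANCE_RECALL = 0.90  # assumed, risk-based acceptance criterion

def run_acceptance_check(model, validation_set) -> dict:
    """Score the model on the approved validation set and record evidence."""
    hits = sum(1 for x, label in validation_set if model(x) == label == 1)
    positives = sum(1 for _, label in validation_set if label == 1)
    recall = hits / positives if positives else 0.0
    evidence = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metric": "recall",
        "value": round(recall, 4),
        "threshold": ACCEPTANCE_RECALL,
        "result": "PASS" if recall >= ACCEPTANCE_RECALL else "FAIL",
    }
    # Appended to an evidence log retained for audit/inspection
    with open("validation_evidence.json", "a") as fh:
        fh.write(json.dumps(evidence) + "\n")
    return evidence
```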

AI/ML monitoring

While validation documentation requirements will already exist in a pharmacovigilance department, necessitating that training datasets, validated AI/ML code, and test results be retained and managed, we suggest that additional documentation is required when implementing AI/ML systems for accountability and transparency purposes. A control plan is one mechanism for achieving this: it provides accountability and transparency, documents the AI/ML risk plan, and defines the performance parameters for both the AI/ML and the operating infrastructure, enabling decisions on whether the AI/ML is operating as defined and on when the AI/ML or the operating infrastructure should be modified or updated. Detecting deviations caused by varying input data, such as outliers and data drift, is critical. 30 Monitoring an AI/ML’s input and output data, with care given to data volumes and AI/ML-to-AI/ML interactions, is analogous to the quality check procedures in place to verify that human workers are performing tasks within defined performance parameters. A robust incident and event management process for time-critical notifications needing human involvement is important to notify the necessary individuals of sensitive production issues.
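As one concrete example of the drift detection a control plan might define, the sketch below compares the distribution of a single input feature in production against a training-era baseline using a two-sample Kolmogorov-Smirnov test. The synthetic data, the alert threshold, and the escalation step are illustrative assumptions.

```python
# Sketch of a single control-plan drift check on one model input feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training-era values
production = rng.normal(loc=0.3, scale=1.0, size=5_000)  # recent production values

stat, p_value = ks_2samp(baseline, production)
ALERT_ALPHA = 0.01  # assumed control-plan threshold

if p_value < ALERT_ALPHA:
    # In practice this would raise an incident per the control plan,
    # notifying the process owner for human review.
    print(f"Data drift detected (KS={stat:.3f}, p={p_value:.2e}): escalate")
else:
    print("Input distribution consistent with baseline")
```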

A pharmacovigilance department may find it beneficial to maintain a closed platform (a so-called “walled garden”) for each AI/ML implementation, where access is restricted and regulated under a data use agreement. 31 The walled garden, containing training data, AI/ML code, and test data, is used both for information sharing with regulators and for continued AI/ML refinement. The walled garden must mirror the applicable production environment such that results from the walled garden can be generalized to production. In the current regulatory environment, incremental AI/ML updates and the associated training and test datasets must be versioned and retained.
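Versioning and retention of walled-garden artifacts can be supported with content hashing, as in the minimal sketch below; the file names and manifest layout are hypothetical, and a real implementation would sit behind the access controls of the data use agreement.

```python
# Sketch: hash each training/test dataset and model artifact and record an
# immutable manifest per release, so any retained version can be verified.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(version: str, artifacts: list[Path], out: Path) -> None:
    """Record artifact digests for one versioned AI/ML release."""
    manifest = {
        "version": version,
        "artifacts": {p.name: sha256_of(p) for p in artifacts},
    }
    out.write_text(json.dumps(manifest, indent=2))

# Example usage (hypothetical file names):
# write_manifest("2.1.0",
#                [Path("training_data.parquet"), Path("model.onnx"),
#                 Path("test_results.csv")],
#                Path("manifest_2.1.0.json"))
```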

AI/ML reliability

A reliable AI/ML implementation must offer benefits that outweigh its negative effects and ensure that unacceptable effects can be monitored for resolution.27,32 When the reliability of the AI/ML is reduced, for example as production input data diverges from the test data, the AI/ML control plan must capture a clear understanding of the AI/ML’s reliability, the monitoring conditions, and the necessary actions.

Risk management

The documentation of risk management strategies describes the risks and mitigations associated with AI/ML involved in a pharmacovigilance workflow. Existing experience with risk management frameworks 33 in pharmacovigilance must be incorporated into the approach for AI/ML, whereby risks are identified, assessed, and prioritized in terms of their importance.

All risks must have mitigation plans developed, and a quality management approach should be taken that includes actions, timeframes, allocated responsible persons, and effectiveness checks. Risk mitigations are managed within defined timeframes and reviewed routinely. When risks have been suitably mitigated, or potential risks have not been observed upon implementation, these risks are removed from the plan so that focus, attention, and effort remain on the mitigation of identified risks. We suggest that the removal of any risk from the control plan must be agreed by a quorum led by the pharmacovigilance process owner.

Risk management strategies are structured to last the lifecycle of the AI/ML implementation and are reviewed routinely as identified risks change. We suggest that risk management strategies are documented within the control plan.
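A minimal sketch of such a risk register follows, assuming risks are scored and prioritized by severity and likelihood and each carries a mitigation, an owner, a timeframe, and an effectiveness check, as described above. The 1-5 scales and the example risks are illustrative, not a prescribed standard.

```python
# Illustrative risk register with severity x likelihood prioritization.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int        # 1 (negligible) .. 5 (critical patient-safety impact)
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    mitigation: str
    owner: str
    due: str             # target timeframe for the mitigation
    effectiveness_check: str

    @property
    def priority(self) -> int:
        return self.severity * self.likelihood

risks = [
    Risk("Model misses rare serious cases (black swan events)",
         severity=5, likelihood=2,
         mitigation="Retain human review sample of auto-closed cases",
         owner="PV process owner", due="Q3",
         effectiveness_check="Quarterly audit of sampled cases"),
    Risk("Input data drift degrades classification recall",
         severity=4, likelihood=3,
         mitigation="Automated drift monitoring with incident escalation",
         owner="Product owner, technology", due="Q2",
         effectiveness_check="Monthly review of drift alerts vs recall"),
]

# Review highest-priority risks first
for r in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"[P{r.priority}] {r.description} -> {r.mitigation}")
```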

AI/ML risks

Risks may be specific to an individual AI/ML implementation or relate to the more general use of AI/ML. NIST has developed a framework highlighting the risks surrounding the use of these systems generally. 27 General risks must always be considered in terms of their impact on the wider pharmacovigilance system, balancing the level of transparency available against AI/ML performance. 34

Specific risks associated with the system in question must be identified and may be linked to technical details, to the implementation of the system within an already established process, or to a human component, such as training. Where multiple AI/ML implementations are utilized within a pharmacovigilance system, the potential for cross-interference at different process points must be considered. Specific risks must also be considered within the wider goals of pharmacovigilance and the processes these tools are intended to perform; for example, the detection of black swan events in signal detection remains a relevant risk whether a human or an AI/ML tool performs the task. 35

As trust in an AI/ML implementation grows, a pharmacovigilance department may desire to reduce human monitoring to gain additional scale and efficiency. It is important to keep in mind that AI/ML is not required to perform a defined task “better than or equal” to a human; rather, the AI/ML must be monitored against the defined performance parameters outlined in the control plan.

In alignment with the identified risk tolerance, human monitoring may be reduced stepwise, and the approach taken should be documented in the control plan. The plan for reducing human monitoring must be reviewed against the transparency, accountability, and risk sections of the control plan, and must use the defined performance parameters documented there to measure acceptable performance.
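One way to express such a ramp-down rule is sketched below: the human review sampling rate steps down only after sustained performance above the control-plan threshold and steps back up on any breach. The step sizes, threshold, and sustain period are illustrative assumptions to be set in the control plan.

```python
# Sketch of a stepwise human-monitoring ramp-down rule; all numbers illustrative.
RAMP_STEPS = [1.00, 0.50, 0.25, 0.10]  # fraction of outputs human-reviewed
THRESHOLD = 0.95          # assumed control-plan performance parameter
SUSTAIN_PERIODS = 3       # consecutive review periods required per step

def next_sampling_rate(history: list[float], current: float) -> float:
    """Return the next human-review sampling rate given recent performance."""
    idx = RAMP_STEPS.index(current)
    recent = history[-SUSTAIN_PERIODS:]
    if len(recent) == SUSTAIN_PERIODS and all(p >= THRESHOLD for p in recent):
        return RAMP_STEPS[min(idx + 1, len(RAMP_STEPS) - 1)]  # ramp down
    if history and history[-1] < THRESHOLD:
        return RAMP_STEPS[max(idx - 1, 0)]                    # ramp back up
    return current

# e.g., three good periods at full review allow stepping to 50% sampling:
print(next_sampling_rate([0.97, 0.96, 0.98], 1.00))  # -> 0.5
```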

Quality management

Quality management is mandated in pharmacovigilance through government and regulatory legislation and guidance; the framework of a quality management system is outlined in global guidance and made pharmacovigilance-specific through European regulations.22,36–38 There is extensive experience of quality management in pharmacovigilance, and pharmaceutical pharmacovigilance departments are well placed to ensure a quality approach is adopted, drawing on existing experience when setting up these systems. Activities incorporated into quality management include process and technical documentation, vendor contracts, issue management, training, record management and archiving, and oversight and assurance activities. Additional considerations not already discussed include vendor management and oversight and assurance activities.

Vendor management

The setup of the relationship with a vendor must take into account the increased scrutiny on pharmacovigilance systems utilizing AI/ML. Additional contract clauses are needed for the vendor to support the pharmacovigilance department when the system is scrutinized via internal (audit, oversight) or external (inspection) mechanisms. The interaction between the vendor and the pharmacovigilance department must be considered for all elements outlined in the control plan. While the third party may have developed the AI/ML solution in use, it is the pharmacovigilance department that bears the legal responsibility for its implementation in the pharmacovigilance system. The contract must support the pharmaceutical company’s procedures governing any AI/ML that is adopted.

Consideration should be given to allowing regulators visibility of, or access to, data or information that would not routinely be available to auditors or pharmacovigilance departments through routine business activity, including AI/ML algorithms and test datasets.

Oversight and assurance

Pharmacovigilance departments must have oversight mechanisms in place before AI/ML goes live in production. In addition, these pharmacovigilance systems must be included in audit programs. An audit is recommended prior to go-live to ensure that the validation documentation, control plan, and risk management activities are appropriate and aligned with the framework set by the pharmacovigilance department; this also allows preventative actions to be implemented before the system goes live.

A new assurance paradigm is required

The implementation of AI/ML in pharmacovigilance represents a challenge to conventional audit and inspection methodology. Current assurance processes rely on “snapshots” and documentation that are used to reconstruct a true representation of a point in history, either to determine the level of compliance or performance or to scrutinize decision-making processes. 39 The pharmacovigilance framework requires exhaustive record-keeping and archiving procedures covering all pharmacovigilance data and documentation for the full life cycle of all medicinal products,36,38,40 including the systems being utilized for pharmacovigilance activities. Currently, these processes and systems are static and can be faithfully restored using archives and audit trails. Some regulators may assume these practices will remain a valid way of obtaining assurance for AI/ML; a recent article from the European Medicines Agency stated an expectation that, when AI/ML is used to support benefit-risk assessments, algorithms and datasets should be made available for review. 10

This way of thinking must be challenged: for AI/ML, a new paradigm of assurance is required, as the current assurance methodology is impractical if not impossible. The current expectation to keep an audit history and a detailed record of every change (for example, a detailed copy of a safety database or safety data test set each time either is up-versioned) does nothing to support the implementation of AI/ML in pharmacovigilance but rather creates a data storage problem. The challenge, then, is for pharmaceutical companies to demonstrate that, without a typical audit trail, other controls are in place that give assurance the AI/ML is working as intended; indeed, alternative methodologies can be proposed. 41 Additional complications arise where AI/ML is utilized to benefit pharmacovigilance activities and data privacy, ethical, and consent considerations may apply.

In addition, the quality assurance departments within pharmaceutical companies and regulatory authorities must adopt different approaches when it comes to the review of AI/ML in either audit or inspection scenarios. It is imperative that industry and regulators work together to ensure that assurance activities are robust and that expectations are aligned so that the benefits that AI/ML can offer to patient safety can be realized.

Conclusion

AI/ML offers great promise within pharmacovigilance for improving how the benefit–risk of medicines and vaccines is monitored; however, increased scrutiny of pharmacovigilance systems incorporating AI/ML can be expected and is welcomed. This presents an opportunity for pharmacovigilance departments to leverage their extensive experience in the governance of computerized systems as the basis of AI/ML governance. Organizing around a RACI matrix, appropriately governing the implemented AI/ML, developing and utilizing both a control plan and a risk management plan, and being transparent with internal audits and external regulators all leverage existing experience and help to build a high level of confidence that the pharmacovigilance department is performing appropriate risk-based management of AI/ML implementations. None of these activities is novel. All reflect existing processes within well-functioning pharmacovigilance departments that can be tailored and expanded to address the requirements associated with AI/ML. As AI/ML expands into pharmacovigilance to help ensure patient safety worldwide, it is important that regulators and the pharmaceutical industry maintain an open dialogue and agree on internationally aligned performance indicators and verification processes, to prevent unnecessary added complexity and to continue to ensure data integrity and patient safety.

Acknowledgments

The authors thank the Akkodis Belgium platform for editorial assistance and manuscript coordination, on behalf of GSK, and Dr Joanne Wolter (independent, on behalf of GSK) for providing writing support.

Footnotes

ORCID iD: Michael Glaser https://orcid.org/0000-0001-8843-1662

Contributor Information

Michael Glaser, GSK, Development Global Medical, Global Safety and Pharmacovigilance Systems, 1250 South Collegeville Road, Upper Providence, PA 19464, USA.

Rory Littlebury, GSK, Development Global Medical, Global Safety and Safety Governance, Stevenage, UK.

Declarations

Ethics approval and consent to participate: Not applicable.

Consent for publication: Not applicable.

Author contributions: Michael Glaser: Conceptualization; Writing – original draft; Writing – review & editing.

Rory Littlebury: Conceptualization; Writing – original draft; Writing – review & editing.

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: GlaxoSmithKline Biologicals SA took responsibility for all costs associated with the development and publishing of the present manuscript.

Competing interests: All authors are employees of GSK and hold financial equities in GSK.

Availability of data and materials: Not applicable.

References


