Therapeutic Advances in Drug Safety. 2025 Jul 31;16:20420986251361435. doi: 10.1177/20420986251361435

Artificial intelligence in pharmacovigilance: advancing drug safety monitoring and regulatory integration

Ankit Nagar 1, Joga Gobburu 2, Aloka Chakravarty 3
PMCID: PMC12317250  PMID: 40756610

Abstract

Artificial intelligence (AI) has rapidly evolved from experimental applications in pharmacovigilance (PV) to being considered for routine use. This review critically examines AI’s potential to revolutionize drug safety monitoring, focusing on practical implementation challenges such as ensuring AI’s consistent and transparent performance, reducing multiple sources of bias, and addressing interpretability issues. It emphasizes the transition from experimental use to a routine, scalable capability within PV. It examines AI’s evidence base in specific applications, its ability to enhance actionable insights, and how organizations can safeguard against unintended consequences in multi-AI system environments. These considerations are vital as AI moves from theory to practice in PV.

Keywords: adverse drug reactions, artificial intelligence, pharmacovigilance, regulatory integration, signal detection

Plain language summary

AI in pharmacovigilance: opportunities and challenges

Why is it important to implement AI in PV? With the growing complexity and volume of drug safety data, traditional pharmacovigilance (PV) methods have become insufficient. AI has the potential to significantly improve the efficiency and accuracy of adverse event detection and analysis, enabling a more proactive approach to enhancing patient safety.

What are the challenges in implementing AI for PV? While AI shows promise, there are significant challenges in its implementation, including ensuring data quality, addressing potential biases, integrating AI seamlessly into existing PV workflows, meeting regulatory requirements, and addressing the unintended consequences of deploying multiple interacting AI systems.

What does this article contribute to our understanding of AI in PV? This article explores the current state of AI in PV, presenting evidence of its effectiveness, outlining strategies for its implementation, and addressing key challenges, such as bias reduction and the need for explainable AI. It provides a comprehensive overview of how AI can transition from experimental to routine use, ultimately aiming to create more robust and trustworthy drug safety monitoring systems.

Introduction

Artificial intelligence’s (AI) application in pharmacovigilance (PV) has expanded significantly, promising to improve the speed and accuracy of adverse event detection. 1 The transition has been driven by the increasing complexities in drug development and post-market surveillance, including the unprecedented volume of data, complexity of drug–drug interactions, and patient variability.2,3 However, transitioning from experimental use to routine, trusted application presents new challenges. These include ensuring that AI functions as intended post-implementation, that it adapts to real-world complexities, and that biased and unfair outcomes are minimized. This review addresses the current state of AI in PV and explores how it can be effectively integrated into everyday workflows while adhering to ethical and legal standards.

The application of AI in PV began in the early 2000s with the introduction of data mining algorithms for signal detection in spontaneous reporting systems (SRS). Pioneering approaches such as the Bayesian Confidence Propagation Neural Network (BCPNN) method and the Multi-item Gamma Poisson Shrinker (MGPS) laid the groundwork for more sophisticated AI applications in PV. Since then, AI has been applied to various aspects of PV, including automated case processing, signal detection, and real-world evidence analysis.4–6

The PV lifecycle spans both pre-marketing and post-marketing phases and involves distinct data landscapes. While pre-marketing PV relies primarily on structured data from clinical trials, post-marketing surveillance involves both structured and unstructured data from diverse sources. These include individual case safety reports (ICSRs), adverse event reports, regulatory databases like FDA’s Adverse Event Reporting System (FAERS) and WHO’s VigiBase, Electronic Health Records (EHRs), insurance claims, social media posts, and biomedical literature.7–9 These diverse data sources offer invaluable real-world insights into drug safety, capturing intricate patterns that may not be evident in more structured reporting systems.

Figure 1 illustrates the multifaceted challenges in PV and the diverse AI technologies that can be applied to address these challenges. The concentric circles represent different layers of the PV ecosystem, from core challenges and data sources to the various AI and machine learning (ML) techniques that can be leveraged to enhance PV processes. This comprehensive view emphasizes the potential for AI to address the complexities of modern PV by integrating diverse data sources and applying sophisticated analytical techniques.

Figure 1.

A comprehensive overview of state-of-the-art challenges in PV, sources of data, and the relevant AI technologies to address the challenges.

AI, artificial intelligence; PV, pharmacovigilance.

AI’s ability to efficiently process and derive meaningful insights from both structured and unstructured data has been game-changing, enabling the rapid and accurate identification of emerging safety signals across all stages of the PV lifecycle. This has facilitated a paradigm shift from passive to active surveillance methods, allowing for real-time detection of adverse drug reactions (ADRs) and potential safety issues.

Despite its promise, AI implementation in PV faces significant challenges. Key concerns include how to integrate AI as a routine capability, the availability of evidence supporting its use in specific applications, and ensuring trusted, transparent usage for all stakeholders. Additional challenges include ensuring data quality and representativeness, addressing potential biases in AI algorithms, and maintaining the transparency and interpretability of AI-driven decisions. Moreover, the integration of multiple AI solutions in PV raises questions about potential interactions and unintended consequences.

This review examines the evolution of AI applications in PV, focusing on significant advancements over the past two decades. By assessing the current state of AI implementation and its future directions, we aim to provide a comprehensive reference for researchers and professionals.

Applications of AI in PV

The integration of AI technologies in PV is revolutionizing the way ADRs are detected, evaluated, and managed. This section explores the current applications of AI in PV, focusing on its role in enhancing ADR detection and evaluation.

AI for ADR detection

ADRs are a major public health concern, significantly impacting patient populations across demographics. The risk is especially pronounced in elderly populations (>60 years), with 15%–35% experiencing an ADR during their hospital stay. 10 This high proportion of ADRs puts a significant burden on healthcare systems, with an estimated annual management cost of $30.1 billion. 11 This highlights the critical need for more efficient methods to monitor, detect, and prevent ADRs throughout both the pre-marketing and post-marketing phases of drug surveillance.

The evolution of AI applications for ADR detection since the late 1990s has reshaped PV practices. It can be broadly categorized into three phases, each characterized by distinct advancements in statistical methods, natural language processing (NLP), and machine learning techniques.12,13

Early AI applications in PV focused on enhancing signal detection in spontaneous reporting systems. The BCPNN method, developed by Bate et al., 14 and the MGPS, introduced by DuMouchel, 15 marked significant improvements over traditional statistical methods. These algorithms allowed for more efficient processing of large volumes of ADR reports, potentially identifying safety signals that might be missed by manual review. However, these methods faced challenges in handling rare events and drug–drug interactions. For example, although the BCPNN method demonstrates high sensitivity in ADR detection, it is prone to generating a higher rate of false positives, raising concerns about its specificity. This highlights the need for careful interpretation of signals generated by these algorithms and emphasizes the importance of expert review in the signal evaluation process.
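For illustration, the disproportionality reasoning that these early methods refined can be sketched with the proportional reporting ratio (PRR), a simpler relative of BCPNN and MGPS, computed from a 2×2 table of report counts. All counts below are hypothetical, not drawn from any real database:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 report-count table.

    a: reports with the drug AND the event
    b: reports with the drug, other events
    c: reports with the event, other drugs
    d: reports with other drugs, other events
    """
    rate_drug = a / (a + b)     # event rate among reports for the drug
    rate_other = c / (c + d)    # event rate among all other reports
    return rate_drug / rate_other

# Hypothetical counts: 20 of 1,000 reports for the drug mention the event,
# versus 50 of 10,000 reports for all other drugs.
signal = prr(20, 980, 50, 9950)
```

A PRR well above 1 (here, 4.0) indicates the event is reported disproportionately often with the drug; in practice such ratios are combined with count thresholds and expert review before being treated as signals.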

As PV data sources expanded beyond structured spontaneous reports to include unstructured data from EHRs and social media, NLP techniques became crucial. Nikfarjam et al. demonstrated the potential of NLP in extracting ADR data from social media, achieving F-measures of 0.82 and 0.72 for ADR detection from networks such as DailyStrength and Twitter. 16 These F-measures indicate a good balance between precision and recall, suggesting that the NLP methods were effective in correctly identifying ADRs while minimizing false positives and false negatives. This study opened new avenues for real-time ADR monitoring, potentially capturing patient-reported outcomes that might not be reflected in formal reporting systems.

However, using social media data for ADR detection presents significant challenges, including data quality concerns, patient privacy issues, difficulties in verifying the accuracy of self-reported information, and recall bias. Recall bias is particularly challenging in social media-based ADR detection, as patients may selectively report or remember certain side effects while forgetting others, leading to potential distortions in the reported ADRs. These limitations highlight the need for robust data validation strategies and careful interpretation when incorporating social media data into PV systems.

As machine learning techniques in PV advanced, researchers began exploring more sophisticated approaches to integrate diverse data sources and capture complex relationships. One such approach is the use of knowledge graphs, which represent entities (such as drugs, adverse events, and patient characteristics) as nodes and their relationships as edges. These graphs can integrate diverse data and capture complex relationships between drugs, adverse events, and other factors. 17 Table 1 presents a summary of AI methods applied across diverse PV data sources, including social media, EHRs, and PV databases, with performance metrics such as F-scores and area under the receiver operating characteristic curves (AUCs). One study using a knowledge graph-based method achieved an AUC of 0.92 in classifying known causes of ADRs. 18 This performance is notably higher than traditional statistical methods, which typically achieve AUCs in the range of 0.7–0.8 for similar tasks. However, implementing these systems in clinical practice faces challenges related to data quality, model interpretability, and the need for large, diverse datasets for training. These issues are particularly relevant for rare ADRs and underrepresented populations.
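As a minimal illustration of the knowledge-graph idea, entities and their relationships can be stored as subject–relation–object triples and queried for linked entities. The triples below are hypothetical, not drawn from any real database:

```python
def neighbors(triples, node, relation=None):
    """Query a toy knowledge graph stored as (subject, relation, object)
    triples: return objects linked from `node`, optionally filtered by
    relation type."""
    return sorted(o for s, r, o in triples
                  if s == node and (relation is None or r == relation))

# Hypothetical triples linking drugs, adverse events, and patient traits.
kg = [
    ("drug_A", "causes", "hepatotoxicity"),
    ("drug_A", "interacts_with", "drug_B"),
    ("drug_B", "causes", "rash"),
    ("hepatotoxicity", "more_frequent_in", "elderly"),
]
events = neighbors(kg, "drug_A", "causes")
```

Real systems embed such graphs into vector spaces and train classifiers over them; the value lies in representing drugs, events, and patient characteristics in a single connected structure.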

Table 1.

A comparative overview of AI methods applied to ADR detection across multiple PV data sources, including social media platforms, EHRs, and spontaneous reporting systems.

Data source AI method Sample size Performance metric (F-score/AUC) References
Social media—Twitter Conditional random fields 1784 tweets 0.72 (F-score) Nikfarjam et al. 16
Social media—DailyStrength Conditional random fields 6279 reviews 0.82 (F-score)
EHR—Clinical notes Bi-LSTM with attention mechanism 1089 notes 0.66 (F-score) Li et al. 19
Open TG-GATEs and FAERS databases (ADR: duodenal ulcer) Deep neural networks 300 drug–ADR associations 0.94–0.99 (AUC) Mohsen et al. 20
Open TG-GATEs and FAERS databases (ADR: hepatitis fulminant) Deep neural networks 319 drug–ADR associations 0.76–0.96 (AUC) Mohsen et al. 20
Korea National Spontaneous Reporting Database (drug: nivolumab) GBM 136 suspected AEs 0.95 (AUC) Bae et al. 21
Korea National Spontaneous Reporting Database (drug: docetaxel) GBM 485 suspected AEs 0.92 (AUC) Bae et al. 21
FAERS Multi-task deep-learning framework 141,752 drug–ADR interactions 0.96 (AUC) Zhao et al. 22
Social media (Twitter) BERT fine-tuned with FARM 844 tweets 0.89 (F-score) Hussain et al. 23
Social media (PubMed) BERT fine-tuned with FARM 6821 sentences 0.97 (F-score) Hussain et al. 23
Eight parenting websites Disproportionality methods 1290 posts 0.69 (F-score) Hadzi-Puric and Grmusa 24

The table highlights performance metrics such as F-scores and AUC values, which are key indicators of a model’s accuracy and effectiveness. It is important to note that direct comparison between methods is not possible due to differences in dataset nature, size, specific tasks, and evaluation criteria. The table aims to provide an overview of the range of AI applications and their reported performance in the field of PV. Performance metrics like the F-score and AUC are context-specific, and a method’s success often hinges on the balance between sensitivity and specificity—key factors in ADR monitoring.

ADR, adverse drug reaction; AE, adverse event; AI, artificial intelligence; BERT, bidirectional encoder representations from transformers; Bi-LSTM, bidirectional long short-term memory; EHR, electronic health record; FAERS, FDA Adverse Event Reporting System; FARM, Framework for Adapting Representation Models; FDA, Food and Drug Administration; GBM, Gradient Boosting Machine; KAERS, Korea Adverse Event Reporting System; PV, pharmacovigilance; TG-GATEs, Toxicogenomics Project–Genomics Assisted Toxicity Evaluation Systems.
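To make the reported metrics concrete, the F-score in Table 1 is the harmonic mean of precision and recall. A small sketch with hypothetical counts:

```python
def f_score(tp, fp, fn):
    """F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)   # fraction of flagged mentions that are real
    recall = tp / (tp + fn)      # fraction of real mentions that are flagged
    return 2 * precision * recall / (precision + recall)

# Hypothetical: a model finds 82 of 100 true ADR mentions (18 missed)
# while also raising 18 false positives.
score = f_score(tp=82, fp=18, fn=18)
```

Because the harmonic mean penalizes imbalance, a model cannot achieve a high F-score by maximizing recall at the expense of precision or vice versa, which is why it is favored for ADR extraction tasks with rare positive classes.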

Recent years have seen the application of more sophisticated AI models in PV. Yang et al. demonstrated the effectiveness of machine learning techniques in detecting medications and ADRs from EHRs. 25 Their system, MADEx (Medication and Adverse Drug Event Extraction), showed potential for real-time monitoring and early warning systems for ADRs by automatically extracting this information from clinical notes. MADEx improved upon previous systems by incorporating advanced NLP techniques and a novel deep-learning architecture, resulting in higher accuracy and faster processing times compared to traditional rule-based systems.

Recent innovations in ADR detection have leveraged multi-modal deep-learning approaches to enhance accuracy and comprehensiveness. Sahoo et al. introduced a MultiModal Adverse Drug Event detection dataset, combining ADR-related textual information with visual aids. 26 This approach demonstrated the significance of integrating visual cues from images to improve overall performance in ADR detection, particularly for ADRs with visible symptoms.

By contrast, Harpaz et al. explored multi-modal signal detection for ADRs using diverse data sources: FAERS, insurance claims, the MEDLINE citation database, and logs of major Web search engines. 27 Their multi-modal approach provided AUC improvements ranging from 0.04 to 0.09 compared to unimodal methods, with an added lead-time of 7–22 months for detecting ADRs relative to labeling revision dates.

Zhao et al. developed a deep-learning framework for predicting the seriousness of adverse reactions to drugs. 22 By integrating multiple data representations, including Simplified Molecular Input Line Entry System (SMILES) sequences of drugs and semantic feature vectors of ADRs, it achieved improved performance in predicting potential drug–ADR interactions that cause serious clinical outcomes, focusing on the molecular and semantic aspects of drug–ADR relationships. These studies demonstrate the potential of combining multiple data modalities to enhance the accuracy and robustness of ADR detection systems, paving the way for improved patient safety and healthcare accessibility.

However, these advanced models often function as “black boxes,” raising concerns about interpretability and clinical applicability. Rudin argued for the development of inherently interpretable models rather than post hoc explanations, particularly in high-stakes medical decisions like ADR detection. 28 This highlights the ongoing challenge in balancing the power of complex AI models with the need for transparency and explainability in healthcare applications.

To address data privacy concerns in ADR detection, federated learning approaches have been proposed. Federated learning allows multiple institutions to collaboratively train AI models without sharing raw patient data, thereby preserving privacy. Choudhury et al. described a federated learning method for collaborative ADR detection across multiple institutions while preserving data privacy, demonstrating its effectiveness in predicting ADRs without sharing raw patient data. 29 However, an analysis by Enthoven and Al-Ars highlighted potential vulnerabilities to privacy attacks in federated learning systems, emphasizing the need for robust security measures. 30
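The core of the federated approach can be illustrated with federated averaging (FedAvg), in which only locally trained model weights, never raw patient records, leave each institution. The sites, weights, and cohort sizes below are illustrative:

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """FedAvg aggregation: combine per-institution model weights,
    weighted by local sample counts. Only weight vectors leave each
    site; raw patient data never does."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Hypothetical: three hospitals share locally trained coefficient
# vectors for the same two-feature model.
local_weights = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.3, 0.9])]
cohort_sizes = [1000, 3000, 1000]
global_weights = federated_average(local_weights, cohort_sizes)
```

In a full system this aggregation step is repeated over many training rounds, and, as Enthoven and Al-Ars note, the shared weight updates themselves must be protected against inference attacks.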

Future research should focus on developing more robust, interpretable, and clinically validated AI models for PV. This includes addressing the challenges of rare ADRs and underrepresented populations, improving the integration of diverse data sources while maintaining privacy, and developing AI systems that can adapt to the evolving landscape of drug development and use. As suggested by Rieke et al., there is a need for careful consideration of the ethical and regulatory implications of AI in healthcare, particularly in sensitive areas like ADR detection. 31

AI for signal detection and active surveillance

Signal detection is an important component of PV that involves the identification of potential links between drugs and adverse events. 32 While traditional methods based on disproportionality analysis of SRS have been valuable, 33 they often suffer from limitations such as under-reporting, reporting biases, and challenges in detection of complex drug–event associations. AI and ML methodologies have emerged as critical tools in addressing these challenges, enhancing signal detection, improving accuracy, and enabling the analysis of diverse data sources. Beyond broad data source analysis, AI is also being applied to specialized regulatory intelligence platforms to streamline safety monitoring and reporting processes.

Recent advances in AI-enabled regulatory intelligence tools (in practice, often AI-enabled chatbots configured to search or scrape public-domain regulatory documents) are significantly improving how PV teams access, analyze, and operationalize large volumes of regulatory and safety data.

Several ML algorithms have been applied to signal detection, showing significant improvements over traditional methods. Bae et al. developed a Gradient Boosting Machine model to detect safety signals for the drugs nivolumab and docetaxel. 21 Their model achieved an AUC of 0.95, significantly outperforming traditional disproportionality analysis methods, which typically achieve AUCs of 0.55 or lower.

Recent studies suggest that advanced ML/AI implementations may offer advantages over traditional disproportionality measures under specific conditions, but these findings warrant careful interpretation. While ML approaches have demonstrated improved performance metrics in controlled evaluations, several limitations affect their real-world applicability. First, benchmark datasets used for validation are often limited in size and diversity, potentially affecting generalizability across diverse patient populations. 2 Second, most comparative studies evaluate performance against known adverse events rather than unknown or emerging signals, which is the primary challenge in real-world PV. 13 Third, the temporal dimension of signal detection—how quickly a safety concern is identified—is rarely addressed in methodological comparisons. 34 These factors likely contribute to why traditional disproportionality methods remain widely used in regulatory settings despite the theoretical advantages of more sophisticated approaches. A balanced approach integrating both methodologies may offer the most robust framework for comprehensive safety surveillance.

Jeong et al. improved the accuracy and efficiency of ADR detection by combining traditional methods with ML approaches. 35 They utilized 21 years of inpatient EHR data by combining three existing algorithms—CERT (Comparison of Extreme Laboratory Test results), CLEAR (Comparison of Extreme Abnormality Ratio), and PACE (Percentile of Adverse event risk in Comparison to Entire population)—as inputs for ML models, including random forest, L1 regularized logistic regression, support vector machine, and neural networks. This approach outperformed individual existing algorithms significantly, achieving a sensitivity of 0.593–0.793, specificity of 0.619–0.796, and AUC of 0.737–0.816.
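Feeding the outputs of existing algorithms into an ML model, as in the Jeong et al. study, is a form of stacking. A minimal pure-numpy sketch (the scores and labels below are synthetic stand-ins, not the CERT/CLEAR/PACE data) trains a logistic-regression stacker over three base scores:

```python
import numpy as np

def train_stacker(scores, labels, lr=0.1, steps=2000):
    """Logistic-regression 'stacker': learns weights that combine the
    outputs of several base signal-detection algorithms into a single
    ADR probability. Plain gradient descent, pure numpy."""
    X = np.hstack([scores, np.ones((len(scores), 1))])  # add bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - labels) / len(labels)
    return w

def predict(scores, w):
    X = np.hstack([scores, np.ones((len(scores), 1))])
    return 1 / (1 + np.exp(-X @ w))

# Synthetic stand-in for three base algorithm scores per case; a case
# is labelled an ADR when the scores are jointly high.
rng = np.random.default_rng(0)
base = rng.uniform(size=(200, 3))
y = (base.mean(axis=1) > 0.5).astype(float)
w = train_stacker(base, y)
accuracy = ((predict(base, w) > 0.5) == y).mean()
```

The appeal of stacking is that the base algorithms remain interpretable on their own, while the learned combination captures where each one is most reliable.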

Building on these advancements in structured data analysis, researchers have also made significant strides in extracting valuable information from unstructured data sources. Social media analysis has emerged as an important component of modern PV. Stanovsky et al. developed a recurrent neural network (RNN) model specifically designed for this purpose, addressing the unique challenge of capturing context-dependent and informal descriptions of ADRs in online posts. 36 Their approach integrated knowledge graph embeddings from DBpedia into an RNN transducer, trained on the CSIRO Adverse Drug Event Corpus (CADEC), resulting in an F1-score of 93.4 on the CADEC test set. This significant improvement over traditional lexicon-based methods demonstrates the potential of AI in extracting valuable safety signals from social media data, enabling real-time monitoring and earlier detection of potential safety concerns.

Based on recent advances in biomedical literature mining, several approaches can be applied to enhance signal detection. 37 Deep-learning methods, particularly those utilizing bidirectional long short-term memory (Bi-LSTM) networks combined with convolutional neural networks (CNN) and conditional random fields (CRF), have shown promising results in biomedical named entity recognition (NER) and normalization (NEN). These methods achieved high F1-scores of 0.87 and 0.89 for NER and NEN, respectively, on benchmark datasets like BC5CDR and NCBI Disease. 38 Such advanced NER and NEN capabilities could significantly improve the identification of drugs, adverse events, and their relationships in PV literature. Furthermore, attention-based CNN models have demonstrated effectiveness in biomedical literature indexing, which could assist in efficient categorization and retrieval of relevant PV articles. The integration of deep-learning approaches with traditional statistical methods could potentially enhance the sensitivity and specificity of signal detection from biomedical literature.

The implementation of AI for signal detection in real-world settings has shown promising potential. The FDA’s Sentinel System, which includes the Active Risk Identification and Analysis component, enables the evaluation of post-market safety signals using real-world data with unprecedented speed and efficiency. 39 ML-based technologies can advance key elements of safety evaluations, including health outcome of interest validation, propensity score matching, and patient phenotyping. The Sentinel System has been used to conduct more than 250 safety analyses since its full-scale implementation. AI has the potential to automate repetitive manual processes, enhance consistency, and reduce human bias while providing valuable insights for early signal detection. These technologies can also extract information from adverse drug event forms and evaluate case validity without human intervention, enabling rapid case processing. 5 However, further research and real-world implementation data are required to quantify the specific benefits of these tools compared to traditional methods.

While AI approaches have demonstrated significant potential in enhancing signal detection, challenges exist in the form of the need for large, high-quality datasets for model training, the interpretability of complex models, and the integration of AI-generated signals into existing PV workflows. In addition, the use of AI in PV raises important ethical and regulatory considerations. These include ensuring data privacy and security, addressing potential biases in AI algorithms, and establishing clear guidelines for the validation and implementation of AI-driven signal detection methods. Regulatory bodies such as the Food and Drug Administration (FDA) and European Medicines Agency (EMA) are actively working on frameworks to govern the use of AI in drug safety monitoring, emphasizing the need for transparency, reproducibility, and human oversight in AI-assisted decision-making processes.

AI for real-time monitoring and active surveillance

Advancements in AI are enabling the transition to more proactive surveillance approaches. This shift allows for earlier detection of potential safety issues and more timely interventions. Real-time monitoring in PV involves continuous analysis of varied data sources to detect potential ADRs as they occur. Active surveillance, by contrast, involves systematic collection and analysis of safety data throughout a product’s lifecycle. AI approaches have significantly improved both these aspects of PV.

The previous section discussed a key application of AI in real-time monitoring: the analysis of social media posts and patient forums for earlier signal detection. Another crucial application lies in the continuous monitoring of EHRs to detect ADRs in hospital settings. Van de Burgt et al. developed and implemented a text mining algorithm for identifying ADRs from free-text Dutch EHRs. 40 The algorithm used Medical Dictionary for Regulatory Activities (MedDRA) terms and Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT) to find possible ADRs in a dataset of 35,000 EHR notes, achieving a positive predictive value of 70% and a sensitivity of 73%. However, the authors acknowledge the need for external validation and testing before such algorithms can be deployed in a hospital setting.
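The core building block of such text mining can be illustrated with naive lexicon matching. The mini-lexicon below is hypothetical, standing in for MedDRA/SNOMED-CT terminologies; production systems layer negation detection and context rules on top:

```python
def find_adr_mentions(note, lexicon):
    """Naive lexicon match: return ADR terms appearing in a free-text
    note. Real systems add negation detection ("denies nausea") and
    context rules before counting a mention as an ADR."""
    text = note.lower()
    return sorted(term for term in lexicon if term in text)

# Hypothetical mini-lexicon standing in for MedDRA/SNOMED-CT terms.
lexicon = {"nausea", "rash", "hepatotoxicity", "dizziness"}
notes = [
    "Patient reports nausea and dizziness after starting therapy.",
    "No adverse events noted at follow-up.",
]
hits = [find_adr_mentions(n, lexicon) for n in notes]
```

The gap between this sketch and a deployable system (handling misspellings, negation, and temporal context) is exactly where the reported positive predictive value and sensitivity figures are won or lost.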

In the realm of active surveillance, AI has also made significant strides. For instance, the Centers for Disease Control and Prevention (CDC) has implemented several systems for active surveillance of vaccine safety, including the Vaccine Safety Datalink (VSD) project. Established in 1990, the VSD monitors vaccine safety and conducts studies on adverse events post-immunization. In 2005, the VSD project team launched an active surveillance system called Rapid Cycle Analysis, designed to monitor adverse events following vaccination in near real time. The integration of AI into such systems promises to further improve their efficiency and accuracy.

Despite this progress, AI-based real-time monitoring carries a risk of false alarms and consequent unnecessary resource allocation. Moreover, the interpretability of these complex models is crucial for regulatory acceptance and clinical decision-making. Nevertheless, the continued evolution of AI approaches promises significant contributions to patient safety and public health.

Challenges and limitations of AI in PV

While AI has shown great promise in enhancing PV processes, several challenges and limitations need to be addressed for its widespread adoption and optimal performance. This section explores four key areas of concern: algorithmic bias in diverse populations, temporal dynamics of drug safety profiles, causal inference in complex polypharmacy scenarios, and integration of multi-modal data sources.

Algorithmic bias in diverse populations

AI models, especially those used in PV, are often trained on datasets that do not represent the diversity of real-world patient populations. This can lead to algorithmic bias, where the model performs well for the majority group but poorly for minority or underrepresented populations, such as ethnic minorities, older adults, or patients with comorbidities. For instance, models trained primarily on data from European or North American populations may struggle to detect ADRs in patients from Asia or Africa.

The issue is compounded by the fact that clinical trials—a key source of training data—tend to underrepresent these populations. This underrepresentation can lead to failures in generalizing AI predictions across all demographics, resulting in missed safety signals for certain groups. To mitigate these biases, data augmentation and resampling techniques should be utilized to address imbalances in training datasets, complemented by advanced quantitative bias detection and correction methods. 41 Techniques like federated learning allow for model training on decentralized datasets from multiple regions without sharing raw data, potentially mitigating this bias. 31
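As an illustration of the resampling techniques mentioned above, random oversampling duplicates minority-class records until the classes are balanced. The data below are toy values; more refined options synthesize new minority samples rather than duplicating existing ones:

```python
import numpy as np

def oversample_minority(X, y, rng):
    """Random oversampling: duplicate minority-class rows until both
    classes have equal counts. A simple counter to demographic
    imbalance in training data."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    idx = rng.choice(np.where(y == minority)[0], size=deficit, replace=True)
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

# Toy data: an 8:2 imbalance between majority (0) and minority (1) cases.
rng = np.random.default_rng(0)
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.array([0] * 8 + [1] * 2)
X_bal, y_bal = oversample_minority(X, y, rng)
```

Oversampling rebalances the training signal but cannot create information the data lack; it is a complement to, not a substitute for, recruiting more representative data.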

Temporal dynamics of drug-safety profiles

The safety profile of a drug is not static—it evolves over time as new data from real-world usage becomes available. AI models, especially those trained on pre-market clinical trial data, may not adapt well to these changing profiles. This is particularly relevant for drugs used long term or those with a delayed onset of adverse effects.

In machine learning, this is known as concept drift: the statistical properties of the target variable change over time, rendering the original model less effective. For example, post-marketing data may reveal ADRs that were not detected in the limited, controlled environment of clinical trials, requiring continuous model updating.

Addressing temporal dynamics requires continuous learning models that adapt in real time as new data emerge. However, these models pose their own challenges: ensuring model stability while allowing for adaptability is difficult. Moreover, integrating real-world evidence from various sources, such as EHRs and spontaneous reporting systems, must be done carefully to avoid overfitting to short-term trends.
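A simple way to operationalize drift monitoring is to compare a recent window's event-reporting rate against a historical reference rate using a z-test on proportions. The rates and thresholds below are hypothetical:

```python
import math

def drift_detected(reference_rate, window_events, window_n, threshold=3.0):
    """Flag possible concept drift when a recent window's ADR reporting
    rate deviates from the historical reference rate by more than
    `threshold` standard errors (a z-test on proportions)."""
    rate = window_events / window_n
    se = math.sqrt(reference_rate * (1 - reference_rate) / window_n)
    return abs(rate - reference_rate) / se > threshold

# Hypothetical: historically 2% of reports mention the event;
# a recent window shows 50 mentions in 1,000 reports (5%).
flag = drift_detected(0.02, 50, 1000)
```

A drift flag would trigger model re-evaluation or retraining rather than an automatic safety conclusion; the threshold trades off false alarms against detection delay.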

Causal inference in complex polypharmacy scenarios

Polypharmacy, the use of multiple drugs by a single patient, particularly in older populations, presents a major challenge for AI systems in PV. While AI models are adept at uncovering correlations between drugs and adverse events, establishing causal relationships remains a significant challenge, particularly in polypharmacy scenarios. Overcoming this limitation is essential for AI to transition from experimental to routine use in PV, where the focus must be on generating actionable, unbiased insights.

Traditional machine learning models may identify that a particular combination of drugs is associated with a higher incidence of ADRs, but disentangling whether one drug or the interaction between several drugs is the root cause is far more complex. Causal inference models, such as causal graph neural networks, are being explored to handle this complexity.

Advanced methodologies have emerged specifically to address causality in drug interaction scenarios. The InferBERT framework represents a significant advancement by integrating transformer-based language models with do-calculus to establish causality in PV data, demonstrating high accuracy in discriminating between drug-induced adverse events. 42 Computational approaches such as graph-based causal networks, propensity score matching, and instrumental variable techniques offer powerful tools for disentangling confounding variables in observational data. 43 Expert-defined Bayesian networks have shown particular promise, allowing domain knowledge to inform causal structures while learning parameters from empirical data. 44 The integration of these specialized causal inference paradigms with traditional machine learning requires restructuring data collection protocols, as current PV data sources lack design elements necessary for robust causal inference. 43 Implementation of these techniques would significantly enhance AI’s capability to evaluate interactive effects in complex medication regimens and provide explanatory mechanisms beyond statistical association.
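Propensity score matching, one of the computational approaches cited above, can be sketched as greedy 1:1 nearest-neighbour matching within a caliper. The scores below are hypothetical estimated propensities:

```python
def match_on_propensity(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.
    Each treated unit is paired with the closest unmatched control
    within `caliper`; units without a close-enough match are dropped."""
    available = list(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_treated):
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) <= caliper:
            pairs.append((i, j))
            available.remove(j)
    return pairs

# Hypothetical estimated propensity scores for exposed and comparator
# patients; only pairs with similar scores are retained.
treated = [0.30, 0.70, 0.52]
control = [0.31, 0.50, 0.90]
pairs = match_on_propensity(treated, control)
```

By comparing outcomes only within matched pairs, measured confounders that drive treatment assignment are balanced between groups, moving the analysis closer to a causal comparison.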

Integration of multi-modal data sources

One of the most powerful promises of AI in PV is its ability to integrate multi-modal data. While this integration offers the potential for more comprehensive safety monitoring, it also introduces significant challenges. First, the quality and consistency of data across different sources can vary significantly. For instance, patient-reported outcomes on social media might lack the rigor of clinical trial data. In addition, data harmonization—the process of reconciling conflicting or incomplete information from different sources—remains a major hurdle.

Moreover, real-time data integration, especially when combining data from spontaneous reporting systems and real-world evidence, requires sophisticated methods for normalizing and analyzing disparate types of information.

Explainable AI and regulatory considerations

Explainable AI (XAI) is critical for establishing trust and transparency in AI-enabled PV systems. To ensure compliance with regulatory standards, AI algorithms must offer clear justifications for their outputs, enabling stakeholders to understand, validate, and act on AI-driven insights. This is particularly important for gaining regulatory approval and establishing AI as a reliable, routine capability in PV.

Several concrete methodologies have been developed to address the “black box” nature of AI in PV. Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are widely utilized XAI libraries that successfully identify both important and unimportant features in PV models, with SHAP slightly outperforming LIME in feature importance quantification. 45 These methodologies have demonstrated effectiveness in PV applications such as predicting adverse outcomes with up to 72% accuracy while providing transparent explanations of feature contributions. 45 While powerful, these methods face challenges with feature collinearity, as most current XAI approaches either do not consider feature interdependence or incorrectly assume feature independence. 46 Additional approaches include Gradient-weighted Class Activation Mapping (Grad-CAM) for visual explanations, layer-wise relevance propagation for tracing predictions back through neural networks, and Anchors for rule-based explanations that improve human understanding of model decisions. 47 Implementation of these methods in PV systems would significantly enhance model transparency, allowing regulators and healthcare professionals to understand how AI systems reach specific safety conclusions and when human intervention is necessary.
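The principle underlying SHAP can be made concrete without the library itself: a feature's attribution is its average marginal contribution across all subsets of the other features. The sketch below computes exact Shapley values for a toy linear risk score; the three features (standing in for, say, dose, age, and concomitant-drug count) and their weights are hypothetical, and real SHAP implementations approximate this sum efficiently rather than enumerating it.

```python
# Exact Shapley attributions for one instance of a small model, illustrating
# the idea behind SHAP. Enumeration is only feasible for a handful of features.
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """predict: model over a full feature vector; x: instance to explain;
    baseline: reference values used for 'absent' features."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy risk score: weighted sum of three hypothetical case features.
predict = lambda v: 0.5 * v[0] + 0.3 * v[1] + 0.2 * v[2]
phi = shapley_values(predict, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # for a linear model with a zero baseline, attributions equal the weights
```

For a linear model the attributions recover the coefficients exactly, which is a useful sanity check; for nonlinear PV models the same averaging distributes interaction effects across the contributing features.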

Global regulatory bodies continuously evaluate the use of AI-based systems to ensure adherence to the regulatory principles. These include ensuring data quality, which is crucial for accurate algorithmic predictions but can be challenging due to disparate data sources and potential biases. Patient privacy is another major concern, involving the secure storage, management, and protection of sensitive data while complying with privacy regulations. Regulatory bodies also focus on system validation, employing risk-based approaches for validating and monitoring AI-based PV systems, and ensuring timely reporting of adverse events to preserve patient safety.

The Uppsala Monitoring Centre (UMC), which manages the World Health Organization (WHO) global database of ICSRs (VigiBase), has implemented notable machine learning tools to enhance PV processes. Their vigiMatch algorithm represents one of the earliest applications of machine learning in routine PV, efficiently detecting duplicate case reports by processing approximately 50 million report pairs per second. 48 In use since 2014, vigiMatch employs probabilistic methods to identify potential duplicate reports, addressing a significant challenge where even a small number of duplicates can trigger false safety signals. 49 UMC has also developed vigiRank, which enhances signal detection by incorporating multiple evidence aspects beyond traditional disproportionality analysis, demonstrating how targeted AI applications can improve core PV functions while maintaining transparency and interpretability.
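The probabilistic record-linkage idea behind duplicate detection can be sketched in a few lines. The actual vigiMatch model is more sophisticated, and the field names and hit/miss probabilities below are hypothetical illustrations: each field comparison contributes a log-likelihood ratio for the duplicate hypothesis, in the style of the classical Fellegi-Sunter framework.

```python
# Minimal Fellegi-Sunter-style match score for flagging potential duplicate
# case reports. m = P(agreement | true duplicate); u = P(agreement | chance).
from math import log

FIELDS = {
    "patient_sex": (0.95, 0.50),
    "birth_date":  (0.90, 0.01),
    "drug":        (0.98, 0.10),
    "reaction":    (0.85, 0.05),
}

def match_score(report_a, report_b):
    score = 0.0
    for field, (m, u) in FIELDS.items():
        if report_a.get(field) == report_b.get(field):
            score += log(m / u)              # agreement: evidence for duplicate
        else:
            score += log((1 - m) / (1 - u))  # disagreement: evidence against
    return score

a = {"patient_sex": "F", "birth_date": "1950-03-02",
     "drug": "warfarin", "reaction": "epistaxis"}
b = dict(a)  # an identical second report
c = {"patient_sex": "M", "birth_date": "1987-11-19",
     "drug": "ibuprofen", "reaction": "rash"}

print(match_score(a, b), match_score(a, c))  # high for duplicates, low otherwise
```

In production systems the per-field probabilities are estimated from labeled report pairs, and pairs above a tuned score threshold are routed for human review rather than deleted automatically.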

The Council for International Organizations of Medical Sciences established Working Group XIV on Artificial Intelligence in Pharmacovigilance in 2022. This initiative has mapped the ICSR process and evaluated automation opportunities based on perceived risk, effort required, and expected benefits. TransCelerate has also developed validation frameworks for AI-based systems in PV, published guidance on implementing novel technologies, and created interactive tools to assist with implementation. 50 Such collaborative approaches help establish industry standards, reduce duplicative efforts, and accelerate responsible AI implementation while addressing the economic and validation challenges inherent in adopting these technologies.

Figure 2 illustrates the key regulatory and ethical considerations for AI-enhanced PV. It covers four critical aspects: the current regulatory landscape, data privacy and security measures, ethical frameworks, and regulatory evolution. These interconnected areas represent the multifaceted approach required to navigate the complex integration of AI in PV. The regulatory landscape outlines existing guidelines from bodies like the FDA and EMA, while data privacy and security highlight crucial aspects such as General Data Protection Regulation (GDPR) compliance and advanced encryption methods. Ethical frameworks address critical issues like bias mitigation and transparency in AI systems. Lastly, regulatory evolution captures the dynamic nature of this subject area, emphasizing the need for adaptive licensing pathways and global data standardization efforts. This comprehensive framework underscores the intricate balance between leveraging AI’s potential in drug safety monitoring and adhering to stringent regulatory and ethical standards.

Figure 2.

Regulatory and ethical considerations for AI-enhanced PV, highlighting current guidelines and expected course of evolution. AI, artificial intelligence; EAMS, Early Access to Medicines Scheme; EMA, European Medicines Agency; FDA, Food and Drug Administration; GDPR, General Data Protection Regulation; HIPAA, Health Insurance Portability and Accountability Act; ICH, International Council for Harmonization; MHRA, Medicines and Healthcare products Regulatory Agency; NDA, New Drug Application; PMDA, Pharmaceuticals and Medical Devices Agency; PV, pharmacovigilance; WHO, World Health Organization.


While AI technologies promise significant cost reductions through automation of labor-intensive PV processes, the initial implementation demands substantial financial investment, particularly at the frontiers of advanced analytics and signal detection. With case processing consuming up to two-thirds of internal PV resources and dominating most pharmaceutical companies’ overall PV budgets, organizations face difficult economic calculations when evaluating AI investments. Quality control requirements to detect unusual or extraordinary cases often necessitate additional personnel expenditures, creating a complex balance between automation savings and continued human oversight costs. As a domain historically perceived as predominantly cost-generating rather than revenue-producing, PV stands to benefit substantially from AI-driven efficiencies, but only if implementation strategies address both economic sustainability and the maintenance of scientific rigor.

As AI becomes increasingly embedded within PV, ethical concerns come to the forefront. Sponsors need to ensure that their AI systems are transparent, trustworthy, focused on patient welfare, and privacy-preserving. Key ethical considerations in regulatory AI-based PV include explainability, bias reduction, privacy protection, and trustworthiness. AI algorithms should be interpretable and explainable, helping healthcare professionals understand how a system derives its conclusions. Preventing and minimizing biases in AI decision-making is crucial for ensuring fairness and promoting best clinical practices. Safeguarding sensitive medical information and maintaining patient confidentiality are paramount, as is ensuring fairness in AI models to avoid discrimination or inequity in healthcare outcomes.

As AI evolves rapidly with large language models (LLMs) and increasingly complex architectures, the ideal of complete explainability faces significant challenges. Most current explainability approaches like SHAP and LIME assume feature independence, which does not hold true in complex interconnected biological systems where feature collinearity is common. 47 This limitation becomes more pronounced with deep-learning models and particularly with LLMs, where the dimensional complexity renders traditional explainability techniques inadequate. Complete transparency in such systems may be fundamentally unrealistic because of the dimension reduction inherent in explainability methods. Reliance on erroneous or oversimplified “pseudo-explainability” carries its own risks, potentially creating false confidence in model outputs. Rather than pursuing complete explainability, a more realistic approach may involve defining specific aspects requiring transparency based on contextual needs—whether focused on feature importance, counterfactual explanations, or decision boundaries. 45 In PV specifically, the field must establish clear standards for what constitutes sufficient explainability in different use cases, from adverse event detection to causality assessment, balancing the precision of advanced models with the transparency necessary for regulatory acceptance. 28

The concept of XAI is particularly important for PV and patient safety. XAI can help analyze large datasets, including EHRs and social media feeds, to identify potential ADRs and events while maintaining transparency in decision-making processes. This approach ensures that machine learning algorithms are transparent, interpretable, and accountable, allowing clinicians and regulators to understand how an algorithm makes predictions and to identify potential biases. In addition, XAI can be used to identify patterns in patient data indicating that a particular drug is causing unanticipated adverse events, or to identify patient groups that are particularly susceptible to certain ADRs. XAI can also be combined with other ML models, such as knowledge graphs, to help identify biomolecular features that may distinguish or identify a causal relationship between an ADR and a particular compound.

The WHO emphasizes the importance of AI integration into worldwide PV efforts, particularly considering resource constraints in certain regions. However, the lack of harmonization of PV requirements across regulatory authorities presents challenges for AI implementation. Authorities are actively evaluating and updating their regulatory frameworks to ensure the responsible use of AI in drug safety reporting.

In the detection and analysis of ICSRs, AI-based algorithms enable regulators and sponsors to process large volumes of data quickly and accurately, extracting relevant information from ICSRs such as adverse events, drug dosage, patient demographics, and medical history. This automation can lead to faster processing of ICSRs, increased identification of safety signals, and better support for regulatory compliance. The resulting enhancement in decision-making leads to improved patient safety and more targeted drug development.

AI and ML technologies are shaping the entire value chain in PV, from data collection through information extraction, analytics, insights, and regulatory reporting to actionable intelligence. AI helps identify, classify, and prioritize relevant information from disparate sources, such as EHRs, medical literature, and social media, contributing to a more efficient and robust data collection process. AI-powered algorithms detect potential safety signals hidden within vast amounts of data, enabling PV experts to identify and assess emerging risks earlier and more accurately than traditional methods. In addition, AI continually updates its models as new data become available, further improving the detection and analysis of drug safety. AI supports human-in-the-loop decision-making by providing insights and predictions based on patterns and trends in the data, while helping maintain compliance with evolving regulations and reporting requirements through automated tasks and real-time monitoring of data quality and completeness.

Despite the fast development of technology, several challenges remain. The performance of AI models is heavily dependent on the quality and quantity of data available, particularly in resource-limited settings. Considering the heterogeneity of data sources, robust AI models capable of integrating various types of data while ensuring accurate and reliable outputs are needed. The acceptance of AI systems among PV professionals continues to depend on addressing these fundamental challenges.

Future directions and emerging trends

As AI continues to make PV processes faster and more efficient, specific steps must still be taken to build reliable and robust AI-integrated PV systems.

Federated learning approaches have gained substantial interest as a way of leveraging varied datasets while maintaining data privacy and security. 51 This allows AI models to be trained across multiple decentralized devices or servers holding local data samples, without exchanging them. In PV, this could enable collaborative learning across healthcare organizations, pharmaceutical companies, and regulatory bodies, enhancing signal detection without violating data protection regulations.
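The federated averaging idea described above can be sketched compactly: each site fits a model on its own data, and only the parameters travel to a central server, which averages them. The example below uses synthetic stand-ins for site-level ADR datasets and plain logistic regression; the number of sites, learning rate, and round counts are illustrative assumptions.

```python
# Sketch of federated averaging (FedAvg): sites share parameters, never records.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.8, -0.5])  # ground-truth association weights

def make_site_data(n):
    X = rng.normal(size=(n, 2))
    y = (1 / (1 + np.exp(-X @ true_w)) > rng.random(n)).astype(float)
    return X, y

sites = [make_site_data(500) for _ in range(3)]  # three hypothetical sites

def local_update(w, X, y, lr=0.1, epochs=50):
    # Plain gradient descent on the logistic loss, run locally at one site.
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

w_global = np.zeros(2)
for _ in range(5):  # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)  # server averages parameters only

print(w_global)  # should approach the direction of true_w
```

The privacy benefit is structural: raw case-level data never leave a site, which is why the approach is attractive for collaborations spanning healthcare organizations, sponsors, and regulators, though parameter updates themselves can still leak information and typically need additional safeguards.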

Regulatory authorities face an interpretability challenge due to the “black box” nature of AI models. This calls for XAI models that provide transparent reasoning for their predictions, facilitating regulatory acceptance as well as clinical decision-making. Future PV systems may incorporate XAI methods such as LIME or SHAP to provide interpretable insights into ADR predictions. 45

While still in very early stages, quantum computing holds promise for solving complex computational problems in PV when coupled with AI. 52 Quantum algorithms could potentially analyze huge combinatorial spaces of drug–drug interactions and genomic variations, resulting in more precise and personalized ADR predictions. The integration of genomic data with traditional PV data sources is an emerging direction where pharmacogenomic information will also be utilized for training future AI models to predict individual-level vulnerabilities to specific ADRs.

To improve the reliability and traceability of PV data, scientists are exploring secure, transparent systems for recording and analyzing ADRs and safety signals. Blockchain technology is one such potential solution, creating a secure, shared record of information that cannot be altered without detection. 53 This can make ADR and safety signal records more reliable and can help different organizations collaborate openly.

As PV systems expand globally, there is a growing demand for AI systems that can process and analyze safety information across multiple languages. Future NLP models may leverage methods like zero-shot learning and multilingual transformers to enable more effective cross-language ADR detection and signal evaluation.

Proactive safety assessments can be enhanced by developing and implementing AI models coupled with in silico trials and virtual patient simulations. These systems could simulate drug effects across diverse patient populations, potentially identifying safety issues before they manifest in real-world scenarios.

Because the safety profiles of drugs evolve continuously over time, it is also critical to have continuously learning AI models that update automatically as new safety data are incorporated, ensuring that ADR predictions remain current and accurate across the entire lifecycle of the drug. Beyond safety profiles, changing environmental factors and lifestyles should also be incorporated into the models, using data gathered from wearable devices, environmental sensors, and dietary logs to provide a more comprehensive understanding of ADR risks.
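One way to realize such continuously learning models is incremental training. The sketch below, using synthetic data and a simulated shift in the strength of a drug-event association, shows how scikit-learn's `partial_fit` interface lets a classifier absorb new batches of reports without retraining from scratch; the single feature, batch sizes, and association strengths are illustrative assumptions.

```python
# Continual model updating with incremental learning: new batches of safety
# data are folded into the model in place via partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

def batch(weight):
    # One feature whose association with the adverse event has strength `weight`.
    X = rng.normal(size=(300, 1))
    y = (1 / (1 + np.exp(-weight * X[:, 0])) > rng.random(300)).astype(int)
    return X, y

# Early surveillance period: weak association.
for _ in range(5):
    X, y = batch(0.2)
    model.partial_fit(X, y, classes=classes)

# Later periods: the association strengthens; the model adapts in place.
for _ in range(20):
    X, y = batch(2.0)
    model.partial_fit(X, y, classes=classes)

X_test, y_test = batch(2.0)
acc = model.score(X_test, y_test)
print(acc)  # accuracy under the current (shifted) regime
```

In a deployed system the same pattern would be wrapped with drift monitoring and periodic revalidation, so that automatic updates never silently degrade a previously validated model.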

The directions outlined above promise to enhance the efficiency, accuracy, and scope of PV processes. However, their successful implementation will require continued collaboration between AI scientists, PV experts, regulatory organizations, and healthcare providers to ensure responsible deployment of these technologies in the interest of patient safety.

Future directions must focus on improving the quality and standardization of datasets, advancing NLP techniques for better interpretation of clinical narratives, and developing XAI models. Regulatory frameworks should evolve to support AI deployment in PV, ensuring the establishment of best practices for AI implementation and the creation of large-scale, publicly available training datasets. Going beyond correlation-based approaches by integrating causal inference techniques will allow for a more accurate understanding of the relationship between drugs and ADRs. The integration of AI and big data in PV has the potential to transform drug safety monitoring, addressing many of the challenges posed by increasing data complexity and the need for real-time analysis. As these technologies continue to evolve, they promise to make PV more efficient, accurate, and comprehensive, thereby improving patient safety. Human oversight will still be required to validate AI findings, but ongoing efforts to improve the robustness of AI systems will reduce dependency on manual interventions and scale the use of AI in PV.

Conclusion

As AI continues to evolve from experimental stages to routine application in PV, its ability to integrate into everyday workflows hinges on overcoming significant challenges related to data quality, interpretability, and bias mitigation. A holistic approach involving regulatory alignment, transparency, and ongoing validation is crucial to ensure AI systems in PV not only perform as expected but also adapt to the ever-changing landscape of drug safety. Addressing these concerns will pave the way for AI’s broader acceptance and trusted use in PV.

Footnotes

Contributor Information

Ankit Nagar, University of Maryland Baltimore, Rm S410, 20 N Pine St, Baltimore, MD 21201, USA.

Joga Gobburu, Department of Practice, Science, and Health Outcomes Research, School of Pharmacy, University of Maryland, Baltimore, MD, USA.

Aloka Chakravarty, Pfizer Inc., New York, NY, USA.

Declarations

Ethics approval and consent to participate: This review is based solely on previously published research and does not require ethics approval or consent to participate.

Consent for publication: Not applicable—this review is based solely on previously published research.

Author contributions: Ankit Nagar: Data curation; Formal analysis; Methodology; Visualization; Writing – original draft; Writing – review & editing.

Joga Gobburu: Conceptualization; Project administration; Resources; Supervision; Writing – original draft; Writing – review & editing.

Aloka Chakravarty: Conceptualization; Formal analysis; Investigation; Methodology; Supervision; Validation; Writing – original draft; Writing – review & editing.

Funding: The authors received no financial support for the research, authorship, and/or publication of this article.

Competing interests: The authors declare that there is no conflict of interest.

Availability of data and materials: Not applicable—this review is based solely on previously published research and does not involve primary data collection.

References

  • 1. US Food and Drug Administration. Good pharmacovigilance practices and pharmacoepidemiologic assessment. Rockville, MD: US Department of Health and Human Services, 2005. [Google Scholar]
  • 2. Ventola CL. Big data and pharmacovigilance: data mining for adverse drug events and interactions. Pharm Ther 2018; 43(6): 340. [PMC free article] [PubMed] [Google Scholar]
  • 3. Price J. Drug–drug interactions: a pharmacovigilance road less traveled. Clin Ther 2023; 45(2): 94–98. [DOI] [PubMed] [Google Scholar]
  • 4. Hauben M, Hartford CG. Artificial intelligence in pharmacovigilance: scoping points to consider. Clin Ther 2021; 43(2): 372–379. [DOI] [PubMed] [Google Scholar]
  • 5. Salas M, Petracek J, Yalamanchili P, et al. The use of artificial intelligence in pharmacovigilance: a systematic review of the literature. Pharmaceut Med 2022; 36(5): 295–306. [DOI] [PubMed] [Google Scholar]
  • 6. Hauben M. Artificial intelligence and data mining for the pharmacovigilance of drug–drug interactions. Clin Ther 2023; 45(2): 117–133. [DOI] [PubMed] [Google Scholar]
  • 7. US Food and Drug Administration. FDA Adverse Events Reporting System (FAERS) public dashboard, fis.fda.gov (2017, accessed 26 July 2025). [Google Scholar]
  • 8. Lindquist M. VigiBase, the WHO global ICSR database system: basic facts. Drug Inf J 2008; 42(5): 409–419. [Google Scholar]
  • 9. Lavertu A, Vora B, Giacomini KM, et al. A new era in pharmacovigilance: toward real-world data and digital monitoring. Clin Pharmacol Ther 2021; 109(5): 1197–1202. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Yadesa TM, Kitutu FE, Deyno S, et al. Prevalence, characteristics and predicting risk factors of adverse drug reactions among hospitalized older adults: a systematic review and meta-analysis. SAGE Open Med 2021; 9: 20503121211039100. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Sultana J, Cutroneo P, Trifirò G. Clinical and economic burden of adverse drug reactions. J Pharmacol Pharmacother 2013; 4(1_suppl): S73–S77. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Bate A, Evans SJW. Quantitative signal detection using spontaneous ADR reporting. Pharmacoepidemiol Drug Saf 2009; 18(6): 427–436. [DOI] [PubMed] [Google Scholar]
  • 13. Harpaz R, DuMouchel W, Shah NH, et al. Novel data-mining methodologies for adverse drug event discovery and analysis. Clin Pharmacol Ther 2012; 91(6): 1010–1021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Bate A, Lindquist M, Edwards IR, et al. A Bayesian neural network method for adverse drug reaction signal generation. Eur J Clin Pharmacol 1998; 54: 315–321. [DOI] [PubMed] [Google Scholar]
  • 15. DuMouchel W. Bayesian data mining in large frequency tables, with an application to the FDA spontaneous reporting system. Am Stat 1999; 53(3): 177–190. [Google Scholar]
  • 16. Nikfarjam A, Sarker A, O’connor K, et al. Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. J Am Med Inf Assoc 2015; 22(3): 671–681. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Hauben M, Rafi M. Knowledge graphs in pharmacovigilance: a step-by-step guide. Clin Ther 2024; 46: 538–543. [DOI] [PubMed] [Google Scholar]
  • 18. Bean DM, Wu H, Iqbal E, et al. Knowledge graph prediction of unknown adverse drug reactions and validation in electronic health records. Sci Rep 2017; 7(1): 16416. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Li F, Liu W, Yu H. Extraction of information related to adverse drug events from electronic health record notes: design of an end-to-end model based on deep learning. JMIR Med Inform 2018; 6(4): e12159. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Mohsen A, Tripathi LP, Mizuguchi K. Deep learning prediction of adverse drug reactions using open TG-GATEs and FAERS databases. arXiv preprint arXiv:201005411, 2020. [Google Scholar]
  • 21. Bae JH, Baek YH, Lee JE, et al. Machine learning for detection of safety signals from spontaneous reporting system data: example of nivolumab and docetaxel. Front Pharmacol 2021; 11: 602365. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Zhao H, Ni P, Zhao Q, et al. Identifying the serious clinical outcomes of adverse reactions to drugs by a multi-task deep learning framework. Commun Biol 2023; 6(1): 870. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Hussain S, Afzal H, Saeed R, et al. Pharmacovigilance with transformers: a framework to detect adverse drug reactions using BERT fine-tuned with FARM. Comput Math Methods Med 2021; 2021(1): 5589829. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Hadzi-Puric J, Grmusa J. Automatic drug adverse reaction discovery from parenting websites using disproportionality methods. In: 2012 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Istanbul, Turkey, 2012, pp. 792–797. [Google Scholar]
  • 25. Yang X, Bian J, Gong Y, et al. MADEx: a system for detecting medications, adverse drug events, and their relations from clinical notes. Drug Saf 2019; 42: 123–133. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Sahoo P, Singh AK, Saha S, et al. Enhancing adverse drug event detection with multimodal dataset: corpus creation and model development. arXiv preprint arXiv:240515766, 2024. [Google Scholar]
  • 27. Harpaz R, DuMouchel W, Schuemie M, et al. Toward multimodal signal detection of adverse drug reactions. J Biomed Inform 2017; 76: 41–49. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 2019; 1(5): 206–215. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Choudhury O, Park Y, Salonidis T, et al. Predicting adverse drug reactions on distributed health data using federated learning. In: AMIA annual symposium proceedings, Online, 2019, p. 313. American Medical Informatics Association. [PMC free article] [PubMed] [Google Scholar]
  • 30. Enthoven D, Al-Ars Z. An overview of federated deep learning privacy attacks and defensive strategies. In: Ur Rehman MH, Gaber MM. (eds) Federated Learning Systems. 1st ed. Switzerland: Springer Cham, 2021, pp. 173–196. [Google Scholar]
  • 31. Rieke N, Hancox J, Li W, et al. The future of digital health with federated learning. NPJ Digit Med 2020; 3(1): 1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32. Hauben M, Aronson JK. Defining “signal” and its subtypes in pharmacovigilance based on a systematic review of previous definitions. Drug Saf 2009; 32: 99–110. [DOI] [PubMed] [Google Scholar]
  • 33. Harpaz R, DuMouchel W, LePendu P, et al. Performance of pharmacovigilance signal-detection algorithms for the FDA adverse event reporting system. Clin Pharmacol Ther 2013; 93(6): 539–546. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Norén GN, Caster O, Juhlin K, et al. Zoo or savannah? Choice of training ground for evidence-based pharmacovigilance. Drug Saf 2014; 37: 655–659. [DOI] [PubMed] [Google Scholar]
  • 35. Jeong E, Park N, Choi Y, et al. Machine learning model combining features from algorithms with different analytical methodologies to detect laboratory-event-related adverse drug reaction signals. PLoS One 2018; 13(11): e0207749. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Stanovsky G, Gruhl D, Mendes P. Recognizing mentions of adverse drug reaction in social media using knowledge-infused recurrent models. In: Proceedings of the 15th conference of the European chapter of the association for computational linguistics: Volume 1, Long papers, Valencia, Spain, 2017, pp. 142–151. [Google Scholar]
  • 37. Zhao S, Su C, Lu Z, et al. Recent advances in biomedical literature mining. Brief Bioinform 2021; 22(3): bbaa057. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38. Zhao S, Liu T, Zhao S, et al. A neural multi-task learning framework to jointly model medical named entity recognition and normalization. In: Proceedings of the AAAI conference on artificial intelligence, Hawaii, 2019, vol. 33, pp. 817–824. [Google Scholar]
  • 39. US Food and Drug Administration. Sentinel system: five-year strategy 2019–2023. Silver Spring, MD: US Food and Drug Administration (FDA), 2019. [Google Scholar]
  • 40. Van de Burgt BWM, Wasylewicz ATM, Dullemond B, et al. Development of a text mining algorithm for identifying adverse drug reactions in electronic health records. JAMIA Open 2024; 7(3): ooae070. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41. Mehrabi N, Morstatter F, Saxena N, et al. A survey on bias and fairness in machine learning. ACM Comput Surv 2021; 54(6): 1–35. [Google Scholar]
  • 42. Wang X, Xu X, Tong W, et al. InferBERT: a transformer-based causal inference framework for enhancing pharmacovigilance. Front Artif Intell 2021; 4: 659622. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43. Zhao Y, Yu Y, Wang H, et al. Machine learning in causal inference: application in pharmacovigilance. Drug Saf 2022; 45(5): 459–476. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44. Rodrigues PP, Ferreira-Santos D, Silva A, et al. Causality assessment of adverse drug reaction reports using an expert-defined Bayesian network. Artif Intell Med 2018; 91: 12–22. [DOI] [PubMed] [Google Scholar]
  • 45. Ward IR, Wang L, Lu J, et al. Explainable artificial intelligence for pharmacovigilance: what features are important when predicting adverse outcomes? Comput Methods Programs Biomed 2021; 212: 106415. [DOI] [PubMed] [Google Scholar]
  • 46. Salih A, Raisi-Estabragh Z, Boscolo Galazzo I, et al. Commentary on explainable artificial intelligence methods: SHAP and LIME. arXiv:2305.02012v1, 2023. [DOI] [PubMed] [Google Scholar]
  • 47. Salih AM, Raisi-Estabragh Z, Galazzo IB, et al. A perspective on explainable artificial intelligence methods: SHAP and LIME. Adv Intell Syst 2025; 7(1): 2400304. [Google Scholar]
  • 48. Norén N. Artificial intelligence in pharmacovigilance: harnessing potential, navigating risks, https://uppsalareports.org/articles/artificial-intelligence-in-pharmacovigilance-harnessing-potential-navigating-risks/ (2024, accessed 9 April 2025).
  • 49. Tregunno PM, Fink DB, Fernandez-Fernandez C, et al. Performance of probabilistic method to detect duplicate individual case safety reports. Drug Saf 2014; 37: 249–258. [DOI] [PubMed] [Google Scholar]
  • 50. TransCelerate BioPharma Inc. Intelligent automation opportunities in pharmacovigilance, https://www.transceleratebiopharmainc.com/initiatives/intelligent-automation-opportunities-pharmacovigilance-2/ (2012, accessed 9 April 2025).
  • 51. Loftus TJ, Ruppert MM, Shickel B, et al. Federated learning for preserving data privacy in collaborative healthcare research. Digit Health 2022; 8: 20552076221134456. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52. Cova T, Vitorino C, Ferreira M, et al. Artificial intelligence and quantum computing as the next pharma disruptors. In: Heifetz A (ed) Artificial intelligence in drug design. 1st ed. New York: Humana, 2022, pp. 321–347. [DOI] [PubMed] [Google Scholar]
  • 53. Tripathi AM, Saini K, Mishra S. Advancing the pharmacovigilance practice using blockchain technology. In: 2024 IEEE international conference on computing, power and communication technologies (IC2PCT), Uttar Pradesh, India, 2024, vol. 5, pp. 1–5. [Google Scholar]

Articles from Therapeutic Advances in Drug Safety are provided here courtesy of SAGE Publications
