Abstract
Artificial Intelligence (AI) is increasingly being implemented in pharmaceutical sciences and has the potential to improve efficiency across the value chain, from drug candidate discovery to manufacturing, quality monitoring, and regulatory process support. Nonetheless, the integration of AI within the pharmaceutical sector encounters persistent obstacles, such as data interoperability and fragmentation, the necessity for model validation and governance to satisfy compliance standards, the potential for bias and accountability concerns, and deficiencies in workforce skills. This review consolidates significant advancements in AI applications, such as generative AI, laboratory automation, and the digital twin concept, highlighting that effective implementation relies on workflow integration, data quality and integrity, and sufficient human-in-the-loop mechanisms. We propose strategic recommendations centred on human resource readiness, governance structures, and technology maturity assessment to assist readers in differentiating feasible solutions from aspirational frameworks. Moving forward, research and adoption will likely emphasise precision medicine and regulatory–industry collaboration mechanisms for AI evaluation. The integration of AI with supporting technologies such as tamper-evident provenance/audit layers (e.g., blockchain) remains exploratory and generally limited to pilots.
Keywords: generative ai in drug discovery, self-driving laboratories, closed-loop discovery, pharma 4.0, digital twins in healthcare
Introduction
The pharmaceutical sector currently stands at a crucial crossroads. While biomedical research has made rapid progress, the established economic model is facing significant pressure. A phenomenon known as “Eroom’s Law”—where Research and Development (R&D) costs increase exponentially while the number of new drugs approved declines—continues to plague the industry.1,2 This productivity decline is rooted in the industry’s reliance on empirical high-throughput screening (HTS) and the classic Design–Make–Test–Analyse (DMTA) cycle, which is slow because of its dependence on physical (wet-lab) testing and manual validation.3–5
These problems are exacerbated by persistent data fragmentation, or “data silos”. Molecular databases, clinical records, and real-world evidence (RWE) are stored on disparate platforms, hindering the interoperability necessary for data-driven decision-making.6 Similar challenges exist in manufacturing, which remains limited to rigid batch operations and labor-intensive clinical trials.7–10 On the regulatory side, regulatory bodies face significant challenges in evaluating innovations derived from complex algorithms, primarily due to concerns about transparency and the absence of standard validation frameworks.11,12
Amidst this impasse, AI has emerged as a transformative technology. Unlike previous waves of digitalization, AI introduces a paradigm shift toward predictive discovery methodologies.13 Newer generative AI models, such as Variational Autoencoders (VAE) and Generative Adversarial Networks (GAN), are now capable of de novo molecular design and predicting Absorption, Distribution, Metabolism, Excretion, and Toxicity (ADMET) profiles with significantly higher computational efficiency than conventional methods.14 This feature enables a shift from “random search” to “inverse rational design”.
Nevertheless, the integration of high-speed computing technologies into a relatively slow experimental ecosystem can result in a velocity gap between computational design cycles (minutes–hours) and experimental validation (days–weeks, or longer, contingent on automation readiness and operational reliability).15 This review refers to this phenomenon as the “New Productivity Paradox”: the risk that upstream acceleration merely relocates bottlenecks downstream when physical validation capacity, automation reliability, and operational readiness are not correspondingly enhanced. Evidence from closed-loop studies indicates that experimental throughput is frequently the binding constraint, and that in practice performance is degraded by system failures, robot malfunctions, and reagent logistics.16
In addition to technical challenges, the adoption of AI in highly regulated sectors is also constrained by policy preparedness and regulatory acceptance, particularly when models are complex (“black box”) and difficult to audit. In the drug safety domain, the literature on AI governance underscores the significance of transparency, auditability, and accountability in order to establish trust and allow humans to challenge AI output when necessary. Consequently, a more viable method is risk-based evaluation, which is consistent with the principles of Good Machine Learning Practice (GMLP) that prioritise the entire product lifecycle and incorporate human oversight mechanisms and model lifecycle documentation.17
This critical narrative review aims to dissect the dynamics of this transition and propose a strategic integration framework. To inform the analysis, evidence was identified through a targeted, non-exhaustive search with Scopus as the primary database, complemented where appropriate by regulatory and policy sources. We emphasize recent empirical literature (2023 onward) as a recency lens to capture fast-moving developments in deep learning and generative AI, while retaining earlier seminal work when foundational. Unlike previous reviews that often focus on a single phase, this article offers a holistic lifecycle analysis—from discovery to manufacturing—and differentiates theoretical concepts from validated, industrially scalable solutions.
Conceptual Framework
To organize evidence, provide analytic clarity, and avoid terminological redundancy, this review uses two operational concepts. The Velocity Gap refers to throughput and validation constraints: the mismatch between rapidly accelerating upstream computational design and triage and comparatively rate-limiting downstream experimental, CMC, and clinical validation under GxP constraints (cycle time, capacity, evidence generation). The Bio–Digital Gap refers to translation and traceability constraints: the discontinuity between in silico representations (datasets, model outputs) and biological and manufacturing reality, including domain shift, assay heterogeneity, provenance limitations, and comparability challenges. Together, these constructs explain why AI capabilities can advance quickly while end-to-end productivity gains remain uneven in regulated pharmaceutical settings.
Managing the Speed of AI Disruption
In practice, increasing the speed of predictive models does not automatically shorten the drug discovery cycle time, as experimental validation, logistics, and automation reliability are often key constraints.18,19 This section outlines the metrics and failure modes that contribute to the velocity gap, as well as realistic mitigation strategies (automation, data integration, and governance).
The Validation Bottleneck in AI-Driven Discovery
The most upstream transformation occurs during drug discovery. The integration of Deep Generative Models (DGMs) has replaced the “needle in a haystack” paradigm with “inverse design.” AI models now not only generate structures but also use multi-parameter optimisation (MPO) algorithms to rank candidates on a combined score of binding affinity, solubility, and synthesizability.20 Algorithms such as GANs and diffusion models are capable of designing molecules de novo with predicted ADMET properties.20,21
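The ranking logic described above can be illustrated with a minimal desirability-based scoring sketch. All property values, ranges, and weights below are hypothetical illustrations (not outputs of any real predictor), and properties are pre-oriented so that higher values are better.

```python
# Minimal MPO-style ranking sketch. Values, ranges, and weights are
# hypothetical; real pipelines would use model-predicted properties.

def desirability(value, low, high):
    """Clipped linear ramp mapping a raw property value onto [0, 1]."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def mpo_score(props, ranges, weights):
    """Weighted average of per-property desirabilities (higher is better)."""
    total = sum(w * desirability(props[p], *ranges[p]) for p, w in weights.items())
    return total / sum(weights.values())

# Properties oriented so that higher is better: pAffinity (predicted
# binding), logS (aqueous solubility), synth (e.g. 10 minus SA score).
ranges = {"pAffinity": (5.0, 9.0), "logS": (-6.0, -2.0), "synth": (4.0, 9.0)}
weights = {"pAffinity": 0.5, "logS": 0.25, "synth": 0.25}

candidates = {
    "cand_1": {"pAffinity": 8.4, "logS": -4.5, "synth": 6.0},
    "cand_2": {"pAffinity": 7.0, "logS": -2.5, "synth": 8.5},
    "cand_3": {"pAffinity": 8.9, "logS": -5.8, "synth": 4.5},
}

ranked = sorted(candidates,
                key=lambda c: mpo_score(candidates[c], ranges, weights),
                reverse=True)
print(ranked)  # → ['cand_2', 'cand_1', 'cand_3']
```

Note how the potent but poorly soluble, hard-to-make cand_3 ranks last: MPO trades raw affinity off against developability, which is exactly why a single-objective ranking can mislead.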
However, these advances can widen the Velocity Gap. AI can generate thousands of high-quality candidates within hours, whereas synthesis and experimental validation remain constrained by physical throughput and GxP-aligned workflows. Consequently, the volume of in silico candidates can exceed current laboratory capacity for timely validation. The result is a backlog of unvalidated innovations, where out of thousands of AI candidates, only a handful can be physically tested, increasing the risk of potential candidates being overlooked.21–23
The Reality Gap in Preclinical Simulations
Disruptions continue into the preclinical phase. Algorithms such as Graph Neural Networks (GNNs) are used to predict toxicity and thereby reduce animal experimentation.20,21,23 However, these efforts face a “Translational Gap”: models trained on homogeneous datasets often fail to generalize to dynamic biological systems (overfitting), simply “memorizing” dataset patterns without learning underlying chemical principles.24,25 This weakness is not only an accuracy issue but also a security vulnerability; biased models are more easily manipulated, a risk discussed further under Legal Risks and Security Threats below.
Regulatory studies indicate that although regulators are starting to acknowledge organ-on-chip data, animal data continues to be the benchmark for global safety standards.26,27 This results in a dual inefficiency: companies perform AI simulations for internal evaluation yet are still required to conduct traditional animal testing for external compliance.
The Rise of Synthetic Clinical Trials
Digital twins—virtual representations of trial participants derived from longitudinal clinical data—have been proposed as a way to reduce reliance on concurrent control arms and, in specific contexts, may help improve trial efficiency while maintaining statistical power.23,27 Such approaches use patient-level modelling to generate counterfactual trajectories and support in silico simulations and power calculations intended to support regulatory evidence planning.23 Consistent with the TRL mapping (Table 1), digital twins for hyper-personalisation remain predominantly early-stage (pilot/limited deployment), with regulatory use contingent on rigorous assurance and external validation. Importantly, these gains should not be assumed universally; their acceptability depends on fit-for-purpose validation, endpoint comparability, robust provenance/traceability, and ongoing monitoring and change control under a GxP-aligned QMS.
Table 1.
Pharma Lifecycle Use-Case Matrix: Typical Evidence Levels, Technology Maturity (TRL), and Regulatory Touchpoints
| Lifecycle Domain | Capability | Typical Evidence Level | Indicative Maturity (TRL Band) | Regulatory Touchpoints | Refs |
|---|---|---|---|---|---|
| Discovery/Preclinical | In silico mutagenicity assessment and triage in line with ICH M7: application of two complementary (Q)SAR models (expert rule-based and statistical-based) to predict bacterial Ames outcomes. When predictions are discordant or inconclusive, an expert review is performed to provide scientific justification and/or to trigger confirmatory Ames testing. Outputs support ICH M7 hazard classification and the selection of an appropriate control strategy for impurities predicted to be positive. | E2–E4 | 4–9 | ICH M7 explicitly provides for two complementary (Q)SAR methodologies, supplemented by expert review, within impurity hazard assessment and control-strategy selection. | [28–30] |
| Discovery | 3D generative design uses diffusion or transformer models combined with GNNs to generate new candidate molecules and perform lead optimization, including atom and bond generation and property guidance such as QED, SA, LogP, TPSA, and affinity prediction. | E1–E2 | 3–5 | There is no detailed guidance for DGMs at the discovery stage, so they are generally positioned as decision-support or discovery tools. The focus is therefore on governance and traceability through data provenance, dataset documentation, benchmarks and evaluation metrics, controls for hallucinations and inaccuracies, and auditability of claims regarding structural validity, drug-likeness, synthetic accessibility, and model assumptions. | [31–36] |
| Discovery | Closed-loop automation with a Lab-in-a-Loop or self-driving-labs approach combines active learning with robotics and automation to accelerate the DMTA cycle. | E2–E3 | 4–6 | There are no specific guidelines or “hard guidance” for closed-loop automation at the discovery stage; regulatory expectations are generally approached through the principles of governance, validation, and conformity to intended use. The self-driving-labs and closed-loop-discovery literature shows that this field is still developing, with many pilot studies or limited implementations and ongoing challenges in integrating systems, data, and processes. | [31,37–39] |
| Discovery | Privacy-based collaboration through federated learning, such as MELLODDY, to train models across institutions without moving or sharing raw data. | E2–E3 | 4–6 | Regulatory concerns generally place more emphasis on data governance and privacy compliance, auditability and decision trails, and bias and data shift controls, than on regulating federated learning techniques themselves. | [31,40–42] |
| Preclinical (MIDD) | Utilization of AI in conjunction with PBPK and QSP for translation and decision support in MIDD, including dose determination, PK/PD prediction, and scenario simulation, with the aim of supporting model validation and development decision-making. | E2–E4 | 5–8 | PBPK has a clear regulatory framework for reporting and submission, including report structure, transparency of assumptions, and parameter justification; with available guidance, such as that from the EMA, it is more regulatory-anchored than the AI components. The use of QSP in submissions is increasing across development stages, but evidence requirements are generally assessed case by case based on the intended use and validation strategy. | [43–46] |
| Clinical | Optimization of clinical trial design and evidence augmentation using RWE, including optimization of eligibility criteria with Trial Pathfinder and the use of external controls or synthetic control arms based on RWD. | E2–E4 | 7–8 | The regulatory framework follows FDA guidance for externally controlled trials and modern GCP principles (ICH E6(R3)). Analyses typically use propensity scores (sIPTW) and incorporate sensitivity analyses. Trial Pathfinder exemplifies the use of RWD to optimize eligibility: validation on Flatiron Health data showed that broadening criteria can increase patient eligibility without changing the hazard ratio in retrospective analysis. | [43–49] |
| Clinical | Digital twins based on patient simulation and counterfactual scenarios to support hyper-personalization. | E1–E2 | 3–6 | There are no specific regulatory guidelines for digital twins, so they are generally treated as PoCs or pilots, with an emphasis on AI governance. Most reported systems remain proof-of-concept or early clinical research; fully bidirectional, continuously updated twins are uncommon. A small number have reached TRL 5–6 through limited trials in clinics or hospitals. | [50–52] |
| Manufacturing (GMP) | Smart manufacturing analytics for GMP, including predictive maintenance, process monitoring and anomaly detection, and computer vision for inspection and visual quality control. | E3–E4 | 7–9 | Regulatory frameworks are beginning to accommodate structured evaluations for AI/ML-based advanced manufacturing, including risk-based assurance, validation, and oversight approaches. Under the EU AI Act, some AI systems used in regulated industrial contexts may fall under high-risk obligations depending on intended use and sectoral linkage, which would imply stronger requirements for documentation, transparency, human oversight, and post-market monitoring. Industrial implementations report operational benefits such as accelerated test-to-release pipelines, but still require change control, data integrity, and auditable performance evidence. | [53–58] |
| Evaluation/Supply chain | QR or RFID-based traceability and provenance with interoperable electronic tracking, with the option of using blockchain as an additional layer for integrity and audit trail. | E2–E3 | 3–5 | The DSCSA encourages interoperable electronic tracking with a transition or stabilization period. Studies and pilots demonstrate the potential for increased efficiency and counterfeit drug detection, but are hampered by cost, interoperability, and scalability. In blockchain-based approaches, sensitive data is typically stored off-chain, with only hashes or pointers recorded, for example via IPFS, to maintain privacy and traceability. | [59–61] |
Notes: E1: conceptual/theoretical proposals, early prototypes. E2: proof-of-concept/retrospective benchmarks. E3: prospective evaluation/controlled pilots in relevant settings. E4: validated deployment/operational performance evidence (GxP-aligned where applicable).
Regulatory frameworks have not yet established consensus on the validity of synthetic control arms (SCAs) relative to randomized clinical trial controls, underscoring ongoing epistemological and acceptance challenges for synthetic data in drug development.62 Without a global consensus, these innovations risk delayed market access due to uncertainty about approval.
The Challenges of Smart Manufacturing in Pharma 4.0
The tension between the digital vision and physical reality is also evident in manufacturing. The Pharma 4.0 concept, built on real-time sensors, the Internet of Things (IoT), predictive maintenance, and closed-loop control, promises adaptive manufacturing driven by live process data.63 However, actual implementation is hampered by infrastructure inertia and high capital expenditures (CapEx). Legacy facilities designed for rigid batch processes are difficult to convert to closed-loop systems without costly revalidation.63,64 Pharma 4.0 also demands the ability to produce small batches, or even personalized drugs (such as 3D-printed dosage forms), cost-effectively; any variability in raw materials or process conditions must be tightly managed to avoid compromising Critical Quality Attributes (CQAs).63
Regulatory Lag and the Black Box Dilemma
The most profound systemic impact has occurred in the field of regulatory science. The crux of the problem lies in the validation of the algorithmic decisions. Deep learning models with the highest predictive accuracy are often “black box” in nature. The more complex an algorithm is, the lower its interpretability, making it difficult for regulators to validate the model’s credibility and ensure the absence of hidden biases.65 This contradicts the principle of pharmaceutical regulation, which demands transparency in causality to ensure patient safety.40,66
In the evolving EU regulatory context, AI systems used in medical device settings may be subject to high-risk obligations depending on their intended use and classification under the EU AI Act, which can include requirements related to transparency, human oversight, and robust technical documentation.67 This may be particularly relevant where the AI system functions as a safety component of, or is itself, a regulated medical device. These requirements can create a documentation and auditability burden: deep learning models with clinical potential are often challenging to justify in a regulator-ready form because of complex data dependencies, limited interpretability, and performance sensitivity to distribution shift.
This tension between predictive performance and regulatory-grade assurance (documentation, transparency, traceability, and human oversight) has intensified interest in Explainable AI (XAI) and broader assurance approaches that support traceability, external validation, and reproducible performance claims.40,67,68 Without sufficiently transparent evidence of how a model reaches decisions and under what conditions it remains reliable, AI-enabled innovations may stall during regulatory review.68,69
Legal Risks and Security Threats in AI Models
Operational issues are now spilling over into the legal and security realms. As noted in the discussion of preclinical simulations above, reliance on data creates a new vulnerability: “model poisoning.” Malicious actors can manipulate biased or unprotected training data to undermine the integrity of drug safety predictions, turning cybersecurity from an operational concern into a direct threat to patient safety.41
Furthermore, Intellectual Property (IP) uncertainty is a barrier. Recent legal analyses and the United States Patent and Trademark Office (USPTO) inventorship guidance (2024) confirm that purely AI-generated inventions are difficult to patent without “significant human contributions”, creating asset-protection risks for digital pharmaceutical companies.70–72
The Human Capital Crisis: Obsolescence of Expertise
Skill gaps and role misalignment can restrict the effective deployment, validation, and governance of AI-enabled workflows, representing a critical human-capital challenge across the entire pharmaceutical product lifecycle, from research and development to clinical translation, manufacturing, and commercialization. Global employer surveys identify skill deficits as a significant impediment to business transformation and predict substantial skills disruption in the coming years, underscoring the need for systematic upskilling and reskilling.73,74
In the biopharmaceutical sector, industry reports similarly emphasise the “crossover” between digital capabilities and domain-specific scientific expertise, as well as the shortages in computational and digital skills. Consequently, we employ the term “bilingual” to indicate cross-disciplinary translation proficiency—professionals who are capable of operationalising domain enquiries into data/Machine Learning (ML) tasks and interpreting model outputs within the context of pharmacology and clinical constraints—thereby minimising the likelihood of underutilisation or misapplication of advanced tools.75
Educational curricula must address several issues. The lack of personnel who understand the interaction between data science and pharmacy exacerbates the “black box” problem of algorithms, where AI decisions become difficult for regulators to explain and verify.65 Moreover, the transition toward Pharma 4.0 requires not only technical AI and data science skills, but also the preservation of critical thinking, clinical judgment, and ethical accountability. Overreliance on AI without a foundational understanding may undermine professional competence and increase the risk of errors with serious clinical consequences.76,77
Strategic Imperatives to Close the Velocity Gap and Improve Bio–Digital Translation
The previous section shows that today’s limited AI adoption can create validation bottlenecks, regulatory ambiguity, and talent gaps. Without ecosystem-level changes, investment in AI may not translate into higher R&D output because digital iteration can outpace evidence generation and real-world translation. Therefore, the industry must move beyond pilots toward strategic pillars that align AI’s digital velocity with operational and GxP realities.
Data Harmonization: From Fragmented Silos to “Smart Data”
Current AI models are highly dependent on the quality and accessibility of their training data. Therefore, data harmonization is a prerequisite for reducing the Bio–Digital Gap, the misalignment between in silico output and biological/manufacturing reality, often driven by inconsistent or unauditable data. Data fragmentation (“Silos”) causes models to simply “memorize” biased datasets rather than learning chemical principles. The fundamental challenge is no longer simply volume (Big Data), but interoperability (Smart Data).23
According to the principles of Data Democratization, harmonization must be implemented to break down the silos between the discovery, clinical, and manufacturing phases. Without this strategy, algorithms are vulnerable to the risk of “garbage in, garbage out”.78 The implementation of the Findable, Accessible, Interoperable, Reusable (FAIR) principles and the establishment of a Single Source of Truth (SSOT) are vital mechanisms. This ensures that every department has access to standardized data, enabling cross-functional collaboration previously hampered by data access restrictions.77 The Attributable, Legible, Contemporaneous, Original, and Accurate (ALCOA+) principles must also be applied to maintain data integrity from R&D to post-marketing.23,79
Bridging the Bio–Digital Gap: Model-Informed Validation
To address the “Translational Gap” in the preclinical phase, the industry requires a hybrid validation approach capable of detecting risks early. This risk is compounded by the limitations of model generalization, where AI may generate candidates that appear chemically plausible but have undesirable toxicity profiles or do not exhibit adequate biological activity.80 As a mitigating solution, the synergistic integration of Model-Informed Drug Development (MIDD) is emerging. This approach combines the power of data-driven AI with physiological mechanistic models (such as Quantitative Systems Pharmacology (QSP)/Physiologically Based Pharmacokinetics (PBPK)) to simulate drug interactions in virtual populations before conducting physical trials.81 Furthermore, the importance of rigorous In-Silico Validation protocols—using molecular dynamics simulations as an initial filter—before drug candidates enter the bottleneck of physical experiments minimizes the attrition rate in later stages.82
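As intuition for how mechanistic models screen candidates in virtual populations before physical trials, the toy sketch below simulates a one-compartment oral PK model across 200 virtual subjects with log-normal variability in clearance. All parameters (dose, bioavailability, clearance, volume) are hypothetical illustrations, and the model is far simpler than a regulatory-grade PBPK model.

```python
import math
import random

def conc(t, dose, f_oral, ka, cl, v):
    """Plasma concentration (mg/L) for a one-compartment oral model."""
    ke = cl / v  # elimination rate constant (1/h)
    return (f_oral * dose * ka) / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

random.seed(0)
# Virtual population: log-normal between-subject variability on clearance.
subjects = [{"cl": 5.0 * math.exp(random.gauss(0, 0.3)), "v": 50.0}
            for _ in range(200)]

times = [0.5 * i for i in range(1, 49)]  # sample 0.5 h to 24 h post-dose
cmax = sorted(
    max(conc(t, dose=100.0, f_oral=0.8, ka=1.2, cl=s["cl"], v=s["v"]) for t in times)
    for s in subjects
)
print(f"median simulated Cmax ≈ {cmax[len(cmax) // 2]:.2f} mg/L")
```

In a real MIDD workflow, distributions like this Cmax spread would feed dose-selection and exposure-margin decisions before any physical trial is run.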
Regulatory Innovation: Sandboxes and Glass Box AI
The tension between “black box” model-based innovation and regulators’ expectations for auditability and traceable evidence requires new trust-building mechanisms. One approach gaining prominence is the regulatory sandbox, which allows controlled trials with regulators to explore pathways for evidence, validation, and post-market surveillance without exposing patients to unnecessary risks. A concrete example is the MHRA AI Airlock in the UK, a sandbox for AI as a Medical Device (AIaMD) that launched in 2024 and ran a pilot phase until 2025, producing a programme report and initial recommendations on AI-specific regulatory issues. Phase 2 includes seven technologies; candidates are expected to complete testing in the Airlock by March 2026, and the Phase 2 programme runs until April 2026. Phase 2 explicitly targets challenges relevant to AIaMD, including managing evolving models, regulating AI for diagnostics, and strengthening post-market surveillance, so the sandbox serves as a space to generate practical, policy-relevant lessons that can inform subsequent regulatory approaches and technical guidance.83,84
A similar position is emerging in the European Union: the EU AI Act adopts a risk-based approach, under which AI systems that are part of medical devices or serve as safety components can be categorized as high-risk depending on the relevant classification and conformity assessment pathway, with obligations phasing in gradually.69 In this context, XAI and broader “assurance” approaches (such as auditable technical documentation, cross-context performance evaluation, drift monitoring, and change control) are critical for bridging the gap between predictive performance and regulatory evidence requirements. Rather than treating XAI as a single solution, a more realistic framework is a proof package that combines explainability with external validation, provenance/traceability, and model lifecycle governance so that AI-based decisions can be assessed credibly in a GxP setting.22,40
Workforce Transformation: Cultivating the Cross-Disciplinary Scientist
Technological disruption demands a transformation of human capital to reduce friction in collaboration between pharmacologists and data scientists. In this article, we use the term “bilingual scientist” operationally to refer to a cross-disciplinary profile capable of (i) understanding core data/AI concepts (such as data quality, applied statistics, interpretability, and model performance evaluation) and (ii) translating pharmacological/biomedical scientific needs and constraints (such as biological validity, study design, and clinical significance) into precise analytical specifications.75–77
This need for interdisciplinary competencies is relevant not only in early research but also at stages close to high-stakes decisions (such as clinical trial design and monitoring, evidence generation strategies, and validation/Quality Assurance (QA) processes for model use in workflows). Within the Human-in-the-Loop framework, the role of interdisciplinary scientists can be focused on specific governance functions: establishing scientific validity criteria, ensuring adequate verification/validation processes, and managing the risk of bias and failure modes before analytical results are used for patient-impacting decisions.
Proposed Solution: Lab-in-a-Loop
To overcome the translation bottleneck, we propose a closed-loop discovery (“Lab-in-a-Loop”) approach. In this system, AI predictions are validated directly by automated experimentation, and the results are fed back to retrain the model in near real time.23,85 This mechanism uses Active Learning, where AI strategically selects the most informative experiments to reduce model uncertainty. Operating 24/7, the system drastically narrows the gap between digital design and empirical evidence.86,87
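A minimal sketch of this closed loop, assuming a one-dimensional design space and a distance-based acquisition heuristic as a crude stand-in for model uncertainty (real systems use ensembles, Gaussian processes, or Bayesian optimisation), with the assay replaced by a hidden synthetic ground truth:

```python
import random

def run_assay(x):
    """Stand-in for the physical experiment: hidden optimum near x = 0.62."""
    return -(x - 0.62) ** 2 + random.gauss(0, 0.01)

random.seed(1)
pool = [i / 100 for i in range(101)]  # discretised candidate design space
measured = {}                         # candidate -> assay result

for x0 in (0.0, 1.0):                 # seed the loop at the extremes
    measured[x0] = run_assay(x0)

for cycle in range(10):
    # Acquisition: choose the unmeasured candidate farthest from any
    # measured point, a crude proxy for where the model is least certain.
    x_next = max((x for x in pool if x not in measured),
                 key=lambda x: min(abs(x - m) for m in measured))
    measured[x_next] = run_assay(x_next)  # "run" the selected experiment

best_x = max(measured, key=measured.get)
print(f"best candidate after {len(measured)} experiments: x = {best_x:.2f}")
```

The point of the sketch is budget efficiency: instead of screening all 101 candidates, the loop locates the neighbourhood of the optimum with 12 experiments by always spending the next assay where information gain is highest.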
The transition to autonomous systems faces significant implementation barriers. Financially, building an integrated robotics facility requires a significant initial capital investment (CapEx), which can be prohibitive for budget-constrained mid-sized companies. Technically, integrating legacy laboratory hardware with modern AI algorithms poses non-trivial interoperability challenges. Furthermore, most current implementations of autonomous laboratory systems remain at a maturity level ranging from proof-of-concept to limited validation in relevant environments (Technology Readiness Level (TRL) 4–6), and have not yet reached full-scale industry standard status (TRL 9).
Intermediary Technology: Blockchain as a Data-Integrity Layer in the AI Ecosystem
While the ALCOA+ and FAIR principles provide the foundation for internal data quality, blockchain can act as an intermediary layer when data needs to cross organizational boundaries. In this context, blockchain does not replace existing Good Practice (GxP) systems but complements them with three key functions. First, as a digital “notary”, blockchain provides a tamper-evident record, helping to demonstrate that data from Manufacturing Execution Systems (MES) or Laboratory Information Management Systems (LIMS) has not undergone undocumented changes after export from internal databases. Second, blockchain enables verifiable provenance, ensuring that the origins of AI data—for example, sensor data from raw-material suppliers—carry a transparent and auditable track record. Third, blockchain supports trustless collaboration, allowing organizations to share data with external partners without a central authority while maintaining accountability to regulatory standards.88,89 Published assessments indicate that most solutions remain exploratory (around TRL 3), suggesting blockchain is currently better suited to pilot or limited-scale deployments than to use as a standard GxP-ready platform.90
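The “digital notary” function can be sketched as a simple hash chain, in which each exported record commits to the hash of its predecessor. This is an illustrative stand-in for a blockchain ledger (batch identifiers and sensor values below are invented), not a GxP-ready implementation.

```python
import hashlib
import json

def record_hash(payload, prev_hash):
    """Hash the payload together with the previous entry's hash."""
    blob = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def append(ledger, payload):
    """Append a record that commits to the hash of its predecessor."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"data": payload, "prev": prev,
                   "hash": record_hash(payload, prev)})

def verify(ledger):
    """Recompute the chain; any undocumented edit breaks verification."""
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev or entry["hash"] != record_hash(entry["data"], prev):
            return False
        prev = entry["hash"]
    return True

# Hypothetical MES/LIMS export records (batch ID and values are invented).
ledger = []
append(ledger, {"batch": "LOT-001", "sensor": "temp_C", "value": 4.8})
append(ledger, {"batch": "LOT-001", "sensor": "temp_C", "value": 5.1})
print(verify(ledger))             # True: chain is intact

ledger[0]["data"]["value"] = 9.9  # an undocumented post-hoc edit
print(verify(ledger))             # False: tampering is detected
```

Consistent with the off-chain storage pattern noted in Table 1, only hashes like these would be anchored to a shared ledger, while the raw records stay inside the originating GxP system.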
Future Trajectories: The Second Wave of Disruption
The integration of AI in pharmaceutical sciences is currently in its formative phase. While its long-term potential is promising, we propose that the future of the industry will not be an instant “wave of disruption”, but rather a gradual evolution dependent on overcoming current technical barriers. This section outlines a future scenario based on technological plausibility, not inevitability, in which the convergence of digital biology and advanced computing could redefine the boundaries of precision medicine.
From Population to Hyper-Personalization
Precision medicine is increasingly moving beyond “one size fits all”, and digital-twin concepts combined with multi-omics data have been proposed to support patient-specific in silico scenario testing. Most digital-twin use for hyper-personalisation remains early-stage, and credible clinical use requires external validation, endpoint comparability, provenance/traceability, and ongoing monitoring under appropriate governance.91,92
In drug discovery, generative AI is progressing rapidly but still faces limitations (such as bias and hallucination-like errors). Near-term value is best framed as decision support for lead ideation and prioritisation with experimental confirmation; patient-specific molecular optimisation should be regarded as aspirational until demonstrated prospectively and reproducibly.93
The Convergence of Computing: Quantum and Edge AI
Current limitations of high-fidelity molecular simulation on classical computers have motivated interest in quantum computing and Quantum Machine Learning (QML) as potential enablers of future acceleration, particularly for specific subroutines in optimization and quantum chemistry. Quantum advantage for drug discovery–relevant tasks (such as protein folding, binding affinity prediction, or large-scale combinatorial search) remains unproven in real-world, noisy hardware, and most applications today are best viewed as early-stage (TRL 2–4) hybrid workflows that may complement—rather than replace—state-of-the-art classical methods. Accordingly, near-term value is more plausibly framed around targeted use cases, careful benchmarking, and “proof packages” that demonstrate reproducible performance gains under realistic constraints, before claims about expanded chemical space or supercomputer-level validation can be made.94,95
In parallel, data architectures for healthcare AI are increasingly moving toward hybrid cloud–edge designs. Within Internet of Medical Things (IoMT) settings, certain processing can be performed locally on devices to reduce latency and support privacy-preserving workflows, while centralized infrastructure remains important for model training, audit logging, and governance. For high-risk functions—such as adaptive dosing or closed-loop decision support—deployment feasibility depends on rigorous external validation, cybersecurity assurance, drift monitoring, and change control under a regulated quality system, rather than on edge computing alone.96,97
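The local processing described above often amounts to on-device triage: the edge node screens a stream of sensor readings and escalates only out-of-range events to centralized infrastructure, reducing latency and the volume of raw data leaving the device. The sketch below is a hypothetical illustration of this pattern (the class name, window size, and z-score threshold are illustrative assumptions, not a validated clinical algorithm).

```python
from collections import deque
from statistics import mean, stdev

class EdgeTriage:
    """Hypothetical edge-side screen: keep a rolling window of recent
    sensor readings locally and forward only statistically unusual
    events upstream for centralized review."""

    def __init__(self, window: int = 50, z_limit: float = 3.0):
        self.buffer = deque(maxlen=window)  # bounded local memory
        self.z_limit = z_limit

    def ingest(self, value: float) -> bool:
        """Return True if the reading should be escalated to the cloud."""
        escalate = False
        if len(self.buffer) >= 10:  # need a minimal baseline first
            mu, sigma = mean(self.buffer), stdev(self.buffer)
            if sigma > 0 and abs(value - mu) / sigma > self.z_limit:
                escalate = True
        self.buffer.append(value)
        return escalate
```

The design choice worth noting is that the device never needs connectivity to make the escalation decision itself, while model retraining, audit logging, and governance of the threshold remain centralized, consistent with the hybrid architecture described above.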
Trust Architecture: Blockchain and Regulatory Science 2.0
A digital trust layer—covering provenance/traceability, audit trails, monitoring, and change control—may be needed to safely scale an increasingly autonomous ecosystem. In parallel, “Regulatory Science 2.0” is increasingly discussed as a shift toward a lifecycle-oriented validation model for AI-enabled systems.81 In this framing, regulatory assurance may extend beyond static pre-deployment documentation to include ongoing performance monitoring, periodic audits, drift detection, and controlled updates, while continuing to rely on fit-for-purpose documentation and risk-based review.
In this context, explainability and broader assurance practices are likely to become increasingly expected in regulated clinical AI workflows, supporting transparency, auditability, and fit-for-purpose evidence packages for machine-assisted decision-making.66 In parallel, regulatory sandbox models are being used to enable regulator–developer collaboration in a controlled setting—for example, the MHRA’s AI Airlock programme provides a supervised environment to test and learn from AI as a medical device and to generate practical regulatory insights ahead of, and alongside, market access planning.72
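One widely used building block for the drift detection mentioned in this lifecycle framing is the Population Stability Index (PSI), which compares the distribution of a model input at training time against the live production distribution. The sketch below is illustrative only; the conventional 0.1/0.25 interpretation thresholds are rules of thumb from industry practice, not regulatory requirements.

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference (training-time)
    distribution and a live (production) distribution.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def bin_fractions(data: list[float]) -> list[float]:
        counts = [0] * bins
        for x in data:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(data), 1e-6) for c in counts]

    e, o = bin_fractions(expected), bin_fractions(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))
```

In a lifecycle-oriented assurance package, a metric of this kind would be computed on a schedule, logged to the audit trail, and tied to predefined change-control actions when the drift threshold is crossed.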
Towards Industry 5.0: The Human-Centric Evolution
Realizing the potential of AI-powered pharmaceutical operations requires complementing automation with a human-centered approach consistent with Industry 5.0 principles. This implies redesigning roles so that digital tools augment human capabilities (such as decision-making support, pattern recognition, and workflow automation) while accountability, ethical judgment, and contextual interpretation remain with humans (human-in-the-loop).64 Rapid technological advances are creating new jobs that require new skills, while simultaneously eliminating existing ones. This dynamic nature of the labor market emphasizes the need for workers to continuously adapt.98
Therefore, we recommend a hybrid curriculum that systematically integrates domain science with data/AI competencies rather than treating them as electives. This recommendation aligns with industry reports noting the growing demand for data science/data analytics capabilities in the biopharmaceutical sector.75–77 Practically, the curriculum can be operationalized into (i) basic data literacy and statistics for scientists, (ii) applied ML/AI concepts with validation and bias awareness, and (iii) interdisciplinary project-based training using datasets relevant to the pharmaceutical industry to strengthen collaboration between field experts and data specialists.99,100
Conclusion
This review confirms that the impact of AI on drug discovery, development, and manufacturing is determined not only by algorithmic advances, but by the ecosystem’s ability to generate auditable evidence and integrate AI into GxP-based workflows. The primary gap lies in the misalignment between the speed of digital iteration and the capacity for real-world validation and translation. Consequently, the benefits of AI will not be realized if predictive models stand alone without standardized data, clear provenance, and model lifecycle governance. Therefore, the most realistic implementation path is a closed-loop architecture (Lab-in-a-Loop) that connects prediction, experimental design, automation, and test result feedback, supported by an assurance package that includes documentation, validation, drift monitoring, and change control. Importantly, many of these enablers remain uneven in maturity and are still under active development, with adoption varying by use case and regulatory context. Nevertheless, the trajectory is toward more robust lifecycle assurance and better-integrated workflows as evidence standards, tooling, and governance practices continue to evolve. Ultimately, meaningful productivity will emerge when AI is used to scalably guide experiments and operational decisions—transforming computational efficiency into replicable biological evidence and therapeutic benefit.
Funding Statement
The authors gratefully acknowledge Padjadjaran University, Indonesia, for funding and supporting this research project. This research was funded by the Equity Review Article Grant-WCU Padjadjaran University with contract No. 4000/UN6.3.1/PT.00/2025.
Data Sharing Statement
Data sharing not applicable – no new data generated, or the article describes entirely theoretical research.
Author Contributions
The authors made a significant contribution to the work reported, including conception, study design, execution, acquisition of data, analysis, and interpretation; participated in drafting, revising, and critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.
Disclosure
The authors declare no conflicts of interest.
References
- 1.Fernald KDS, Förster PC, Claassen E, van de Burgwal LHM. The pharmaceutical productivity gap – incremental decline in R&D efficiency despite transient improvements. Drug Discov Today. 2024;29(11):104160. doi: 10.1016/j.drudis.2024.104160 [DOI] [PubMed] [Google Scholar]
- 2.Sun D, Gao W, Hu H, Zhou S. Why 90% of clinical drug development fails and how to improve it? Acta Pharm Sin B. 2022;12(7):3049–14. doi: 10.1016/j.apsb.2022.02.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Wallach I, Abdelhalim H, Patel K, et al. AI is a viable alternative to high throughput screening: a 318-target study. Sci Rep. 2024;14(1):1–16. doi: 10.1038/s41598-023-50600-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Awwad O, Ahram M, Coperchini F, Jalil MA. Editorial: precision medicine: recent advances, current challenges and future perspectives. Front Pharmacol. 2024;15:1–3. doi: 10.3389/fphar.2024.1439276 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Molla G, Bitew M. Revolutionizing personalized medicine: synergy with multi-omics data generation, main hurdles, and future perspectives. Biomedicines. 2024;12(12):1–30. doi: 10.3390/biomedicines12122750 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Costa V, Custodio MG, Gefen E, Fregni F. The relevance of the real-world evidence in research, clinical, and regulatory decision making. Front Public Health. 2025;13:1–6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Franco LS, de Jesus BSM, Pinheiro PSM, Fraga CAM. Remapping the chemical space and the pharmacological space of drugs: what can we expect from the road ahead? Pharmaceuticals. 2024;17(6):742. doi: 10.3390/ph17060742 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Gryn’ova G, Bereau T, Müller C, et al. EDITORIAL: chemical compound space exploration by multiscale high-throughput screening and machine learning. J Chem Inf Model. 2024;64(15):5737–5738. doi: 10.1021/acs.jcim.4c01300 [DOI] [PubMed] [Google Scholar]
- 9.Bikou AG, Deligianni E, Dermiki-Gkana F, et al. Improving participant recruitment in clinical trials: comparative analysis of innovative digital platforms. J Med Internet Res. 2024;26:e60504. doi: 10.2196/60504 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Hung M, Mohajeri A, Almpani K, et al. Successes and challenges in clinical trial recruitment: the experience of a new study team. Med Sci. 2024;12:39. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Mehta V, Komanduri A, Bhadouriya RS, et al. Evaluating transparency in AI/ML model characteristics for FDA-reviewed medical devices. NPJ Digit Med. 2025;8(1):1–8. doi: 10.1038/s41746-025-02052-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Alderman JE, Palmer J, Laws E, et al. Tackling algorithmic bias and promoting transparency in health datasets: the STANDING together consensus recommendations. Lancet Digit Heal. 2025;7(1):e64–e88. doi: 10.1016/S2589-7500(24)00224-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Jarallah SJ, Almughem FA, Alhumaid NK, et al. Artificial intelligence revolution in drug discovery: a paradigm shift in pharmaceutical innovation. Int J Pharm. 2025;680:125789. doi: 10.1016/j.ijpharm.2025.125789 [DOI] [PubMed] [Google Scholar]
- 14.Wang H, Meng X, Zhang Y. Biomolecular interaction prediction: the era of AI. Adv Sci. 2025;12:e09501. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Nyangiwe NN. The role of computational materials science in achieving sustainable development goals: a review. Next Res. 2025;2(3):100692. doi: 10.1016/j.nexres.2025.100692 [DOI] [Google Scholar]
- 16.Tobias AV, Wahab A. Autonomous ‘self-driving’ laboratories: a review of technology and policy implications. R Soc Open Sci. 2025;12(7). doi: 10.1098/rsos.250646 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.US Food and Drug Administration. Good machine learning practice for medical device development: guiding principles. FDA 1. 2023. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles. Accessed March 10, 2026.
- 18.Bhat AR, Ahmed S. Artificial intelligence (AI) in drug design and discovery: a comprehensive review. Silico Res Biomed. 2025;1:100049. doi: 10.1016/j.insi.2025.100049 [DOI] [Google Scholar]
- 19.Edriss A, Yarra S, Vomo JA, Ismael K, Elshiekh YB. AI-powered nano formulation: revolutionizing drug development and delivery. Int J Sci Res Arch. 2025;14(2):1501–1512. doi: 10.30574/ijsra.2025.14.2.0491 [DOI] [Google Scholar]
- 20.Pathan I, Raza A, Sahu A, et al. Revolutionizing pharmacology: AI-powered approaches in molecular modeling and ADMET prediction. Med Drug Discov. 2025;28:100223. doi: 10.1016/j.medidd.2025.100223 [DOI] [Google Scholar]
- 21.Das U. Generative AI for drug discovery and protein design: the next frontier in AI-driven molecular science. Med Drug Discov. 2025;27:100213. doi: 10.1016/j.medidd.2025.100213 [DOI] [Google Scholar]
- 22.Ferreira FJN, Carneiro AS. AI-driven drug discovery: a comprehensive review. ACS Omega. 2025;10(23):23889–23903. doi: 10.1021/acsomega.5c00549 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Manan A, Baek E, Ilyas S, Lee D. Digital alchemy: the rise of machine and deep learning in small-molecule drug discovery. Int J Mol Sci. 2025;26(14):1–42. doi: 10.3390/ijms26146807 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Herdiana Y. Bridging the gap: the role of advanced formulation strategies in the clinical translation of nanoparticle-based drug delivery systems. Int J Nanomedicine Int. 2025;20:13039–13053. doi: 10.2147/IJN.S554821 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Niazi SK. Artificial intelligence in small-molecule drug discovery: a critical review of methods, applications, and real-world outcomes. Pharmaceuticals. 2025;18(9):1271. doi: 10.3390/ph18091271 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Rana P, Hollingshead B, Mangipudy R. Rethinking the necessity of long-term toxicity studies for biotherapeutics using weight of evidence assessment. Regul Toxicol Pharmacol. 2024;153:105710. doi: 10.1016/j.yrtph.2024.105710 [DOI] [PubMed] [Google Scholar]
- 27.Sewell F, Alexander-White C, Brescia S, et al. New approach methodologies (NAMs): identifying and overcoming hurdles to accelerated adoption. Toxicol Res. 2024;13(2):1–9. doi: 10.1093/toxres/tfae044 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.US Food & Drug Administration. Guidance for Industry: M7(R2) Assessment and Control of DNA Reactive (Mutagenic) Impurities in Pharmaceuticals to Limit Potential Carcinogenic Risk. Food Drug Adm. 2018;7. [Google Scholar]
- 29.Chintalapati KR, Jaywant MA, Mannino GG, et al. In silico approach for the identification and control of potential mutagenic impurities in drug substances: a lansoprazole case study. Org Process Res Dev. 2026;30(1):45–60. doi: 10.1021/acs.oprd.5c00197 [DOI] [Google Scholar]
- 30.Honma M. Guidelines for the assessment and control of mutagenic impurities in pharmaceuticals. Genes Environ. 2025;47(1):1–11. doi: 10.1186/s41021-025-00349-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.European Medicines Agency. Reflection paper on the use of ai in the medicinal product lifecycle. vol. 31. 2024. Available from: https://www.ema.europa.eu/en/use-artificial-intelligence-ai-medicinal-product-lifecycle-scientific-guideline. Accessed March 10, 2026.
- 32.Hu Q, Liang K, Zhao H, et al. Target-aware 3D molecular generation based on guided equivariant diffusion. Nat Commun. 2025;16(1):1–17. doi: 10.1038/s41467-024-52768-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Reidenbach D, Nikitin F, Isayev O, Paliwal SG. Applications of modular co-design for de novo 3D molecule generation. Digit Discov. 2026;5(2):754–768. doi: 10.1039/d5dd00380f [DOI] [Google Scholar]
- 34.Wang R, Zhuang C. Graph neural networks driven acceleration in drug discovery. Acta Pharm Sin B. 2025;15(12):6163–6177. doi: 10.1016/j.apsb.2025.10.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Abbasi K, Razzaghi P, Gharizadeh A, et al. Computational drug design in the artificial intelligence era: a systematic review of molecular representations, generative architectures, and performance assessment. Pharmacol Rev. 2026;78(1):100095. doi: 10.1016/j.pharmr.2025.100095 [DOI] [PubMed] [Google Scholar]
- 36.Sheshanarayana R, You F. Molecular representation learning: cross-domain foundations and future frontiers. Digit Discov. 2025;4(9):2298–2335. doi: 10.1039/D5DD00170F [DOI] [Google Scholar]
- 37.Kim Y, Doo H, Shin D, et al. Self-driving laboratories with artificial intelligence: an overview of process systems engineering perspective. Comput Chem Eng. 2025;203:109266. [Google Scholar]
- 38.Adesiji AD, Wang J, Kuo C-S, Brown KA. Benchmarking self-driving labs. Digit Discov. 2026;5. [Google Scholar]
- 39.Li J, Ding C, Liu D, Chen L, Jiang J. Autonomous laboratories in China: an embodied intelligence-driven platform to accelerate chemical discovery. Digit Discov. 2025;4(7):1672–1684. doi: 10.1039/D5DD00072F [DOI] [Google Scholar]
- 40.Nair S. Explainable AI in Gxp validation: balancing automation, traceability, and regulatory trust in the pharmaceutical industry. Clin Med Heal Res J. 2025;5(05):1430–1442. doi: 10.18535/cmhrj.v5i05.509 [DOI] [Google Scholar]
- 41.Drakshpalli R. AI-driven threat detection in pharmaceutical R and D: mitigating cyber risks in drug discovery platforms. Glob J Eng Technol Adv. 2025;23(3):048–062. doi: 10.30574/gjeta.2025.23.3.0176 [DOI] [Google Scholar]
- 42.Heyndrickx W, Mervin L, Morawietz T, et al. MELLODDY: cross-pharma federated learning at unprecedented scale unlocks benefits in QSAR without compromising proprietary information. J Chem Inf Model. 2024;64(7):2331–2344. doi: 10.1021/acs.jcim.3c00799 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Lee KH, Jang S, Kim GJ, et al. Large language models for automating clinical trial criteria conversion to OMOP CDM queries: accuracy and efficiency evaluation (Preprint). JMIR Med Inform. 2025;13:1–17. doi: 10.2196/71252 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Liu R, Rizzo S, Whipple S, et al. Evaluating eligibility criteria of oncology trials using real-world data and AI. Nature. 2021;592(7855):629–633. doi: 10.1038/s41586-021-03430-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Teodoro D, Naderi N, Yazdani A, Zhang B, Bornet A. A scoping review of artificial intelligence applications in clinical trial risk assessment. NPJ Digit Med. 2025;8(1). doi: 10.1038/s41746-025-01886-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Kanapari A, Lorenzoni G, Ocagli H, Gregori D. Current applications and future challenges of machine learning and artificial intelligence in clinical trials: a scoping review. Digit Heal. 2025;11:1–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Liu J, Yao M, Wang M, et al. Design, conduct, and analysis of externally controlled trials. JAMA Network Open. 2025;8(9):e2530277. doi: 10.1001/jamanetworkopen.2025.30277 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Lin L, Lucassen MJJ, van der Noort V, et al. The feasibility of using real world data as external control arms in oncology trials. Drug Discov Today. 2025;30(3):104324. doi: 10.1016/j.drudis.2025.104324 [DOI] [PubMed] [Google Scholar]
- 49.Letailleur V, Chaillol I, Cherblanc F, et al. Synthetic control arm from mixed clinical trials and real-world data from the LYSA group for untreated diffuse large B-cell lymphoma patients aged over 80 years: a bona fide strategy for innovative clinical trials. Blood Cancer J. 2025;15(1):1–10. doi: 10.1038/s41408-025-01374-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Kabir MR, Shishir FS, Shomaji S, Ray S. Digital twins in healthcare IoT: a systematic review. High-Confidence Comput. 2025;5(3):100340. doi: 10.1016/j.hcc.2025.100340 [DOI] [Google Scholar]
- 51.Tudor BH, Shargo R, Gray GM, et al. A scoping review of human digital twins in healthcare applications and usage patterns. NPJ Digit Med. 2025;8(1). doi: 10.1038/s41746-025-01910-w [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Ringeval M, Sosso FAE, Cousineau M, Paré G. Advancing health care with digital twins: meta-review of applications and implementation challenges. J Med Internet Res. 2025;27:1–12. doi: 10.2196/69544 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Niazi SK. Regulatory Perspectives for AI/ML implementation in pharmaceutical GMP environments. Pharmaceuticals. 2025;18(6):1–13. doi: 10.3390/ph18060901 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Lombardini G, Badr S, Schmid C, Knüppel S, Sugiyama H. Anomaly detection for drug product manufacturing considering data limitations and shifts: a case study on industrial freeze-dryers. Comput Chem Eng. 2025;198:109106. doi: 10.1016/j.compchemeng.2025.109106 [DOI] [Google Scholar]
- 55.Fitzgerald L, Niarchou E, Jones I, Naughton B. Emerging digital innovations in pharmaceutical manufacturing quality: a systematised review. J Pharm Innov. 2026;21(1). doi: 10.1007/s12247-025-10272-5 [DOI] [Google Scholar]
- 56.Vijayakumar A, Koilraj JAS, Rajappa M. PharmaNet deep: real-time pharmaceutical defect detection using defect-guided feature fusion and uncertainty-driven inspection. Int J Comput Intell Syst. 2025;18(1):1–30. doi: 10.1007/s44196-025-00986-2 [DOI] [Google Scholar]
- 57.Ribeiro AG, Vilaça L, Costa C, Soares da Costa T, Carvalho PM. Automatic visual inspection for industrial application. J Imaging. 2025;11(10):1–30. doi: 10.3390/jimaging11100350 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.US Food & Drug Administration. Artificial intelligence in manufacturing. Discussion paper vol. 1. 2023.
- 59.Kutybayeva K, Razaque A, Rai HM. Enhancing pharmaceutical supply chain transparency and security with blockchain and big data integration. Procedia Comput Sci. 2025;259:1511–1522. doi: 10.1016/j.procs.2025.04.106 [DOI] [Google Scholar]
- 60.Hassam R, Muhammad A, Habib A, Khalil A. Securing drug distribution from manufacturer to patient: a blockchain ‑ based model for counterfeit drugs detection, tracking and dispensing accuracy. J Supercomput. 2026;82:1–40. [Google Scholar]
- 61.US Food & Drug Administration. DSCSA standards for the interoperable exchange of information for tracing of certain human, finished, prescription drugs. Guid Ind. 2022;1–9. [Google Scholar]
- 62.Pasculli G, Virgolin M, Myles P, et al. Synthetic data in healthcare and drug development: definitions, regulatory frameworks, issues. CPT Pharmacometrics Syst Pharmacol. 2025;14(5):840–852. doi: 10.1002/psp4.70021 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63.Balam R, Mahesh P, Gandhi K, Poojary SG, Chandran A, Vadakkepushpakath AN. Pharma 4.0: enhancing process robustness in pharmaceutical manufacturing through Industry 4.0 integration. J Young Pharm. 2025;17(4):784–789. doi: 10.5530/jyp.20250104 [DOI] [Google Scholar]
- 64.Phiri VJ, Battas I, Semmar A, Medromi H, Moutaouakkil F. Towards enterprise-wide pharma 4.0 adoption. Sci African. 2025;28:e02771. [Google Scholar]
- 65.Fu L, Jia G, Liu Z, Pang X, Cui Y. The applications and advances of artificial intelligence in drug regulation: a global perspective. Acta Pharm Sin B. 2025;15(1):1–14. doi: 10.1016/j.apsb.2024.11.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66.Xu H, Shuttleworth KMJ. Medical artificial intelligence and the black box problem: a view based on the ethical principle of “do no harm”. Intell Med. 2024;4(1):52–57. doi: 10.1016/j.imed.2023.08.001 [DOI] [Google Scholar]
- 67.Garcia-Gomez JM, Blanes-Selva V, Alvarez Romero C, et al. Mitigating patient harm risks: a proposal of requirements for AI in healthcare. Artif Intell Med. 2025;167:103168. doi: 10.1016/j.artmed.2025.103168 [DOI] [PubMed] [Google Scholar]
- 68.Lakhan SE. In silico research is rewriting the rules of drug development: is it the end of human trials? Cureus. 2025;17(5):e84007. doi: 10.7759/cureus.84007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 69.Aboy M, Minssen T, Vayena E. Navigating the EU AI Act: implications for regulated digital medical products. NPJ Digit Med. 2024;7(1):1–6. doi: 10.1038/s41746-024-01232-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70.Lim A. Artificial intelligence in drug discovery. Regul Rapp. 2024;21:1–10. [Google Scholar]
- 71.Wang J. Navigating the USPTO’s AI inventorship guidance in AI-driven drug discovery. J Law Biosci. 2025;12(2). doi: 10.1093/jlb/lsaf014 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 72.Lenarczyk G, Minssen T, Price N, Rai A. The future of AI regulation in drug development: a comparative analysis. J Law Biosci. 2025;12(2). doi: 10.1093/jlb/lsaf028 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Maclean N, Abrahmsén-Alami S, Clark C, et al. Empowering the pharmaceutical workforce for the digital future. Eur J Pharm Sci. 2026;107449. doi: 10.1016/j.ejps.2026.107449 [DOI] [PubMed] [Google Scholar]
- 74.Úbeda-García M, Marco-Lajara B, Zaragoza-Sáez PC, Poveda-Pareja E. Artificial intelligence, knowledge and human resource management: a systematic literature review of theoretical tensions and strategic implications. J Innov Knowl. 2025;10(6):100809. doi: 10.1016/j.jik.2025.100809 [DOI] [Google Scholar]
- 75.Gong J, Zhao Z, Niu X, et al. AI reshaping life sciences: intelligent transformation, application challenges, and future convergence in neuroscience, biology, and medicine. Front Digit Heal. 2025;7:1666415. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.Alghalbie F, Elnaem MH, McCarron PA. Perspectives on artificial intelligence use in pharmacy education in Northern Ireland: a qualitative study based on the unified theory of acceptance and use of technology. Curr Pharm Teach Learn. 2026;18(1):102490. doi: 10.1016/j.cptl.2025.102490 [DOI] [PubMed] [Google Scholar]
- 77.Malviya N, Malviya S, Dhere M. Transformation of pharma curriculum as per the anticipation of pharma industries-need to empower fresh breeds with globally accepted pharma syllabus, soft skills, AI and hands-on training. Indian J Pharm Educ Res. 2023;57(2):320–328. doi: 10.5530/ijper.57.2.41 [DOI] [Google Scholar]
- 78.Thorlund K, Dron L, Park JJH, Mills EJ. Synthetic and external controls in clinical trials – a primer for researchers. Clin Epidemiol. 2020;12:457–467. doi: 10.2147/CLEP.S242097 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 79.Santa Maria JP, Wang Y, Camargo LM. Perspective on the challenges and opportunities of accelerating drug discovery with artificial intelligence. Front Bioinforma. 2023;3:1–5. doi: 10.3389/fbinf.2023.1121591 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 80.Yuan S, Färber M. Can hallucinations help? Boosting LLMs for drug discovery. Arxiv. 2025;1. [Google Scholar]
- 81.Raman K, Kumar R, Musante CJ, Madhavan S. Integrating model-informed drug development with ai: a synergistic approach to accelerating pharmaceutical innovation. Clin Transl Sci. 2025;18(1):1–7. doi: 10.1111/cts.70124 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 82.Sodaei Z, Ekrami S, Hashemianzadeh SM. Machine learning analysis of molecular dynamics properties influencing drug solubility. Sci Rep. 2025;15(1):1–15. doi: 10.1038/s41598-025-11392-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 83.Medicines and Healthcare products Regulatory Agency (MHRA). AI breakthroughs drive expansion of ‘Airlock’ testing programme to support AI-powered healthcare innovation. Press Release. 2025. Available from: https://www.gov.uk/government/news/ai-breakthroughs-drive-expansion-of-airlock-testing-programme-to-support-ai-powered-healthcare-innovation. Accessed March 10, 2026.
- 84.Medicines and Healthcare products Regulatory Agency (MHRA). AI airlock phase 2 cohort. Guidance 1. 2025. Available from: https://www.gov.uk/government/publications/ai-airlock-phase-2-cohort. Accessed March 10, 2026.
- 85.Tan Z, Yang Q, Luo S. AI molecular catalysis: where are we now? Org Chem Front. 2025;12(8):2759–2776. doi: 10.1039/D4QO02363C [DOI] [Google Scholar]
- 86.Tom G, Schmid SP, Baird SG, et al. Self-driving laboratories for chemistry and materials science. Chem Rev. 2024;124(16):9633–9732. doi: 10.1021/acs.chemrev.4c00055 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 87.Ha T, Lee D, Kwon Y, et al. AI-driven robotic chemist for autonomous synthesis of organic molecules. Sci Adv. 2023;9(44):28–31. doi: 10.1126/sciadv.adj0461 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 88.Kumar N, Kumar K, Aeron A, Verre F. Blockchain technology in supply chain management: innovations, applications, and challenges. Telemat Informatics Reports. 2025;18:100204. doi: 10.1016/j.teler.2025.100204 [DOI] [Google Scholar]
- 89.Loss S, Cardoso L, Cacho N, Lopes F. Pharmaceutical audit trail blockchain-based microservice. In 16th International Joint Conference on Biomedical Engineering Systems andTechnologies (BIOSTEC 2023). vol. 5; 2023: 368–375. [Google Scholar]
- 90.Li K, Lohachab A, Dumontier M, Urovi V. Privacy preservation in blockchain-based healthcare data sharing: a systematic review. Peer-to-Peer Netw Appl. 2025;18:1–53. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 91.Alharthi S. AI-powered in silico twins: redefining precision medicine through simulation, personalization, and predictive healthcare. Saudi Pharm J. 2026;34(1). doi: 10.1007/s44446-025-00055-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- 92.Vallée A. Multi-scale digital twins for personalized medicine. Front Digit Heal. 2026;1:1–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 93.Riemer A, Freund V. Generative artificial intelligence in pharmaceutical drug development: a systematic review of time and cost efficiency across discovery, preclinical, and clinical phases. Intell Pharm. 2026. doi: 10.1016/j.ipha.2025.12.006 [DOI] [Google Scholar]
- 94.Smaldone AM, Shee Y, Kyro GW, et al. Quantum machine learning in drug discovery: applications in academia and pharmaceutical industries. Chem Rev. 2025;125(12):5436–5460. doi: 10.1021/acs.chemrev.4c00678 [DOI] [PubMed] [Google Scholar]
- 95.Baidya ATK, Goswami AK, Das B, Darreh-Shori T, Kumar R. AI-enabled ultra-large virtual screening identifies potential inhibitors of choline acetyltransferase for theranostic purposes. ACS Chem Neurosci. 2024;15(22):4156–4170. doi: 10.1021/acschemneuro.4c00361 [DOI] [PubMed] [Google Scholar]
- 96.Seth M, Jalo H, Högstedt Å, et al. Technologies for interoperable internet of medical things platforms to manage medical emergencies in home and prehospital care: scoping review. J Med Internet Res. 2025;27:e54470. doi: 10.2196/54470 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 97.Alsahfi T, Badshah A, Aboulola OI, Daud A. Optimizing healthcare big data performance through regional computing. Sci Rep. 2025;15(1):1–19. doi: 10.1038/s41598-025-87515-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 98.Mhaske P, Bhattacharjee B, Haldar N, Upadhyay P, Mandal A. Bridging digital skill gaps in the global workforce: a synthesis and conceptual framework building. Res Glob. 2025;11:100311. [Google Scholar]
- 99.Rehman AU, Li M, Wu B, et al. Role of artificial intelligence in revolutionizing drug discovery. Fundam Res. 2025;5(3):1273–1287. doi: 10.1016/j.fmre.2024.04.021 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 100.Blanco-González A, Cabezón A, Seco-González A, et al. The role of AI in drug discovery: challenges, opportunities, and strategies. Pharmaceuticals. 2023;16(6):891. doi: 10.3390/ph16060891 [DOI] [PMC free article] [PubMed] [Google Scholar]
