Abstract
Recent advances in artificial intelligence (AI) tools and techniques can revolutionize the discovery, development, manufacturing, and delivery of new medicines by reducing the time and cost involved. The biopharmaceutical industry is a highly regulated sector where robust regulatory oversight is essential to ensure the quality, efficacy, and safety of human therapeutics. This Perspective examines the challenges of regulating AI-driven technologies in drug discovery and development. As AI is anticipated to play an unprecedented role in transforming drug development, the critical question is not whether to regulate these advancements but how to do so effectively. Here, we evaluate current global drug regulatory practices, discuss gaps and unknowns, and provide recommendations on addressing these issues to facilitate effective regulation of AI-driven innovation in the biopharmaceutical industry.
Subject terms: Target identification, Drug development
Singh, Paxton and Auclair provide insight into regulating the AI-enabled ecosystem for human therapeutics. Specific attention is paid to evaluating current global drug regulatory practices, identifying gaps, and offering recommendations to address these issues by facilitating effective regulation of AI-driven innovation in the biopharmaceutical industry.
Introduction
Developing new human medicines is typically estimated to take approximately 10 to 15 years and costs around $2.6 billion due to the extensive preclinical research, clinical trials, and regulatory requirements necessary to ensure safety and efficacy1. Utilizing artificial intelligence (AI) tools and techniques, new medicines could be discovered, developed, manufactured, and delivered to patients in a shorter time frame and at a fraction of the current cost2. However, using these new methodologies will necessitate robust regulatory oversight with appropriate guardrails to protect human health. In this Perspective, we address the potential challenges of regulating the use of AI-driven technologies for the discovery and development of new human medicines while ensuring these medicines are of high quality, efficacious, and safe. We also discuss the current processes being adopted by regulatory agencies around the world.
AI in drug discovery, development, manufacturing, and product lifecycle
While not universal, the commonly accepted definition of AI describes it as the development of computer systems capable of performing tasks that typically require human intelligence, such as learning, reasoning, problem-solving, and decision-making. Historically, predictive AI has been used in various aspects of drug development and regulatory operations3. Lately, generative AI has been transformative, revolutionizing fields such as healthcare and medicine through its ability to create new content and insights from vast datasets4. Here, we focus on the AI-enabled ecosystem for therapeutics that encompasses the full spectrum of tools, workflows, and outcomes driven by AI in the creation of human therapeutics. This ecosystem integrates AI systems, processes, platforms, and products to accelerate innovation in drug discovery, drug development, clinical trials, and manufacturing while ensuring their safety, efficacy, and quality.
Over the last five years, research publications and patents on AI in drug discovery have grown noticeably5. Most publications focus on understanding diseases, target discovery, rational drug design, and development. For example, platforms such as AlphaFold generate 3D protein structures with remarkable speed, whereas experimental determination traditionally can take months or years6. AI-driven biopharmaceutical technologies, such as predictive modeling and simulation tools, are rapidly being utilized to identify promising drugs, followed by AI-assisted high-throughput screening2. Despite rapid progress in drug discovery, the number of new AI-discovered products entering the clinic has yet to live up to the field’s promise7. The 2024 Nobel Prizes in Physics and Chemistry highlight the transformative potential of AI in shaping the future of human medicine. These include advances in neural networks that laid the groundwork for sophisticated AI models that enable precision diagnostics, personalized treatments, and accelerated drug discovery. Meanwhile, computational protein design and AlphaFold breakthroughs will revolutionize synthetic biology, allowing the rapid creation of novel enzymes and therapeutic proteins. Together, these innovations promise faster drug development, improved understanding of diseases, and more efficient, accessible healthcare, heralding a new era in medical science.
AI’s impact is not limited to drug discovery or repurposing older therapies. It can also be used to optimize clinical trials, which can bring new biotherapeutics to patients faster8. This is an example of new AI-created platforms accelerating clinical trials of traditionally discovered targets. Furthermore, AI plays a critical role in synthetic biology, where it can design and construct new biological parts, devices, and systems9. AI algorithms can predict the behavior of genetic circuits and optimize the design of synthetic organisms for various applications, including drug production, environmental sustainability, and industrial biotechnology.
Model-informed drug development (MIDD) using AI represents a transformative approach, offering the potential for more efficient, cost-effective, and personalized therapies10. In personalized medicine, AI can analyze clinical and molecular data to provide tailored treatment options, enhancing the precision and effectiveness of therapies, particularly for complex diseases such as cancer.
The use of ‘digital twins’ is revolutionizing human medicine11. Digital twins are virtual representations of patients, biological systems, or therapeutic processes that use real-time data and advanced simulations to enhance healthcare outcomes. They enable personalized medicine by simulating disease progression and treatment responses, improving drug development by modeling drug interactions, and optimizing clinical trial design through virtual patient cohorts. These tools can also monitor treatment effectiveness in real time, allowing dynamic adjustments for better outcomes. Digital twins promise faster therapeutic innovation, enhanced precision, and cost-effective healthcare, redefining how treatments are developed and administered12.
Regulation of digital twins currently aligns with existing frameworks for software as a medical device (SaMD) and AI in healthcare, emphasizing model validation, performance metrics, and data privacy. Drug regulators such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) require robust validation to ensure simulation accuracy and real-world reliability, with an increasing focus on risk classification and ethical use of data. Future regulatory strategies should include tailored guidelines for AI integration, regulatory sandboxes for innovation testing, and global harmonization of standards to address safety, equity, and transparency concerns. Continuous monitoring and postmarket surveillance will also be essential to maintain trust and efficacy in this transformative technology11.
Software as a Medical Device (SaMD) is one of the most common and successful examples of the impact of AI13. AI-driven platforms in biopharmaceutical manufacturing optimize production processes in real time, ensuring consistent product quality and compliance with regulatory standards14. AI is also used early in drug development for drug safety monitoring and pharmacovigilance, predicting adverse drug reactions and thereby improving drug safety profiles15.
A typical new drug application to the regulatory agency can be hundreds of thousands of pages. Compiling and submitting all this information to the regulatory authorities for approval is demanding and time-consuming. AI is already being used to expedite regulatory submissions in the biopharma industry by automating data collection, processing, and submission16. Natural language processing handles large volumes of unstructured data and leverages predictive analytics to forecast regulatory outcomes17.
The ongoing advances in AI are poised to accelerate drug discovery and development further, making it possible to bring innovative treatments to market even faster and more efficiently. As AI-enabled technologies evolve, they will probably unlock new frontiers in medicine, uncovering novel therapeutics and treatment strategies that were previously unimaginable, ultimately transforming healthcare and improving patient outcomes on an unprecedented scale.
Shortcomings of AI-enabled ecosystem
While AI offers notable benefits in drug development and lifecycle management of human therapeutics, it poses several risks. The term ‘AI hallucination’ refers to instances where generative models, such as large language models (LLMs) or image generation systems, produce factually incorrect, nonsensical, or fabricated outputs, despite appearing plausible or coherent. ‘AI bias’ describes systematic errors causing unfair outcomes due to skewed data, flawed algorithms, or societal prejudices18. In the development of AI-assisted human therapeutics, data biases in AI can lead to inequitable outcomes, such as underrepresenting diverse populations in clinical trial data or safety analyses. These biases can result in suboptimal therapeutic recommendations, incomplete labeling, or regulatory delays, as regulators may require additional data to ensure safety and efficacy across all patient groups19. Creating standards to address these biases is critical for developing AI-driven tools that support equitable healthcare and comply with regulatory requirements20.
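To make the subgroup-bias concern concrete, the following is a minimal sketch (in Python, with hypothetical data, group labels, and a hypothetical disparity threshold) of the kind of subgroup performance audit a sponsor or regulator might run on a predictive model’s outputs:

```python
# Hypothetical subgroup audit: flag groups whose model sensitivity
# (true-positive rate) falls well below the overall rate.
from collections import defaultdict

def subgroup_sensitivity(records):
    """records: (group, y_true, y_pred) tuples; returns TPR per group."""
    tp = defaultdict(int)
    pos = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

def flag_disparities(records, max_gap=0.10):
    """Return subgroups whose sensitivity trails the overall rate by > max_gap."""
    rates = subgroup_sensitivity(records)
    overall = sum(1 for _, t, p in records if t == 1 and p == 1) / \
              sum(1 for _, t, _ in records if t == 1)
    return {g: r for g, r in rates.items() if overall - r > max_gap}

# Toy example: the model misses most positive cases in group "B"
data = [("A", 1, 1), ("A", 1, 1), ("A", 0, 0),
        ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0)]
print(flag_disparities(data))  # → {'B': 0.3333333333333333}
```

In practice the groups would be demographic or clinical subpopulations and the disparity threshold would be set by the validation plan, but the principle, comparing per-subgroup performance to the aggregate, is the same.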
Hallucination risks in AI, such as fabricating clinical insights or misinterpreting trends, pose significant challenges in drug development, safety monitoring, and regulatory submissions21. AI-generated inaccuracies can lead to poor clinical decisions or non-compliance with regulatory requirements, undermining trust in these tools. Mitigating these risks through diverse data collection, robust validation frameworks, transparency, and human oversight ensures that AI is reliable and effective in advancing human therapeutics while maintaining safety and fairness.
Regulators are actively addressing AI data biases and hallucination risks by implementing comprehensive guidelines, encouraging the inclusion of real-world data, and requiring transparency in AI development processes. They emphasize rigorous validation before and after market approval, mandate human oversight in critical decision-making, and promote ethical AI initiatives to ensure that AI applications in healthcare are safe, equitable, and reliable22.
AI systems require access to large volumes of sensitive health data to provide accurate estimates, raising concerns about data privacy and security, which must also be addressed through compliance with privacy regulations to protect patient information19. Additionally, validation and reliability are critical to ensure AI models, including LLMs, perform consistently across diverse populations and to identify reasons for varying outcomes. Ethical and legal considerations, such as obtaining informed consent for data use and determining liability for AI-related errors, are also essential to address in the responsible implementation of AI23.
Additionally, the “black box” nature of many AI models means their decision-making processes are not easily interpretable, undermining confidence and complicating clinical decision-making, particularly for regulators. Similarly, AI’s application in material science is progressing at an unprecedented rate24. New materials that are useful in human health are being developed and incorporated into medical devices. AI-driven advances in synthetic biology have fostered the creation of functional artificial cells, and the aggregation of these cellular units has led to the development of components necessary for functional organs25. This AI-enabled process of creating new biomaterials circumvents evolutionary development and results in synthetic bioproducts within months for potential use in humans19. This rapid discovery process skirts many traditional validation, reliability, and verification steps, potentially creating gaps in understanding the risks22. Regulations are evolving to address this gap; however, they are primarily focused on medical devices at this stage. This is a further reminder that maintaining a balance where AI facilitates but does not replace human decision-making is crucial.
How to regulate: the move 37 conundrum
The ancient Chinese board game Go, known for its immense complexity, was long considered a grand challenge for AI due to its vast number of possible board configurations, estimated to exceed 10^170. AlphaGo, an AI system, was created and trained to play Go. Beating a human champion was considered impossible, as the game was thought to require creativity, intuition, and strategic thinking of which machines were incapable26. However, in 2016, AlphaGo beat the champion Go master through innovative, unexpected moves, including the famous Move 37, which experts considered unreasonable and illogical. The losing Go master is reported to have said words to the effect of, “this is a move no human would make.”27
As AI continues to generate novel and innovative medical outcomes with a notable impact on public health, global drug regulators will face challenges akin to the “Move 37” scenario. Unanticipated products and systems are bound to be presented to drug regulators for review. Since regulators are known to regulate through precedent, or what is already established, such novel approaches could create a conundrum for traditional modes of regulation. Do they reject the application because there is no precedent, or do they ask for a human explanation of something AI created that may be unexplainable? Here, we refer to this as the “Move 37 Conundrum”. The US FDA has guidelines on Generally Accepted Scientific Knowledge (GASK) in applications for drug and biological products28. Owing to the “black box” nature of AI-generated results, optimal use of GASK will require human judgment in defining what is “scientific” and what is “knowledge.” Despite this conundrum, drug regulators worldwide must continue to regulate to safeguard public health.
Current status of regulation of AI-generated medicines
This section will provide a high-level overview of various initiatives by global drug regulators covering the regulation of AI-related technologies and the various platforms, systems, processes, and products they create. In 2020, the Center for Drug Evaluation and Research (CDER) established the AI Steering Committee (AISC) to coordinate ongoing AI initiatives within the US FDA20. They identified over 20 AI use cases across the agency, which had limited coordination. Under AISC, these and future initiatives will be organized to ensure effective collaboration and alignment across various offices. For example, the Office of Surveillance and Epidemiology (OSE) uses AI for drug labeling reviews and categorizing FDA Adverse Event Reporting System (FAERS) reports. At the same time, the Office of Clinical Pharmacology (OCP) employs an AI platform for clinical study report generation. The Office of Strategic Programs (OSP) integrates AI in the Opioid Data Warehouse for trend analysis, and the Office of Biostatistics uses AI to detect data anomalies in clinical trials. Meanwhile, the Center for Biologics Evaluation and Research (CBER) leverages AI for post-marketing surveillance, and the Office of Regulatory Affairs (ORA) utilizes AI for public health risk identification and import screening. The Center for Food Safety and Applied Nutrition (CFSAN) applies AI to monitor high-risk imports and foodborne pathogens. AISC facilitates coordination among these initiatives by setting strategic priorities, overseeing resource allocation, and ensuring that AI-driven efforts are aligned with the FDA’s regulatory goals. The creation of this intra-agency committee is a testament to the US FDA’s foresight in providing communication between different offices, disseminating best practices, and addressing challenges that arise during AI integration in regulatory decision-making.
A two-day FDA Digital Health Advisory Committee meeting in November 2024 primarily focused on using generative AI in medical devices, discussing premarket evaluation, risk management, and postmarket monitoring29. The meeting addressed challenges such as data biases, hallucination risks, and the need for transparency in AI development. While the primary emphasis was on medical devices, the broader implications of generative AI in healthcare were also discussed29. As a result of this meeting, it is likely that new regulations will be created or refined.
In early 2025, the US FDA issued draft guidance on AI in regulatory decision-making in response to the growing use of AI in drug development and the need for clear regulatory standards to ensure safety, effectiveness, and data integrity30. This draft guidance, which was built on the discussion paper issued in 2023, establishes a risk-based credibility assessment framework for AI applications in drug and biological product regulation, requiring context-specific model evaluation. The scope does not include early-stage drug discovery or operational efficiencies, such as regulatory operations, unless they impact patient safety, drug quality, or study reliability.
Similarly, the European Medicines Agency (EMA), in collaboration with the Heads of Medicines Agencies (HMAs), has published a work plan to guide the use of artificial intelligence in medicines regulation from 2023 to 202831. This plan aims to maximize AI’s benefits while managing associated risks by focusing on four key areas: guidance, policy, and product support; AI tools and technology; collaboration and training; and experimentation. The work plan seeks to enhance the European Medicines Regulatory Network’s (EMRN) capacity to use AI to improve personnel productivity, automate processes, and support decision-making32. This initiative reflects a forward-looking approach and establishes principles for responsible AI use.
Additionally, many other governmental bodies are creating guidelines on using AI in healthcare and the biopharmaceutical industry. For example, the UK’s AI Sector Deal and Centre for Data Ethics and Innovation emphasize ethical guidelines and innovation support33. China’s New Generation of AI Development Plan aims for global leadership in AI by 2030, with draft regulations addressing data security and algorithm transparency34. Japan’s AI Strategy 2019 and Society 5.0 Initiative integrate AI into societal frameworks, promoting ethical development35. Canada’s Pan-Canadian AI Strategy and Directive on Automated Decision-Making guide AI research and government use, ensuring fairness and accountability36.
Many drug regulators have recognized the utility of AI in various aspects of drug development, manufacturing, and post-marketing surveillance. However, they have not fully clarified how they will regulate these AI-generated systems, processes, platforms, and products. Developers of regulated products must implement AI solutions thoughtfully and responsibly to maximize the benefits of AI while ensuring compliance with regulatory expectations.
The use of AI in the biopharmaceutical industry is exploding, and having more regulations is not the panacea. Instead, drug regulators must reimagine how to regulate new products and changing technology. An unconventional approach is required to regulate AI-related technology and the human therapeutics it creates.
Reimagining how to regulate in the age of AI
AI will increasingly play an outsized role in creating new medicines and therapeutics at unprecedented speed and volume. Its increased application and usage will eventually present the regulators with a “Move 37 Conundrum”—a situation they have never seen before, meaning that they cannot rely upon precedent. Global regulatory authorities must approach regulations differently in the coming wave of AI-enabled products and systems. A more extensive discussion among all stakeholders is needed. Meanwhile, the following are recommendations for how global drug regulatory authorities may begin to adapt to the large numbers of forthcoming AI-enabled systems, processes, platforms, and products.
Investment in drug regulators’ capability and capacity building
The pharmaceutical industry is anticipated to see a notable increase in AI-related job postings. Estimates suggest an annual growth rate of over 20 percent in AI-related roles, and over twenty thousand companies in Western countries are recruiting for AI talent37. This hiring volume will likely translate into greater AI-related output from the industry, raising the question of whether the regulators have the capacity and capability to deal with the increased volume. Drug regulatory agencies must adapt to the AI revolution by developing agile and adaptive frameworks that accommodate rapidly evolving technologies like machine learning, generative AI, and predictive modeling. To achieve this, they must invest in capacity and capability building, ensuring they have the infrastructure, expertise, and resources to manage the complexities of AI-driven systems. This includes establishing specialized AI-focused teams and implementing continuous training programs to upskill staff. Additionally, they must enhance technological infrastructure to support robust data analytics and AI applications. Capability building also plays a critical role in ensuring transparency and explainability in AI systems. Regulators must require interpretable outputs, audit mechanisms, and the integration of ethical considerations, such as informed consent and accountability frameworks. Strengthening organizational capabilities will enable agencies to address emerging ethical and legal challenges while ensuring equitable access to AI-driven innovations. By embedding capacity and capability building into their strategies, regulatory agencies can effectively balance innovation with public health and safety, positioning themselves to lead in the age of AI. Similar arguments on capacity building have been made by the ex-Commissioner of the US FDA38.
Legislative action needed for AI-human health tools
Lawmakers will need to draft and pass new legislation that either establishes independent statutory bodies to regulate AI-generated human health and therapeutics or, at the very least, expands existing authority and updates the statutory standards to address these emerging technologies. In a recent article, the ex-Commissioner of the US FDA argued that there is a need for an act of the US Congress to regulate AI in the healthcare and biopharma sectors39. The current regulatory frameworks are insufficient to address the unique challenges posed by AI technologies. The platforms that host and deploy AI applications require regulatory oversight to ensure they adhere to safety and efficacy standards. Regulating the platforms, rather than just individual AI tools, would provide a comprehensive approach to managing risks associated with AI technologies. Thus, there should be regulatory oversight of both the AI tools and the sources of information that go into making the final product. The recent US Supreme Court ruling on the Chevron Doctrine adds further urgency for Congress to act, as the US FDA will likely face increased challenges to its regulatory decisions40. Unless a statutory regulatory mandate with allocated resources is created by an act of the US Congress to regulate AI in healthcare, there could be more litigation and slower implementation of new regulations. Through legislative action, the reimagined drug regulatory authorities will have appropriate capacity, capability, and authority. They will be nimble in adapting to AI-driven technological advancements in healthcare, ultimately becoming a more effective force in protecting public health.
Harmonization of AI regulations
AI is becoming a notable priority for many countries, and governments globally are increasingly recognizing the strategic importance of AI for economic growth, national security, and technological innovation. There is a risk that different countries will establish separate, incompatible regulations to govern AI-related biomedical research. This could create barriers to simultaneously bringing new AI-enabled human therapeutics to patients. Rather than competing, the approach taken by the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) should be considered41. ICH is where industry and regulators have come together to coordinate technical requirements worldwide to ensure that safe, effective, and high-quality medicines are developed and registered efficiently. With the rapid integration of AI in drug discovery, development, and regulatory processes, there is a critical need for a harmonized global regulatory framework. This international effort would aim to create a unified regulatory approach that supports innovation while maintaining high safety and efficacy standards for healthcare technologies. This new body, either as a stand-alone organization or an adjunct to ICH, would provide consistent guidelines, fostering international collaboration. This initiative should aim to bridge gaps between different regulatory environments and streamline the adoption of AI innovations in human medicines, ultimately enhancing global health outcomes. Some progress has already been made, with the FDA, MHRA, and Health Canada collaborating to harmonize the regulatory framework for AI and machine learning-enabled medical devices42. This collaboration has developed guiding principles and new transparency guidelines to ensure these devices are safe, effective, and trustworthy.
Key initiatives include real-time device performance monitoring, managing AI-driven modifications, and ensuring clear communication about the devices’ use and performance. Thus, the focus should be on coordination and cooperation between regulators, not competition.
Leveraging AI so AI and digital tools do the routine work
Integrating AI with the Internet of Things (IoT) can aid regulators in the biopharma industry by enhancing their oversight capabilities and routine tasks. Real-time monitoring and AI-driven analytics would allow for continuous tracking of critical parameters, detecting anomalies, and proactively ensuring compliance by automatically reporting timely and accurate documentation to reduce regulators’ manual workloads43. Additionally, remote audits facilitated by AI or in an augmented reality setting would enable regulators to access real-time data with minimal need for physical inspections. This could allow for quicker responses to compliance issues and improve overall regulatory efficiency and effectiveness44. Similarly, techniques such as digital twins12 would markedly enhance the capabilities and benefits of in silico clinical trials, offering faster, cost-effective, and more accurate alternatives to traditional methods. The US FDA has already issued important guidelines on remote monitoring of devices45, remote inspections, and oversight of drug manufacturers46. These are good examples of how regulators can use AI and could be the template for other global drug regulators.
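As an illustration of the real-time monitoring idea, here is a minimal sketch, with an invented sensor stream and invented thresholds, of how readings from a manufacturing sensor might be screened for anomalies using a rolling statistical check:

```python
# Hypothetical sketch: flag out-of-range readings from a manufacturing
# sensor stream using a rolling mean/std window (all values illustrative).
from collections import deque
import statistics

def monitor(stream, window=20, z_limit=3.0):
    """Yield (index, value) for readings more than z_limit standard
    deviations from the rolling mean of the previous `window` readings."""
    history = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(history) >= 5:  # need a minimal baseline first
            mean = statistics.mean(history)
            sd = statistics.pstdev(history) or 1e-9
            if abs(value - mean) / sd > z_limit:
                yield i, value
        history.append(value)

# Toy stream: a stable temperature trace with one excursion at index 30
readings = [37.0 + 0.01 * (i % 3) for i in range(60)]
readings[30] = 42.5
print(list(monitor(readings)))  # → [(30, 42.5)]
```

A production system would of course use validated statistical process control limits rather than an ad hoc z-score, but the pattern of continuous tracking with automatic exception reporting is the one described above.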
The regulators as human-in/on-the-loop
The regulators should focus on Human-in-the-Loop (HITL) and Human-on-the-Loop (HOTL) frameworks to integrate AI while maintaining essential human oversight47. HITL involves human intervention at critical points to guide and review AI outputs, while HOTL allows AI to operate autonomously with human supervision for necessary interventions. To prepare for this, the regulators must provide foundational and advanced AI training to staff, recruit AI experts, and form interdisciplinary teams.
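A minimal sketch of how such a gate might work in practice follows; the confidence threshold, case labels, and routing logic are purely illustrative:

```python
# Illustrative HITL/HOTL gate: low-confidence AI outputs are queued for a
# human reviewer (HITL); high-confidence outputs proceed autonomously but
# are logged for human supervision (HOTL). All names are hypothetical.

def route(outputs, hitl_threshold=0.90):
    """outputs: list of (item_id, decision, confidence) from an AI model.
    Returns (auto_approved, human_review_queue)."""
    auto, review = [], []
    for item_id, decision, confidence in outputs:
        if confidence >= hitl_threshold:
            auto.append((item_id, decision))  # HOTL: supervised via audit log
        else:
            review.append((item_id, decision, confidence))  # HITL: human decides
    return auto, review

model_outputs = [
    ("case-1", "no_signal", 0.97),
    ("case-2", "adverse_event", 0.62),  # ambiguous -> routed to a human
    ("case-3", "no_signal", 0.91),
]
auto, review = route(model_outputs)
print(len(auto), len(review))  # → 2 1
```

The design choice is where to set the threshold: lower values push more work to humans (closer to pure HITL), while higher values leave the AI largely autonomous with humans supervising the audit trail (HOTL).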
Furthermore, developing clear ethical guidelines for AI use and ensuring adherence to them is essential. Establishing continuous learning programs and workshops will keep sponsors and regulators updated on AI advancements so that the oversight of AI applications is effective. These measures will ensure human experts can incorporate AI technologies efficiently while preserving critical human oversight through HITL and HOTL frameworks.
Regulators-sponsor partnership
The recent guidelines by major regulatory bodies such as the US FDA, EMA, and Health Canada all emphasize the need for collaboration with biopharmaceutical companies (sponsors) and have created a collaborative framework that places substantial responsibilities on both parties to ensure the safety, efficacy, and quality of AI-related efforts. The burden of maintaining data confidentiality, integrity, and compliance lies heavily on both the regulators and the sponsors. The burden for routine audits and inspections to verify adherence to data management practices and regulatory guidelines will fall on the regulators. Meanwhile, the sponsors must proactively implement relevant quality-by-design (QbD) principles48, identify critical quality attributes (CQAs)49, and establish robust control strategies to ensure product quality. Quality Management Systems (QMS)49 that keep pace with AI should be employed to facilitate continuous improvement and risk management throughout the product lifecycle. Both parties should be responsible for sharing data and leveraging advanced technologies to enhance drug development processes. This shared responsibility ensures that new therapies are developed with the highest quality and safety standards, ultimately advancing public health. Ultimately, the burden should not be the regulator’s alone to bear; it is a joint responsibility of sponsors and regulators.
Risk of inconsistent regulation
The concern is not the absence, excess, or insufficiency of regulation but rather the potential inconsistency in its application to the AI-enabled ecosystem for human therapeutics. The OECD has identified over 1000 AI policy initiatives from 69 countries, territories, and the EU, covering various aspects of AI regulation and implementation50. As noted in the previous section, in many cases, there are multiple AI guidelines from within a single agency (e.g., the US FDA has over 20 AI-related initiatives). In the US, there are over 400 government agencies and sub-agencies, many of them involved in their own AI initiatives, and some guidelines are related to healthcare51. For example, the National Institute of Standards and Technology (NIST) is developing an AI Risk Management Framework to improve the trustworthiness of AI systems. The Department of Defense has established ethical principles for AI use in defense, emphasizing responsibility and governance. Legislative proposals such as the Algorithmic Accountability Act aim to require companies to assess the impact of their AI systems to ensure they do not produce unfair or discriminatory outcomes52. In addition to the federal regulations, various states in the US are creating their own regulations. For example, California is leading the way with about 30 legislative bills on how to regulate the impact of AI on individuals and society53.
Extrapolating from the US example to other countries, each with its own AI regulations, highlights the risk of inconsistent and duplicative AI regulation, stifling innovation, increasing costs, and delaying the introduction of new therapies. Furthermore, uncertainty, inconsistency, and complexity in drug regulatory requirements create high barriers to entry for startups and smaller companies, limiting competition and the development of novel treatments. The rapid pace of technological advancements, such as AI-driven drug discovery, requires flexible and adaptive regulatory frameworks. Slow adoption or inconsistent regulation will create barriers to faster regulatory approvals and negate the advances made by AI-driven drug discovery and development.
Lastly, AI and ML need to be reconceptualized in the context of health regulatory sciences, as their definitions vary depending on the audience and context of use. For instance, the US FDA defines AI broadly, encompassing computer science, statistics, engineering, and decision sciences20. Such a broad definition could lead to an unclear scope, scope creep, resource dilution, and difficulty achieving the focused objectives of regulating an emerging field.
Conclusions
Undoubtedly, AI will continue to play an exceptional role in discovering, developing, and manufacturing new human therapeutics. The industry and regulators are best served by resolving not whether to regulate these advancements but how. Historically, regulators have regulated based on precedent. As illustrated by the “Move 37 Conundrum”, regulating AI-enabled ecosystems for human therapeutics may not be straightforward. Questions abound as to whether regulators should regulate the technology, the components that go into making it, or the outcome. Even with risk-level-based regulation, it is unclear how regulators can fully understand the unknown risks that AI could bring without taking an overly risk-averse position and stifling the progress that AI promises. Under the current structure, one implication is clear: regulators will face a capacity and capability gap that will persist for years without statutory intervention.
Author contributions
R.S., M.P. and J.A. contributed to the concept and structure of the paper. R.S. drafted the manuscript. M.P. and J.A. critically reviewed the manuscript.
Peer review
Peer review information
Communications Medicine thanks the anonymous reviewers for their contribution to the peer review of this work. Peer review reports are available.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
The online version contains supplementary material available at 10.1038/s43856-025-00910-x.
References
- 1. Sertkaya, A., Beleche, T., Jessup, A. & Sommers, B. D. Costs of drug development and research and development intensity in the US, 2000–2018. JAMA Netw. Open 7, e2415445 (2024).
- 2. Deng, J., Yang, Z., Ojima, I., Samaras, D. & Wang, F. Artificial intelligence in drug discovery: applications and techniques. Brief. Bioinforma. 23, bbab430 (2022).
- 3. Gallego, V., Naveiro, R., Roca, C., Ríos Insua, D. & Campillo, N. E. AI in drug development: a multidisciplinary perspective. Mol. Divers. 25, 1461–1479 (2021).
- 4. Generative artificial intelligence in the metaverse era. https://www.sciencedirect.com/science/article/pii/S2667241323000198.
- 5. Paul, D. et al. Artificial intelligence in drug discovery and development. Drug Discov. Today 26, 80–93 (2021).
- 6. Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021).
- 7. AI’s potential to accelerate drug discovery needs a reality check. Nature 622, 217 (2023).
- 8. Hutson, M. How AI is being used to accelerate clinical trials. Nature 627, S2–S5 (2024).
- 9. García Martín, H., Mazurenko, S. & Zhao, H. Special issue on artificial intelligence for synthetic biology. ACS Synth. Biol. 13, 408–410 (2024).
- 10. Barrett, J. S., Goyal, R. K., Gobburu, J., Baran, S. & Varshney, J. An AI approach to generating MIDD assets across the drug development continuum. AAPS J. 25, 70 (2023).
- 11. Katsoulakis, E. et al. Digital twins for health: a scoping review. npj Digit. Med. 7, 1–11 (2024).
- 12. National Academies of Sciences, Engineering, and Medicine. Opportunities and Challenges for Digital Twins in Biomedical Research: Proceedings of a Workshop—in Brief (National Academies Press, Washington, DC, 2023).
- 13. Software as a Medical Device (SaMD). FDA https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd (2020).
- 14. Arden, N. S. et al. Industry 4.0 for pharmaceutical manufacturing: preparing for the smart factories of the future. Int. J. Pharm. 602, 120554 (2021).
- 15. Li, Y. et al. Artificial intelligence-powered pharmacovigilance: a review of machine and deep learning in clinical text-based adverse drug event detection for benchmark datasets. J. Biomed. Inf. 152, 104621 (2024).
- 16. Macdonald, J. C., Isom, D. C., Evans, D. D. & Page, K. J. Digital innovation in medicinal product regulatory submission, review, and approvals to create a dynamic regulatory ecosystem—are we ready for a revolution? Front. Med. (Lausanne) 8, 660808 (2021).
- 17. Patil, R. S., Kulkarni, S. B. & Gaikwad, V. L. Artificial intelligence in pharmaceutical regulatory affairs. Drug Discov. Today 28, 103700 (2023).
- 18. Mittermaier, M., Raza, M. M. & Kvedar, J. C. Bias in AI-based models for medical applications: challenges and mitigation strategies. npj Digit. Med. 6, 1–3 (2023).
- 19. Williamson, S. M. & Prybutok, V. Balancing privacy and progress: a review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. Appl. Sci. 14, 675 (2024).
- 20. Artificial Intelligence and Machine Learning (AI/ML) for Drug Development. FDA https://www.fda.gov/science-research/science-and-research-special-topics/artificial-intelligence-and-machine-learning-aiml-drug-development (2024).
- 21. Sun, Y., Sheng, D., Zhou, Z. & Wu, Y. AI hallucination: towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanit. Soc. Sci. Commun. 11, 1–14 (2024).
- 22. Acerbi, A. & Stubbersfield, J. M. Large language models show human-like content biases in transmission chain experiments. Proc. Natl Acad. Sci. 120, e2313790120 (2023).
- 23. Tsopra, R. et al. A framework for validating AI in precision medicine: considerations from the European ITFoC consortium. BMC Med. Inform. Decis. Mak. 21, 274 (2021).
- 24. Pyzer-Knapp, E. O. et al. Accelerating materials discovery using artificial intelligence, high performance computing and robotics. npj Comput. Mater. 8, 1–9 (2022).
- 25. Bai, L. et al. AI-enabled organoids: construction, analysis, and application. Bioact. Mater. 31, 525–548 (2023).
- 26. Silver, D. et al. Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017).
- 27. Suleyman, M. The Coming Wave. PenguinRandomHouse.com (2024).
- 28. Generally Accepted Scientific Knowledge in Applications for Drug and Biological Products: Nonclinical Information. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/generally-accepted-scientific-knowledge-applications-drug-and-biological-products-nonclinical (2023).
- 29. Digital Health Center of Excellence. FDA https://www.fda.gov/medical-devices/digital-health-center-excellence (2024).
- 30. Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological (2025).
- 31. Artificial intelligence workplan to guide use of AI in medicines regulation. European Medicines Agency (EMA) https://www.ema.europa.eu/en/news/artificial-intelligence-workplan-guide-use-ai-medicines-regulation (2023).
- 32. Big data. European Medicines Agency (EMA) https://www.ema.europa.eu/en/about-us/how-we-work/big-data (2021).
- 33. Impact of AI on the regulation of medical products. GOV.UK https://www.gov.uk/government/publications/impact-of-ai-on-the-regulation-of-medical-products (2024).
- 34. Hannas, W. C. & Chang, H.-M. China’s ‘New Generation’ AI-Brain Project (2021).
- 35. AI Safety in Japan 2024. https://aisi.go.jp/assets/pdf/j-aisi_factsheet_2024_en.pdf.
- 36. Attard-Frost, B., Brandusescu, A. & Lyons, K. The governance of artificial intelligence in Canada: findings and opportunities from a review of 84 AI governance initiatives. Gov. Inf. Q. 41, 101929 (2024).
- 37. Microsoft and LinkedIn release the 2024 Work Trend Index on the state of AI at work. Microsoft Stories https://news.microsoft.com/2024/05/08/microsoft-and-linkedin-release-the-2024-work-trend-index-on-the-state-of-ai-at-work/ (2024).
- 38. Warraich, H. J., Tazbaz, T. & Califf, R. M. FDA perspective on the regulation of artificial intelligence in health care and biomedicine. JAMA 333, 241–247 (2025).
- 39. Gottlieb, S. Congress must update FDA regulations for medical AI. JAMA Health Forum (2024).
- 40. Loper Bright Enterprises v. Raimondo, No. 22-451. Supreme Court of the United States (2024).
- 41. ICH guidelines. ICH https://www.ich.org/page/ich-guidelines.
- 42. FDA In Brief: FDA collaborates with Health Canada and UK’s MHRA to foster Good Machine Learning Practice. FDA https://www.fda.gov/news-events/press-announcements/fda-brief-fda-collaborates-health-canada-and-uks-mhra-foster-good-machine-learning-practice (2021).
- 43. Ajmal, C. S. et al. Innovative approaches in regulatory affairs: leveraging artificial intelligence and machine learning for efficient compliance and decision-making. AAPS J. 27, 22 (2025).
- 44. Baker, P., Cathey, T. & Auclair, J. R. Evaluation of a pilot: inspection facilitation and collaboration using a mixed reality device. Ther. Innov. Regul. Sci. 58, 11–15 (2024).
- 45. Enforcement Policy for Non-Invasive Remote Monitoring Devices Used to Support Patient Monitoring. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/enforcement-policy-non-invasive-remote-monitoring-devices-used-support-patient-monitoring (2023).
- 46. Remote Interactive Evaluations of Drug Manufacturing and Bioresearch Monitoring Facilities. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/remote-interactive-evaluations-drug-manufacturing-and-bioresearch-monitoring-facilities (2023).
- 47. Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J. & Fernández-Leal, Á. Human-in-the-loop machine learning: a state of the art. Artif. Intell. Rev. 56, 3005–3054 (2023).
- 48. Walsh, I. et al. Harnessing the potential of machine learning for advancing “Quality by Design” in biomanufacturing. MAbs 14, 2013593 (2022).
- 49. Ullagaddi, P. Digital transformation in the pharmaceutical industry: enhancing quality management systems and regulatory compliance. Int. J. Health Sci. 12, 31–43 (2024).
- 50. The OECD Artificial Intelligence Policy Observatory. https://oecd.ai/en/.
- 51. A–Z index of U.S. government departments and agencies. USAGov https://www.usa.gov/agency-index.
- 52. Clarke, Y. D. H.R. 6580, 117th Congress (2021–2022): Algorithmic Accountability Act of 2022. https://www.congress.gov/bill/117th-congress/house-bill/6580/text (2022).
- 53. Johnson, K. California has 30 new proposals to rein in AI. Trump could complicate them. CalMatters (2025).