ABSTRACT
Artificial intelligence (AI) is driving innovation in clinical pharmacology and translational science with tools that advance drug development, clinical trials, and patient care. This review summarizes the key takeaways from the AI preconference at the American Society for Clinical Pharmacology and Therapeutics (ASCPT) 2024 Annual Meeting in Colorado Springs, where experts from academia, industry, and regulatory bodies discussed how AI is streamlining drug discovery, dosing strategies, outcome assessment, and patient care. The preconference centered on how AI can empower clinical pharmacologists and translational researchers to make informed decisions and translate research findings into practice. It also examined the impact of large language models in biomedical research and how these tools are democratizing data analysis and empowering researchers, along with the application of explainable AI in predicting drug efficacy and safety and the ethical considerations that should guide the integration of AI into clinical and biomedical research. By sharing these diverse perspectives and real‐world examples, this review shows how AI can bring efficiency to clinical pharmacology and translational science and accelerate drug discovery and development to address patients' unmet clinical needs.
Keywords: artificial intelligence, explainable machine learning, large language models, machine learning
1. Introduction
Artificial intelligence (AI) and machine learning (ML) have the potential to transform the landscape of quantitative clinical pharmacology and translational science by offering unprecedented opportunities to improve drug development, optimize clinical trials, and enhance patient outcomes [1]. The application of AI in these fields is rapidly transitioning from experimental uses to becoming integral components of modern biomedical research and healthcare delivery [2]. The American Society for Clinical Pharmacology and Therapeutics (ASCPT) 2024 Annual Meeting in Colorado Springs hosted a preconference workshop titled “Empowering Clinical Pharmacologists and Translational Scientists Using Artificial Intelligence: Unlocking Potential with Cutting‐Edge Use Cases” [3]. This workshop explored the latest advancements in AI‐driven applications within clinical pharmacology and translational science presented by leading experts from academia, the pharmaceutical industry, and regulatory agencies to discuss the current state and future potential of AI in these fields.
This review aims to capture the highlights and discussions from the ASCPT 2024 AI preconference and provide an overview of the different applications where AI is being used to advance clinical pharmacology and translational research. It covers a broad range of topics, from AI in drug development and regulation to dose optimization, digital biomarker development, and patient monitoring. It presents the diverse views expressed during the preconference so that the reader can form a complete picture of how AI is driving innovation in these fields. By summarizing these proceedings, this review serves as a resource for researchers, clinicians, and industry professionals to stay up to date on the latest AI trends and learn about AI applications in drug development, clinical pharmacology, and translational research. For readers unfamiliar with certain technical terms and concepts discussed in this manuscript, a glossary with definitions is provided in the supplemental materials.
2. AI in Drug Development and Regulation
2.1. Industry Perspective on AI‐Driven Applications in Drug Development
The integration of AI and ML into clinical pharmacology and translational science is rapidly reshaping drug development and medical research. These technologies are driving significant advancements across various stages of the pharmaceutical pipeline, from early drug discovery to clinical trials and regulatory submissions. At the AI preconference workshop, several current applications and forward‐looking use cases were presented, as described below.
2.2. Advancements and Applications
Generative AI in Medicine: Since the launch of ChatGPT by OpenAI in December 2022, there has been a significant evolution from early large language models (LLMs) to advanced multimodal systems. Models like GPT‐4 and Med‐PaLM 2 demonstrate capabilities on par with human experts in clinical knowledge and have vastly improved in handling complex medical data, including images, genomic data, and radiology reports [4]. In the near future, AI is expected to assist in creating drafts of regulatory documents—such as protocols, statistical analysis plans, and clinical study reports—with dynamic links to source tables, pharmacokinetic/pharmacodynamic (PK/PD) data, patient narratives, and target product profiles. Optimized prompt engineering, retrieval‐augmented generation, and fine‐tuning of open‐source or proprietary LLMs can enhance the factual accuracy of outputs and reduce bias and errors [5].
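The retrieval‐augmented generation (RAG) idea mentioned above can be sketched in a few lines: retrieve the most relevant snippet for a query, then prepend it to the prompt so the model answers from grounded context. This is a minimal sketch, assuming naive keyword‐overlap retrieval (production systems use dense embeddings, but the control flow is the same); all snippets and the query are invented for illustration.

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation (RAG):
# rank document snippets by word overlap with the query, then prepend the best
# match to the prompt sent to an LLM. Real systems use dense vector embeddings;
# keyword overlap stands in for them here. All text below is invented.

def retrieve(query: str, snippets: list[str], k: int = 1) -> list[str]:
    """Return the k snippets sharing the most words with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        snippets,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Ground the model's answer in retrieved context to reduce hallucination."""
    context = "\n".join(retrieve(query, snippets))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

snippets = [
    "The phase 1 study enrolled 24 healthy volunteers.",
    "Tepotinib is a MET inhibitor dosed once daily.",
    "Serum albumin was the strongest predictor of edema.",
]
prompt = build_prompt("What was the strongest predictor of edema?", snippets)
```

Fine‐tuning and prompt engineering then operate on top of this grounding step, which is what improves factual accuracy relative to an unassisted model.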
AI in Drug Discovery: AI technologies, particularly deep generative models, are instrumental across the drug development continuum. From generating novel molecular structures to assisting in drug repurposing, AI significantly enhances drug discovery processes [6].
Digital Endpoints: Digital health technologies, often augmented by AI and validated through patient‐derived data, serve as innovative tools for continuous monitoring of drug efficacy and safety endpoints. For example, the MAGIC trial demonstrated the capability of digital health technologies to gather highly reliable and compliant data on gait and physical activity from pediatric populations, applicable in both clinical settings and home environments [7].
Impact on Patient Experience: AI‐driven digital workforce solutions are beginning to automate significant portions of clinical trial processes [8]. Tasks such as postcare follow‐up, subject consent, and vital sign tracking are being managed by AI, enhancing operational efficiencies and participant engagement.
Regulatory Submissions: The adoption of AI/ML in clinical development stages has notably increased, with a significant presence in regulatory submissions. Approximately 89% of AI submissions to the U.S. Food and Drug Administration (FDA) come from clinical development, particularly in fields like oncology, psychiatry, and neurology. Common analyses include outcome prediction, pharmacometrics modeling, and anomaly detection [9]. More in‐depth discussion regarding the regulatory perspective on AI‐driven applications in drug development was covered during the meeting with successful case studies of using AI in regulatory submissions, as explained in the next section.
2.3. Regulatory Perspective on AI‐Driven Applications in Drug Development
The drug development landscape is undergoing a transformative shift with the rapid integration of AI/ML technology, as evidenced by the significant increase in AI/ML‐driven regulatory submissions to the FDA in recent years. Before 2018, submissions with AI/ML components were rare; in 2021, however, the number of such submissions surged to 132—approximately a 10‐fold increase over the previous year—and continued to rise to 175 in 2022 [9]. This trend reflects the growing use of AI/ML in the biomedical field and is likely to continue.
The application of AI/ML now spans the entire drug development life cycle and covers a wide spectrum of therapeutic areas. These applications are used in various tasks, such as informing drug discovery and repurposing, enhancing clinical trial design elements, dose optimization, improving adherence to drug regimens, endpoint and biomarker assessment, and postmarketing surveillance [9]. Recognizing the transformative potential of these technologies, the FDA has not only acknowledged their increased use but has also begun leveraging AI and ML in its regulatory decision‐making processes. For instance, in 2022, the FDA issued an Emergency Use Authorization for anakinra for the treatment of COVID‐19 in hospitalized adults with pneumonia requiring supplemental oxygen who are at risk of progressing to severe respiratory failure and likely to have elevated plasma soluble urokinase plasminogen activator receptor (suPAR). The clinical efficacy and safety data supporting the Emergency Use Authorization were primarily based on the SAVEMORE trial, a randomized, double‐blind, placebo‐controlled study where patients were required to have a suPAR level ≥ 6 ng/mL. However, since the suPAR assay is not commercially available in the United States, the FDA review team used an AI/ML method to develop a scoring rule to identify patients most likely to benefit from anakinra treatment [10]. This was the first time the FDA used AI/ML to identify a population for drug therapy.
In 2023, the FDA released a discussion paper on the use of AI and ML in the development of drug and biological products, aiming to spur discussion with interested parties and encourage engagement and collaboration [11]. In 2024, the FDA's Center for Drug Evaluation and Research established an AI Council to improve oversight and coordination of internal and external AI efforts. These are just a few examples that underscore the FDA's proactive and leadership role in embracing AI/ML technology to advance drug development.
Traditionally, drug development and regulatory decision‐making have followed a model compartmentalized into three well‐established domains: quality, nonclinical, and clinical. This model, dating back over 40 years, uses traditional methods and data types considered standard within these domains. However, some questions—such as dose‐finding and optimization, characterization of treatment effect size, and benefit–risk assessment—span across domains and should be addressed in a complementary and integrated manner. The availability of new data sources and types, including real‐world data, and innovative methods like AI/ML, offers new opportunities for drug development and regulatory decision‐making. Thus, there is a need to move toward a streamlined decision‐making process that includes advanced technologies such as AI/ML and advanced modeling and simulation.
To address this evolving landscape, the European Commission has funded a new Horizon project named ERAMET (Ecosystem for Rapid Adoption of Modeling and Simulation Methods to Address Regulatory Needs in the Development of Orphan and Pediatric Medicines) under the Grant Agreement 101137141. The ERAMET project proposes the use of a question‐centric approach implemented using the credibility assessment framework [12], combined with the attribute framework and uncertainty quantification, to be applied to regulatory questions as well as data and methods used to address them (Figure 1). This will ultimately take the form of an AI‐based platform working in a protected environment to illustrate this new way of thinking within the decision‐making process.
FIGURE 1.

Schematic representation of the implementation of the question‐centric approach. Different steps include data searching, data formatting, interrogation of relevant data sources, data analysis, and benchmarking of the approaches (if applicable) to inform the final regulatory decision making. AI, artificial intelligence; FAIR, findability, accessibility, interoperability, and reuse; M&S, modeling and simulation; ML, machine learning; NLP, natural language processing. Reprinted from Musuamba et al. [13]. Reprint permission granted.
The ERAMET framework will permit benchmarking alternative combinations of methods and data. Initially, this will be done from the perspective of regulatory assessors to facilitate the uptake of innovative approaches in pediatric and orphan drug development and regulatory assessment. It will subsequently be applied to selected use cases related to pediatric and orphan drug development. The first step involves developing a repository based on regulatory submissions that catalogs different combinations of regulatory questions and the methods and data used to answer them. In this repository, relevant drug development questions will be presented hierarchically. Data and methods to address each question will be described and benchmarked using both the credibility and attribute frameworks. Methods for uncertainty quantification will be developed and incorporated into the credibility framework to inform acceptance criteria for alternative data and methods addressing the same regulatory question. Model credibility assessment, the attribute framework, and uncertainty quantification will be implemented for each scenario, and the different data, methods, and models used as alternatives to address a question will be benchmarked and compared to the method accepted by regulators (considered the evidentiary standard).
AI/ML‐based tools will be developed and progressively connected to support different steps of regulatory assessment, namely data searching, formatting, analysis (including implementation of the credibility analysis, attribute framework, and uncertainty quantification), and method benchmarking. New standards will be developed and proposed based on these results, which could serve as a basis for regulatory guidance documents. Eventually, an AI‐based platform—the ERAMET platform—connecting the relevant internal and external tools and assessment methods to constitute a decision aid will be implemented in a secure environment to support and expedite regulatory decision‐making. The ERAMET project will be applied to use cases in the field of pediatric and orphan drug development and assessment. This example represents a step forward toward enhancing the rigor and efficiency of regulatory assessment processes.
In addition to the regulatory and industry perspectives on AI‐driven applications in drug development, other perspectives from leaders in academia and clinical research centers have been shared during the ASCPT 2024 AI preconference, which have been captured within the various case studies presented in the next sections of this review, illustrating various potential applications of AI in clinical research.
3. AI in Clinical Research
This section explores various case studies presented at the preconference illustrating the diverse applications of AI and ML in clinical and translational research. From optimizing dosing strategies and enhancing clinical trials to developing digital biomarkers, these examples underscore the potential role of AI in advancing clinical research and patient care.
3.1. AI in Clinical Trial Design and Dose Optimization
Data‐driven dosing optimization faces numerous challenges, including patient variability, data noise, concomitant medications, nonlinear biological feedback mechanisms, and confounding factors. Leveraging AI in collaboration with human expertise is essential to navigate these obstacles and optimize outcomes.
ML offers a promising approach for exposure‐response analysis due to its ability to handle large, complex data sets in a semi‐automated manner [14] and uncover intricate patterns that traditional methods might miss [15, 16]. For example, ML was used to analyze hyperglycemia—a common adverse event associated with AKT inhibitors [17]. Clinical data from patients treated with the AKT inhibitor ipatasertib were analyzed using a tree‐based XGBoost model. The model predicted the probability of hyperglycemia, identifying higher drug exposure and baseline HbA1c levels as significant factors. The relationship between HbA1c and hyperglycemia was notably nonlinear, with a sharp increase in risk around the prediabetic range.
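The kind of relationship described above can be illustrated with a deliberately simplified model. This is not the published XGBoost model; it is a hypothetical logistic risk function, with invented coefficients, in which exposure acts linearly and baseline HbA1c acts through a hinge near the prediabetic range (~5.7%), mimicking the sharp nonlinear increase in risk the tree‐based model uncovered.

```python
import math

# Illustrative (hypothetical) exposure-response model for hyperglycemia risk.
# All coefficients are invented; this only sketches the shape of the
# relationship reported in the text, not the actual fitted model.

def hyperglycemia_prob(auc: float, hba1c: float) -> float:
    exposure_effect = 0.002 * auc               # higher exposure -> higher risk
    hba1c_effect = 2.5 * max(0.0, hba1c - 5.7)  # hinge: risk climbs past ~5.7%
    logit = -3.0 + exposure_effect + hba1c_effect
    return 1.0 / (1.0 + math.exp(-logit))

low = hyperglycemia_prob(auc=200, hba1c=5.0)   # normal HbA1c, low exposure
high = hyperglycemia_prob(auc=800, hba1c=6.2)  # prediabetic HbA1c, high exposure
```

The hinge term is what a tree ensemble learns implicitly from data; an explicit parametric model would need the analyst to specify it in advance, which is one reason ML is attractive for hypothesis generation in exposure‐response work.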
In the context of efficacy, ML was applied to understand patient factors influencing remission in ulcerative colitis trials [18]. Data from four Phase 3 trials were merged, including ones involving a complex induction–maintenance design. A causally aware ML approach was employed, leveraging an underlying causal diagram that accounted for prognostic factors influencing both exposure and remission. This analysis revealed a positive exposure–response relationship and identified key patient factors with nonlinear relationships to remission.
The development of pharmacology‐informed neural network (PINN) architectures based on neural ordinary differential equations (neural‐ODEs) has enabled the encapsulation of causality principles that underlie pharmacology in the deep learning modeling context [19, 20, 21]. Such PINN models present an attractive framework for evaluating dosing regimens, particularly in scenarios with noisy and variable biomarker data. This approach was applied to address alternate dosing regimens that might mitigate an observed increase in a safety biomarker by modeling the pharmacodynamic response in relation to plasma PK and concomitant medication timing using neural‐PK/PD [21]. Model training generated patient embedding vectors that quantified the PD response at the individual level. The PINN architecture allowed for “what‐if” simulations, quantifying the effects of alternate dosing regimens and concomitant medications on biomarker dynamics.
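The neural‐PK/PD model itself cannot be reproduced here, but the underlying "what‐if" idea is easy to sketch with a classical one‐compartment PK model with first‐order elimination, comparing two dosing regimens in silico. Parameters (ke, V) and doses are illustrative, not drawn from any study in the text.

```python
# "What-if" dosing simulation with a one-compartment PK model (bolus dosing,
# first-order elimination, forward-Euler integration). This is a mechanistic
# stand-in for the neural-ODE simulations described in the text; ke, V, and
# the regimens are invented for illustration.

def simulate(doses: dict[float, float], ke: float = 0.1, V: float = 50.0,
             t_end: float = 48.0, dt: float = 0.1) -> list[float]:
    """Return a concentration-time profile; doses maps time (h) -> amount (mg)."""
    conc, profile, t = 0.0, [], 0.0
    while t <= t_end:
        conc += doses.get(round(t, 1), 0.0) / V  # instantaneous bolus input
        profile.append(conc)
        conc -= ke * conc * dt                   # first-order elimination step
        t = round(t + dt, 1)
    return profile

qd = simulate({0.0: 100.0, 24.0: 100.0})                      # 100 mg once daily
bid = simulate({t: 50.0 for t in (0.0, 12.0, 24.0, 36.0)})    # 50 mg twice daily

# Splitting the same total dose lowers the peak concentration, the kind of
# regimen comparison a "what-if" simulation quantifies.
```

A PINN replaces the fixed ordinary differential equation above with a learned one, so the same kind of counterfactual simulation can be run even when the mechanistic form of the PD response is unknown.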
These examples illustrate the transformative potential of AI in enhancing dose optimization. While further methodological advancements are needed, leveraging causally aware ML and deep learning models allows drug developers and researchers to uncover complex relationships within clinical data, optimize dosing regimens more effectively, enable precision medicine [22], and ultimately improve patient outcomes.
3.2. AI in Clinical Trial Enrichment for Type 1 Diabetes Studies
Synthetic patient‐level data generation is revolutionizing clinical trial enrichment by making advanced tools more accessible while safeguarding data privacy and meeting regulatory requirements [23]. An AI‐driven synthetic data generator, trained and validated on real‐world patient cohorts, retains essential patient characteristics without exposing actual individual‐level data [24]. This method enhances data security and allows researchers to use enriched data sets to improve trial design and outcomes, offering a practical solution that balances accessibility with privacy protections.
T1dCteGui (https://t1d‐cte.c‐path.org) is a clinical trial enrichment tool designed for type 1 diabetes prevention studies [25]. This tool exemplifies the practical use of AI‐generated synthetic populations in real‐world research settings. Central to the tool's functionality is a Conditional Tabular Generative Adversarial Network (CTGAN) model, implemented via the Synthetic Data Vault (SDV) Python package [24]. The CTGAN framework generates synthetic patients based on insights from studies like TN01 and TEDDY, reflecting patterns observed in real‐world data without compromising privacy [26, 27]. Each synthetic patient is defined by the nine covariates used in the parametric time‐to‐event model, ensuring that the synthetic data reflect meaningful patient characteristics without replicating any individual's data. The aggregate statistical behavior of the synthetic cohort aligns closely with that of the original data set, particularly in downstream model predictions. The synthetic data not only meet privacy requirements but also facilitate the identification of high‐risk subpopulations by applying the time‐to‐event model to predict type 1 diabetes onset across varied trial scenarios.
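The generate‐then‐enrich logic can be sketched with a toy stand‐in for the CTGAN workflow: sample synthetic covariates from simple parametric marginals (where CTGAN would learn the joint distribution from real data), score each synthetic patient with a time‐to‐event risk model, and keep the high‐risk subset. Every distribution, covariate, and coefficient below is invented for illustration; this is not the T1dCteGui model.

```python
import math
import random

# Toy stand-in for the CTGAN + time-to-event enrichment workflow. Synthetic
# patients are drawn from invented parametric marginals (a real pipeline would
# use a fitted generative model), scored with an invented exponential
# time-to-event model, and filtered to a high-risk subpopulation.

random.seed(0)

def synth_patient() -> dict:
    return {
        "age": random.gauss(10.0, 3.0),          # years
        "autoantibodies": random.randint(1, 4),  # number of positive antibodies
        "hba1c": random.gauss(5.4, 0.4),         # %
    }

def two_year_event_prob(p: dict) -> float:
    """Exponential time-to-event model: P(event by t) = 1 - exp(-h * t)."""
    log_h = -3.0 + 0.5 * p["autoantibodies"] + 0.8 * max(0.0, p["hba1c"] - 5.7)
    return 1.0 - math.exp(-math.exp(log_h) * 2.0)

cohort = [synth_patient() for _ in range(1000)]
enriched = [p for p in cohort if two_year_event_prob(p) > 0.3]
```

Enriching on predicted risk raises the expected event rate in the trial population, which is what lets a prevention study reach its endpoint with a smaller sample, and because the cohort is synthetic, the screening rule can be shared and tuned without exposing any real patient's record.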
The development of T1dCteGui reflects a shift in clinical research, where AI‐generated synthetic data plays a critical role in transforming drug development, allowing innovative trial designs to progress while safeguarding patient privacy. Future expansions of this methodology to other therapeutic areas highlight the broader potential of AI/ML‐generated patient populations to streamline clinical research and accelerate regulatory‐grade evidence generation.
3.3. AI in Digital Biomarker Development and Outcome Assessment
Digital biomarkers are objective, quantifiable physiological and behavioral data collected and measured by digital health technologies such as wearable sensors, smartphones, and other portable or implantable devices [28]. These biomarkers offer the potential for more frequent, even continuous, monitoring of patient health status. The complexity of processing digital biomarker data stems from factors like high dimensionality, temporal nature involving time‐series data, noise and variability in raw data, heterogeneity of data sources and formats, and the challenge of translating raw digital signals into clinically meaningful insights [29].
AI and ML techniques have emerged as powerful tools for addressing these complexities, particularly in handling high‐dimensional data and extracting meaningful patterns. AI algorithms excel at signal processing, extracting relevant features from raw sensor data, while ML approaches aid in dimension reduction, identifying the most informative variables from large data sets. These techniques enable sophisticated pattern recognition, multimodal data integration, predictive modeling, and anomaly detection, all crucial for developing effective digital biomarkers. To date, AI/ML techniques have been applied to various digital biomarker development projects, including image and voice analysis for neurological assessments, quantification of specific behaviors using wearable sensors, and integration of multiple data sources for comprehensive patient monitoring [30, 31, 32].
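The feature‐extraction step described above can be made concrete with a toy example: estimating step cadence from a wearable accelerometer trace by counting peaks. This is a minimal sketch; real pipelines add filtering, artifact rejection, and clinical validation, and the synthetic signal and threshold here are invented.

```python
import math

# Toy digital-biomarker feature extraction: estimate step cadence (steps/s)
# from a synthetic accelerometer trace by counting local maxima above a
# threshold. The sinusoidal "walk" signal and the threshold are invented;
# real gait pipelines filter and validate the raw sensor stream first.

FS = 50  # sampling rate, Hz

def synthetic_walk(steps_per_sec: float, seconds: float) -> list[float]:
    n = int(FS * seconds)
    return [math.sin(2 * math.pi * steps_per_sec * i / FS) for i in range(n)]

def cadence(signal: list[float], threshold: float = 0.5) -> float:
    """Steps per second, estimated as the rate of local maxima above threshold."""
    peaks = sum(
        1
        for i in range(1, len(signal) - 1)
        if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]
    )
    return peaks * FS / len(signal)

est = cadence(synthetic_walk(steps_per_sec=1.8, seconds=20.0))
```

Summary features like this, computed continuously over days of free‐living data rather than a single clinic visit, are what turn raw sensor streams into candidate digital biomarkers.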
By leveraging AI and ML in digital biomarker analysis, researchers and clinicians can potentially detect disease progression earlier, personalize treatment approaches, and gain deeper insights into the complex nature of neurodegenerative diseases, offering hope for improved patient outcomes in the future.
3.4. Explainable AI for Predicting Adverse Drug Events
AI is increasingly revolutionizing healthcare by offering advanced tools for evaluating disease activity and progression [33]. With its ability to process vast, complex data sets, AI/ML can identify patterns and predictors that may not be apparent through traditional methods. This technology provides more accurate predictions of disease trajectories, enabling earlier interventions and personalized treatment strategies. In chronic conditions like cancer, AI enhances understanding of how diseases evolve over time and how patients respond to treatments [34]. Furthermore, explainable AI ensures transparency, allowing clinicians to trust and interpret the factors influencing predictions, a crucial element in clinical decision‐making [1, 35].
In line with these advancements, explainable ML was applied to predict edema, a common adverse event in patients treated with tepotinib, a MET inhibitor for nonsmall‐cell lung cancer [36]. Using longitudinal data from 612 patients across five Phase I/II clinical trials, researchers employed two ML models—random forest and gradient boosting trees—to assess predictors of edema occurrence and severity. The study developed a framework that integrated 54 time‐invariant and time‐varying clinical covariates to account for the temporal nature of disease progression.
One of the key strengths of the study was the use of explainable AI tools, particularly Shapley Additive Explanations (SHAP) [37], which helped identify the most influential factors in predicting edema. Serum albumin levels were found to be the most significant predictor, with lower levels associated with higher severity of edema. Other important factors aligned with existing knowledge of the adverse event's behavior. Probability calibration via Isotonic Regression was used to ensure accurate estimation of frequencies of edema grades.
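The calibration step mentioned above, isotonic regression, can be sketched with the classic pool‐adjacent‐violators (PAV) algorithm: given outcomes ordered by the model's raw score, it fits the closest monotone non‐decreasing sequence, turning raw scores into frequencies that can be read as probabilities. The toy outcomes below are invented.

```python
# Pool-adjacent-violators (PAV) sketch of isotonic regression, the probability
# calibration technique named in the text. Inputs must already be ordered by
# the model's raw score; the output is a monotone non-decreasing fit.

def isotonic_fit(y: list[float]) -> list[float]:
    """Fit monotone values to y (already ordered by raw score)."""
    # Each block holds [sum, count]; merge adjacent blocks whose means
    # violate monotonicity, then expand blocks back to per-point means.
    blocks: list[list[float]] = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out: list[float] = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

# Binary outcomes ordered by increasing raw score: the raw sequence is not
# monotone, but the calibrated frequencies are.
calibrated = isotonic_fit([0, 0, 1, 0, 1, 1])
```

In a calibration workflow, these fitted values are then used as the corrected probabilities for new predictions with similar raw scores.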
This study demonstrates the power of AI in analyzing longitudinal clinical data, providing insights into both population‐level trends and individual patient risks. By integrating explainable AI, the researchers developed a framework that can predict the future progression of edema with high accuracy, achieving weighted F1 scores of up to 0.961 and enabling the interpretability of the results. This approach offers a valuable tool for clinicians, allowing them to make more informed treatment decisions while advancing precision medicine.
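The weighted F1 score reported above averages per‐class F1 scores with weights proportional to each class's support, which matters for ordinal adverse‐event grades with imbalanced frequencies. A small sketch of the metric, with invented grade labels:

```python
from collections import Counter

# Weighted F1: per-class F1 scores averaged with weights proportional to each
# class's support (count in the true labels). Grade labels below are invented.

def weighted_f1(y_true: list, y_pred: list) -> float:
    support = Counter(y_true)
    total = 0.0
    for c in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[c] * f1
    return total / len(y_true)

score = weighted_f1(["G0", "G0", "G1", "G2", "G1"],
                    ["G0", "G1", "G1", "G2", "G1"])
```

Because common grades dominate the average, a high weighted F1 can coexist with weak performance on rare severe grades, so per‐class scores are worth inspecting alongside it.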
3.5. Machine Learning for Covariates Modeling and Selection
Understanding the fundamental principles of ML that underlie its modeling power and broad success is important for applications in clinical pharmacology and pharmacometrics. In these disciplines, model validation and interpretation are ongoing challenges for applying ML methods. Two ideas in particular help bridge the gap between ML and classical modeling methods: the principle of regularization and novel methods for interpretable ML.
Regularization was applied in a Bayesian exposure‐response analysis by using a spike‐and‐slab prior for the covariate effect—a sparsity‐inducing prior interpreted through the perspective of covariate selection (Figure 2) [38]. The spike‐and‐slab prior uses a mixture distribution with two components: one for an informative covariate and another for a covariate considered to be noise. The regularization with the prior allows for a large number of covariates to be included in a single model while maintaining stability and reasonable parameter uncertainty, resulting in the majority of covariates being shrunk to have negligible estimates. Estimates of the probability of a covariate being influential were also derived.
FIGURE 2.

Spike‐and‐slab prior (solid line) construction using a mixture of normal distributions (dashed lines).
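The mixture construction in Figure 2 can be written out directly: the prior density on a covariate effect is a weighted sum of a narrow normal "spike" at zero (covariate is noise) and a wide normal "slab" (covariate is informative). The scales and mixture weight below are illustrative, not those of the cited analysis.

```python
import math

# Spike-and-slab prior density as a two-component normal mixture: a narrow
# "spike" at zero for noise covariates and a wide "slab" for informative ones.
# The mixture weight w is the prior probability of being informative. The
# standard deviations and w are illustrative values, not fitted quantities.

def normal_pdf(x: float, sd: float) -> float:
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def spike_and_slab_pdf(beta: float, w: float = 0.5,
                       spike_sd: float = 0.01, slab_sd: float = 1.0) -> float:
    return (1 - w) * normal_pdf(beta, spike_sd) + w * normal_pdf(beta, slab_sd)

# Near zero the spike dominates, shrinking small effects toward zero; far from
# zero the slab dominates, leaving large effects essentially unpenalized.
at_zero = spike_and_slab_pdf(0.0)
at_one = spike_and_slab_pdf(1.0)
```

This shape is what lets many covariates enter a single model safely: weak effects fall into the spike and are shrunk to negligible estimates, while strong effects escape into the slab, and the posterior mixture responsibility gives the probability that a covariate is influential.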
In an ML application, SHAP [37] was used to identify and interpret covariate relationships in an XGBoost model that predicted the transition between multiple sclerosis subtypes (Figure 3) [39]. Nonlinear relationships for the effect of baseline disease severity and the log‐odds of transitions were identified with SHAP. The magnitude and variability in effect across individuals for each covariate were derived, yielding a ranking of covariate importance in the model. The ranking was compared to clinical expectations and generally showed agreement. Using the ML model allowed for hypothesis generation for new covariates and flexible estimation of nonlinear relationships, greatly increasing the speed of model development compared to iterating through many parametric models.
FIGURE 3.

Estimated univariate relationship between EDSS (Expanded Disability Status Scale) score, a measure of disease severity, and the Shapley value, used to visualize the covariate effects in an XGBoost model.
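The Shapley values plotted in Figure 3 are the quantity SHAP computes efficiently for tree models; for a tiny model they can be computed exactly by enumerating all coalitions of features, with absent features fixed at a baseline. The three‐feature model, covariate values, and baseline below are invented for illustration only.

```python
from itertools import combinations
from math import factorial

# Exact Shapley values for a tiny invented 3-feature model, by enumerating all
# coalitions (what SHAP's TreeExplainer computes efficiently for tree models).
# Features absent from a coalition are fixed at a baseline value.

def model(x: dict) -> float:
    # Nonlinear: a hinge on "edss" plus an age-relapses interaction, echoing
    # the kinds of relationships found in the multiple sclerosis analysis.
    return 2.0 * max(0.0, x["edss"] - 3.0) + 0.5 * x["age"] * x["relapses"]

def shapley(x: dict, baseline: dict) -> dict:
    feats = list(x)
    n = len(feats)

    def value(coalition) -> float:
        z = {f: (x[f] if f in coalition else baseline[f]) for f in feats}
        return model(z)

    phi = {}
    for f in feats:
        others = [g for g in feats if g != f]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(S) | {f}) - value(set(S)))
        phi[f] = total
    return phi

x = {"edss": 5.0, "age": 0.4, "relapses": 2.0}       # age rescaled for illustration
baseline = {"edss": 2.0, "age": 0.0, "relapses": 0.0}
phi = shapley(x, baseline)
```

The efficiency property (contributions sum to the difference between the prediction and the baseline prediction) is what makes these attributions additive and hence easy to plot per covariate, as in Figure 3.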
These examples show how ML and classical modeling are not distinct domains but rather part of a larger space of modeling approaches and techniques. Specific applications can borrow and cross‐pollinate ideas to address limitations in methods, from difficulty including many covariates in a parametric exposure‐response model to understanding the important predictors from a black‐box ML model. Covariate modeling in an era of big data and extensive clinical questions presents an opportunity for ML principles to extend traditional models and apply methods to ensure ML models are trusted, validated, and accepted in a clinical pharmacology context.
4. AI in Translational Science: Bridging Research and Clinical Practice
4.1. Harnessing AI for Translational Medicine
A major challenge in translational medicine is bridging the gap between optimizing drug potency in preclinical models and achieving clinical efficacy and safety [40]. Ideally, drugs should be designed to maximize therapeutic effects while minimizing side effects in humans. However, direct human experimentation is unethical and impractical. AI‐powered models provide a promising alternative, offering the ability to predict drug effects in humans by simulating the entire drug lifecycle at multiple scales, including genome‐wide drug–target interactions, cellular responses, and clinical outcomes.
A key focus in drug design is predicting how molecules interact with cellular components, particularly proteins. Recently, LLMs [41], semi‐supervised learning [42], and meta‐learning [43] have been leveraged to improve the modeling of protein‐small molecule interactions on a genome scale. While ligand binding kinetics—how fast a drug binds to and unbinds from its target—are closely linked to drug efficacy and toxicity, predicting these kinetics remains difficult due to limited data availability. Integrating AI with molecular dynamics simulations could enhance the accuracy of these predictions [44].
The increasing availability of single‐cell omics data offers new opportunities to model how cells respond to drug treatments. AI techniques, such as multimodal learning, can integrate diverse omics data to simulate these responses across different genetic and environmental backgrounds [45]. Deep learning, in particular, can use large amounts of unlabeled data to develop foundational models for biomolecules and cells while also enabling the fusion of diverse data types into a unified framework and simulating biological processes across multiple levels [46]. These methods hold great promise for predicting drug‐induced cellular responses.
To understand human biology and disease, it is essential to map interactions between cells, tissues, and organs. Advances in spatial omics have made it possible to study these interactions in detail, revealing the interconnected nature of many diseases. Computational models, such as studies on microbiome interactions, highlight the potential of a systematic approach to understanding and treating complex diseases [45, 47].
Perturbation genomics techniques, such as Perturb‐seq and epigenome editing, are invaluable for modeling human physiology. While these data sets are primarily derived from disease models, they are essential for developing AI models. However, even advanced systems like organ‐on‐a‐chip cannot fully replicate human biology. Therefore, translating insights from disease models to human systems is crucial. AI techniques such as contrastive learning, transfer learning, and generative models show promise in predicting clinical drug responses from disease data [48].
Despite progress, AI models in translational science still struggle with out‐of‐distribution problems, where new chemicals or disease states differ from the training data. Improving model generalizability, uncertainty quantification, and interpretability will be crucial for building trustworthy AI systems. Integrating advanced AI methods, systems biology, and biophysics offers tremendous potential for overcoming these challenges and realizing AI's full promise in translational science.
4.2. Integration of AI/ML Models in Drug Discovery
The goal of drug discovery programs is to identify compounds with a high probability of engaging the pharmacological target at a reasonable dose to trigger the desired pharmacodynamic response toward achieving the intended efficacy. The PK profile combined with the appropriate potency of the compounds dictates the extent and duration of target engagement. This relies on identifying compounds with the right balance of key properties such as solubility, permeability, and stability.
Clearance is one of the key PK parameters, and various translational approaches are utilized to estimate human clearance from preclinical studies. Mechanistic PK models, such as the well‐stirred and parallel tube models, have been routinely used for translating in vitro clearance [49]. Recently, ML models have also been reported to predict human clearance using various approaches. Here we briefly summarize three approaches as illustrated in Figure 4.
Approach A: Utilizes curated human clearance from literature. ML models are trained based on features derived from chemical structures to enable direct prediction of human clearance based on input chemical structures. This approach allows for virtual screening of a large set of designs to prioritize compounds for synthesis.
Approach B: A hybrid approach involving ML and mechanistic models. Multiple ML models are built using in vitro data for human hepatocyte clearance and various binding measurements required for mechanistic prediction of human clearance. The ML models are trained using features derived from chemical structures, enabling estimation of predicted human clearance based on chemical structures.
Approach C: Similar to Approach A but uses measured in vitro data for six properties (hepatocyte clearance, protein binding in plasma and microsomes, blood‐to‐plasma partitioning, permeability, and logD) as features for the ML model. Measured in vitro data are required to predict human clearance.
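As a minimal sketch of Approach A, the toy example below predicts clearance directly from structure‐derived features. The descriptor vectors and clearance labels are synthetic stand‐ins, and a simple k‐nearest‐neighbour regressor is used for brevity; the published models [50, 51] use real chemical fingerprints with random forest or XGBoost.

```python
def knn_predict(X_train, y_train, x_new, k=2):
    """Predict by averaging the targets of the k nearest training compounds
    (squared Euclidean distance in descriptor space)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x_row, x_new)), y)
        for x_row, y in zip(X_train, y_train)
    )
    return sum(y for _, y in dists[:k]) / k

# Hypothetical descriptors (e.g., logP, MW/100, H-bond donors) and
# clearance labels (mL/min/kg) for four invented "training" compounds.
X = [(2.1, 3.5, 1.0), (0.5, 2.0, 3.0), (3.0, 4.1, 0.0), (1.0, 2.5, 2.0)]
y = [12.0, 4.0, 20.0, 6.0]

# Virtual screening of a new design: predict before synthesis.
pred = knn_predict(X, y, (1.1, 2.4, 2.1), k=2)
```

Because only a chemical structure is needed as input, this style of model supports the virtual screening of large design sets described for Approach A.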
FIGURE 4.

Illustration of three approaches for preclinical to clinical translation of human clearance using machine learning and mechanistic pharmacokinetic models.
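The mechanistic step shared by these workflows can be made concrete with the well‐stirred liver model mentioned above [49], which scales intrinsic clearance to hepatic clearance. In Approach B, the intrinsic clearance and binding inputs would come from structure‐based ML models rather than measurements; the parameter values below are illustrative assumptions, not data from the cited studies.

```python
def well_stirred_clearance(q_h, fu_b, clint_u):
    """Well-stirred liver model: CL_h = Q_h * fu_b * CLint / (Q_h + fu_b * CLint).
    q_h: hepatic blood flow (L/h); fu_b: fraction unbound in blood;
    clint_u: unbound intrinsic clearance (L/h)."""
    return q_h * fu_b * clint_u / (q_h + fu_b * clint_u)

# Hypothetical ML-predicted inputs (invented for demonstration).
predicted_clint = 500.0   # intrinsic clearance, L/h
predicted_fu = 0.1        # fraction unbound in blood
cl_h = well_stirred_clearance(q_h=90.0, fu_b=predicted_fu, clint_u=predicted_clint)
```

Note that the model is bounded by hepatic blood flow: as unbound intrinsic clearance grows, predicted hepatic clearance approaches but never exceeds `q_h`.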
Comparative studies have shown varying predictive performance for these approaches, as summarized in Table 1. Miljković et al. [50] used Approach A to directly predict human clearance from chemical structure; for a cross‐validation test set, predictive performance was rather poor, with R² of 0.1–0.2. Parrott et al. [52] evaluated Approach B using a relatively challenging prospective test set of 12 compounds with high lipophilicity, high plasma protein binding, and poor aqueous solubility; the absolute average fold error (AAFE) was 3.6, and only 50% of compounds were predicted within threefold. Keefer et al. [51] reported all three approaches but trained on and predicted in vivo intrinsic clearance (Clint) rather than total clearance. In addition to cross‐validation, for two of these approaches they also provided evaluation on a prospective test set, which is a rigorous assessment that simulates 'real‐world' application of ML models. In their analysis, comparing Approaches B and C on a prospective test set of 16 compounds, all three metrics (AAFE, % within threefold, and R²) were superior for Approach C. Because the three publications used different training and test sets, their performance should not be compared against each other directly; nonetheless, these ML approaches represent valuable tools for predicting human clearance and guiding drug discovery efforts.
TABLE 1.
Summary of models exemplifying the three approaches used for preclinical to clinical translation of human clearance using ML and mechanistic PK models.
| Approach | ML method | Endpoint | Cross‐validation AAFE | Cross‐validation R² | Cross‐validation % within 3‐fold | Prospective AAFE | Prospective R² | Prospective % within 3‐fold | Reference |
|---|---|---|---|---|---|---|---|---|---|
| A | Random Forest | IV CL | NR | 0.1–0.2 | NR | NR | NR | NR | Miljković et al. [50] |
| A | XGBoost | In vivo Clint | 3.1 | 0.46 | 61 | NR | NR | NR | Keefer et al. [51] |
| B | ADMET Predictor a | IV CL | NR | NR | NR | 3.6 | NR | 50 | Parrott et al. [52] |
| B | XGBoost | In vivo Clint | 3.1 | 0.51 | 57 | 2.9 | 0.32 | 56 | Keefer et al. [51] |
| C | XGBoost | In vivo Clint | 2.5 | 0.67 | 65 | 1.9 | 0.84 | 88 | Keefer et al. [51] |
Abbreviations: AAFE, absolute average fold error; CL, clearance; Clint, intrinsic clearance; NR, not reported.
a ADMET Predictor is commercial software from Simulations Plus that uses proprietary algorithms for its ML models.
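The evaluation metrics reported in Table 1 can be computed directly from observed and predicted values, as sketched below. The observed/predicted clearance values are invented for demonstration; note also that publications differ on whether R² is computed on the linear or log scale, so each source should be checked.

```python
import math

def aafe(obs, pred):
    """Absolute average fold error: 10 ** mean(|log10(pred/obs)|)."""
    return 10 ** (sum(abs(math.log10(p / o)) for o, p in zip(obs, pred)) / len(obs))

def pct_within_fold(obs, pred, fold=3.0):
    """Percentage of predictions within the given fold of the observation."""
    hits = sum(1 for o, p in zip(obs, pred) if max(p / o, o / p) <= fold)
    return 100.0 * hits / len(obs)

def r_squared(obs, pred):
    """Coefficient of determination, 1 - SS_res / SS_tot (linear scale here)."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical observed vs. predicted clearance values (mL/min/kg).
obs = [1.0, 10.0, 100.0]
pred = [2.0, 10.0, 50.0]
```

A fold‐based metric such as AAFE is often preferred over R² alone for clearance, because clearance spans orders of magnitude and fold error weights low‐ and high‐clearance compounds comparably.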
4.3. AI and Personalized Immune Digital Twins
The human immune system is implicated in 80+ autoimmune diseases, 400+ primary immune disorders, thousands of immune‐inducing allergens, and millions of annual infections, and it participates in a much broader spectrum of systemic biological responses, including chronic diseases, wound healing, and trauma responses. More than 14 billion laboratory tests are conducted annually in the United States, and 70% of medical decisions depend on these results [53]. These data currently exist in discrete snapshots at various scales of biological organization. For example, blood counts and chemistry panels, pathogen cultures and identification, iron, lipid, and liver panels, immunoglobulins, nutrients, and T‐ and B‐cell assays are all displayed as individual data points without regard to contextual significance, biological scale, or anticipated trajectory [54]. Because these data are not efficiently integrated, much of the historical record is ignored in favor of the most recent data points. These historical data, however, are critical to understanding disease progression because prior events continue to adjust the balance (homeostasis) of a person's immune system. What is needed is a digital twin—a simulatable computer replica—of the human immune system that is capable of integrating patient data to personalize analyses and predict pathogenic trajectories on an individualized basis. If a patient's data can be used to understand disease progression, they can also guide the understanding of disease reversal and disease prediction—saving money, time, and lives.
The immune system is one of the most complex human systems, as it is governed by nonlinear networks across many scales of biological organization (e.g., signal transduction, metabolism, intercellular communication). To fully understand the etiology and pathology of immune‐related diseases, we need immune digital twins capable of integrating multiple mechanistic mathematical approaches to fully represent the biological processes and interactions within the immune system, facilitating the exploration of causal relationships and system dynamics. The mechanistic aspects of immune digital twins can also simulate and explain the mechanisms of intervention effects (e.g., drug treatments), providing valuable predictions for hypothesis testing and clinical decision‐making. Integrating AI into mechanism‐based immune digital twins enhances these models with advanced tools for decreasing compute requirements through surrogate ML models, data integration, and predictive analytics that forecast the trajectory of the baseline state of a person's immune system or its ability to respond to a potential threat (e.g., infection) [55, 56]. Finally, immune digital twins also need advanced visualizations that represent the intricacies of the immune system while remaining accessible to multidisciplinary audiences: researchers collaborating on model development and their use in drug discovery and development, clinicians personalizing immune digital twins to their patients to augment precision medicine, and patients using them to learn more about their immune system and its health.
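The surrogate‐model idea mentioned above can be sketched in miniature: replace an expensive mechanistic simulation with a cheap learned (here, interpolated) approximation. The "mechanistic" model below is a toy logistic pathogen‐load curve, and all parameter values are illustrative assumptions, not taken from the cited immune digital twin work.

```python
import math

def mechanistic_load(clearance_rate, t=5.0, growth=1.2, carrying=1.0):
    """Toy 'mechanistic' model: logistic pathogen growth at net rate
    (growth - clearance_rate), standing in for an expensive multiscale
    simulation. All parameters are invented for demonstration."""
    r = growth - clearance_rate
    x0 = 0.01  # initial pathogen load
    if abs(r) < 1e-12:
        return x0
    e = math.exp(r * t)
    return carrying * x0 * e / (carrying + x0 * (e - 1.0))

# Surrogate: precompute the expensive model on a coarse grid once,
# then answer queries by cheap linear interpolation.
grid = [i * 0.1 for i in range(31)]          # clearance rates 0.0 .. 3.0
table = [mechanistic_load(c) for c in grid]

def surrogate_load(clearance_rate):
    """Cheap stand-in for the mechanistic model via linear interpolation."""
    i = min(int(clearance_rate / 0.1), len(grid) - 2)
    frac = (clearance_rate - grid[i]) / 0.1
    return table[i] + frac * (table[i + 1] - table[i])
```

In a real digital twin the surrogate would be a trained ML model over many parameters, but the compute trade‐off is the same: pay for the mechanistic simulation once, then query the approximation many times.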
4.4. Decoding Neuronal Responses With Machine Learning
Parallel to developments in immune modeling, advancements in ML have also enabled the creation of highly accurate predictive models for neural population activity, particularly within the visual cortex. These models act as “digital twins” of the visual cortex, bridging the gap between biological and computational understandings of visual processing. They allow neuroscientists to test hypotheses in simulated and real‐world settings, paving the way for AI‐driven visual systems.
To predict neural responses, researchers use two main approaches: task‐driven methods, which apply pretrained features from other computer vision tasks, and data‐driven methods, which train an entire network end‐to‐end on neural activity. Using the SENSORIUM data set—recordings of neuronal responses from mice exposed to natural images—researchers developed a state‐of‐the‐art model that ranked first in the SENSORIUM 2022 Challenge [57]. This model incorporated object positioning data and utilized an ensemble strategy, yielding a 15% improvement in prediction accuracy. Remarkably, it mirrored biological patterns seen in the primary visual cortex, where responses across different mice exhibited similar characteristics to the same stimuli. Such advances enhance our ability to predict and potentially control neural behavior, providing a noninvasive tool for exploring the organization of the visual cortex and informing therapeutic applications.
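The ensemble strategy can be illustrated with a deliberately simple sketch: averaging several imperfect predictors cancels part of their individual errors. The response values and model biases below are invented, and the real SENSORIUM models are deep networks rather than fixed‐offset predictors.

```python
# Hypothetical "true" neural responses (e.g., normalized firing rates).
true_response = [0.2, 0.8, 0.5, 0.1, 0.9]

# Three toy models, each with a different systematic offset (assumed values).
model_preds = [
    [r + 0.15 for r in true_response],
    [r - 0.10 for r in true_response],
    [r - 0.05 for r in true_response],
]

def mean_abs_error(pred, truth):
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)

# Ensemble prediction: element-wise average across the member models.
ensemble = [sum(p[i] for p in model_preds) / len(model_preds)
            for i in range(len(true_response))]

errors = [mean_abs_error(p, true_response) for p in model_preds]
ensemble_error = mean_abs_error(ensemble, true_response)
```

Here the offsets happen to cancel exactly; in practice errors are only partially uncorrelated, so ensembling reduces rather than eliminates them, consistent with the reported incremental accuracy gain.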
4.5. Large Language Models Applications in Biomedical Research
4.5.1. AI in the Biomedical Knowledge Economy
Advancements in systems biology and medicine increasingly depend on the ability to disentangle complex causal relationships within high‐dimensional biological data. Traditional approaches often struggle with the curse of dimensionality, making it challenging to infer causation rather than mere correlation. This complexity is exacerbated by the intricate nature of modern experiments, which involve numerous variables from clinical parameters to specific cell culture conditions.
To address these challenges, two open‐source frameworks have been developed: BioCypher and BioChatter. These tools aim to democratize knowledge representation and streamline the integration of AI into biomedical research, thereby enhancing productivity and facilitating deeper insights into complex biological phenomena.
BioCypher is designed to organize and ground biomedical knowledge efficiently [58]. It reduces redundancies in data curation by providing reusable components, thus enhancing productivity in knowledge management tasks. The framework is modular, comprising three key components:
Input adapters: Ingest data from diverse resources, standardizing various formats and technologies into a unified structure.
Ontologies: Leverage domain‐specific ontologies to ground knowledge, ensuring harmonization and interoperability across data sets.
Output options: Support multiple output formats, from in‐memory data structures to database management systems, facilitating analysis and sharing.
By harmonizing disparate data sets into cohesive knowledge graphs, BioCypher promotes reproducibility through containerization and end‐to‐end testing.
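The adapter‐to‐unified‐graph idea can be sketched in a few lines. This is NOT the BioCypher API—the function names, input formats, and data below are invented for illustration—but it shows the pattern of normalizing heterogeneous sources into one set of typed relationships.

```python
def csv_adapter(rows):
    """Input adapter for a hypothetical 'gene,disease' CSV export."""
    for line in rows:
        gene, disease = line.split(",")
        yield (gene.strip(), "associated_with", disease.strip())

def dict_adapter(records):
    """Input adapter for a hypothetical JSON-like drug-target resource."""
    for rec in records:
        yield (rec["drug"], "targets", rec["target"])

def build_graph(*triple_sources):
    """Merge normalized (subject, predicate, object) triples into one graph,
    deduplicating identical statements."""
    graph = set()
    for source in triple_sources:
        graph.update(source)
    return graph

graph = build_graph(
    csv_adapter(["BRCA1, breast cancer", "TP53, ovarian cancer"]),
    dict_adapter([{"drug": "olaparib", "target": "PARP1"}]),
)
```

In BioCypher itself, an ontology layer would additionally ground each node and edge type, and the output could be written to a graph database rather than held in memory.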
BioChatter introduces a conversational interface powered by LLMs [59]. This framework allows researchers to interact with their knowledge graphs using natural language queries. Key features include:
Retrieval‐augmented generation: Mitigates the tendency of LLMs to produce hallucinations by ensuring responses are grounded in factual data, drawing on data from BioCypher knowledge graphs and embeddings of unstructured information.
API parameterization: Uses LLMs to parameterize external APIs, allowing users to utilize complex tools without extensive technical knowledge.
Multi‐agent systems: Supports setups where chains of models control each other's output, potentially enhancing the scope, robustness, and accuracy of responses.
BioChatter offers multiple interfaces, including a Python API and developer toolkits for both prototyping and user‐facing applications, making it accessible to a broad range of users.
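The retrieval‐augmented generation pattern listed above can be sketched minimally: retrieve the most relevant snippets and ground the prompt in them before the LLM answers. Real systems, BioChatter among them, use dense embeddings and knowledge graphs; the bag‐of‐words similarity, documents, and query here are invented examples.

```python
import math
from collections import Counter

def cosine(a, b):
    """Bag-of-words cosine similarity between two strings (a crude stand-in
    for embedding similarity)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    return sorted(documents, key=lambda d: cosine(query, d), reverse=True)[:k]

docs = [
    "BRCA1 mutations increase sensitivity to PARP inhibitors.",
    "Metformin is a first-line therapy for type 2 diabetes.",
]
query = "Which mutations affect PARP inhibitor sensitivity?"
context = retrieve(query, docs, k=1)

# Ground the generation step in retrieved facts to reduce hallucination.
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: " + query
```

The key design point is that the model is constrained to answer from retrieved, verifiable text rather than from its parametric memory alone.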
An application example was created within the DECIDER project, focusing on ovarian cancer (https://www.deciderproject.eu). BioCypher and BioChatter are employed to construct a knowledge graph integrating whole‐genome sequencing, transcriptomics, methylation profiles, and clinical data. These tools assist cancer geneticists in molecular tumor boards by providing insights into patient‐specific mutations and their therapeutic implications (https://biochatter.org/vignettes/custom-decider-use-case/). By integrating external resources, such as OncoKB (https://www.oncokb.org), with specific patient data, BioChatter facilitates conversational queries that retrieve relevant clinical information, enhancing decision‐making processes in personalized medicine. Connecting to a literature database of relevant publications allows background checks of therapeutic hypotheses (http://decider-next.biochatter.org/).
These frameworks represent significant strides toward integrating AI into biomedical research. By focusing on open‐source development and modular design, they aim to make advanced knowledge management and AI tools accessible to the broader research community.
5. Ethical and Societal Implications of AI‐Driven Research
Although the AI preconference workshop did not specifically address ethical considerations related to AI use in clinical pharmacology and translational sciences, audience questions and panel discussions frequently touched upon the topic. This section highlights those discussions and covers key ethical issues related to this topic.
Ensuring patient safety and well‐being is the utmost imperative in drug discovery and development, and the inclusion of AI and LLMs in clinical and translational research workflows must be held to the same standard. The importance of regulating the safe use of AI in healthcare is underscored by President Biden's executive order of October 30, 2023, mandating the establishment of a system to track clinical errors related to AI implementation in the healthcare industry [60], including instances that result in harm, bias, or discrimination. The order also specifies the development of improvement and best‐practice guidelines based on the collected data and the dissemination of these guidelines to healthcare providers and relevant stakeholders. Similar levels of transparency and accountability are expected in the use of AI in clinical pharmacology and translational sciences, as findings in these disciplines ultimately influence patient care and safety.
In Europe, the proposed AI Act [61] sets out a clear and detailed framework for regulating AI systems, with particular attention to high‐risk AI applications in healthcare and the life sciences. Its aim is to ensure that these systems meet high standards of safety, transparency, and accountability. It calls for robust risk management processes, ongoing monitoring, and measures to minimize biases that could produce negative consequences. For example, AI instruments used in clinical trials or pharmacovigilance will be subject to strict scrutiny and control to ensure that no harm is caused. The law underlines the increasing demand for international harmonization of AI regulations and emphasizes the need to build trust in AI technologies worldwide. As AI continues to play an increasing role in clinical pharmacology and translational sciences, adherence to these guidelines will be paramount to maintaining both ethical integrity and operational consistency.
Tracking the impacts of AI on decisions and outcomes necessitates a robust framework for traceability. AI systems must record metadata that directly documents the support the system provided, and a comprehensive set of metadata endpoints should be established before AI implementation. Robust traceability not only facilitates regulatory review and potential audit assessments but can also aid in addressing biases and other undesired behaviors within AI models.
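The traceability principle can be sketched as a thin wrapper that records every prediction with the metadata an auditor would need. This is an illustrative pattern, not a regulatory template; the dose model, its inputs, and the version string are invented for demonstration.

```python
import datetime
import json

AUDIT_LOG = []  # in practice: an append-only, access-controlled store

def traced(model_fn, model_version):
    """Wrap a model so every call appends an audit record with the inputs,
    output, model version, and a UTC timestamp."""
    def wrapper(**inputs):
        output = model_fn(**inputs)
        AUDIT_LOG.append(json.dumps({
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }))
        return output
    return wrapper

# Hypothetical dose-recommendation rule, used only for demonstration.
def dose_model(weight_kg, egfr):
    return 5.0 if egfr < 60 else 10.0

dose_model = traced(dose_model, model_version="demo-0.1")
recommended = dose_model(weight_kg=72.0, egfr=45.0)
```

Defining such metadata endpoints before deployment is what makes later regulatory review, audit, and bias investigation tractable.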
Data privacy and security remain significant concerns for both patients and sponsors. Privacy and security systems must adhere to regulatory standards, which becomes more complicated as AI models play a larger role in drug development; Bak et al. discuss in detail the balance necessary to meet guidelines from various regulatory agencies [62]. AI has also equipped modern data hackers with more sophisticated tools to circumvent many security measures. At the same time, this is an opportunity to leverage AI to improve data privacy and security, as highlighted by Khalid et al. [63]. Tools such as the T1dCteGui discussed above highlight the potential of AI in enhancing data privacy.
Intellectual property and ownership are critical drivers of innovation in drug discovery. The protections and benefits associated with intellectual property ownership justify the substantial financial investments and the associated risks required for modern pharmaceutical development. However, current laws are likely not ready to address AI‐supported drug discovery. Tsang et al. [64] provide an extensive legal discussion on AI‐driven drug discovery. This is highly relevant to clinical pharmacology and translational sciences, as findings in these areas are crucial for competitive advantage and patent disputes.
Bias and fairness are paramount issues in drug development. While AI can enhance human cognitive capabilities, it can also perpetuate biases. Sources of bias in AI models include data bias, algorithm bias, explicit bias, implicit bias, and selection bias. AI bias in healthcare is thoroughly reviewed by Chinta et al. [65]. Although bias in clinical pharmacology and translational sciences may not directly impact patient care, it can introduce bias in development strategies that ultimately affect how the medicine is used in particular populations or affect the eligibility of certain populations for the treatment.
AI is poised to cause disruptions in the workforce, including retraining, organizational changes, and shifts in the skill sets needed. Most projections indicate an overall increase in personnel, with a shift toward positions involving AI‐integrated workflows. The implementation of AI strategies to complement current workforce skill sets and reskilling plans can mitigate the negative impacts on the workforce. Also, it is worth noting that although AI offers the potential for wider access to complex analyses by removing technical barriers, it also introduces the risk that unqualified users might conduct analyses to support critical decisions. Proper training, appropriate access, and safeguards will ensure that critical analyses are conducted without compromising quality standards.
6. Conclusion and Future Directions
6.1. Summary of Key Insights
The integration of AI and ML is profoundly transforming clinical pharmacology and translational research. From drug development and regulatory processes to dosing optimization, biomarker development, and patient monitoring, AI/ML technologies are enabling significant advancements across the pharmaceutical industry. The ASCPT 2024 AI preconference workshop highlighted numerous applications, including AI‐driven regulatory submissions, generative AI in medicine, AI in drug discovery, digital endpoints, and the impact on patient experiences. Both industry and regulatory perspectives emphasize the increasing adoption of AI/ML and the necessity for innovative approaches and comprehensive governance.
Advanced AI/ML techniques are enhancing pharmacology research by enabling more accurate predictive models, such as those decoding neuronal responses in the visual cortex [57]. In clinical trial design and dose optimization, AI‐based methods are improving dosing strategies and enriching clinical trials, as demonstrated in type 1 diabetes studies [25]. AI is also playing a pivotal role in biomarker development and patient monitoring. Digital biomarkers, explainable AI for predicting efficacy and safety endpoints, and the use of ML methods for covariate selection and modeling all contribute to more personalized and effective patient care. In translational science, AI is bridging research and clinical practice by harnessing AI for translational medicine, integrating AI/ML models in drug discovery, and developing personalized immune digital twins.
LLMs are being applied in biomedical research, contributing to the biomedical knowledge economy by democratizing knowledge representation and integrating AI into research workflows. Ethical and societal implications of AI‐driven research were also discussed, emphasizing patient safety, data privacy, bias and fairness, intellectual property, workforce impact, and the need for transparency and accountability.
6.2. Future Trends
As AI and ML technologies continue to evolve, their applications in clinical pharmacology and translational science are expected to expand further. Future trends include the development of AI‐based platforms like ERAMET for regulatory decision‐making, the integration of advanced AI methods with systems biology and biophysics to overcome challenges in translational science, and the use of AI to enhance data privacy and security. The advancement of explainable AI will be crucial for ensuring transparency and trust in AI‐driven healthcare solutions.
Moreover, the use of AI in generating synthetic patient data and personalizing treatments will likely grow, contributing to more efficient clinical trials and personalized medicine. The integration of AI into routine clinical practice will necessitate addressing ethical considerations, ensuring patient safety, and fostering collaboration between AI developers, clinicians, researchers, and regulators.
6.3. Call to Action
To fully realize the potential of AI in clinical pharmacology and translational science, stakeholders must engage in continuous innovation, collaborative efforts, and comprehensive governance. Researchers and industry professionals should work together to develop and implement AI technologies that enhance drug development and patient care while adhering to ethical standards. Regulatory agencies should continue to adapt and provide guidance on the use of AI/ML in drug development and regulatory processes. Collaborations across academia, industry, and regulators are needed to develop evaluation frameworks for emerging technologies such as generative AI to ensure that risks are managed appropriately.
In addition, there is a need for ongoing education and training to prepare the workforce for the integration of AI into their workflows. Moreover, addressing ethical and societal implications, such as data privacy, bias, and intellectual property rights, is essential to ensure that AI‐driven advancements benefit all stakeholders and lead to improved health outcomes. By embracing these challenges and opportunities, the clinical pharmacology and translational science communities can leverage AI to drive innovation, enhance patient care, and shape the future of healthcare and drug development.
Conflicts of Interest
All authors were employees and additionally may be shareholders of their respective companies at the time of writing.
Supporting information
Data S1
Funding: The authors received no specific funding for this work.
Disclaimer: The contents of this article reflect the views of the authors and should not be construed to represent the FDA's views or policies. No official support or endorsement by the FDA is intended or should be inferred. As Associate Editors for Clinical and Translational Science, Qi Liu and Mohamed Shahin were not involved in the review or decision process for this paper.
References
- 1. Terranova N., Renard D., Shahin M. H., et al., “Artificial Intelligence for Quantitative Modeling in Drug Discovery and Development: An Innovation and Quality Consortium Perspective on Use Cases and Best Practices,” Clinical Pharmacology and Therapeutics 115 (2024): 658–672. [DOI] [PubMed] [Google Scholar]
- 2. Shahin M. H., Barth A., Podichetty J. T., et al., “Artificial Intelligence: From Buzzword to Useful Tool in Clinical Pharmacology,” Clinical Pharmacology and Therapeutics 115 (2024): 698–709. [DOI] [PubMed] [Google Scholar]
- 3.“ASCPT 2024 AI Preconference,” (2024), accepted August 11, 2024, https://ascpt2024.eventscribe.net/index.asp?presTarget=2536441.
- 4. Singhal K., Azizi S., Tu T., et al., “Large Language Models Encode Clinical Knowledge,” Nature 620 (2023): 172–180. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5. Li A., Shrestha R., Jegatheeswaran T., Chan H. O., Hong C., and Joshi R., “Mitigating Hallucinations in Large Language Models: A Comparative Study of RAG‐Enhanced vs. Human‐Generated Medical Templates,” medRxiv (2024) 2009.2027.24314506.
- 6. Gallego V., Naveiro R., Roca C., Ríos Insua D., and Campillo N. E., “AI in Drug Development: A Multidisciplinary Perspective,” Molecular Diversity 25 (2021): 1461–1479. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Di J., Tuttle P. G., Adamowicz L., et al., “Monitoring Activity and Gait in Children (MAGIC) Using Digital Health Technologies,” Pediatric Research 96 (2024): 750–758. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8. Jones C. H., Madhavan S., Natarajan K., Corbo M., True J. M., and Dolsten M., “Rewriting the Textbook for Pharma: How to Adapt and Thrive in a Digital, Personalized and Collaborative World,” Drug Discovery Today 29 (2024): 104112. [DOI] [PubMed] [Google Scholar]
- 9. Liu Q., Huang R., Hsieh J., et al., “Landscape Analysis of the Application of Artificial Intelligence and Machine Learning in Regulatory Submissions for Drug Development From 2016 to 2021,” Clinical Pharmacology and Therapeutics 113 (2023): 771–774. [DOI] [PubMed] [Google Scholar]
- 10. Liu Q., Nair R., Huang R., et al., “Using Machine Learning to Determine a Suitable Patient Population for Anakinra for the Treatment of COVID‐19 Under the Emergency Use Authorization,” Clinical Pharmacology and Therapeutics 115 (2024): 890–895. [DOI] [PubMed] [Google Scholar]
- 11. U.S. Food and Drug Administration , “Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products: Discussion Paper,” (2023). 06/08/2023.
- 12. Musuamba F. T., Skottheim Rusten I., Lesage R., et al., “Scientific and Regulatory Evaluation of Mechanistic In Silico Drug and Disease Models in Drug Development: Building Model Credibility,” CPT: Pharmacometrics & Systems Pharmacology 10 (2021): 804–825. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13. Musuamba F. T., Cheung S. Y. A., Colin P., et al., “Moving Toward a Question‐Centric Approach for Regulatory Decision Making in the Context of Drug Assessment,” Clinical Pharmacology and Therapeutics 114 (2023): 41–50. [DOI] [PubMed] [Google Scholar]
- 14. Liu G., Lu D., and Lu J., “Pharm‐AutoML: An Open‐Source, End‐to‐End Automated Machine Learning Package for Clinical Outcome Prediction,” CPT: Pharmacometrics & Systems Pharmacology 10 (2021): 478–488. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. Liu G., Lu J., Lim H. S., Jin J. Y., and Lu D., “Applying Interpretable Machine Learning Workflow to Evaluate Exposure‐Response Relationships for Large‐Molecule Oncology Drugs,” CPT: Pharmacometrics & Systems Pharmacology 11 (2022): 1614–1627. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16. Harun R., Yang E., Kassir N., Zhang W., and Lu J., “Machine Learning for Exposure‐Response Analysis: Methodological Considerations and Confirmation of Their Importance via Computational Experimentations,” Pharmaceutics 15 (2023): e1381. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17. Harun R., Sane R., Yoshida K., Sutaria D. S., Jin J. Y., and Lu J., “Risk Factors of Hyperglycemia After Treatment With the AKT Inhibitor Ipatasertib in the Prostate Cancer Setting: A Machine Learning‐Based Investigation,” JCO Clinical Cancer Informatics 7 (2023): e2200168, 10.1200/CCI.22.00168. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. Harun R., Lu J., Kassir N., and Zhang W., “Machine Learning‐Based Quantification of Patient Factors Impacting Remission in Patients With Ulcerative Colitis: Insights From Etrolizumab Phase III Clinical Trials,” Clinical Pharmacology and Therapeutics 115 (2024): 815–824. [DOI] [PubMed] [Google Scholar]
- 19. Lu J., Deng K., Zhang X., Liu G., and Guan Y., “Neural‐ODE for Pharmacokinetics Modeling and Its Advantage to Alternative Machine Learning Models in Predicting New Dosing Regimens,” iScience 24 (2021): 102804. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20. Laurie M. and Lu J., “Explainable Deep Learning for Tumor Dynamic Modeling and Overall Survival Prediction Using Neural‐ODE,” NPJ Systems Biology and Applications 9 (2023): 58. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21. Lu J., Bender B., Jin J. Y., and Guan Y., “Deep Learning Prediction of Patient Response Time Course From Early Data via Neural‐Pharmacokinetic/Pharmacodynamic Modelling,” Nature Machine Intelligence 3 (2021): 696–704. [Google Scholar]
- 22. Naik K., Goyal R. K., Foschini L., et al., “Current Status and Future Directions: The Application of Artificial Intelligence/Machine Learning for Precision Medicine,” Clinical Pharmacology and Therapeutics 115 (2024): 673–686. [DOI] [PubMed] [Google Scholar]
- 23. Arora A., “Generative Adversarial Networks and Synthetic Patient Data: Current Challenges and Future Perspectives,” Future Healthcare Journal 9 (2022): 190–193. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Xu L., Skoularidou M., Cuesta‐Infante A., and Veeramachaneni K., “Modeling Tabular Data Using Conditional GAN,” Neural Information Processing Systems 32 (2019): 16404–16415. [Google Scholar]
- 25. Pauley M., Henscheid N., David S. E., et al., “T1dCteGui: A User‐Friendly Clinical Trial Enrichment Tool to Optimize T1D Prevention Studies by Leveraging AI/ML Based Synthetic Patient Population,” Clinical Pharmacology and Therapeutics 114 (2023): 704–711. [DOI] [PubMed] [Google Scholar]
- 26. Mahon J. L., Sosenko J. M., Rafkin‐Mervis L., et al., “The TrialNet Natural History Study of the Development of Type 1 Diabetes: Objectives, Design, and Initial Results,” Pediatric Diabetes 10 (2009): 97–104. [DOI] [PubMed] [Google Scholar]
- 27. Hagopian W. A., Lernmark A., Rewers M. J., et al., “TEDDY – The Environmental Determinants of Diabetes in the Young: An Observational Clinical Trial,” Annals of the New York Academy of Sciences 1079 (2006): 320–326. [DOI] [PubMed] [Google Scholar]
- 28. Vasudevan S., Saha A., Tarver M. E., and Patel B., “Digital Biomarkers: Convergence of Digital Health Technologies and Biomarkers,” NPJ Digital Medicine 5, no. 1 (2022): 36. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29. Di J., Demanuele C., Kettermann A., et al., “Considerations to Address Missing Data When Deriving Clinical Trial Endpoints From Digital Health Technologies,” Contemporary Clinical Trials 113 (2022): 106661. [DOI] [PubMed] [Google Scholar]
- 30. Adams J. L., Kangarloo T., Gong Y., et al., “Using a Smartwatch and Smartphone to Assess Early Parkinson's Disease in the WATCH‐PD Study Over 12 Months,” NPJ Parkinsons Disease 10 (2024): 112. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31. Straczkiewicz M., Karas M., Johnson S. A., et al., “Upper Limb Movements as Digital Biomarkers in People With ALS,” eBioMedicine 101 (2024): 105036, 10.1016/j.ebiom.2024.105036. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32. Czech M. D., Badley D., Yang L., et al., “Improved Measurement of Disease Progression in People Living With Early Parkinson's Disease Using Digital Health Technologies,” Communications Medicine 4, no. 1 (2024): 49. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33. Terranova N. and Venkatakrishnan K., “Machine Learning in Modeling Disease Trajectory and Treatment Outcomes: An Emerging Enabler for Model‐Informed Precision Medicine,” Clinical Pharmacology and Therapeutics 115 (2024): 720–726. [DOI] [PubMed] [Google Scholar]
- 34. Terranova N., French J., Dai H., et al., “Pharmacometric Modeling and Machine Learning Analyses of Prognostic and Predictive Factors in the JAVELIN Gastric 100 Phase III Trial of Avelumab,” CPT: Pharmacometrics & Systems Pharmacology 11, no. 3 (2022): 333–347, 10.1002/psp4.12754. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35. Basu S., Munafo A., Ben‐Amor A. F., Roy S., Girard P., and Terranova N., “Predicting Disease Activity in Patients With Multiple Sclerosis: An Explainable Machine‐Learning Approach in the Mavenclad Trials,” CPT: Pharmacometrics & Systems Pharmacology 11 (2022): 843–853. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Amato F., Strotmann R., Castello R., et al., “Explainable Machine Learning Prediction of Edema Adverse Events in Patients Treated With Tepotinib,” Clinical and Translational Science 17, no. 9 (2024): e70010, 10.1111/cts.70010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Lundberg S. M. and Lee S.‐I., “A Unified Approach to Interpreting Model Predictions,” Proceedings of the 31st International Conference on Neural Information Processing Systems. 4768–4777 (2017).
- 38. Fukae M., Rogers J., Garcia R., Tachibana M., and Shimizu T., “Bayesian Sparse Regression for Exposure‐Response Analyses of Efficacy and Safety Endpoints to Justify the Clinical Dose of Valemetostat for Adult T‐Cell Leukemia/Lymphoma,” CPT: Pharmacometrics & Systems Pharmacology 13 (2024): 1655–1669.
- 39. Chen T. and Guestrin C., “XGBoost: A Scalable Tree Boosting System,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016).
- 40. Bender A. and Cortés‐Ciriano I., “Artificial Intelligence in Drug Discovery: What Is Realistic, What Are Illusions? Part 1: Ways to Make an Impact, and Why We Are Not There Yet,” Drug Discovery Today 26 (2021): 511–524.
- 41. Badkul A., Xie L., Zhang S., and Xie L., “TrustAffinity: Accurate, Reliable and Scalable Out‐of‐Distribution Protein‐Ligand Binding Affinity Prediction Using Trustworthy Deep Learning,” bioRxiv (2024): 2024.01.05.574359.
- 42. Wu Y., Xie L., and Liu Y., “Semi‐Supervised Meta‐Learning Elucidates Understudied Molecular Interactions,” Communications Biology 7 (2024): 1104.
- 43. Cai T., Xie L., Zhang S., et al., “End‐to‐End Sequence‐Structure‐Function Meta‐Learning Predicts Genome‐Wide Chemical‐Protein Interactions for Dark Proteins,” PLoS Computational Biology 19, no. 1 (2023): e1010851, 10.1371/journal.pcbi.1010851.
- 44. Chiu S. H. and Xie L., “Toward High‐Throughput Predictive Modeling of Protein Binding/Unbinding Kinetics,” Journal of Chemical Information and Modeling 56 (2016): 1164–1174.
- 45. Wu Y. and Xie L., “AI‐Driven Multi‐Omics Integration for Multi‐Scale Predictive Modeling of Causal Genotype‐Environment‐Phenotype Relationships,” arXiv preprint arXiv:2407.06405 (2024).
- 46. Wu Y., Liu Q., and Xie L., “Hierarchical Multi‐Omics Data Integration and Modeling Predict Cell‐Specific Chemical Proteomics and Drug Responses,” Cell Reports Methods 3 (2023): 100452.
- 47. Xavier J. B., Young V. B., Skufca J., et al., “The Cancer Microbiome: Distinguishing Direct and Indirect Effects Requires a Systemic View,” Trends in Cancer 6 (2020): 192–204.
- 48. He D., Liu Q., Wu Y., and Xie L., “A Context‐Aware Deconfounding Autoencoder for Robust Prediction of Personalized Clinical Drug Response From Cell‐Line Compound Screening,” Nature Machine Intelligence 4 (2022): 879–892.
- 49. Ahmad A. B., Bennett P. N., and Rowland M., “Models of Hepatic Drug Clearance: Discrimination Between the ‘Well Stirred’ and ‘Parallel‐Tube’ Models,” Journal of Pharmacy and Pharmacology 35 (1983): 219–224.
- 50. Miljković F., Martinsson A., Obrezanova O., et al., “Machine Learning Models for Human In Vivo Pharmacokinetic Parameters With In‐House Validation,” Molecular Pharmaceutics 18 (2021): 4520–4530.
- 51. Keefer C. E., Chang G., Di L., et al., “The Comparison of Machine Learning and Mechanistic In Vitro‐In Vivo Extrapolation Models for the Prediction of Human Intrinsic Clearance,” Molecular Pharmaceutics 20 (2023): 5616–5630.
- 52. Parrott N., Manevski N., and Olivares‐Morales A., “Can We Predict Clinical Pharmacokinetics of Highly Lipophilic Compounds by Integration of Machine Learning or In Vitro Data Into Physiologically Based Models? A Feasibility Study Based on 12 Development Compounds,” Molecular Pharmaceutics 19 (2022): 3858–3868.
- 53. “Strengthening Clinical Laboratories | CDC,” (2018), https://www.cdc.gov/csels/dls/strengthening‐clinical‐labs.html.
- 54. Blackwell A. D. and Garcia A. R., “Ecoimmunology in the Field: Measuring Multiple Dimensions of Immune Function With Minimally Invasive, Field‐Adapted Techniques,” American Journal of Human Biology 34 (2022): e23784.
- 55. Laubenbacher R., Adler F., An G., et al., “Toward Mechanistic Medical Digital Twins: Some Use Cases in Immunology,” Frontiers in Digital Health 6 (2024): 1349595.
- 56. Laubenbacher R., Niarakis A., Helikar T., et al., “Building Digital Twins of the Human Immune System: Toward a Roadmap,” NPJ Digital Medicine 5 (2022): 64.
- 57. Deng K., Schwendeman P. S., and Guan Y., “Predicting Single Neuron Responses of the Primary Visual Cortex With Deep Learning Model,” Advanced Science 11 (2024): e2305626.
- 58. Lobentanzer S., Aloy P., Baumbach J., et al., “Democratizing Knowledge Representation With BioCypher,” Nature Biotechnology 41 (2023): 1056–1059.
- 59. Lobentanzer S., Feng S., Bruderer N., et al., “A Platform for the Biomedical Application of Large Language Models,” Nature Biotechnology 43 (2025): 166–169.
- 60. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” (2023), https://www.whitehouse.gov/briefing‐room/presidential‐actions/2023/10/30/executive‐order‐on‐the‐safe‐secure‐and‐trustworthy‐development‐and‐use‐of‐artificial‐intelligence/.
- 61. “AI Act,” (2024), https://digital‐strategy.ec.europa.eu/en/policies/regulatory‐framework‐ai.
- 62. Bak M., Madai V. I., Fritzsche M. C., Mayrhofer M. T., and McLennan S., “You Can't Have AI Both Ways: Balancing Health Data Privacy and Access Fairly,” Frontiers in Genetics 13 (2022): 929453.
- 63. Khalid N., Qayyum A., Bilal M., Al‐Fuqaha A., and Qadir J., “Privacy‐Preserving Artificial Intelligence in Healthcare: Techniques and Applications,” Computers in Biology and Medicine 158 (2023): 106848.
- 64. Tsang F., Sequeira A., and Morales C., “Emerging Legal Terrain: IP Risks From AI's Role in Drug Discovery,” (2024).
- 65. Chinta S. V., Wang Z., Zhang X., et al., “AI‐Driven Healthcare: A Survey on Ensuring Fairness and Mitigating Bias,” arXiv preprint arXiv:2407.19655 (2024).
Supplementary Materials
Data S1
