JACC: Advances. 2025 Feb 8;4(3):101593. doi: 10.1016/j.jacadv.2025.101593

Embracing Generative Artificial Intelligence in Clinical Research and Beyond

Opportunities, Challenges, and Solutions

Henry P Foote a, Chuan Hong b,c, Mohd Anwar d, Maria Borentain e, Kevin Bugin f, Nancy Dreyer g, Josh Fessel h, Nitender Goyal i, Morgan Hanger j, Adrian F Hernandez c, Christoph P Hornik c, Jennifer G Jackman c, Alistair C Lindsay k, Michael E Matheny l, Kerem Ozer m, Jan Seidel n, Norman Stockbridge f, Peter J Embi l, Christopher J Lindsell c
PMCID: PMC11850149  PMID: 39923329

Abstract

To explore threats and opportunities and to chart a path for safely navigating the rapid changes that generative artificial intelligence (AI) will bring to clinical research, the Duke Clinical Research Institute convened a multidisciplinary think tank in January 2024. Leading experts from academia, industry, nonprofits, and government agencies highlighted the potential opportunities of generative AI in automation of documentation, strengthening of participant and community engagement, and improvement of trial accuracy and efficiency. Challenges include technical hurdles, ethical dilemmas, and regulatory uncertainties. Success is expected to require establishing rigorous data management and security protocols, fostering integrity and trust among stakeholders, and sharing information about the safety and effectiveness of AI applications. Meeting insights point towards a future where, through collaboration and transparency, generative AI will help to shorten the translational pipeline and increase the inclusivity and equitability of clinical research.

Key words: artificial intelligence, clinical research, generative AI, participant engagement, research ethics

Central Illustration


Highlights

  • Generative artificial intelligence will disrupt existing clinical research paradigms.

  • There are challenges to artificial intelligence's transformative potential; collaboration is crucial for future progress.

  • Regulatory guidelines should be established, together with continuous monitoring, evaluation, transparency, knowledge sharing, and inclusivity.


In the rapidly evolving field of artificial intelligence (AI), generative AI has emerged as a disruptor. Its unique capability to generate new data, from visual content to complex molecular structures, demonstrates a leap not only in the creative potential of AI but also in its capacity to address complex challenges previously believed to be beyond reach.1, 2, 3 Groundbreaking progress with AI has been made in critical areas such as drug discovery, patient diagnosis, and the customization of treatment strategies, paving the way for a new era of personalized medicine.4, 5, 6 The successful integration of generative AI into clinical research will mark a significant breakthrough, redefining traditional processes.

The current landscape for generative AI in clinical research is characterized by rapid expansion of efforts to eliminate administrative and process redundancies. Advances are increasingly acknowledged by agencies such as the Food and Drug Administration and the National Institutes of Health, which are crafting guidance on the use of generative AI for various clinical research practices.7,8 While innovation has the potential to notably reduce the overall time and cost of therapeutic development, advances do not come without risks, and progress must be accompanied by careful mitigation of potential individual and societal harms, including misleading outputs, bias propagation, and loss of privacy.9, 10, 11, 12 In this context, a Duke Clinical Research Institute Think Tank was convened to explore the extensive implications of generative AI within clinical research.13 Based on the resulting observations, key themes and actionable items for the adoption of generative AI in clinical research are outlined, with the aim of leveraging its benefits while mitigating the associated challenges.

Methods

The 2-day think tank workshop in January 2024 gathered experts from the academic, government, and industry sectors to identify critical gaps and strategize innovative solutions for augmenting clinical research with generative AI. Participants engaged in 8 hours of moderated conversation and 4 hours of informal discussions on topics including automating clinical research processes using AI tools, engaging participants and communities directly with AI, and leveraging AI for measurements and analysis (Figure 1). Each moderated session began with prepared remarks from 3 to 5 experts, followed by in-depth discussions for a comprehensive exchange of ideas and experiences. The workshop was recorded and summarized in a meeting brief.14 The present manuscript was conceptualized, drafted, and revised based on the insights gained in the workshop, with drafts circulated among attendees for ongoing discussion and synthesis.

Figure 1.

Summary of the Workshop Sessions and Topics

The sessions were structured around several discussion points: efficiency, engagement, novel endpoints, and overarching impacts. Within each session, we aimed to cultivate a consensus among the diverse participants on the scope of generative artificial intelligence, the practical and ethical concerns, the required validation, and the regulatory landscape. AI = artificial intelligence.

Results and discussion

The conversations highlighted the following: 1) the capabilities and potential impact of generative AI; 2) challenges and ethical considerations for generative AI adoption; 3) progress on guidelines and safeguards; 4) the need to ensure trust and transparency; and 5) actionable items for the adoption of generative AI in clinical research (Central Illustration).

Central Illustration.

Capacities, Impact, and Strategic Pathways of Generative Artificial Intelligence in Clinical Research

Generative artificial intelligence will facilitate both incremental and transformative changes. Collaboration and knowledge sharing will be key to navigating the challenges and ethical considerations inherent in its deployment. AI = artificial intelligence.

Capabilities and potential impact

Currently, clinical trials suffer from significant administrative burden, issues with participant recruitment and retention, burdensome data entry, and inadequate representation limiting trial generalizability, among other challenges.15, 16, 17 To address these challenges, numerous incremental process improvements, as well as more transformative changes to redefine the landscape of clinical research, were identified in the workshop (Table 1).

Table 1.

Potential Impacts of Generative AI on Clinical Research Processes

Incremental improvements
 • Document generation. Current approach: manual creation of research tools and documents. With generative AI: generation of drafts by AI, speeding up document creation and ensuring consistency.18
 • Participant engagement. Current approach: traditional methods for recruitment and dissemination of results. With generative AI: direct engagement through AI, from recruitment to results sharing, making the process more inclusive and efficient and improving retention.19,20

Potential transformative changes
 ★ Clinical trial design. Current approach: design based on historical data and standard practices. With generative AI: AI-simulated scenarios and optimized protocols, increasing the success probability of clinical trials.21, 22, 23
 ★ Informed consent process. Current approach: standardized documents and procedures, often not fully tailored to individual comprehension levels. With generative AI: facilitation of a more personalized, engaging consent process through advanced technologies such as chatbots or avatars.24,25
 ★ Data collection, analysis, and synthesis. Current approach: reliance on conventional data gathering and analysis techniques. With generative AI: new avenues for AI-driven data collection and synthesis, enhancing accuracy and depth of research findings.25, 26, 27
 ★ Ethical and bias mitigation. Current approach: policy and standard practice, with varying degrees of effectiveness. With generative AI: proactive efforts to identify and correct biases in data and research practices, even within AI tools themselves, reinforcing equity and ethical integrity.28
 ★ Data management and security. Current approach: management through established IT protocols and practices, which involves challenges in data discoverability and reusability. With generative AI: AI-enhanced data management, ensuring data quality, privacy, discoverability, and security, while upholding research integrity and reproducibility standards.29,30

Bullet points indicate more incremental improvements, and stars represent potential transformative changes.

AI = artificial intelligence.

While nongenerative AI can integrate complex inputs from clinical data to make predictions of patient outcomes or patients' potential to benefit from a specific therapy, generative AI extends this ability by using clinical data to create new content, including text and images.31 Generative AI can be harnessed to create synthetic data sets for simulations that further improve prediction performance.32 Generative AI can facilitate research by rapidly producing initial drafts of trial protocols and generating informed consent documents.18,24,33 These advancements constitute incremental improvements by creating administrative efficiencies, which will result in faster trial development, more streamlined operations, and reduced costs that together could substantially shorten the overall drug development timetable.9

Beyond such incremental improvements, generative AI provides an opportunity for transformative change. AI-driven technologies can be used to reimagine clinical trial design by simulating potential clinical trajectories and optimizing protocols.21, 22, 23 Generative AI can revolutionize the consent process and disrupt current methods for data collection and analysis by directly engaging and communicating with individuals and communities.34, 35, 36, 37 Frequent bidirectional engagement with research participants throughout the clinical trial process can also create opportunities to establish novel endpoints, including from data as wide-ranging as those from imaging, wearables, and participant interactions with an app.25,38,39 Such strategies could enable earlier identification of disease onset or progression, facilitating faster studies and therapeutic development. Furthermore, AI has the potential to provide new avenues for data collection and synthesis while also upholding research integrity and reproducibility standards by ensuring data quality, privacy, discoverability, and security.26,27,29,40 These efforts could pair with the ability of generative AI to identify and correct biases in data and research practices to enhance the accuracy and depth of research findings while also reinforcing equity and ethical integrity.28

Overall, generative AI holds the potential to propel clinical research into a new era of efficiency, precision, and discovery.

Challenges and ethical considerations

Generative AI is being rapidly adopted. The widely used ChatGPT had over 180 million active users at the time of the think tank, and a third of U.S. adults aged 18 to 64 years use generative AI every week.41 Generative AI is commonly being used for applications such as document generation. As these technologies mature, the use of generative AI is expected to increase significantly, becoming more prevalent in clinical settings and more familiar to research participants.22,42,43 The pace of adoption underscores the urgency for organized, informed, and strategic integration of AI-informed tools into clinical research.44 Before these technologies become fully embedded, the full spectrum of benefits and harms of generative AI within the clinical research domain should be systematically assessed. Without sharing experiences, both positive and negative, the field risks entrenching haphazard practice without truly improving it. The speed of implementation must be carefully balanced with the need for rigor, safety, and caution.

Achieving the full potential of generative AI via safe and effective adoption must account for the unique risks of generative AI: generation of misleading or erroneous output (hallucinations), inappropriate sharing of protected health information, coercive engagement, perpetuation of bias, limitations in interpretability, inaccurate broad generalization from limited training data, and human overreliance on models.45,46 Risks must be sufficiently mitigated so that the foundation for using generative AI in clinical research is trustworthy according to ethical, technical, and regulatory considerations.

The vast potential for generative AI to interact with research participants must be balanced with the imperative to preserve participant autonomy and maximize the generalizability of results. Chatbots that allow for personalized communication increase engagement, yet the risk is that communication might become coercive or manipulative.47 This could reduce the voluntariness of consent and introduce bias in participant responses.48 Generative AI's ability to recreate its training data, potentially revealing individual patient information, creates confidentiality risks.46 Additionally, acceptable boundaries regarding which data are usable must be determined. Generative AI can train on vast amounts of input, including the entire digital footprint of participants such as social media patterns, which could help predict mental health status or cardiovascular risk.49,50 However, the extent to which these data can be used for disease prediction or trial outcomes must be agreed upon across all stakeholders.

Beyond concerns related to the acceptable use of data, careful consideration of training data is critical to avoid perpetuation of bias. For example, the commonly used Generative Pretrained Transformer model propagates racial and gender stereotypes across clinical use cases, consequences of which have included overestimating the prevalence of HIV in Black patients and prioritizing a diagnosis of panic disorder for women, but not men, presenting with shortness of breath.51 Diagnostic accuracy is at risk, even when models reveal their biases.52 If uncorrected, such biases would have deleterious implications across the clinical research spectrum, including in how generative AI interacts with participants, how it is used to interrogate existing data, and how it is used to generate new data.53 Most critically, bias in the clinical research process would systematically bake inequities into health care for generations to come and should be avoided at all costs.

From a technical perspective, generative AI is expected to achieve acceptable accuracy and relevancy over time. However, it has limitations, such as producing incorrect or misleading outputs (hallucinations), and its performance may degrade, particularly if trained on biased or outdated data.54, 55, 56, 57, 58 The dynamic performance of generative AI will necessitate continuous surveillance throughout the use lifecycle. Robust monitoring will require human oversight and may demand the development of additional AI tools to help assess model performance.59 Further, new metrics for model evaluation and benchmarking are needed, especially given the nondeterministic nature of generative AI outputs.60 Transparency in model development and in the data used for training is crucial. Secure data sharing systems are essential to understand model limitations and to ensure that diverse patient data inform model development.61 Additionally, while synthetic data created by generative AI can help simulate expected changes in population data, they must be used with caution to prevent unintentional leakage of personal health information.30,62

Progress on guidelines and safeguards

Reflecting numerous concerns related to the use of AI in clinical research, regulatory agencies have initiated efforts to establish safeguards and guidelines. A 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence highlighted the essential role of oversight in the rapidly developing field of generative AI.63 A discussion paper from the Food and Drug Administration underscored the need for collaboration among stakeholders, encouraging a regulatory approach that supports innovation, development, and sharing of best practices and ongoing evaluation and monitoring of AI performance.64 In addition, the Center for Drug Evaluation and Research (CDER) recently established the CDER AI Council to provide coordinated and consistent initiatives for regulatory decision-making and to better support innovation and best practices across AI-enabled medical products.8,65 Regulation will be essential for harnessing the transformative potential of generative AI.66 Additionally, it is critical that regulatory guidelines are iterative, reflecting the rapid pace of evolving technology.

The scientific community has also worked to enhance the evaluation of AI in health care. International consensus-based guidelines have been developed for protocol development (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence) and publication of results (Consolidated Standards of Reporting Trials–Artificial Intelligence).67,68 These guidelines promote rigorous evaluation and transparency of methods and results, which has improved reporting practices in publications. Ongoing efforts from researchers and funders are needed to ensure continued progress beyond publications into broader clinical development outputs.69 Moreover, the scientific community needs to differentiate between use of AI as an intervention and use of AI as a tool in the research toolkit.

Most guidance on AI focuses on its role as an intervention rather than on its potential to innovate the trial process itself. Careful evaluation of the impact of generative AI on research processes is crucial because of its significant implications for knowledge generation and population health.70 The potential for generative AI to undermine trust in clinical research is high, demanding approaches that ensure equity and mitigate biases.71

Looking ahead, model assessment should consider both the risk associated with a model and its use case. Determining an appropriate oversight level will be crucial for regulatory bodies and the clinical research community. Lower-risk generative AI tools, such as those used in clinical trial support or document generation, likely warrant less oversight, while those in higher-risk use cases, such as data analysis or clinical trial endpoint adjudication, warrant much tighter scrutiny.

Fostering integrity through trust and transparency

Trust and trustworthiness are essential for integration and adoption of AI into clinical research. According to the National Institute of Standards and Technology, trustworthy AI must be valid and reliable; safe, secure, and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.72 Building trust requires open and ongoing dialogue about the capabilities and limitations of AI models, regular evaluations, and transparent feedback mechanisms to address changes in population or medical practices.73 Moreover, ongoing monitoring and governance of models are critical to maintain performance and ensure sustained trust from all stakeholders.74

One framework proposes a coordinated 3-layered approach that requires evaluating the development process (governance audits), model performance (model audits), and ongoing model output for each specific use case (application audits).75 Independent auditing, including postrelease and impact assessment, is crucial to ensure robust performance over time and across diverse subpopulations.66,76 Given the dynamic nature of clinical data and the evolving capabilities of AI, continuous monitoring, surveillance, and vigilance over AI systems—referred to as algorithmovigilance—will also be essential.77,78

Establishing robust guardrails and industry-wide standards is crucial for an ethical AI environment in clinical research. Effective data management practices will also be critical to ensure data availability, quality, privacy, and security, and to maintain the rigor and reusability of research data. Although transparency levels may vary during initial design phases to protect intellectual property, critical elements such as training data and model performance must be disclosed upon final deployment to ensure accountability and trust while still respecting intellectual property rights.

As guardrails are established, they must encompass ethical standards and integrate principles such as fairness, accountability, and inclusivity.5 Ethical frameworks for evaluating generative AI in clinical research are not yet widely available and should be developed and disseminated.79 Sharing knowledge and best practices about the ethical deployment of clinical research studies augmented with AI will be integral, requiring tools to ensure that AI products conform to stringent standards of integrity and ethical use.

Industry-wide ethical standards and robust guardrails would serve to protect human dignity, privacy, and rights. Adhering to these principles will ensure that generative AI reaches its full potential to revolutionize medical discovery while also safeguarding the ethical values fundamental to clinical research. Individual researchers and developers should educate themselves on AI ethics and stay updated with the latest standards. Regulatory bodies and industry groups must enforce compliance through regular audits and guideline updates. The broader community, including academia and clinical practices, should engage in forums and discussions to adapt and refine ethical standards, aligning them with technological and societal changes.

Actionable items

To optimize the integration of AI in clinical research, we suggest an intentional strategy that documents the successes and failures of AI at each step of the research process, combined with dissemination of findings and collaboration on improvements. We propose a thorough mapping of the clinical research process to illuminate existing bottlenecks and discern where AI could introduce both minor and major enhancements. A framework for achieving this step was provided by Dilts and Sandler, who meticulously documented the over 700 steps required to open a phase III oncology clinical trial.80 They identified a plethora of stages within clinical trials that are ripe for immediate AI application and enhanced efficiency, alongside those that present opportunities for more transformative, systemic changes. For example, the utility of AI in automating the translation of informed consent documents illustrates an area for swift, incremental improvement.24

Beyond identifying opportunities for immediate improvements, process mapping reveals areas where AI could fundamentally redefine existing methodologies—such as leveraging AI for dynamic participant engagement through chatbots or avatars. For example, the AllianceChicago study demonstrates that AI-driven chatbots can significantly improve patient engagement in underserved communities by producing personalized messages.19 These advances necessitate not only a reimagining of the role of technology but also robust interdisciplinary collaboration to navigate the accompanying ethical, legal, and social challenges. Identifying opportunities where generative AI could be actively deployed—such as in interactive informed consent—would allow for collaboration across stakeholders. In this context, generating and showcasing AI-based solutions in an open-access manner would speed the adoption of generative AI in clinical trials and promote the development of more transformative technologies. Importantly, early demonstration of the benefits and safety of this potential use case for generative AI would build trust among important stakeholders such as trial participants and institutional review boards and demonstrate to sponsors that generative AI can be successfully integrated into the clinical trial workflow.

Successful and rapid adoption of AI technologies in clinical research hinges on the formation of an aligned and collaborative clinical research community.81,82 Development should occur in tandem with regulatory guidance to ensure consistent evaluation standards and regulatory compliance.8 Implementation should mirror the ideals of learning research systems aiming for continuous learning and improvement.83 This inclusive approach also ensures that diverse perspectives are considered in the development and implementation of AI solutions and will facilitate successful clinical trial workflow integration. Such a collaborative environment is essential for maintaining consistency in technological advancements, rapidly progressing toward industry-wide standards, and ultimately creating a more harmonious and efficient clinical research landscape.

Leveraging change management strategies and lessons learned from other technological advances may also help expedite the transition from research to implementation.5 For example, drawing lessons from e-commerce, such as prioritizing intuitive user interfaces, and from games, such as ensuring an engaging and responsive experience, can further enhance the usability of AI tools. As e-commerce and gaming leverage rapid iteration based on user feedback and strong community engagement, open-access clinical trial-related AI projects might incorporate these strategies to improve and evolve continuously.

We posit that establishing open-access platforms will enhance the effective integration of AI in clinical research.84 These platforms are instrumental for sharing training data sets, AI algorithms, and models, thereby democratizing access to advanced tools and fostering a culture of transparency and collaboration across the field. This may include making technologies publicly available for anyone to use, modify, and distribute. Developers can adopt open-source licenses, use platforms like GitHub or Hugging Face for easy access, and provide robust documentation and support to guide users. Encouraging the exchange of knowledge and insights across disciplines is vital for spurring innovation within clinical research practices.85

This manuscript conveys key opportunities and barriers to the successful implementation of generative AI in clinical research as discussed by a multidisciplinary group of stakeholders. The proposed framework provides a means to facilitate technological advancements across the clinical research ecosystem. While the think tank attempted to capture the most crucial areas, community and patient involvement will be critical to successful generative AI implementation, and future work must integrate patient-centric viewpoints through collaboration with individuals and community advisory boards.86,87

Conclusions

Generative AI will transform clinical research. Before it becomes entrenched, it is critical to establish regulatory guidelines and guardrails and a culture of continuous monitoring, evaluation, transparency, knowledge sharing, and inclusivity. These efforts will be essential to ensure that generative AI furthers clinical research in a responsible and equitable manner.

Funding support and author disclosures

Funding support for the meeting from which this consensus document was generated was provided through registration fees from the following industry sponsors: Alnylam Pharmaceuticals, Inc (Cambridge, MA, USA); Amgen, Inc (Apex, NC, USA); AstraZeneca (Durham, NC, USA); Bayer Pharma AG (Berlin, Germany); Boehringer Ingelheim Pharma GmbH & Co KG (Ingelheim am Rhein, Germany); Bristol Myers Squibb Company (New York, NY, USA); Intellia Therapeutics, Inc (Cambridge, MA, USA); Janssen Pharmaceuticals (Beerse, Belgium); Companies of Johnson & Johnson (New Brunswick, NJ, USA); Novartis AG (Basel, Switzerland); and Novo Nordisk A/S (Bagsværd, Denmark). No government funds were used for this meeting. The think tank was funded by industry sponsors with representation at the meeting. Dr Foote is supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development under awards T32HD094671 and T32HD104576. Dr Borentain has stock or stock options with Bayer and is a Bayer employee. Dr Dreyer received travel support for attendance at a meeting by Picnic Health and has stock options from Picnic Health. Dr Fessel received payment for a lecture given at the University for Continuing Education Krems, Austria, and received travel support from Boehringer Ingelheim Pharma GmbH & Co KG, Department of Global Biostatistics & Data Sciences. Dr Goyal received travel support from Alnylam Pharmaceuticals for attendance at the think tank meeting and receives stock and stock options for Alnylam Pharmaceuticals as an employee. NG is also the co-founder of the patient recruitment platform iTrials. Hanger works for the Clinical Trials Transformation Initiative, which is supported primarily by an award from the Food and Drug Administration (FDA) of the U.S. 
Department of Health and Human Services (HHS), with an additional 15% from nongovernmental entities (member organizations); also received travel support from the National Academies of Sciences, Engineering, Medicine Forum on Drug Discovery, Development, and Translation. Dr Hernandez has received grants or contracts from Pfizer Inc and Merck and consulting fees from Merck. Dr Lindsay has stock or stock options with GSK and Johnson & Johnson. Dr Hornik has received grants from the National Institutes of Health (award numbers: 5R01-HD106588-02, 1RL1-HD107784-03, 5R33-HL147833-05, 5T32HD104576-03, OT2OD034481, and 1U01-MD018294-01) and Burroughs Wellcome (1020016) and has received consulting fees from Lightship Inc, Cytokinetics, and UCB Pharma; and also has participated on an advisory board for The Emmes Corporation, LLC. Dr Matheny has received consulting fees from the Patient-Centered Outcomes Research Institute (PCORI) trust fund. Dr Ozer has stock with Novo Nordisk and is a Novo Nordisk employee. Dr Lindsell has received grants or contracts from the following, with all payments made to the institution: the National Institutes of Health, the Department of Defense, the U.S. Centers for Disease Control and Prevention, Novartis, AstraZeneca, Cytokinetics, Biomeme, Entegrion Inc, Endpoint Health, and bioMerieux; In addition, patents for risk stratification in sepsis and septic shock were issued to Cincinnati Children's Hospital Medical Center; has also participated in data safety monitoring boards unrelated to the current work and has participated on Clinical and Translational Science Awards (CTSA) external advisory boards; is also an ex officio member of Association for Clinical and Translational Science (ACTS) board, Clinical Research Forum (CRF) board, ACTS Advisory Committee, and the Center for Clinical and Translational Science (CCTS) Executive Committee; and has stock options with Bioscape Digital. 
In addition, CJL is the Editor-in-Chief of the Journal of Clinical and Translational Science. CH, JJ, KB, MA, NS, and PJE report no conflicts of interest. The first draft of the manuscript was written independently of sponsor input. Industry representatives were able to provide comments on subsequent drafts of the manuscript. The ultimate decision to include these comments rested with the primary and senior authors.

Acknowledgments

The DCRI Think Tank team would like to acknowledge Jennifer Gloc for her project management support of the think tank meeting series and Diana Steele Jones for her support in editing and preparing the manuscript for submission.

Footnotes

The authors attest they are in compliance with human studies committees and animal welfare regulations of the authors’ institutions and Food and Drug Administration guidelines, including patient consent where appropriate. For more information, visit the Author Center.

References

  • 1. Pinaya W.H., Graham M.S., Kerfoot E., et al. Generative AI for medical imaging: extending the MONAI framework. arXiv. 2023. doi: 10.48550/arXiv.2307.15208.
  • 2. Dwivedi Y.K., Kshetri N., Hughes L., et al. Opinion paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manag. 2023;71.
  • 3. Zeng X., Wang F., Luo Y., et al. Deep generative molecular design reshapes drug discovery. Cell Rep Med. 2022;3(12). doi: 10.1016/j.xcrm.2022.100794.
  • 4. Vora L.K., Gholap A.D., Jetha K., Thakur R.R.S., Solanki H.K., Chavda V.P. Artificial intelligence in pharmaceutical technology and drug delivery design. Pharmaceutics. 2023;15(7):1916. doi: 10.3390/pharmaceutics15071916.
  • 5. Reddy S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implement Sci. 2024;19(1):27. doi: 10.1186/s13012-024-01357-9.
  • 6. Cho Y.S. From code to cure: unleashing the power of generative artificial intelligence in medicine. Int Neurourol J. 2023;27(4):225. doi: 10.5213/inj.2323edi06.
  • 7. National Institutes of Health Office of Science Policy. Artificial intelligence in research: policy considerations and guidance. https://osp.od.nih.gov/policies/artificial-intelligence/
  • 8. U.S. Food and Drug Administration. Artificial intelligence and medical products: how CBER, CDER, CDRH, and OCP are working together. https://www.fda.gov/media/177030/download?attachment
  • 9. Moore T.J., Zhang H., Anderson G., Alexander G.C. Estimated costs of pivotal trials for novel therapeutic agents approved by the US Food and Drug Administration, 2015-2016. JAMA Intern Med. 2018;178(11):1451–1457. doi: 10.1001/jamainternmed.2018.3931.
  • 10. Viswa C.A., Bleys J., Leydon E., Shah B., Zurkiya D. Generative AI in the pharmaceutical industry: moving from hype to reality. McKinsey & Company; 2024.
  • 11. Kwong J.C.C., Wang S.C.Y., Nickel G.C., Cacciamani G.E., Kvedar J.C. The long but necessary road to responsible use of large language models in healthcare research. NPJ Digit Med. 2024;7(1):177. doi: 10.1038/s41746-024-01180-y.
  • 12. Ong J.C.L., Chang S.Y., William W., et al. Ethical and regulatory challenges of large language models in medicine. Lancet Digit Health. 2024;6(6):e428–e432. doi: 10.1016/S2589-7500(24)00061-X.
  • 13. Duke Clinical Research Institute. Think tanks. https://dcri.org/insights-and-news/insights/dcri-think-tanks
  • 14. Duke Clinical Research Institute Think Tanks. Embracing generative AI in clinical research and beyond: opportunities, challenges and solutions. 2024. https://dcri.org/sites/default/files/2024-03/DCRI%20AI%20in%20Clinical%20Research%20Think%20Tank%20brief%5B21%5D.pdf
  • 15. Ebrahimi H., Megally S., Plotkin E., et al. Barriers to clinical trial implementation among community care centers. JAMA Netw Open. 2024;7(4). doi: 10.1001/jamanetworkopen.2024.8739.
  • 16. Geller S.E., Koch A.R., Roesch P., Filut A., Hallgren E., Carnes M. The more things change, the more they stay the same: a study to evaluate compliance with inclusion and assessment of women and minorities in randomized controlled trials. Acad Med. 2018;93(4):630–635. doi: 10.1097/ACM.0000000000002027.
  • 17. Office of the Assistant Secretary for Planning and Evaluation. Examination of clinical trial costs and barriers for drug development. https://aspe.hhs.gov/reports/examination-clinical-trial-costs-barriers-drug-development-0
  • 18. Hutson M. How AI is being used to accelerate clinical trials. Nature. 2024;627(8003):S2–S5. doi: 10.1038/d41586-024-00753-x.
  • 19. Mohanty N., Yang T.-Y., Morrison J., Hossain T., Wilson A., Ekong A. CHEC-UP: a digital intervention to reduce disparities in well-child and immunization completion in community health. Telehealth Med Today. 2022;7(5). doi: 10.30953/thmt.v7.375.
  • 20. Ismail A., Al-Zoubi T., El Naqa I., Saeed H. The role of artificial intelligence in hastening time to recruitment in clinical trials. BJR Open. 2023;5(1). doi: 10.1259/bjro.20220023.
  • 21. Katsoulakis E., Wang Q., Wu H., et al. Digital twins for health: a scoping review. NPJ Digit Med. 2024;7(1):77. doi: 10.1038/s41746-024-01073-0.
  • 22. Ghim J.L., Ahn S. Transforming clinical trials: the emerging roles of large language models. Transl Clin Pharmacol. 2023;31(3):131–138. doi: 10.12793/tcp.2023.31.e16.
  • 23. Liu R., Rizzo S., Whipple S., et al. Evaluating eligibility criteria of oncology trials using real-world data and AI. Nature. 2021;592(7855):629–633. doi: 10.1038/s41586-021-03430-5.
  • 24. Decker H., Trang K., Ramirez J., et al. Large language model-based chatbot vs surgeon-generated informed consent documentation for common procedures. JAMA Netw Open. 2023;6(10):e2336997. doi: 10.1001/jamanetworkopen.2023.36997.
  • 25. Perochon S., Di Martino J.M., Carpenter K.L.H., et al. Early detection of autism using digital behavioral phenotyping. Nat Med. 2023;29(10):2489–2497. doi: 10.1038/s41591-023-02574-3.
  • 26. Martin G.L., Jouganous J., Savidan R., et al. Validation of artificial intelligence to support the automatic coding of patient adverse drug reaction reports, using nationwide pharmacovigilance data. Drug Saf. 2022;45(5):535–548. doi: 10.1007/s40264-022-01153-8.
  • 27. Subbiah V. The next generation of evidence-based medicine. Nat Med. 2023;29(1):49–58. doi: 10.1038/s41591-022-02160-z.
  • 28. Sridharan K., Sivaramakrishnan G. Leveraging artificial intelligence to detect ethical concerns in medical research: a case study. J Med Ethics. 2024. doi: 10.1136/jme-2023-109767.
  • 29. Churova V., Vyskovsky R., Marsalova K., Kudlacek D., Schwarz D. Anomaly detection algorithm for real-world data and evidence in clinical research: implementation, evaluation, and validation study. JMIR Med Inform. 2021;9(5). doi: 10.2196/27172.
  • 30. Beaulieu-Jones B.K., Wu Z.S., Williams C., et al. Privacy-preserving generative deep neural networks support clinical data sharing. Circ Cardiovasc Qual Outcomes. 2019;12(7). doi: 10.1161/CIRCOUTCOMES.118.005122.
  • 31. Khera R., Oikonomou E.K., Nadkarni G.N., et al. Transforming cardiovascular care with artificial intelligence: from discovery to practice: JACC state-of-the-art review. J Am Coll Cardiol. 2024;84(1):97–114. doi: 10.1016/j.jacc.2024.05.003.
  • 32. Arora A., Arora A. Generative adversarial networks and synthetic patient data: current challenges and future perspectives. Future Healthc J. 2022;9(2):190–193. doi: 10.7861/fhj.2022-0013.
  • 33. Allen J.W., Earp B.D., Koplin J., Wilkinson D. Consent-GPT: is it ethical to delegate procedural consent to conversational AI? J Med Ethics. 2024;50(2):77–83. doi: 10.1136/jme-2023-109347.
  • 34. Datta S., Lee K., Paek H., et al. AutoCriteria: a generalizable clinical trial eligibility criteria extraction system powered by large language models. J Am Med Inf Assoc. 2024;31(2):375–385. doi: 10.1093/jamia/ocad218.
  • 35. Chow R., Midroni J., Kaur J., et al. Use of artificial intelligence for cancer clinical trial enrollment: a systematic review and meta-analysis. J Natl Cancer Inst. 2023;115(4):365–374. doi: 10.1093/jnci/djad013.
  • 36. Kaskovich S., Wyatt K.D., Oliwa T., et al. Automated matching of patients to clinical trials: a patient-centric natural language processing approach for pediatric leukemia. JCO Clin Cancer Inform. 2023;7. doi: 10.1200/CCI.23.00009.
  • 37. Haddad T., Helgeson J.M., Pomerleau K.E., et al. Accuracy of an artificial intelligence system for cancer clinical trial eligibility screening: retrospective pilot study. JMIR Med Inform. 2021;9(3). doi: 10.2196/27767.
  • 38. Inan O.T., Tenaerts P., Prindiville S.A., et al. Digitizing clinical trials. NPJ Digit Med. 2020;3:101. doi: 10.1038/s41746-020-0302-y.
  • 39. Naseri Jahfari A., Tax D., Reinders M., van der Bilt I. Machine learning for cardiovascular outcomes from wearable data: systematic review from a technology readiness level point of view. JMIR Med Inform. 2022;10(1). doi: 10.2196/29434.
  • 40. Qian Z., Callender T., Cebere B., Janes S.M., Navani N., van der Schaar M. Synthetic data for privacy-preserving clinical risk prediction. Sci Rep. 2024;14(1). doi: 10.1038/s41598-024-72894-y.
  • 41. Bick A., Blandin A., Deming D.J. The rapid adoption of generative AI. 2024. https://www.nber.org/system/files/working_papers/w32966/w32966.pdf
  • 42. Boonstra M.J., Weissenbacher D., Moore J.H., Gonzalez-Hernandez G., Asselbergs F.W. Artificial intelligence: revolutionizing cardiology with large language models. Eur Heart J. 2024;45(5):332–345. doi: 10.1093/eurheartj/ehad838.
  • 43. Raza M.M., Venkatesh K.P., Kvedar J.C. Generative AI and large language models in health care: pathways to implementation. NPJ Digit Med. 2024;7(1):62. doi: 10.1038/s41746-023-00988-4.
  • 44. Jindal J.A., Lungren M.P., Shah N.H. Ensuring useful adoption of generative artificial intelligence in healthcare. J Am Med Inf Assoc. 2024;31(6):1441–1444. doi: 10.1093/jamia/ocae043.
  • 45. Weidinger L., Uesato J., Rauh M., et al. Taxonomy of risks posed by language models. In: FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency; 2022:214–229.
  • 46. Manduchi L., Pandey K., Bamler R., et al. On the challenges and opportunities in generative AI. arXiv. 2024. doi: 10.48550/arXiv.2403.00025.
  • 47. Chew H.S.J. The use of artificial intelligence–based conversational agents (chatbots) for weight loss: scoping review and practical recommendations. JMIR Med Inform. 2022;10(4). doi: 10.2196/32578.
  • 48. Jakesch M., Bhat A., Buschek D., Zalmanson L., Naaman M. Co-writing with opinionated language models affects users' views. In: CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems; 2023.
  • 49. Andy A.U., Guntuku S.C., Adusumalli S., et al. Predicting cardiovascular risk using social media data: performance evaluation of machine-learning models. JMIR Cardio. 2021;5(1). doi: 10.2196/24473.
  • 50. Chancellor S., De Choudhury M. Methods in predictive techniques for mental health status on social media: a critical review. NPJ Digit Med. 2020;3(1):43. doi: 10.1038/s41746-020-0233-7.
  • 51. Zack T., Lehman E., Suzgun M., et al. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study. Lancet Digit Health. 2024;6(1):e12–e22. doi: 10.1016/S2589-7500(23)00225-X.
  • 52. Jabbour S., Fouhey D., Shepard S., et al. Measuring the impact of AI in the diagnosis of hospitalized patients: a randomized clinical vignette survey study. JAMA. 2023;330(23):2275–2284. doi: 10.1001/jama.2023.22295.
  • 53. Kim J., Cai Z.R., Chen M.L., Simard J.F., Linos E. Assessing biases in medical decisions via clinician and AI chatbot responses to patient vignettes. JAMA Netw Open. 2023;6(10). doi: 10.1001/jamanetworkopen.2023.38050.
  • 54. Hatem R., Simmons B., Thornton J.E. A call to address AI "hallucinations" and how healthcare professionals can mitigate their risks. Cureus. 2023;15(9). doi: 10.7759/cureus.44720.
  • 55. Hua H.U., Kaakour A.H., Rachitskaya A., Srivastava S., Sharma S., Mammo D.A. Evaluation and comparison of ophthalmic scientific abstracts and references by current artificial intelligence chatbots. JAMA Ophthalmol. 2023;141(9):819–824. doi: 10.1001/jamaophthalmol.2023.3119.
  • 56. Gilbert S., Harvey H., Melvin T., Vollebregt E., Wicks P. Large language model AI chatbots require approval as medical devices. Nat Med. 2023;29(10):2396–2398. doi: 10.1038/s41591-023-02412-6.
  • 57. Chen L., Zaharia M., Zou J. How is ChatGPT's behavior changing over time? arXiv. 2023. doi: 10.48550/arXiv.2307.09009.
  • 58. Omiye J.A., Lester J.C., Spichak S., Rotemberg V., Daneshjou R. Large language models propagate race-based medicine. NPJ Digit Med. 2023;6(1):195. doi: 10.1038/s41746-023-00939-z.
  • 59. Meskó B., Topol E.J. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1):120. doi: 10.1038/s41746-023-00873-0.
  • 60. Lu Z., Peng Y., Cohen T., Ghassemi M., Weng C., Tian S. Large language models in biomedicine and health: current research landscape and future directions. J Am Med Inf Assoc. 2024;31(9):1801–1811. doi: 10.1093/jamia/ocae202.
  • 61. Weissler E.H., Naumann T., Andersson T., et al. The role of machine learning in clinical research: transforming the future of evidence generation. Trials. 2021;22(1):537. doi: 10.1186/s13063-021-05489-x.
  • 62. Chen Y., Esmaeilzadeh P. Generative AI in medical practice: in-depth exploration of privacy and security challenges. J Med Internet Res. 2024;26. doi: 10.2196/53008.
  • 63. The White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. 2023. WhiteHouse.gov
  • 64. World Health Organization. WHO releases AI ethics and governance guidance for large multi-modal models. https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models
  • 65. U.S. Food and Drug Administration. Using artificial intelligence and machine learning in the development of drug and biological products: discussion paper and request for feedback. https://www.fda.gov/media/167973/download
  • 66. U.S. Food and Drug Administration. Artificial intelligence for drug development. https://www.fda.gov/about-fda/center-drug-evaluation-and-research-cder/artificial-intelligence-drug-development
  • 67. Liu X., Cruz Rivera S., Moher D., et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26(9):1364–1374. doi: 10.1038/s41591-020-1034-x.
  • 68. Rivera S.C., Liu X., Chan A.-W., et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Lancet Digit Health. 2020;2(10):e549–e560. doi: 10.1016/S2589-7500(20)30219-3.
  • 69. Martindale A.P., Llewellyn C.D., De Visser R.O., et al. Concordance of randomised controlled trials for artificial intelligence interventions with the CONSORT-AI reporting guidelines. Nat Commun. 2024;15(1):1619. doi: 10.1038/s41467-024-45355-3.
  • 70. Alowais S.A., Alghamdi S.S., Alsuhebany N., et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23(1):689. doi: 10.1186/s12909-023-04698-z.
  • 71. Abràmoff M.D., Tarver M.E., Loyo-Berrios N., et al. Considerations for addressing bias in artificial intelligence for health equity. NPJ Digit Med. 2023;6(1):170. doi: 10.1038/s41746-023-00913-9.
  • 72. National Institute of Standards and Technology. AI risks and trustworthiness. https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/3-sec-characteristics
  • 73. Lee C.S., Lee A.Y. Clinical applications of continual learning machine learning. Lancet Digit Health. 2020;2(6):e279–e281. doi: 10.1016/S2589-7500(20)30102-3.
  • 74. Van de Sande D., Van Genderen M.E., Smit J.M., et al. Developing, implementing and governing artificial intelligence in medicine: a step-by-step approach to prevent an artificial intelligence winter. BMJ Health Care Inform. 2022;29(1):e100495. doi: 10.1136/bmjhci-2021-100495.
  • 75. Mökander J., Schuett J., Kirk H.R., Floridi L. Auditing large language models: a three-layered approach. AI and Ethics. 2023:1–31.
  • 76. Falco G., Shneiderman B., Badger J., et al. Governing AI safety through independent audits. Nat Mach Intell. 2021;3(7):566–571.
  • 77. Embi P.J. Algorithmovigilance: advancing methods to analyze and monitor artificial intelligence-driven health care for effectiveness and equity. JAMA Netw Open. 2021;4(4). doi: 10.1001/jamanetworkopen.2021.4622.
  • 78. Minkkinen M., Laine J., Mäntymäki M. Continuous auditing of artificial intelligence: a conceptualization and assessment of tools and frameworks. Dig Soc. 2022;1(3):21.
  • 79. Oniani D., Hilsman J., Peng Y., et al. Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. NPJ Digit Med. 2023;6(1):225. doi: 10.1038/s41746-023-00965-x.
  • 80. Dilts D.M., Cheng S.K., Crites J.S., Sandler A.B., Doroshow J.H. Phase III clinical trial development: a process of chutes and ladders. Clin Cancer Res. 2010;16(22):5381–5389. doi: 10.1158/1078-0432.CCR-10-1273.
  • 81. Hogg H.D.J., Al-Zubaidy M., Technology Enhanced Macular Services Study Reference Group, et al. Stakeholder perspectives of clinical artificial intelligence implementation: systematic review of qualitative evidence. J Med Internet Res. 2023;25. doi: 10.2196/39742.
  • 82. Vo V., Chen G., Aquino Y.S.J., Carter S.M., Do Q.N., Woode M.E. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: a systematic review and thematic analysis. Soc Sci Med. 2023;338. doi: 10.1016/j.socscimed.2023.116357.
  • 83. Stensland K.D., Richesson R.L., Vince R.A., Skolarus T.A., Sales A.E. Evolving a national clinical trials learning health system. Learn Health Syst. 2023;7(2). doi: 10.1002/lrh2.10327.
  • 84. Navar A.M., Pencina M.J., Rymer J.A., Louzao D.M., Peterson E.D. Use of open access platforms for clinical trial data. JAMA. 2016;315(12):1283–1284. doi: 10.1001/jama.2016.2374.
  • 85. Kusters R., Misevic D., Berry H., et al. Interdisciplinary research in artificial intelligence: challenges and opportunities. Front Big Data. 2020;3. doi: 10.3389/fdata.2020.577974.
  • 86. Adus S., Macklin J., Pinto A. Exploring patient perspectives on how they can and should be engaged in the development of artificial intelligence (AI) applications in health care. BMC Health Serv Res. 2023;23(1):1163. doi: 10.1186/s12913-023-10098-2.
  • 87. Farmer N., Osei Baah F., Williams F., et al. Use of a community advisory board to build equitable algorithms for participation in clinical trials: a protocol paper for hopenet. BMJ Health Care Inform. 2022;29(1). doi: 10.1136/bmjhci-2021-100453.
