Advances in Medical Education and Practice
2025 Jun 14;16:1039–1046. doi: 10.2147/AMEP.S523255

Artificial Intelligence in Medical Education: Promise, Pitfalls, and Practical Pathways

Sarup Saroha
PMCID: PMC12176979  PMID: 40539080

Abstract

Artificial intelligence (AI) is transforming healthcare, yet its integration into medical education remains limited. As AI-powered tools increasingly assist with diagnostics, administrative tasks, and clinical decision-making, future doctors must have the knowledge and skills to use them effectively. This article explores the role of AI in medical education, highlighting its potential to enhance efficiency, improve patient care, and foster innovation while addressing ethical and safety concerns. The widespread adoption of AI presents both opportunities and challenges. While AI-driven transcription tools reduce administrative burdens and machine learning algorithms enhance diagnostic accuracy, the risks of over-reliance, algorithmic bias, and patient data security remain critical concerns. To navigate these complexities, medical schools must incorporate AI-focused training into their curricula, ensuring graduates can critically assess and safely apply AI technologies in clinical practice. However, AI should not be seen as the only solution; non-technological improvements to clinical workflows must also be considered in parallel. This article proposes practical solutions, including optional AI modules, hands-on training with AI-powered diagnostic tools, and interdisciplinary collaboration through innovation laboratories. By embedding AI education into medical training, institutions can prepare students for a rapidly evolving healthcare landscape, ensuring AI is a tool for improved patient outcomes, not a source of unintended harm. As AI reshapes medicine, equipping future doctors with the skills to use it responsibly is essential for fostering a healthcare system that is efficient, ethical, and patient-centred.

Keywords: medical education, healthcare innovation, clinical training, medical technology, diagnostic AI, medical decision support

Introduction

On placement, it is common to hear our junior doctor colleagues express frustration at their substantial documentation obligations. Clinicians remain overburdened by laborious record-keeping practices, with tick-box proformas and defensive notetaking consuming nearly one-third of their working day and contributing to burnout.1,2 Combined with extensive waiting lists and stripped-back services,3 this heavy administrative burden raises concerns for budding clinicians about whether their future careers align with their younger selves’ vision of patient-centred care. Clearly, there is a need for solutions that alleviate this burden, one emerging solution being artificial intelligence (AI).

AI refers to computer systems that mimic specific aspects of human intelligence, such as learning, reasoning, and problem-solving, using a range of computational models and algorithms.4 AI is broadly classified into weak AI, which performs specific tasks without true understanding (eg, Siri, Alexa, self-driving cars),5,6 and strong AI, which would match or surpass human intelligence but currently remains theoretical.7,8

AI-powered scribes are beginning to automatically transcribe and summarise clinician–patient conversations in real-time, helping to reduce administrative burden by streamlining medical documentation.9,10 This frees up time for doctors to connect better with patients even in short GP appointments.11 These tools also capture critical information from consultations that might be overlooked, ensuring a more comprehensive record.

Integrating new technologies into medical practice has a long and continuous history. In the mid-20th century, clinicians began using Dictaphones to record notes, which were later transcribed by typing pools, an early form of documentation support that is now obsolete. Over time, healthcare settings adopted pagers, electronic prescribing systems, and electronic health records, followed by handheld diagnostic tools and mobile apps, each reflecting the field’s responsiveness to technological advances. These innovations have consistently aimed to improve efficiency, communication, and safety, but have also introduced new challenges, such as increased screen time, reduced face-to-face interaction, and administrative burden.12–14 In this context, AI “scribes” represent not a radical departure but rather the latest evolution of established practices, offering a modern, scalable, and flexible approach to clinical documentation support.11 However, as with previous technological shifts, the integration of AI must be thoughtful and evidence-based, balancing the benefits of automation and insight generation with concerns around data privacy, over-reliance, and patient engagement to ensure responsible adoption in clinical education and practice.

The General Medical Council (GMC) states that doctors “are responsible for the decisions they make when using new technologies like AI, and should work only within their competence.”15 This coincides with the World Medical Association calling for reviewing medical curricula and education for all healthcare stakeholders to improve understanding of the risks and benefits of AI in healthcare.16 It follows then that in fostering good medical practice, medical schools must prepare students for the clinical environment that awaits them through building competence and familiarity in this evolving domain.

With two in three physicians using AI in their clinical practice, a 78% increase from 2023,17 enthusiasm for the technology is rapidly growing. Yet, despite this uptake, a 2024 international survey of over 4500 students across 192 medical, dental, and veterinary faculties found that over 75% reported no formal AI education in their curriculum, highlighting a critical gap between technological advancement and medical training.18 This discrepancy underscores the urgency for medical schools to proactively incorporate AI teaching to ensure graduates are ready for the realities of modern clinical practice.

This article explores the need for AI education in medical schools, highlighting its potential to enhance efficiency, improve patient care, and foster innovation while addressing ethical and safety concerns.

Why AI Education is Essential

While AI may seem most relevant to data-driven fields like radiology, its use now spans many specialities, from GP notetaking to triage in emergency medicine.11,19,20 However, engagement with AI will vary. For example, clinicians in hands-on or communication-focused roles may need only a foundational understanding, while others will work closely with AI systems. This variability raises the question of whether AI education should be mandatory. A core foundation in digital literacy and responsible use would likely benefit all future doctors; indeed, higher digital literacy has been linked to better academic results and reduced procrastination.21,22

Beyond its role as a technical tool, AI carries broader implications for the doctor–patient relationship. While AI may enhance efficiency and support clinical decision-making, over-reliance on automated outputs could depersonalise care or erode the human connection central to medicine. Trust, empathy, and individualised judgement are core components of patient-centred care that cannot be replicated by algorithms. Excessive use of algorithmic outputs may discourage clinical reasoning, while opaque “black-box” systems, AI tools whose internal decision-making processes are not easily interpretable, risk undermining transparency and trust.23 Medical education must emphasise the importance of using AI to complement, rather than replace, human interaction. Clinicians should be free to use or reject AI tools without penalty, prioritising patient interests and retaining the right to disagree with AI outputs.

Several publications and national medical bodies, including the NHS, AMA, and Royal College of Physicians and Surgeons of Canada, have called for the integration of AI education at all levels of medical training.24–32 However, there remains a gap in both understanding and availability of structured AI curriculum frameworks tailored to medical education, which are essential for guiding effective teaching and learning.7,33

Mastery of key principles is essential for students to use AI tools effectively. Large Language Models (LLMs), such as ChatGPT,34 are advanced AI systems trained on extremely large corpora of text data to learn statistical patterns and relationships between words, phrases, and structures in natural language;35 they generate human-like text by predicting likely continuations based on their training data. Although they do not “think” like humans, they can simulate logical reasoning through in-context learning. Machine learning (ML) is a subset of AI that enables systems to learn from data and improve their performance on specific tasks without being explicitly programmed.36 ML has demonstrated utility in diagnosis and outcome prediction, with recent examples surpassing radiologists in detecting breast cancer37 and outperforming dermatologists in diagnosing malignant melanoma,38 with AI now acting as a “second pair of eyes”.
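The core idea of ML, a decision rule inferred from labelled examples rather than explicitly programmed, can be illustrated with a deliberately simplified sketch. The feature values, labels, and nearest-neighbour classifier below are entirely hypothetical and for illustration only; real diagnostic models are vastly more complex:

```python
import math

# Toy, entirely hypothetical feature vectors: (asymmetry score, border
# irregularity) for skin lesions labelled benign (0) or malignant (1).
# The classification rule is derived from these examples, not hand-coded.
training_data = [
    ((0.1, 0.2), 0),
    ((0.2, 0.1), 0),
    ((0.8, 0.9), 1),
    ((0.9, 0.7), 1),
]

def classify(features):
    """1-nearest-neighbour: predict the label of the closest training example."""
    nearest = min(training_data, key=lambda ex: math.dist(features, ex[0]))
    return nearest[1]

print(classify((0.15, 0.15)))  # near the benign examples -> 0
print(classify((0.85, 0.80)))  # near the malignant examples -> 1
```

Adding or changing training examples changes the rule the system applies, with no change to the code itself; this dependence on data is also why data quality and representativeness matter so much in clinical AI.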

Education can help students identify and manage the inherent pitfalls of using AI applications, such as becoming over-reliant on them or breaching data privacy. Contemporary studies have shown that medical students feel inadequately equipped to address patients’ concerns about the use of AI in their care.39–41 Without formal training in proper oversight, the unique flaws of AI may go unnoticed, such as “hallucination” errors, in which LLMs generate plausible but incorrect clinical details that could compromise safety.42

Furthermore, as with discharge summaries once typed by hospital transcriptionists, AI-generated notes require careful proofreading and verification. Whether this responsibility lies with the clinician who was recorded or with a trained third party, clearly defined processes for reviewing AI outputs are essential to ensure accuracy and maintain patient safety. These concerns are compounded by questions of confidentiality and consent, particularly in cases where transcripts contain sensitive information. It is not always clear whether third-party reviewers should have access to such content or whether patients have explicitly consented to this use of their data.

As with traditional medical records, questions remain regarding the ownership of AI-generated transcripts, whether they belong to the clinician, the hospital, or the cloud-based AI provider, and how such data is stored, accessed, and governed. Although existing regulations like GDPR offer some protection, the rapid deployment of cloud-based AI tools highlights the urgent need for clear institutional policies on data ownership and governance. As such, educating students about clinical accountability, data governance, and ethical oversight is crucial for the safe integration of AI into healthcare settings. Rather than dehumanising medicine, AI should be a tool that complements healthcare professionals’ expertise and maintains trust from the outset. As stewards of sensitive patient information, graduates must also remain vigilant about the data privacy implications of AI use in order to preserve patient confidentiality.43

While global AI adoption holds promise for reducing disparities in resource-limited settings by optimising care delivery,44 it also carries the risk of exacerbating existing inequities. Unequal access to digital infrastructure, variation in institutional capacity, and the underrepresentation of certain populations, such as ethnic groups or rare disease cohorts, in AI training datasets may widen gaps in care. Algorithmic bias, systematic error arising from imbalanced data or flawed algorithm design,45,46 can perpetuate or even amplify healthcare inequalities, potentially resulting in poorer outcomes for already marginalised groups. Moreover, AI-integrated medical curricula may disproportionately benefit well-resourced institutions in high-income settings, leaving underfunded schools struggling to keep pace. Therefore, it is essential that medical education not only equips students with the technical skills to engage with AI but also fosters critical awareness of its potential to reinforce or mitigate global health disparities.
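How imbalanced training data produces algorithmic bias can be demonstrated with a deliberately artificial sketch. The groups, outcomes, and the naive “model” below are synthetic and purely illustrative, a caricature chosen to make the mechanism visible:

```python
from collections import Counter

# Entirely synthetic toy dataset: each record is (group, outcome).
# Group "B" is underrepresented, and its typical outcome differs from group "A".
records = [("A", 0)] * 90 + [("B", 1)] * 10

# A naive "model" that learns only the single most common outcome in the
# pooled data, ignoring group membership entirely.
majority_outcome = Counter(outcome for _, outcome in records).most_common(1)[0][0]

def predict(group):
    return majority_outcome

def group_accuracy(group):
    """Accuracy of the naive model within one group."""
    outcomes = [outcome for g, outcome in records if g == group]
    correct = sum(predict(group) == outcome for outcome in outcomes)
    return correct / len(outcomes)

print(group_accuracy("A"))  # 1.0: perfect for the majority group
print(group_accuracy("B"))  # 0.0: systematically wrong for the minority group
```

The overall accuracy here is 90%, which looks acceptable in aggregate while the underrepresented group receives uniformly wrong predictions, which is precisely why performance must be audited per subgroup rather than in aggregate.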

When AI is used in healthcare, vast amounts of sensitive patient data are exposed to the technology and placed at risk.47,48 Therefore, it is essential to ensure that data collection, storage, and usage are conducted responsibly, with explicit patient consent obtained at each stage.

Positioned between academic and clinical environments, medical students are uniquely placed to identify inefficiencies and propose creative solutions. Their dual exposure, combined with fewer entrenched habits and greater openness to emerging technologies like AI, encourages fresh thinking. While all healthcare professionals contribute to innovation, students often bring a distinct perspective, reflected in the growing trend of the “doctorpreneur.”49 Through thorough education in AI applications, students can drive systemic improvements across longstanding challenges.

AI tools that reduce administrative burdens, such as AI-powered scribes or documentation support, represent one of several technological responses to clinician overload. Current non-AI solutions in this space to be considered in parallel include structured electronic health records (EHRs), voice recognition software, and template-based note systems,19,50–52 each offering varying degrees of efficiency but often at the expense of flexibility or clinician satisfaction. AI-enhanced tools differ in their ability to adapt to natural speech, personalise content, and learn from user feedback. However, they are complementary rather than definitive solutions. Like all technologies, their integration requires thoughtful evaluation of cost, accuracy, and impact on clinical workflow. Medical education must hence position AI not as a universal fix but as one tool among many, requiring a critical understanding of when, how, and whether to use it in specific settings.

Proposed Solutions

Medical schools could begin by developing AI-focused modules, initially optional and targeted at interested students. These modules would cover foundational topics such as the principles of ML, ethical considerations like algorithmic bias and data privacy, and practical clinical applications (Figure 1). Notably, several medical schools, including Dartmouth, Harvard, and institutions in Germany, have already begun integrating formal AI training into their curricula,53–56 demonstrating the growing global momentum for AI education in medicine. Offering these modules on an elective basis would allow institutions to pilot content, gather feedback, and iteratively refine delivery methods. Following successful implementation and evaluation, these modules could be embedded into the core curriculum to ensure that all graduates attain essential competencies in AI. Incorporating hands-on training sessions using AI tools in diagnostics, scribing, and decision support would allow students to build confidence in their implementation. Platforms like PassMedicine already simulate history-taking scenarios with virtual AI patients.57 Medical schools could develop this further by creating tailored cases where symptoms adapt to students’ decisions in real-time, helping refine clinical reasoning without subjecting patients to potential harm.

Figure 1. An AI Education Framework for Medical Schools. A conceptual pyramid illustrates the progression of AI education from foundational technical literacy through ethical awareness, applied clinical tools, and innovation. Enabling conditions, including scalable delivery, low-cost resources, and faculty support, are essential for implementation across varied institutional settings, with continuous evaluation and feedback required throughout.

Optional advanced tracks could allow interested students to explore deeper integrations relevant to their intended specialities or leadership ambitions. Medical schools could establish AI innovation laboratories to foster collaboration between medical students, data scientists, and AI developers. These labs could host events such as hackathons (intensive, time-limited gatherings focused on rapidly developing and prototyping novel solutions)58 and design sprints (more structured, five-phase processes aimed at solving specific problems through ideation, prototyping, and user testing).59 While hackathons are well-suited to encouraging creativity and producing proof-of-concept tools quickly, design sprints are more appropriate for refining targeted challenges, such as improving clinical communication or streamlining documentation workflows, based on iterative feedback. Both formats offer valuable opportunities for students to apply their clinical knowledge to real-world innovation, fostering interdisciplinary thinking and experiential learning.

Medical schools should employ clear evaluation metrics to assess the effectiveness of AI integration into the curriculum. These could include pre- and post-module assessments to measure gains in conceptual understanding, simulation-based tasks to evaluate practical application, and reflective portfolios to gauge students’ ethical reasoning around AI use. Engagement metrics such as attendance, completion rates of optional modules, and participation in innovation activities like hackathons can provide insight into interest and accessibility. Longer-term outcomes, such as students’ confidence in using AI tools during clinical placements or performance in AI-assisted diagnostic tasks, could be tracked to inform curricular refinement. These metrics should not only capture technical competence but also students’ critical thinking, digital literacy, and readiness to engage with evolving technologies in real-world settings.

Notably, integrating AI into the medical curriculum presents logistical and structural challenges. Medical programmes are already dense, and adding new content requires either streamlining existing material or extending instructional time. Faculty readiness is another barrier, as many educators may lack AI expertise and require upskilling or support from interdisciplinary collaborators. Additionally, access to up-to-date software tools, real-world case datasets, and computational resources may be limited in underfunded institutions.

Medical schools may need to develop targeted funding strategies, such as digital education grants, partnerships with technology providers, or government-backed innovation funds, to support infrastructure, training, and curriculum development. To reduce financial and logistical barriers, particularly for under-resourced institutions, medical schools could leverage widely available, low-cost, or free AI education resources as foundational material for optional modules. Courses such as “AI for Everyone” (Coursera),60 “Elements of AI” (University of Helsinki),61 and Google’s “Machine Learning Crash Course”62 offer accessible introductions to AI concepts tailored for non-technical audiences, serving as effective primers for more advanced or contextualised training. Healthcare-specific offerings like NHS AI Lab webinars and Stanford University’s AI in Medicine and Imaging lecture series63,64 can expose students to real-world clinical applications without added costs. Scalability will depend on adaptable content delivery and resource-efficient models: modular curricula enable phased implementation based on institutional capacity, while collaboration with tech firms or national digital health bodies can provide shared infrastructure and expertise. Crucially, core content should remain flexible, updatable, and not overly dependent on specialist faculty to support sustainable, system-wide integration.

Furthermore, standardising AI education across diverse medical schools could prove difficult given regional variability in infrastructure, priorities, and regulatory guidance. Establishing minimum competency frameworks, similar to digital literacy or evidence-based medicine, may help ensure consistency while allowing flexible content delivery. Close collaboration with regulatory bodies, such as the GMC, and investment in educator training will be essential to overcoming these barriers and ensuring sustainable implementation.

It is important to recognise that this space remains highly dynamic. Numerous AI tools are emerging, yet many may not undergo rigorous evaluation through randomised controlled studies before entering the clinical setting. In such a volatile landscape, established providers of EHR systems may increasingly seek to acquire and integrate these technologies, reflecting broader consolidation trends and rapid evolution within the healthcare technology sector. Given the rapid pace of AI development, ongoing curriculum review and adaptation will be essential to ensure relevant and effective training.

Conclusions

By preparing students to engage thoughtfully and collaboratively with AI, medical schools have an unparalleled opportunity to shape the future of healthcare, one that is smarter, fairer, and more effective for both patients and practitioners. This includes equipping graduates with the skills to navigate ethical and clinical complexities and encouraging awareness of broader issues such as algorithmic bias and global disparities in access to AI tools.

With thoughtful integration into medical curricula, AI can enhance care, reduce inefficiencies, and serve as a force for innovation without compromising the human connection at the heart of medicine. By embedding thoughtful AI education today, medical schools can ensure that the doctors of tomorrow lead a healthcare system that is both technologically advanced and deeply human.

Disclosure

The author reports no conflicts of interest in this work.

References

