Epilepsia. 2025 Mar 4;66(6):1838–1842. doi:10.1111/epi.18326

The future of EEG education in the era of artificial intelligence

John R McLaren 1, Doyle Yuan 2, Sándor Beniczky 3,4, M Brandon Westover 5, Fábio A Nascimento 6

Key points.

  • Artificial intelligence (AI) in electroencephalography (EEG) interpretation shows tremendous potential but raises critical questions that must be addressed prior to implementation.

  • Current EEG training is often insufficient, with gaps in exposure and quality, leaving many neurologists unprepared to read independently.

  • AI may reduce human error in EEG interpretation, improving efficiency, accuracy, and access, but human oversight will be essential.

  • Future EEG education should integrate AI training, focusing on its operation, limitations, and ethical considerations in clinical practice.

Artificial intelligence (AI), the technology that enables computers to simulate human problem‐solving capabilities, is rapidly evolving. In the field of epilepsy, AI's application has already demonstrated potential for improved quality, cost, and access to patient care. 1 Although these advancements are exciting, they also pose several questions for the epilepsy community that are imperative to address now, before the imminent implementation of AI in clinical practice. As academics and educators, one question we are often asked is whether, or to what degree, electroencephalography (EEG) interpretation should be taught to the next generations of (human) neurologists. To address this in a meaningful way, we first examine our current predicament and then anticipate the road ahead. In doing so, we argue for a system that carefully integrates AI‐based algorithms into the workflow of expert human EEG interpretation, which will ultimately necessitate a change in the way we educate our trainees and the next generation of electroencephalographers.

EEG is the tool most often used in the diagnostic evaluation of individuals with suspected epilepsy and is frequently employed during sleep studies, during surgeries, and in neurocritical care settings; therefore, accurate and reliable EEG interpretation is essential for the optimal care of a wide range of patients. In real-world practice, EEG misinterpretation does occur—either through over-calling normal EEG patterns as abnormal or under-calling abnormal findings as benign, both of which can negatively impact patient care and outcomes. EEG “over-calling” (i.e., false-positive errors) can lead to epilepsy misdiagnosis, with resultant unnecessary driving restrictions, employment difficulties, overprescription of anti-seizure medications, unwarranted surgery, inappropriate prognostication, other forms of stigma and social marginalization, and notable negative effects on health care systems. Meanwhile, “under-calling” (i.e., false-negative errors) can result in missed opportunities to prevent seizure-related injury, negative cognitive sequelae, and, in some circumstances, death. 2, 3, 4, 5, 6, 7, 8 Additional implications are well documented in patients without epilepsy in whom EEG is still used. 9

Unfortunately, there are significant gaps in the current landscape of EEG education. In both adult and pediatric neurology training programs, there is limited exposure to EEG, subpar teaching quality, a paucity of objective competencies, and significant inter-program variability. 10, 11 As such, many neurology graduates leave training without feeling confident in their EEG-reading capabilities. 12, 13

This presents a substantial problem in clinical practice, as a large portion of EEG studies across the world—including in the United States and many European countries—are interpreted by general neurologists without additional EEG/epilepsy training. 14, 15

With so much room for improvement, it is easy to see why AI-centered EEG interpretation holds promise. Several automated AI algorithms have shown high accuracy in identifying EEG patterns, including interictal epileptiform discharges (IEDs), interictal slowing, seizures, and findings on the ictal-interictal continuum. Some of these algorithms have been validated in routine and critical care EEGs, achieving human-expert-level performance with improved efficiency and consistency. 16, 17, 18, 19, 20, 21, 22, 23, 24 As such, it is more than plausible that the future will involve appreciable adoption of these technologies. However, we need to consider the larger implications of implementation and have the foresight to address them in a meaningful way through proper education.

First, one must recognize that the high accuracy and efficiency of a tool do not always translate into equivalent credibility. Human (or, in this case, patient) perception is an important variable that changes an otherwise straightforward equation. The credibility of AI systems, regardless of their accuracy, remains generally quite low: in large global surveys on perceptions of this new technology, most people are wary of trusting AI and report only low or moderate acceptance of it. 25 Because of the inherent legal liabilities and ethical considerations 26 involved in misdiagnosis and mismanagement, which have not yet been fully elucidated, it is implausible that fully autonomous AI will be applied to the diagnostic or therapeutic process until the community at large takes time to thoughtfully address these issues (e.g., through the establishment of minimum standards).

Furthermore, the many limitations of AI must be acknowledged. Even the highest-quality AI models are primarily data-driven algorithms that are trained and validated on finite, historical datasets with expert input. These datasets, if not properly designed, can perpetuate discriminatory practices, give rise to “hallucinations” (false predictions), 27 and create suboptimal behavioral change, all of which could impact patient care. As there is not yet a complete, automated EEG interpretation system available, we must carefully consider the datasets on which these AI algorithms were trained and validated to ensure that they are applied in the appropriate context. For example, SPaRCNet 16 was developed to read critical care EEG, whereas SCORE-AI 18, 19 was developed to read outpatient/routine EEG; hence, a one-size-fits-all approach to interpretation would be imprudent without more rigorous investigation. We must also acknowledge that many seizure types have markedly different electrographic signals (e.g., neonatal seizures, epileptic spasms, atonic seizures, and myoclonic seizures) for which AI algorithms have not yet been developed; recognizing these patterns, along with corresponding video of the events when available, will require trained human readers, at least until more sophisticated models are developed. As such, the most plausible scenario, and the gold standard we should aim for, is a complementary “hybrid” model 20—one in which augmentation with AI allows experts to make higher-quality decisions more efficiently than those not using the technology, while maintaining the oversight and human credibility that will best serve our patients, particularly in cases where the algorithms fall short.
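
To make the point about context concrete, the following is a minimal, hypothetical sketch of routing a recording to a model validated for its acquisition setting and deferring to human review when no validated model exists. The model names, settings, and registry are illustrative assumptions for this commentary, not actual products or APIs.

```python
from dataclasses import dataclass

@dataclass
class EEGRecording:
    patient_id: str
    setting: str            # e.g., "icu", "routine_outpatient", "neonatal"
    sampling_rate_hz: int

# Hypothetical registry: each entry would wrap a model validated for that context
# (e.g., a critical care classifier vs. a routine/outpatient classifier).
MODEL_REGISTRY = {
    "icu": "critical_care_classifier_v1",
    "routine_outpatient": "routine_eeg_classifier_v2",
}

def select_interpreter(recording: EEGRecording) -> str:
    """Return a model validated for this context, or defer to a human expert."""
    model = MODEL_REGISTRY.get(recording.setting)
    if model is None:
        # No validated model for this context (e.g., neonatal seizures):
        # defer to expert human review rather than extrapolating.
        return "human_expert_review"
    return model

print(select_interpreter(EEGRecording("pt-001", "icu", 256)))       # critical_care_classifier_v1
print(select_interpreter(EEGRecording("pt-002", "neonatal", 256)))  # human_expert_review
```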

Over the past few years, several hybrid approaches have been demonstrated that have the potential to improve interpretation and promote better, more efficient reading skills. In a comparison between three fully automated AI algorithms and a human-supervised AI model applying an operational definition to detect IEDs, the specificity of the fully automated approaches was too low for clinical implementation, despite their high sensitivity. 20 Meanwhile, the human-supervised approach significantly increased specificity, maintained good sensitivity and accuracy, and decreased the time burden of review compared with conventional visual analysis. In another study, a more “interpretable” deep learning model that accurately classifies six patterns of potentially harmful EEG activity was applied to EEG recordings in the intensive care unit (ICU) setting and led to a significant improvement in pattern classification accuracy among human readers using this assistive technology. 28 Notably, this performance was also significantly better than that achieved with a corresponding uninterpretable “black-box” model, demonstrating further promise for AI–human collaboration but also highlighting the importance of design and context for the end user.
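
The mechanism behind this hybrid gain can be illustrated with toy numbers: a human reader who reviews only the AI-flagged candidates can reject false alarms but cannot recover events the detector missed, so specificity improves while sensitivity remains bounded by the detector's. The sketch below uses entirely made-up labels and flags, not data from the cited studies.

```python
# Toy sketch of the effect of human-supervised (hybrid) review on specificity.
# All numbers are illustrative assumptions, not results from the cited studies.

def confusion_counts(labels, predictions):
    """Return (tp, fp, tn, fn) for binary labels and predictions."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    return tp, fp, tn, fn

def sensitivity_specificity(labels, predictions):
    tp, fp, tn, fn = confusion_counts(labels, predictions)
    return tp / (tp + fn), tn / (tn + fp)

# Ground truth for 12 candidate waveforms (1 = true IED, 0 = benign variant).
truth    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
# A highly sensitive but unspecific detector flags almost everything.
ai_flags = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
# A human reader reviews only the AI-flagged candidates and rejects most false alarms.
reviewed = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]

print(sensitivity_specificity(truth, ai_flags))  # (1.0, 0.5)   -> sensitive, not specific
print(sensitivity_specificity(truth, reviewed))  # (1.0, 0.875) -> specificity improves
```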

Additional applications that have not yet been described or validated could also make interpretation more efficient and education more focused. For example, applying appropriate AI algorithms with high sensitivity would allow trainees to devote less effort to the more time-consuming aspects of interpretation, such as scrolling through long, multiday continuous EEG files, and instead focus on interpreting snippets flagged as “concerning” by AI. Similarly, rapid-response EEG systems with high negative predictive value could be used to triage which EEG recordings require more detailed review overnight and which could wait until morning, reducing the clinical burden on trainees. Better automated detection algorithms could be applied in ICU settings to reduce “alarm fatigue,” a longstanding issue that is well documented to affect clinician concentration and diagnostic accuracy, leading to burnout and reduced quality of life. 29, 30, 31, 32 In resource-limited settings, where there are no epileptologists, or where general neurologists lack proper exposure to EEG reading during training, AI implementation could help mitigate these deficits. Although these algorithms can run locally on a tablet without an internet connection, we also acknowledge that many low- and middle-income areas lack the high-tech infrastructure needed for easy adoption, 33 such as the resources for machinery, personnel, quality control, and cloud storage, all of which can contribute to an increased carbon footprint and may limit feasibility of use. 26 Finally, in challenging or equivocal EEG cases, AI interpretation could offer a “second opinion” for readers, with certain models validated to perform closer to the consensus of a group of experts than the average individual expert does. 18
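
A snippet-flagging triage of the kind described above could, in principle, be as simple as thresholding per-epoch model probabilities so that the reader is shown short “concerning” windows rather than the full recording. The epoch length, threshold, and probabilities in the sketch below are illustrative assumptions only.

```python
# Hypothetical triage sketch: flag only epochs whose model probability of a
# concerning pattern exceeds a threshold chosen for high sensitivity, so the
# reader reviews flagged snippets instead of the entire multiday recording.

from typing import List, Tuple

def flag_epochs(epoch_probs: List[float],
                epoch_len_s: int = 10,
                threshold: float = 0.2) -> List[Tuple[int, int]]:
    """Return (start_s, end_s) windows whose probability exceeds the threshold.

    A low threshold trades specificity for sensitivity, which suits triage:
    a missed event is costlier than an extra snippet for the reader to check.
    """
    flagged = []
    for i, p in enumerate(epoch_probs):
        if p >= threshold:
            flagged.append((i * epoch_len_s, (i + 1) * epoch_len_s))
    return flagged

# Illustrative probabilities for a 2-minute stretch of recording (12 epochs).
probs = [0.01, 0.02, 0.05, 0.31, 0.64, 0.12, 0.03, 0.02, 0.27, 0.04, 0.02, 0.01]
for start, end in flag_epochs(probs):
    print(f"Review snippet {start}-{end} s")
# -> snippets at 30-40 s, 40-50 s, and 80-90 s
```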

In an ideal world, we will not just be teaching trainees how to integrate AI-based EEG interpretation into practice; we will also be using AI to help us learn. In a study in which a machine learning model was applied to EEG recordings of critically ill patients with various etiologies, the convolutional neural network identified novel EEG signatures that predicted future clinical outcomes. 34 Visual analysis showed that the model learned EEG patterns typically recognized by human experts but also suggested new criteria that could serve as important electrographic biomarkers to improve care, demonstrating further potential applications for AI–human collaboration.

This is all to say that, although there are still important challenges to address before a larger rollout, we advocate for academic programs to implement a curriculum that includes the operation, implementation, and oversight of AI in clinical practice. To create a roadmap for EEG and epilepsy education in the era of AI, we must acknowledge both that our current educational system would benefit from higher-quality teaching, more consistent exposure to EEG during training, and standardized competencies that prepare our neurologists to perform well, 10, 11, 35 and that better-trained readers will help ensure the highest-quality care in a future “hybrid” model of AI augmentation. We know from our colleagues in cardiology that inexperienced readers may not have the confidence to override the automated output of ECG studies 36; therefore, to reduce false-positive and false-negative errors in AI-interpreted EEG results, we will need to continue to produce well-trained, confident human readers. In the same vein, to ensure optimal AI output, we must acknowledge that the data fed into these algorithms will be drawn from human interpretation; therefore, a byproduct of high-quality EEG education should be improved AI.

We propose that—once these AI-based algorithms are commercially available for EEG interpretation—all trainees should have meaningful exposure to this emerging technology. There should be clear parameters for where and when this technology can be used, including discussions of the ethical, practical, and equity issues surrounding implementation and the establishment of minimum acceptable standards. We may not need to teach our trainees or patients how a given algorithm works from a technical perspective, but all clinicians using this technology should be able to explain the clinical implications of these results to the patient; therefore, any educational curriculum must include modules on both proper interpretation and communication. For oversight, particularly as we continue to refine these technologies and train newer models, we will need dedicated teaching on the limitations of the technology and on how humans best operate in this environment. Finally, continuing education on both the capabilities and limitations of AI will be essential for all neurologists not in active training programs to effectively incorporate these tools into their practice, for we believe AI should not be a tool available just for the technologically savvy, but for everyone in the community willing to learn how to safely use it. If we can implement these changes, rather than resist the sea change ahead, the future of epilepsy care will be brighter than ever.

CONFLICT OF INTEREST STATEMENT

Dr. McLaren, Dr. Yuan, Dr. Beniczky, and Dr. Nascimento have no conflicts of interest. Dr. Westover has received grant support from the National Institutes of Health (NIH; RF1AG064312, RF1NS120947, R01AG073410, R01HL161253, R01NS126282, R01AG073598, R01NS131347, and R01NS130119) and the National Science Foundation (NSF; 2014431). Dr. Westover is a co-founder of, scientific advisor to, and consultant for Beacon Biosignals, and he has a personal equity interest in the company. He also receives royalties for authoring “Pocket Neurology” from Wolters Kluwer and “Atlas of Intensive Care Quantitative EEG” from Demos Medical.

ETHICS STATEMENT

We confirm that we have read the Journal's position on issues involved in ethical publication and affirm that this report is consistent with those guidelines.

ACKNOWLEDGMENTS

None.

McLaren JR, Yuan D, Beniczky S, Westover MB, Nascimento FA. The future of EEG education in the era of artificial intelligence. Epilepsia. 2025;66:1838–1842. doi:10.1111/epi.18326

DATA AVAILABILITY STATEMENT

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

REFERENCES

1. Lucas A, Revell A, Davis KA. Artificial intelligence in epilepsy – applications and pathways to the clinic. Nat Rev Neurol. 2024;20(6):319–336. doi:10.1038/s41582-024-00965-9
2. Benbadis SR, Tatum WO. Overintepretation of EEGs and misdiagnosis of epilepsy. J Clin Neurophysiol. 2003;20(1):42–44. doi:10.1097/00004691-200302000-00005
3. Benbadis SR. Errors in EEGs and the misdiagnosis of epilepsy: importance, causes, consequences, and proposed remedies. Epilepsy Behav. 2007;11(3):257–262. doi:10.1016/j.yebeh.2007.05.013
4. Benbadis SR, Lin K. Errors in EEG interpretation and misdiagnosis of epilepsy. Which EEG patterns are overread? Eur Neurol. 2008;59(5):267–271. doi:10.1159/000115641
5. Kang JY, Krauss GL. Normal variants are commonly overread as interictal epileptiform abnormalities. J Clin Neurophysiol. 2019;36(4):257–263. doi:10.1097/WNP.0000000000000613
6. Oto MM. The misdiagnosis of epilepsy: appraising risks and managing uncertainty. Seizure. 2017;44:143–146. doi:10.1016/j.seizure.2016.11.029
7. Amin U, Benbadis SR. The role of EEG in the erroneous diagnosis of epilepsy. J Clin Neurophysiol. 2019;36(4):294–297. doi:10.1097/WNP.0000000000000572
8. Smith D, Defalla BA, Chadwick DW. The misdiagnosis of epilepsy and the management of refractory epilepsy in a specialist clinic. QJM. 1999;92(1):15–23. doi:10.1093/qjmed/92.1.15
9. Friedman D, Claassen J, Hirsch LJ. Continuous electroencephalogram monitoring in the intensive care unit. Anesth Analg. 2009;109(2):506–523. doi:10.1213/ane.0b013e3181a9d8b5
10. Nascimento FA, Gavvala JR. Education research: neurology resident EEG education: a survey of US neurology residency program directors. Neurology. 2021;96(17):821–824. doi:10.1212/WNL.0000000000011354
11. Katyal R. EEG education in child neurology and neurodevelopmental disabilities residencies: a survey of US and Canadian program directors. Neurol Educ. 2024;3(1):e200112.
12. Nascimento FA, Maheshwari A, Chu J, Gavvala JR. EEG education in neurology residency: background knowledge and focal challenges. Epileptic Disord. 2020;22(6):769–774. doi:10.1684/epd.2020.1231
13. Mahajan A, Cahill C, Scharf E, Gupta S, Ahrens S, Joe E, et al. Neurology residency training in 2017: a survey of preparation, perspectives, and plans. Neurology. 2019;92(2):76–83. doi:10.1212/WNL.0000000000006739
14. Nascimento FA, Gavvala JR, Tankisi H, Beniczky S. Neurology resident EEG training in Europe. Clin Neurophysiol Pract. 2022;7:252–259. doi:10.1016/j.cnp.2022.08.001
15. Adornato BT, Drogan O, Thoresen P, Coleman M, Henderson VW, Henry KA, et al. The practice of neurology, 2000–2010: report of the AAN member research subcommittee. Neurology. 2011;77(21):1921–1928. doi:10.1212/WNL.0b013e318238ee13
16. Jing J, Ge W, Hong S, Fernandes MB, Lin Z, Yang C, et al. Development of expert-level classification of seizures and rhythmic and periodic patterns during EEG interpretation. Neurology. 2023;100(17):e1750–e1762. doi:10.1212/WNL.0000000000207127
17. Jing J, Sun H, Kim JA, Herlopian A, Karakis I, Ng M, et al. Development of expert-level automated detection of epileptiform discharges during electroencephalogram interpretation. JAMA Neurol. 2020;77(1):103–108. doi:10.1001/jamaneurol.2019.3485
18. Tveit J, Aurlien H, Plis S, Calhoun VD, Tatum WO, Schomer DL, et al. Automated interpretation of clinical electroencephalograms using artificial intelligence. JAMA Neurol. 2023;80(8):805–812. doi:10.1001/jamaneurol.2023.1645
19. Mansilla D, Tveit J, Aurlien H, Avigdor T, Ros-Castello V, Ho A, et al. Generalizability of electroencephalographic interpretation using artificial intelligence: an external validation study. Epilepsia. 2024;65(10):3028–3037. doi:10.1111/epi.18082
20. Kural MA, Jing J, Fürbass F, Perko H, Qerama E, Johnsen B, et al. Accurate identification of EEG recordings with interictal epileptiform discharges using a hybrid approach: artificial intelligence supervised by human experts. Epilepsia. 2022;63(5):1064–1073. doi:10.1111/epi.17206
21. Fürbass F, Kural MA, Gritsch G, Hartmann M, Kluge T, Beniczky S. An artificial intelligence-based EEG algorithm for detection of epileptiform EEG discharges: validation against the diagnostic gold standard. Clin Neurophysiol. 2020;131(6):1174–1179. doi:10.1016/j.clinph.2020.02.032
22. Beniczky S, Aurlien H, Brøgger JC, Fuglsang-Frederiksen A, Martins-da-Silva A, Trinka E, et al. Standardized computer-based organized reporting of EEG: SCORE. Epilepsia. 2013;54(6):1112–1124. doi:10.1111/epi.12135
23. Beniczky S, Aurlien H, Brøgger JC, Hirsch LJ, Schomer DL, Trinka E, et al. Standardized computer-based organized reporting of EEG: SCORE – second version. Clin Neurophysiol. 2017;128(11):2334–2346. doi:10.1016/j.clinph.2017.07.418
24. Thomas J, Thangavel P, Peh WY, Jing J, Yuvaraj R, Cash SS, et al. Automated adult epilepsy diagnostic tool based on interictal scalp electroencephalogram characteristics: a six-center study. Int J Neural Syst. 2021;31(5):2050074. doi:10.1142/S0129065720500744
25. Gillespie N, Curtis C, Lockey S, Pool J, Akbari A. Trust in artificial intelligence: a global study. Brisbane, Australia: The University of Queensland and KPMG Australia; 2023.
26. Chiang S, Picard RW, Chiong W, Moss R, Worrell GA, Rao VR, et al. Guidelines for conducting ethical artificial intelligence research in neurology: a systematic approach for clinicians and researchers. Neurology. 2021;97(13):632–640. doi:10.1212/WNL.0000000000012570
27. MIT Sloan Teaching & Learning Technologies. When AI gets it wrong: addressing AI hallucinations and bias [accessed 2024 Sep 15]. Available from: https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/
28. Barnett AJ, Guo Z, Jing J, Ge W, Kaplan PW, Kong WY, et al. Improving clinician performance in classifying EEG patterns on the ictal-interictal injury continuum using interpretable machine learning. NEJM AI. 2024;1(6). doi:10.1056/aioa2300331
29. Zeng-Treitler Q, Nelson SJ. Will artificial intelligence translate big data into improved medical care or be a source of confusing intrusion? A discussion between a (cautious) physician informatician and an (optimistic) medical informatics researcher. JMIR. 2019;21(11):e16272. doi:10.2196/16272
30. Dehghan M, Mokhtarabadi S, Rashidi E, Rahiminejad E, Asadi N. Correlation between professional quality of life and alarm fatigue symptoms among intensive care unit nurses. Health Sci Rep. 2023;6(10):e1583. doi:10.1002/hsr2.1583
31. Co Z, Holmgren AJ, Classen DC, Newmark L, Seger DL, Danforth M, et al. The tradeoffs between safety and alert fatigue: data from a national evaluation of hospital medication-related clinical decision support. J Am Med Inform Assoc. 2020;27(8):1252–1258. doi:10.1093/jamia/ocaa098
32. Jankovic I, Chen JH. Clinical decision support and implications for the clinician burnout crisis. Yearb Med Inform. 2020;29(1):145–154. doi:10.1055/s-0040-1701986
33. Meyer AC, Dua T, Ma J, Saxena S, Birbeck G. Global disparities in the epilepsy treatment gap: a systematic review. Bull World Health Organ. 2010;88(4):260–266. doi:10.2471/BLT.09.064147
34. Jonas S, Müller M, Rossetti AO, Rüegg S, Alvarez V, Schindler K, et al. Diagnostic and prognostic EEG analysis of critically ill patients: a deep learning study. NeuroImage Clin. 2022;36:103167. doi:10.1016/j.nicl.2022.103167
35. Nascimento FA, Jing J, Strowd R, Sheikh IS, Weber D, Gavvala JR, et al. Competency-based EEG education: a list of “must-know” EEG findings for adult and child neurology residents. Epileptic Disord. 2022;24(5):979–982. doi:10.1684/epd.2022.1476
36. Schläpfer J, Wellens HJ. Computer-interpreted electrocardiograms: benefits and limitations. J Am Coll Cardiol. 2017;70(9):1183–1192. doi:10.1016/j.jacc.2017.07.723
