Epilepsy Currents. 2022 Jan 7;22(1):46-47. doi: 10.1177/15357597211062385

Choose Your Own Adventure: Expert Advice at Your Fingertips

Samuel W Terman
PMCID: PMC8832356  PMID: 35233199

A Web-Based Algorithm to Rapidly Classify Seizures for the Purpose of Drug Selection

Beniczky S, Asadi-Pooya AA, Perucca E, et al. Epilepsia. 2021;62:2474-2484.

Objective: To develop and validate a pragmatic algorithm that classifies seizure types, to facilitate therapeutic decision-making.

Methods: Using a modified Delphi method, 5 experts developed a pragmatic classification of 9 types of epileptic seizures or combinations of seizures that influence choice of medication, and constructed a simple algorithm, freely available on the internet. The algorithm consists of 7 questions applicable to patients with seizure onset at the age of 10 years or older. Questions to screen for nonepileptic attacks were added. Junior physicians, nurses, and physician assistants applied the algorithm to consecutive patients in a multicenter prospective validation study (ClinicalTrials.gov identifier: NCT03796520). The reference standard was the seizure classification by expert epileptologists, based on all available data, including electroencephalogram (EEG), video-EEG monitoring, and neuroimaging. In addition, physicians working in underserved areas assessed the feasibility of using the web-based algorithm in their clinical setting.

Results: A total of 262 patients were assessed, of whom 157 had focal, 51 had generalized, and 10 had unknown onset epileptic seizures, and 44 had nonepileptic paroxysmal events. Agreement between the algorithm and the expert classification was 83.2% (95% confidence interval = 78.6%-87.8%), with an agreement coefficient (AC1) of .82 (95% confidence interval = .77-.87), indicating almost perfect agreement. Thirty-two health care professionals from 14 countries evaluated the feasibility of the web-based algorithm in their clinical setting, and found it applicable and useful for their practice (median = 6.5 on 7-point Likert scale).

Significance: The web-based algorithm provides an accurate classification of seizure types, which can be used for selecting antiseizure medications in adolescents and adults.

Commentary

What if expert care were available at the click of a button? Unfortunately, there are not enough experts to go around. Some countries have as many as 4 million people per neurologist,1 to say nothing of people per epileptologist. Potential consequences include misdiagnosis and/or incorrect treatment.2,3

Beniczky and colleagues have tackled this problem with a rapid algorithmic web tool (EpiPick.org) intended to provide clinicians, especially those in underserved regions, with instant expert advice. Using iterative Delphi methods, they pursued 2 goals: 1) to develop a simple algorithm classifying seizure types,4 and 2) to determine optimal antiseizure medication (ASM) choices for each seizure type.5,6

For the first goal (classifying seizure types), 5 expert epileptologists met to agree upon the minimum number of distinct seizure types that would influence treatment decisions. They picked 9: focal, 7 combinations of generalized seizure types (absences, myoclonic, and/or generalized tonic-clonic), and unknown, plus a 10th nonepileptic category. Then, they agreed upon 7 distinguishing questions (eg, gray matter lesion, lip smacking, staring <20 seconds, bilateral tonic-clonic) and 6 'red flag' questions raising suspicion for epilepsy mimics (eg, triggered by urination or posture change, longer than 10 minutes with eyes closed). The algorithm's classification matched the epileptologist 'gold standard' determination (which included EEG results when available, unknown to the algorithm) in 218/262 (83%) of test patients, corresponding to 82% agreement beyond chance (AC1 = .82; 72% after excluding the MRI question). Of course, the deck was stacked in favor of the algorithm's accuracy – presumably the expert raters were applying the same rules they had programmed into the algorithm. Nonetheless, they deemed that the algorithm committed only 22/262 (8%) treatment-relevant errors. Pretty good. Most notably, the algorithm falsely classified 16/44 (36%) of those with nonepileptic 'gold standard' epileptologist diagnoses as having epilepsy (specificity 64%), though admittedly the number of nonepileptic cases was small, and ruling epilepsy in or out was not the study's main stated purpose (which was more about classifying seizure types among epilepsy cases). In contrast, the positive predictive value for epilepsy was 93%, and negative predictive value and sensitivity were both 100%. As other examples, the algorithm classified only 2/157 'gold standard' focal epilepsy cases as absences and 4/51 generalized cases as focal. They finished their study by recruiting 32 health care professionals, who felt the algorithm was clear, feasible, and useful.
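For readers who want to trace the arithmetic, the epilepsy-versus-nonepileptic figures above can be double-checked from the reported counts. A minimal back-of-the-envelope sketch in Python (the 2 × 2 table is inferred from the numbers quoted in this commentary, not taken from the authors' data or code):

```python
# Back-of-the-envelope check of the epilepsy-vs-nonepileptic screening figures.
# Counts inferred from the commentary: 218 gold-standard epilepsy cases
# (157 focal + 51 generalized + 10 unknown) and 44 nonepileptic cases.
tp = 218        # epilepsy cases flagged as epilepsy (sensitivity was 100%)
fn = 0
fp = 16         # nonepileptic cases the algorithm called epilepsy
tn = 44 - fp    # 28 nonepileptic cases correctly flagged

sensitivity = tp / (tp + fn)   # 218/218 = 1.00
specificity = tn / (tn + fp)   # 28/44  ~ 0.64
ppv = tp / (tp + fp)           # 218/234 ~ 0.93
npv = tn / (tn + fn)           # 28/28  = 1.00

# Note: the 83% agreement figure refers to the full seizure-TYPE
# classification, not this binary epilepsy/nonepileptic split.
print(f"sens={sensitivity:.0%} spec={specificity:.0%} "
      f"ppv={ppv:.0%} npv={npv:.0%}")
```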

For the second goal (choosing the optimal ASM), the 5 epileptologists developed a short list of ASM suggestions for each seizure type in the context of existing guidelines.5 Then, 24 experts each reviewed 25 cases and independently chose their first-choice ASM. They found 38% agreement beyond chance among the 24 experts, and 48% agreement beyond chance between the experts' and the algorithm's selections. That said, participants felt that none of the algorithm's initial selections were necessarily incorrect or harmful – just not identical to their own top choice. The point of the tool, though, was not to specify the single 'correct' ASM choice, but rather to rank tiers of options. A sample adult focal case with no particular contraindications leads to 6 first-line options (eg, lamotrigine, levetiracetam), 6 second-line options (eg, topiramate, phenytoin), and 5 third-line options (eg, gabapentin, clobazam).
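To make the tiered-ranking idea concrete, here is a minimal sketch of how such output might be represented. The data structure and function are hypothetical illustrations, not EpiPick's actual internals; the example drugs are those mentioned for the sample focal case above:

```python
# Hypothetical tiered ASM suggestion for the uncomplicated adult focal case
# described above (illustrative only; not EpiPick's actual data model).
# Lists are abbreviated: the tool offers 6, 6, and 5 options per tier.
FOCAL_UNCOMPLICATED_ADULT = {
    "first_line":  ["lamotrigine", "levetiracetam"],
    "second_line": ["topiramate", "phenytoin"],
    "third_line":  ["gabapentin", "clobazam"],
}

def ranked_suggestions(tiers):
    """Flatten the tiers in priority order for display to the user."""
    order = ("first_line", "second_line", "third_line")
    return [drug for tier in order for drug in tiers[tier]]

print(ranked_suggestions(FOCAL_UNCOMPLICATED_ADULT))
```

The design point is that the output is an ordered menu of reasonable choices, not a single verdict, which matches how the experts themselves disagreed on first picks without calling each other wrong.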

As for the finished product: the user checks boxes for the 7 classification questions and the red flag questions, plus additional clinical variables (eg, age, depression, contraception, hepatic dysfunction, renal stones, obesity, migraine), and voilà. The website displays a suggested seizure type and ASM choice(s), in addition to informational handouts about each ASM.
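As a rough mental model of that checkbox-to-suggestion flow, consider the toy sketch below. All field names and the stub logic are invented for illustration; see EpiPick.org for the real form and rules:

```python
# Hypothetical sketch of the intake-and-suggestion flow described above.
# Field names and logic are invented for illustration only.
patient = {
    "age": 34,
    "classification": {          # the 7 classification questions
        "gray_matter_lesion": False,
        "lip_smacking": True,
        "staring_under_20s": False,
        "bilateral_tonic_clonic": True,
        # ...remaining questions
    },
    "red_flags": {               # screening questions for epilepsy mimics
        "triggered_by_urination_or_posture": False,
        "over_10_min_with_eyes_closed": False,
        # ...remaining questions
    },
    "modifiers": {"depression": True, "contraception": False},
}

def toy_flow(p):
    """Toy stand-in for the website's logic, for intuition only."""
    if any(p["red_flags"].values()):
        return "possible nonepileptic event: expert review advised"
    if p["classification"]["lip_smacking"]:
        return "suggested type: focal; show tiered ASM options"
    return "unknown onset: refer for further evaluation"

print(toy_flow(patient))
```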

The work (somewhat refreshingly) takes a step back from the otherwise advancing tide of machine learning. Oftentimes, classification work is produced by software automatically testing combinations of predictors to minimize residual errors (eg, decision trees and random forests). Here, the decision tree was designed purposefully by humans, rather than by an automatic approach prone to overfitting in-sample statistical noise. The downside is that even experts are not perfect; undiscovered patterns in the data or biology may exist that might have been uncovered by more typical machine learning approaches, and the 'optimal' ASM, even in terms of broad tiered rankings, is not so cut and dried. Discovering new predictors or detailed comparative effectiveness or cost-effectiveness was not the point, though – the point was to make expert consensus more available to the non-expert.
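The overfitting worry is easy to make concrete: an unconstrained, automatically grown decision tree will 'learn' pure noise perfectly in-sample. A minimal demonstration using scikit-learn on random data (purely illustrative; the dimensions loosely echo the study's 262 patients and 7 questions):

```python
# Demonstrating in-sample overfitting of an automatically grown decision
# tree on pure noise -- the failure mode a small, hand-designed,
# expert-constrained tree largely avoids.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(262, 7))      # 262 "patients", 7 "questions"
y = rng.integers(0, 2, size=262)   # labels are pure noise

tree = DecisionTreeClassifier()    # no depth limit: free to memorize
tree.fit(X, y)
print(tree.score(X, y))            # ~1.0 in-sample despite zero real signal
```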

It is also interesting that the algorithm classified patients fairly well even in the absence of EEG results, and not much worse when throwing out MRI results. While this does not negate the usefulness of such tests, it does underscore that the history remains paramount to correct diagnosis and classification. Even so, the tool falls apart if fed inaccurate information. The question 'staring less than 20 seconds' was intended to flag absence seizures, but patients may not know their own staring duration.

Have the experts done all of the thinking for us, so we only need to point and click? There inevitably will be cases where patient scenarios do not fit neatly into these predefined decision trees. Red flags all have shades of gray and deal with likelihood ratios rather than certainties. The algorithm intentionally ignores any EEG results at hand when classifying seizures or diagnosing epilepsy, because resource-poor settings may lack EEG capability and EEG is fraught with possible misinterpretation. The algorithm does not inform WHETHER to treat (let alone dosing or duration), and deals only with initial monotherapy in patients at least 10 years old, rather than polytherapy, substitution, refractory epilepsy, or younger pediatric scenarios. Furthermore, the clinical variables are a good start but may not be a catch-all; for example, 'contraceptive medication' and 'takes any other medication' are broad questions that may miss the nuances of specific drug-drug interactions.

The authors have gone a long way toward packaging expert advice into a neatly structured, highly accessible format applicable to many routine situations. This is a nice illustrative example of progress toward solving the 'shortage' problem, given current gaps in the availability of specialty care across the world. Still, inevitably, clinical judgment remains important. Moreover, the elephant in the room: if a setting lacks epilepsy expertise to the point that providers must rely on algorithmic results for initial treatment decisions, EpiPick only gets you so far without clinicians experienced in what to do from there. Unfortunately, the maldistribution and shortage of specialists have no easy fix, and EpiPick addresses them only partway. The next tricks will be raising awareness of the website among its intended users and understanding whether and how viewing its results affects treatment decisions and outcomes, all while updating the algorithms as the science of classifying and treating epilepsy advances.

ORCID iD

Samuel W Terman https://orcid.org/0000-0001-6179-9467

References

1. Bergen DC. Training and distribution of neurologists worldwide. J Neurol Sci. 2002;198(1-2):3-7.
2. Smith D, Defalla BA, Chadwick DW. The misdiagnosis of epilepsy and the management of refractory epilepsy in a specialist clinic. QJM. 1999;92:15-23.
3. Perucca E. Overtreatment in epilepsy: adverse consequences and mechanisms. Epilepsy Res. 2002;52:25-33.
4. Beniczky S, Asadi-Pooya AA, Perucca E, et al. A web-based algorithm to rapidly classify seizures for the purpose of drug selection. Epilepsia. 2021;62(10):2474-2484.
5. Asadi-Pooya AA, Beniczky S, Rubboli G, Sperling MR, Rampp S, Perucca E. A pragmatic algorithm to select appropriate antiseizure medications in patients with epilepsy. Epilepsia. 2020;61(8):1668-1677.
6. Beniczky S, Rampp S, Asadi-Pooya AA, Rubboli G, Perucca E, Sperling MR. Optimal choice of antiseizure medication: agreement among experts and validation of a web-based decision support application. Epilepsia. 2021;62(1):220-227.
