Abstract
We sought to investigate electroencephalographers’ real-world behaviors and opinions concerning reading routine EEG (rEEG) with or without clinical information. An eight-question, anonymous, online survey targeted at electroencephalographers was disseminated on social media from the authors’ personal accounts and emailed to select colleagues of the authors. A total of 389 responses were included. Most respondents reported examining clinical information before describing rEEG findings. Nonetheless, only a minority of respondents believe that EEG analysis/description should be influenced by clinical information. We recommend reviewing clinical data only after an unbiased EEG read to prevent history bias and ensure generation of reliable electrodiagnostic information.
Keywords: Cognitive bias, EEG, Electroencephalography, History bias
Introduction
Reporting of routine scalp EEG (rEEG) findings should entail an objective description of the findings, followed by an interpretation or clinical correlation that relates the findings to the indication for which the EEG was ordered [6].
This structure reflects the argument for describing rEEG findings without clinical information to prevent bias (“history bias”) [1]. Despite these widely accepted recommendations, we hypothesized that (i) unbiased EEG reading may not be ubiquitous in real-world practice, and (ii) biased vs. unbiased EEG reading practices may correlate with providers’ profession and training profile. In this study, we investigated electroencephalographers’ opinions and real-world behaviors concerning analyzing rEEG findings with or without clinical information.
Methods
We evaluated rEEG reading practices using an eight-question, anonymous, online survey (iReadEEG; e-survey) targeted at electroencephalographers. Routine EEG was defined as routine scalp EEG recordings, either in the outpatient or inpatient setting, thus excluding specialized types of EEG (e.g., electrocerebral inactivity recordings) and EEG recordings in special circumstances or for prolonged durations (e.g., continuous EEG in critical illness or during long-term video-EEG monitoring). “Clinical history” represents clinical information beyond a patient’s age.
The survey was disseminated on social media (Twitter and LinkedIn) from authors’ personal accounts (FAN and SB) and shared by the Critical Care EEG Monitoring Research Consortium, Greater Boston Epilepsy Society, Danish Society of Clinical Neurophysiology, NeurologyLive, and International League Against Epilepsy (ILAE) Young Epilepsy Society. Moreover, authors emailed the survey link to select colleagues, mostly academic epileptologists.
Participants were stratified into five groups: epileptologists with fellowship training in clinical neurophysiology (CNP) or epilepsy (group 1); general neurologists with (group 2) or without (group 3) fellowship training in CNP or epilepsy; CNP or epilepsy fellows and neurology residents (group 4); and EEG technicians, non-neurologist clinical neurophysiologists, researchers who read EEG, and others (group 5).
Statistical analyses were performed using IBM SPSS Statistics Version 28.0 software. The chi-square test was used to compare groups; statistical significance was assumed for p < 0.05. Data were collected between January 3rd and January 12th, 2022. We did not seek ethical approval from an institutional review board because the study data were deidentified and obtained from healthcare providers who volunteered to share details about their EEG practices. All data are available upon request.
Results
There were 436 responses; 47 were excluded because they were incomplete. A total of 389 responses were ultimately included: group 1 (n = 167), group 2 (n = 67), group 3 (n = 30), group 4 (n = 47), and group 5 (n = 78). Respondents were from 60 different countries: 154 responses came from the U.S., 27 from Russia, 24 from the United Kingdom, 21 from Brazil, 16 from Spain, 14 from Canada, and 13 from India; the remaining countries each had fewer than 10 responses. Survey results are summarized in Table 1.
Table 1.
Summary of entire survey dataset.
| | Group 1 (n = 167) | Group 2 (n = 67) | Group 3 (n = 30) | Group 4 (n = 47) | Group 5 (n = 78) |
|---|---|---|---|---|---|
| How long have you been reading EEG (both in training and clinical practice)? (n;%) | |||||
| ≤6 months | 0; 0 | 2; 3.0 | 4; 13.3 | 4; 8.5 | 3; 3.8 |
| >6 months and ≤12 months | 1; 0.6 | 2; 3.0 | 2; 6.7 | 9; 19.1 | 5; 6.4 |
| >12 months and ≤2 years | 7; 4.2 | 10; 14.9 | 5; 16.7 | 9; 19.1 | 5; 6.4 |
| >2 years and ≤5 years | 31; 18.6 | 14; 20.9 | 7; 23.3 | 8; 17.0 | 12; 15.4 |
| >5 years | 128; 76.6 | 39; 58.2 | 12; 40.0 | 17; 36.2 | 53; 67.9 |
| When do you look at patients’ clinical history* or head imaging? (n;%) | |||||
| Before analyzing the raw EEG | 35; 21.0 | 22; 32.8 | 14; 46.7 | 20; 42.6 | 47; 60.3 |
| While analyzing the raw EEG | 23; 13.8 | 13; 19.4 | 1; 3.3 | 4; 8.5 | 14; 17.9 |
| After analyzing the raw EEG and before describing EEG findings | 46; 27.5 | 15; 22.4 | 9; 30.0 | 9; 19.1 | 6; 7.7 |
| After analyzing and describing EEG findings and before generating a clinical impression | 63; 37.7 | 17; 25.4 | 6; 20.0 | 14; 29.8 | 11; 14.1 |
| Is your analysis and description of EEG findings influenced by patients’ clinical history* or head imaging? (n;%) | |||||
| Yes | 105; 62.9 | 50; 74.6 | 25; 83.3 | 32; 68.1 | 53; 67.9 |
| No | 62; 37.1 | 17; 25.4 | 5; 16.7 | 15; 31.9 | 25; 32.1 |
| My philosophy is that analyzing and describing findings on EEG: (n;%) | |||||
| Should not be influenced by patients’ clinical history* or head imaging | 45; 26.9 | 7; 10.4 | 7; 23.3 | 8; 17.0 | 7; 9.0 |
| Should not be influenced by patients’ clinical history* or head imaging except in cases where there are questionable EEG findings such as suspicious sharp transients | 63; 37.7 | 32; 47.8 | 9; 30.0 | 24; 51.1 | 34; 43.6 |
| Should be influenced by patients’ clinical history* or head imaging | 59; 35.3 | 28; 41.8 | 14; 46.7 | 15; 31.9 | 37; 47.4 |
*In addition to patients’ age.
More than 60% of respondents in every group reported looking at patients’ clinical history or head imaging before describing EEG findings (Fig. 1). This figure was lowest in group 1 (62.3%) and highest in groups 3 and 5 (80.0% and 85.9%, respectively). Upon subgroup analysis (Fig. 1), the relationships between timing of looking at patients’ clinical information and (i) profession (group 1 vs. groups 2 and 3) or (ii) having fellowship training in CNP or epilepsy (groups 1 and 2 vs. group 3) were statistically significant (p = 0.020 and p = 0.024, respectively). More than 60% of respondents in every group stated that their analysis and description of EEG findings are influenced by patients’ clinical history or head imaging. This figure was lowest in group 1 (62.9%) and highest in group 3 (83.3%). Respondents from group 1 mostly rated this influence as minimal (52.4%) whereas those from groups 2–5 mostly rated it as moderate (60.0%, 60.0%, 65.6%, and 58.5%, respectively). There was significant intergroup variability as to whether analyzing and describing EEG findings should be influenced by patients’ clinical history or head imaging (p = 0.015) (Fig. 2).
Fig. 1.

Data pertaining to survey question “When do you look at patients’ clinical history* or head imaging?” stratified by group.
Upper panel: question responses stratified by group; X2 = 53.31, df 12, p < 0.001. Lower left panel: subgroup analysis comparing question responses with profession (group 1 vs. groups 2 and 3); X2 = 9.85, df 3, p = 0.020. Lower right panel: subgroup analysis comparing question responses with presence or absence of fellowship training in clinical neurophysiology or epilepsy (groups 1 and 2 vs. group 3); X2 = 9.47, df 3, p = 0.024. *In addition to patients’ age.
Fig. 2.

Data pertaining to survey question asking respondents’ philosophy in terms of analyzing and describing findings on routine EEG studies stratified by group.
Question responses stratified by group; X2=19.05, df 8, p = 0.015. *In addition to patients’ age.
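As a sanity check, the chi-square statistic reported for Fig. 2 can be reproduced directly from the Table 1 counts for the philosophy question. The short Python sketch below is illustrative only (the variable names are ours, and computing the p-value itself would additionally require a chi-square distribution function, e.g., from SciPy); it computes the Pearson statistic and degrees of freedom for the 3 × 5 contingency table:

```python
# Observed counts from Table 1: rows = response options, columns = groups 1-5
observed = [
    [45, 7, 7, 8, 7],      # should not be influenced
    [63, 32, 9, 24, 34],   # should not be influenced, except questionable findings
    [59, 28, 14, 15, 37],  # should be influenced
]

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

stat, df = chi_square(observed)
print(f"X2 = {stat:.2f}, df = {df}")  # X2 = 19.05, df = 8, matching the reported values
```

With 3 response options and 5 groups, df = (3 − 1) × (5 − 1) = 8, and the computed statistic agrees with the X2 = 19.05 (p = 0.015) reported in Fig. 2.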
Discussion
Examining clinical information before describing rEEG findings is common practice, even among epileptologists (62.3%), and most commonly among general neurologists without EEG training (80.0%) and general neurologists with EEG training (74.6%). Paradoxically, most respondents, irrespective of group, believe that, in principle, EEG analysis and description either should not be influenced by clinical information at all or should not be influenced except in select cases with questionable findings.
Notably, our survey was not designed to account for scenarios in which the patient whose EEG is being interpreted is already known to the electroencephalographer. Such scenarios include cases where the patient is cared for clinically by the electroencephalographer or by a colleague within the same practice group, or where the patient’s clinical information has previously become familiar to the electroencephalographer through conferences. In these circumstances, the electroencephalographer would conceivably be familiar with the patient’s clinical information regardless of whether he/she accessed the medical record.
Judgement errors in diagnostic tests, such as EEGs, result from a combination of systematic bias and noise [2]. The former, defined as errors that consistently deviate from the truth in the same direction, includes cognitive biases - which have been shown to be associated with diagnostic inaccuracies [5].
Clinical information may bias test reading – consciously or unconsciously - in two realms: perception and interpretation [3]. Perception bias refers to bias in identification of EEG findings – especially those that are subtle (e.g., “looking too hard” in cases where history suggests epilepsy [1]). Interpretation bias refers to bias in classifying findings as normal or abnormal (i.e., under- or over-calling), a problem that is pronounced in EEG reading by experts even when clinical information is not provided [4]. History bias in EEG reading has been documented in a study where electroencephalographers changed their interpretation between normal and abnormal depending on whether clinical information was available [7].
Fundamentally, the value of a diagnostic test is founded upon its ability to provide independent information. Reaching a correct diagnosis is more likely when integration of evidence is delayed until after each piece of evidence (e.g., clinical history and EEG) is first evaluated on its own merits. This is one of several ‘decision hygiene’ practices that have been found to reduce noise in human judgements, including diagnostic reasoning [2]. Therefore, we suggest electroencephalographers consider reviewing clinical data only after an unbiased EEG read. This method prevents history bias and allows generation of reliable electrodiagnostic information. In scenarios where this is not possible (e.g., in cases in which the electroencephalographer is reading an EEG from his/her own clinical patient), we stress that this practice likely introduces bias to EEG data analysis and description, and may be remediated by having the EEG interpreted by a colleague electroencephalographer who is unfamiliar with the case.
Disclosure
M. B. Westover is a co-founder of Beacon Biosignals, which played no role in this study. F. Nascimento, J. Jing, S. Beniczky, M. Olandoski, S. Benbadis, and A. Cole report no disclosures relevant to this manuscript.
Supplementary Material
Supplementary material associated with this article can be found in the online version at doi:10.1016/j.neucli.2022.08.002.
References
- [1] Amin U, Benbadis SR. The role of EEG in the erroneous diagnosis of epilepsy. J Clin Neurophysiol 2019;36:294–7.
- [2] Kahneman D, Sibony O, Sunstein CR. Noise: a flaw in human judgement. New York: Little, Brown Spark; 2021.
- [3] Loy CT, Irwig L. Accuracy of diagnostic tests read with and without clinical information: a systematic review. JAMA 2004;292:1602–9.
- [4] Nascimento FA, Jing J, Beniczky S, Benbadis SR, Gavvala JR, Yacubian EM, et al. One EEG, one read – a manifesto towards reducing interrater variability among experts. Clin Neurophysiol 2022;133:68–70.
- [5] Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak 2016;16:138.
- [6] Tatum WO, Selioutski O, Ochoa JG, Clary HM, Cheek J, Drislane F, et al. American Clinical Neurophysiology Society Guideline 7: guidelines for EEG reporting. J Clin Neurophysiol 2016;33:328–32.
- [7] Williams GW, Lesser RP, Silvers JB, Brickner A, Goormastic M, Fatica KJ, et al. Clinical diagnoses and EEG interpretation. Cleve Clin J Med 1990;57:437–40.