Author manuscript; available in PMC: 2023 Feb 1.
Published in final edited form as: J Clin Neurophysiol. 2022 Feb 1;39(2):129–134. doi: 10.1097/WNP.0000000000000836

Neurobehavioral Biomarkers: An EEG Family Reunion

Joshua B Ewen 1,2, April R Levin 3,4
PMCID: PMC8810913  NIHMSID: NIHMS1661263  PMID: 34366398

Abstract

The field of clinical EEG has had an uneasy relationship with the use of this technology for clinical cognitive applications, and often for good reason. However, apart from its clinical use, EEG has had a tradition as a major tool in cognitive psychology and cognitive neuroscience dating back at least to the 1960s. Based on accumulated knowledge from its research application, EEG-based biomarkers are beginning to see applications in clinical trials and may eventually enter clinical care. We address concerns surrounding quality control, the treatment of artifact and normal variants, and how developments in engineering, biomarker validation and implementation science rigorously applied to these tools can lead to well-justified approaches.


Clinical neurophysiologists often bristle at the notion of EEG-based biomarkers for neurobehavioral disorders, and not for nothing. The 1980s saw the misuse of EEG-based technologies that purported to diagnose a wide range of disorders; these technologies persist today in fee-for-service environments, but their shaky scientific foundations continue to raise significant concerns among professional organizations (1) and payors. Based on the lessons of that era, experienced neurologists today worry that contemporary efforts at developing EEG-based biomarkers for neurobehavioral disorders represent an attempt to revive a fringe, futile field of research. They raise the concern that this field may suffer from poor judgments about signal quality and artifact, made by under-trained, non-neurophysiologist personnel. Moreover, many proposed neurobehavioral biomarkers depend heavily on computerized analyses, and most electroencephalographers’ current experience with computer analysis may be limited to spike/seizure detectors so inaccurate that they make our workflow more difficult rather than easier. With prior validation studies so poorly performed as to be invalid, questionable quality control, and computer algorithms that do not seem to be up to the task of recognizing even the most basic epileptiform features, how can there be any hope for computer-analyzed, EEG-based biomarkers that say anything real and useful about entities as nebulous as neurobehavioral disorders?

The answer is that the new generation of biomarkers springs not from the tradition of clinical EEG, but from a “secret, second EEG family” of cognitive neuroscience researchers that has been examining physiological mechanisms of neurobehavioral disorders in earnest for decades (2). Many clinical neurophysiologists may not know that this body of knowledge even exists. Cognitive psychologists and neuroscientists are likely to identify themselves not by technique but by domain of investigation, and to master multiple techniques suited to answer their scientific questions. It may be because there is no technique-focused professional organization equivalent to IFCN or ACNS in cognitive neuroscience that an entente between the clinical and cognitive research uses of EEG has been elusive. Recently developed collaborations between IFCN and the Organization for Human Brain Mapping may go a long way toward increasing intellectual exchange between the branches of Berger’s family tree.

It is primarily from the cognitive research tradition of EEG—and not from clinical neurophysiology—that this new generation of biomarkers will derive. However, the clinical neurophysiology community has a lot to offer any effort to develop an EEG-based clinical test. While some issues related to test validity are markedly different between visual inspection of the EEG and computer analysis of neurobehavioral biomarkers, it would nevertheless be foolish not to encourage clinical neurophysiologists’ involvement in the process of biomarker validation. The intent of this editorial is to identify and respond to concerns heard from clinical neurophysiologists, in order to remove barriers to increased participation in this developing field (3).

Haven’t We Tried This Before (and Failed)?

From a theoretical standpoint, “Berger’s descendants” can agree that EEG contains a tremendous amount of brain data with high temporal resolution, and thus could plausibly index brain functions beyond epilepsy. In the eyes of Hans Berger, the application of EEG to neurobehavioral issues was the original goal (4, 5). It was with the discovery of interictal epileptiform discharges in 1934 (6) that the current primary clinical application to epilepsy evolved. The reintroduction of EEG into the cognitive neurosciences took a major leap in the 1960s with the discovery of an event-related potential (ERP) that clearly indexed a psychological phenomenon (7). Clinical neurophysiologists most often interact with this body of research in the context of advances in the use of EEG, MEG or ECoG, often coupled with sensory-cognitive-motor tasks, to help with pre-surgical planning (8). Nonetheless, to many neurologists, EEG-based biomarkers of neurobehavioral disorders evoke thoughts of a few basic mathematical techniques (perhaps run on a 16-pound desktop computer with green text and saved to a floppy disk), modernized only by some trendy black-box statistics.

Computational techniques able to index mechanistically relevant processes have, in fact, advanced significantly over the past 30 years. This has occurred largely as a result of translating key ideas across previously disparate fields. EEG is a complex signal amenable to processing techniques developed in fields including mathematics, physics, computer science, aerospace engineering and biology. By harnessing the ever-increasing computational power of modern computers, applying these techniques to EEG signals, and studying the outputs in the context of a wide range of behaviors and disorders, relevant new measures of brain activity are constantly being developed and validated.

Most of the current work in cognitive electrophysiology focuses on targeted, hypothesis-driven techniques, within a theoretical framework regarding a particular disorder (9). Indeed, most biomarkers being proposed today rest on considerably more mechanistic knowledge than did the clinical EEG when it was first found to be valid and useful in the care of people with epilepsy. One such example is cross-frequency coupling, a measure of how the brain integrates and segregates information across multiple temporal and spatial scales (10). Cross-frequency coupling at the level of neural circuits (as measured by EEG) has notable parallels to “up-states” and “down-states” measured at the level of single neurons, thus offering potential for bench-to-bedside translation. Cross-frequency coupling is difficult to see visually but easily detectable using quantitative measures, providing previously inaccessible information about how neural circuit physiology contributes to attention and learning.
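Although cross-frequency coupling is invisible to the naked eye, it is straightforward to quantify. The sketch below is a minimal, illustrative implementation of one common estimator, a mean-vector-length modulation index in the style of Canolty and colleagues (10), run on synthetic data; the theta and gamma band choices and all signal parameters are assumptions made for the demonstration, not values taken from the studies cited here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 50)):
    """Mean-vector-length estimate of phase-amplitude coupling:
    |mean(amplitude_of_fast_band * exp(i * phase_of_slow_band))|."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic demo: 40 Hz amplitude modulated by 6 Hz phase, vs. no coupling.
fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * 6 * t)
gamma = np.sin(2 * np.pi * 40 * t)
coupled = theta + 0.3 * (1 + theta) * gamma + 0.1 * rng.standard_normal(t.size)
uncoupled = theta + 0.3 * gamma + 0.1 * rng.standard_normal(t.size)
```

On the coupled signal the gamma envelope rises and falls with theta phase, so the mean vector is large; on the uncoupled signal the envelope is flat and the vector averages toward zero.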

Targeted sensory paradigms also offer particular promise in this regard and can be used to measure both “high-level” cognitive processes (thus mapping best onto phenotype) and “low-level” sensory responses (thus mapping best onto underlying mechanism or genotype). As an example of a “high-level” paradigm mapping onto phenotype, numerous studies have demonstrated that delayed latency of the N170 ERP component to faces offers promise as a marker of social communication dysfunction in autism (11). As an example of a “low-level” paradigm mapping onto genotype, various characteristics of visual evoked potential (VEP) components are altered in individuals with Rett syndrome (12), and may serve as markers of disease stage, clinical severity, and mutation type. Similar findings are seen in mouse models of Rett syndrome, are linked to specific abnormalities of circuit dysfunction, and may be responsive to targeted treatments (13). Similarly, neural entrainment to an auditory chirp stimulus and habituation to repeated auditory stimuli are both altered in individuals with Fragile X, and correlate with social communication and sensory deficits (14, 15). As with the VEP in Rett syndrome, there is also translational potential here: similar measures are altered in mouse models of Fragile X and can be altered by pharmacological interventions (16).
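To make the idea of a latency marker concrete, here is a minimal sketch of how a component latency might be extracted from an averaged waveform. The 130–200 ms search window, the sampling rate, and the synthetic Gaussian waveform are illustrative assumptions, not the actual N170 methodology of the cited studies.

```python
import numpy as np

def peak_latency_ms(erp, fs, window_ms=(130, 200), polarity=-1):
    """Latency (ms) of the largest deflection of the given polarity inside a
    post-stimulus search window; polarity=-1 targets a negative-going
    component such as the N170."""
    lo, hi = (int(round(m * fs / 1000)) for m in window_ms)
    idx = lo + int(np.argmax(polarity * erp[lo:hi]))
    return idx * 1000 / fs

# Synthetic averaged ERP: a negative Gaussian peak centered at 170 ms.
fs = 500
t = np.arange(0, 0.4, 1 / fs)                                # 0-400 ms
erp = -2.0 * np.exp(-((t - 0.170) ** 2) / (2 * 0.01 ** 2))   # in microvolts
latency = peak_latency_ms(erp, fs)
```

A group difference in such a latency measure, computed identically across participants, is the kind of objective quantity a biomarker study would then take to validation.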

A few words need to be said about unbiased, non-parametric (machine learning) approaches. Clinicians will likely feel more comfortable with the majority of biomarkers under current development, which rely on mechanistic knowledge (though recall the empirical state of affairs when clinical EEG first entered service). This comfort is probably a reaction to the 1980s’ procedure of throwing EEG spectral data into discriminant statistical black boxes meant to classify the data as belonging to an individual with a disorder or not. However, machine-learning algorithms, a more modern class of black boxes, have been shown to be accurate classifiers in many walks of life and are likely to eventually enter the field of EEG, as they have in radiology. While there may be some hope that these black boxes can be cracked open to human ken in some instances, others may be forever beyond human comprehension. Perhaps what matters most, however, is not whether we understand them on a mechanistic level, but whether they can be shown to do their jobs accurately.
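As an illustration that such classifiers are judged by held-out accuracy rather than by interpretability, here is a minimal sketch: a plain logistic-regression classifier fitted by gradient descent on hypothetical “spectral” features. The two feature distributions are invented for the demonstration; the point is only that the classifier is scored on data it never saw during fitting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical spectral features (e.g., relative band power) for two groups.
n = 200
controls = rng.normal([0.3, 0.5], 0.1, size=(n, 2))
patients = rng.normal([0.5, 0.3], 0.1, size=(n, 2))
X = np.vstack([controls, patients])
y = np.r_[np.zeros(n), np.ones(n)]

# Shuffle, then hold out a test set: accuracy is judged only on unseen data.
idx = rng.permutation(2 * n)
X, y = X[idx], y[idx]
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

# Logistic regression fitted by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xtr @ w + b)))
    w -= 0.5 * (Xtr.T @ (p - ytr)) / 300
    b -= 0.5 * np.mean(p - ytr)

accuracy = np.mean(((1 / (1 + np.exp(-(Xte @ w + b)))) > 0.5) == yte)
```

Whether the fitted weights are humanly interpretable is a separate question from whether the held-out accuracy is high; the latter is what validation measures.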

Ultimately, a biomarker’s role is to accurately and validly report on the clinical information it is designed to index. While mechanistic knowledge and advanced mathematical tools may increase our hit-rate for rationally targeting the most promising biomarker candidates, the one key ingredient that is most critical to moving the field forward and avoiding the mistakes of the past is solid validation data (2, 17, 18).

What About Artifact?

A critical skill among experienced electroencephalographers is the recognition of artifact. An imperative for patient safety is not to make a “false alarm” over-call because of artifact masquerading as a pathological finding. We may use filters to mitigate the artifact, throw out a record entirely when the artifact is unavoidable, and talk with the technologists about how to avoid it in the future. Because of the questionable quality control associated with prior methods purporting to be EEG-based tests for cognitive disorders, clinical neurophysiologists may fear that the new generation of biomarkers will be lax about signal quality.

The opposite is true. Just as in clinical neurophysiology (19), there are stringent technical standards among cognitive psychologists and cognitive neuroscientists who use EEG (20–22), and many of the top labs perform methodology research alongside their neuroscientific work. Decades of EEG-based research in cognitive neuroscience labs have in fact generated a large body of technical research into artifact identification and correction, thanks in large part to the involvement of signal-processing engineers and applied mathematicians (23–25). Healthy competition among methods used, for example, to identify and reject eye blinks, lateral eye movements, movement artifact and EMG artifact has led to significant progress in this area. Human-independent preprocessing pipelines to reject or correct artifact are coming online, and their performance is being evaluated with extensive validation (25). Moreover, EEG-based biomarker validation efforts conducted in multi-site clinical trials typically have exceedingly rigorous standards for manualization, quality assurance and training (26). Imagine if your technologists’ interactions with each EEG patient were highly scripted (but flexible enough to accommodate different patients’ needs), each EEG recorded by your lab were reviewed promptly at a central site, and, if a below-standard recording occurred, the central site provided ongoing education of the technologist before any further recording was performed.
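To give a flavor of what one automated quality-control step looks like, here is a deliberately simple sketch of amplitude-based epoch rejection on synthetic data. Real pipelines such as HAPPE (25) are far more sophisticated (artifact correction, channel interpolation, and more); the 150 μV peak-to-peak limit and the simulated artifacts are illustrative assumptions.

```python
import numpy as np

def reject_epochs(epochs, ptp_limit_uv=150.0):
    """Drop epochs whose peak-to-peak amplitude exceeds the limit on any
    channel. epochs: array (n_epochs, n_channels, n_samples), microvolts."""
    ptp = epochs.max(axis=2) - epochs.min(axis=2)   # per epoch, per channel
    keep = (ptp <= ptp_limit_uv).all(axis=1)
    return epochs[keep], keep

# Synthetic demo: clean epochs plus two with a large movement-like artifact.
rng = np.random.default_rng(2)
epochs = rng.normal(0, 10, size=(20, 4, 250))   # ~10 uV background noise
epochs[3, 1] += 300 * np.hanning(250)           # artifact: channel 1, epoch 3
epochs[17, 0] -= 250 * np.hanning(250)          # artifact: channel 0, epoch 17
clean, keep = reject_epochs(epochs)
```

Because the rule is explicit and deterministic, the same epochs are rejected at every site and on every run, which is exactly the property multi-site quality assurance depends on.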

Because most computer analysis of EEG measures features that are different from those identified in clinical EEG (discussed more in the next section), it may also be the case that some of the features targeted by signal-processed EEG are more robust to some types of noise than the visually inspected EEG. Many of the most widely used analysis techniques result from averaging together many trials rather than, say, looking at individual examples of seizure onsets, as is often the case in the clinic. This averaging technique is necessary for finding a 1–2 μV ERP signal hidden within 20–80 μV background EEG, but it also confers resistance to any artifact or other source of noise that is not time-locked. Children with Angelman syndrome often have EEG backgrounds of over 150 μV, often with frequent inter-ictal epileptiform discharges. One would expect that many more trials than in controls would be needed to find the tiny ERP signal within this noisy background. Yet Key and colleagues recently obtained a reliable modulation of ERPs in children with Angelman syndrome using only 14 trials, compared with an average of 17 trials in controls (27). So despite a context in which the background activity likely differed by multiples of voltage, and at least some members of the clinical group presumably had frequent epileptiform activity while the control group had none, the effect on the ERP was detectable with equivalent numbers of trials. This is, on its face, plausible evidence that features measured by signal-processing methods may be less affected by changes in the background record than one might assume.
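The square-root-of-N logic behind trial averaging can be demonstrated directly. The sketch below uses synthetic data with assumed amplitudes (a ~2 μV ERP in 50 μV of non-time-locked background noise) to show residual noise in the average shrinking as trials accumulate, roughly as 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 250
t = np.arange(0, 0.5, 1 / fs)
erp = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))   # ~2 uV "true" ERP

def residual_noise(n_trials, background_uv=50.0):
    """RMS error of the n-trial average relative to the true ERP when the
    background is non-time-locked noise; expected to fall as 1/sqrt(n)."""
    trials = erp + rng.normal(0, background_uv, size=(n_trials, t.size))
    return np.sqrt(np.mean((trials.mean(axis=0) - erp) ** 2))
```

With one trial the residual noise sits near the full 50 μV background; by 100 trials it falls to roughly 5 μV, which is how a 2 μV component becomes measurable even in a high-amplitude record.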

This Isn’t Chess: Humans vs. Computers

Clinical electroencephalographers will be familiar with the slow progress in signal-processing methods to detect or predict seizures, and to identify spikes. While signal-processing methods may be facile at quantifying some of the features that are recognized by the human eye, such as asymmetry of background power (28, 29), computer “vision” and the human eye mostly have different domains of expertise. Rather than try to shoe-horn computer algorithms into being a second-rate substitute for an experienced eye, most biomarkers under development identify and measure features that are natural to mathematical algorithms. Features such as cross-frequency coupling or event-related potentials are likely all but invisible to the human eye, yet are key elements of certain models of brain function (10).
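Quantifying asymmetry of background power, one of the features the human eye and the algorithm can both appreciate (28, 29), reduces to a few lines of spectral estimation. This is an illustrative sketch on synthetic homologous channels; the alpha band and the (L − R)/(L + R) index form are assumptions for the demonstration rather than the exact methods of the cited studies.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band=(8, 12)):
    """Approximate power in a frequency band via Welch's PSD estimate."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    mask = (f >= band[0]) & (f <= band[1])
    return np.sum(pxx[mask]) * (f[1] - f[0])

def asymmetry_index(left, right, fs, band=(8, 12)):
    """(L - R) / (L + R) of band power; 0 = symmetric, +/-1 = one-sided."""
    pl, pr = band_power(left, fs, band), band_power(right, fs, band)
    return (pl - pr) / (pl + pr)

# Synthetic homologous channels: alpha attenuated over one hemisphere.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)
left = 10 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 5, t.size)
right = 3 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 5, t.size)
ai = asymmetry_index(left, right, fs)
```

The index is bounded and unitless, which makes it easy to compare across recordings and to threshold in a validation study.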

Normal variants also merit mention here. This is a sticky wicket. (The authors regret the pun.) Clinical electroencephalographers worry that signal-processing methods may produce false positives by picking up the normal variants encountered in the clinical interpretation of the EEG. The short version is that, because signal-processing techniques are sensitive to different features (30) from those that capture the electroencephalographer’s eye, as demonstrated by the results of Key et al. (27), traditional “normal variants” and most mathematical analyses may swim in different streams that interact only minimally.

The reason we have the concept of “normal variants” is because we as a field initially thought these were pathological findings (some examples reviewed in (17)). Sub-typing certain sharp, transient forms as pathological and others as non-pathological represents second-order work on the part of electroencephalographers and helps refine the sensitivity and specificity of the EEG. Bear in mind that even pathological EEG findings are not perfectly sensitive or specific—the presence of focal discharges does not guarantee that the patient has epilepsy, and the absence does not guarantee that she does not. The identification of certain sub-types simply makes those specificities and sensitivities just a bit better.

Because what we measure with computer analysis is typically quite different from what we identify in the clinical interpretation, knowledge of clinical EEG normal variants may not regularly help us identify analogous benign patterns in signal-processing metrics. And because we are still at a relatively early stage in the development of these metrics, we can expect that future generations will be able to sub-type benign clusters out of otherwise pathological findings in these new measures. Until that time, the additional uncertainty is simply “baked into” the imperfect sensitivity and specificity associated with the biomarkers, just as was the case with clinical EEG before we knew, for example, to distinguish small sharp spikes from epileptiform discharges.

Conversely, we should also recognize an additional source of uncertainty in clinical EEG that is absent in computer-analyzed EEG: subjectivity. Validation of these novel quantitative biomarkers will have specified technical standards for data collection, a well specified data-cleaning pipeline, and an objective, quantitative threshold to separate positive vs. negative results. In contrast, anyone who has re-read an EEG from a second-opinion consult knows that every other neurologist over-reads artifact, misses electrographic seizures and misinterprets those normal variants our ancestors worked so hard to define.

In the end, does the “second-order knowledge” associated with decades of experience in clinical EEG outweigh the objectivity of quantitative biomarkers, or vice versa? This is an empiric question that must be considered in each clinical scenario in which the clinical EEG and any new EEG-based biomarker are used. Perhaps neurology can follow the lead of the artificial intelligence community here, recognizing that our field might best make progress by moving away from a “human versus machine” model and toward a “human plus machine” model.

Clinical Readiness

A quick PubMed search of the term “electroencephalogram” returns over 150,000 entries. The knowledge base surrounding the clinical EEG is highly interconnected (30), and a practitioner of a technique used and studied for over 80 years could be forgiven for questioning a host of new techniques that as yet have no track record. Yet this new generation of techniques is grounded in basic science work (10, 12, 14–16), and efforts to separate the valid and useful biomarker candidates from those that fail to prove their mettle are being conducted in large, well-designed, multi-center studies. For example, the Autism Biomarkers Consortium for Clinical Trials (ABC-CT) has collected EEG and ERP data from 399 children and is currently collecting data from 400 more (31). The Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) has data on thousands of individuals (32). What these biomarkers lack in accumulated history, they make up for in rigorous and explicit validation studies (17, 18).

It is thus worth comparing the logic of biomarker development to what goes on in the head of a clinical neurophysiologist reading a single EEG. As required by the FDA, a biomarker validation study requires a well-specified “context of use,” the question being asked of the biomarker (17, 18, 33), such as prognosticating the course of natural history, predicting the response to a medication, or monitoring for toxicity. We can parse the necessary elements of a validation study: [1] the patient population to which the test can be applied (inclusion/exclusion criteria for the validation study, and the patient group to whom the biomarker can be applied in use), [2] the context of use, i.e., the question being asked, [3] recording standards and a data-cleaning pipeline, [4] the EEG feature to be tested, [5] a threshold for defining a positive vs. negative test, and [6] for the validation study only (not for clinical use), an outcome “gold standard” independent variable against which sensitivity and specificity are judged.
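Elements [5] and [6] reduce, at validation time, to a simple calculation once the biomarker has been binarized and compared against the gold standard. A minimal sketch, with invented toy data standing in for a validation cohort:

```python
import numpy as np

def sens_spec(biomarker, outcome, threshold):
    """Binarize a continuous biomarker at a prespecified threshold and score
    it against the gold-standard outcome."""
    positive = biomarker >= threshold
    tp = np.sum(positive & outcome)
    fn = np.sum(~positive & outcome)
    tn = np.sum(~positive & ~outcome)
    fp = np.sum(positive & ~outcome)
    return tp / (tp + fn), tn / (tn + fp)

# Toy validation data: biomarker values and actual gold-standard outcome.
biomarker = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2])
outcome = np.array([True, True, True, False, True, False, False, False])
sens, spec = sens_spec(biomarker, outcome, threshold=0.5)
```

The threshold must be fixed before the validation study (element [5]); tuning it on the validation data itself would inflate both numbers.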

Consider the example of a biomarker tested for its ability to predict benefit from a hypothetical new medication, “Persevegon,” that aims to minimize stereotypies among children with autism spectrum disorder (ASD) (see Table 1). [1] The patient population may include children in the age range of 8–12 years. [2] This biomarker is designed to predict response to a medication. [3] Recording standards would be specified by the investigators doing the validation of the biomarker. [4] In this made-up example, investigators may choose to examine event-related modulation of beta activity (34), due to its long-researched association with persistence of activity within the motor system. [5] A binarizing threshold would be selected before the validation study, based on prior data. [6] In the validation study, investigators would compute sensitivity and specificity of the biomarker by comparing the biomarker’s prediction with actual response to the medication.

Table 1.

Comparison of clinical hypothesis-testing using a biomarker vs. the testing of multiple, simultaneous hypotheses using routine clinical EEG.

| | Biomarker | Clinical EEG: Hypothesis 1 | Clinical EEG: Hypothesis 2 | Clinical EEG: Hypothesis 3 | Clinical EEG: Scanning for incidental findings |
| --- | --- | --- | --- | --- | --- |
| Patient Population | Children with ASD, 8–12y, with stereotypies | School-aged, typically developing child with parent-reported staring spells | (same) | (same) | (same) |
| Question Being Asked | Response to “Persevegon” | Absence epilepsy | Focal epilepsy | Non-epileptic staring spells | Unspecified |
| Recording Standards and Data Cleaning | HAPPE (Harvard Automated Preprocessing Pipeline for EEG), specifically optimized for EEG preprocessing in children with neurodevelopmental disorders (25) | 1. ACNS/IFCN recording guidelines; 2. maybe some filtering; 3. expertise and bias of reader toward discounting/not mis-interpreting artifact | (same) | (same) | (same) |
| EEG Feature | Event-related modulation of motor beta activity | 3 Hz generalized spike-wave | Focal sharp waves (± focal slowing) | Absence of relevant abnormal findings | Anything pathological in EEG (or EKG) |
| Threshold (pos/neg) | Predefined based on prior data | Presence/absence* | Presence/absence* | Presence/absence | Presence/absence |
| Post-Test Probability | Defined by the validation study | If EEG negative: assuming a good recording, post-test probability close to 0. If EEG positive: post-test probability close to 100% | If EEG negative: does not update pre-test probability much. If EEG positive: increases probability of focal epilepsy. If indeterminate (“sharp transients”): probably does not change the mind of the clinician much | Dependent on probability of absence and focal epilepsy | Depends |

* But what is the quality of the morphology, and how many examples do you need to see in order to consider it “real”?

For clinical EEG, in how many instances are we actually able to put numbers on post-test probabilities?

If the technical standards are sub-optimal, one may need to discount the effect of a “negative” test.

By contrast, we rarely think consciously about how we test multiple hypotheses and sub-hypotheses at once while interpreting the clinical EEG. If we see a 7-year-old, typically developing child with staring spells, then, depending on the specifics of the history and physical, we assign pre-test probabilities to each of a number of hypotheses: that she has childhood absence epilepsy, that she has focal epilepsy, that she has non-epileptic staring spells (see Table 1). Depending on what we find on the EEG (3 Hz generalized spike-wave, focal inter-ictal discharges, focal slowing, or nothing abnormal), we derive post-test probabilities for each of these hypotheses. Simultaneously, we are also scanning the EEG for incidental, pathological findings. In many cases our use of the clinical EEG is not based on the same type of explicit validation studies we currently expect of the upcoming generation of biomarkers. However, the traditional use of EEG is based on decades of interconnected knowledge (30). Whereas this new generation of biomarkers will start as individual islands of knowledge, the clinical EEG represents an archipelago of studies connected by bridges. Only with decades of use will the biomarker literature become as interconnected as is currently the case with clinical EEG. But for each new biomarker, well-done validation studies will be a strong starting point.
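The pre-test/post-test reasoning in this comparison is ordinary Bayesian updating with likelihood ratios, which a validated biomarker makes explicit rather than intuitive. A minimal sketch (the sensitivity, specificity, and pre-test probability in the usage example are invented for illustration):

```python
def post_test_probability(pre_test, sensitivity, specificity, test_positive):
    """Update a pre-test probability with a test result via likelihood ratios:
    post-test odds = pre-test odds * LR, where LR+ = sens / (1 - spec) and
    LR- = (1 - sens) / spec."""
    if test_positive:
        lr = sensitivity / (1 - specificity)
    else:
        lr = (1 - sensitivity) / specificity
    odds = pre_test / (1 - pre_test) * lr
    return odds / (1 + odds)

# Example: 20% pre-test probability, a test with 90% sensitivity and
# 90% specificity. A positive result raises the probability to about 69%;
# a negative result lowers it to under 3%.
p_pos = post_test_probability(0.2, 0.9, 0.9, True)
p_neg = post_test_probability(0.2, 0.9, 0.9, False)
```

A validation study supplies the sensitivity and specificity; the clinician still supplies the pre-test probability, which is one reason the “human plus machine” framing fits.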

In summary, a new age of EEG-based biomarker development is upon us. There is strong support in funding and regulatory agencies (33, 35, 36), and multiple large-scale consortium trials are in progress. While clinical neurophysiologists may be skeptical of these efforts due to prior efforts in which business models often outpaced the science, the current approaches are of a qualitatively different and improved caliber: they are based on fundamental research conducted for decades by serious cognitive psychology and neuroscience laboratories, with the backing of deep methodological research and community standards for data quality. This is a message of hope but should not be misinterpreted as one of realized accomplishment: none of these techniques has yet “crossed the finish line.” Rather, in parallel with methodological advances and a growing neurobiological basis, there has also been development of stringent scientific and regulatory criteria for validation and quantification of uncertainty, as biomarker candidates are applied to clinical trial, clinical, and medicolegal contexts (18, 33, 36, 37). Until a particular biomarker candidate is demonstrated to be fit for purpose, each candidate is a focus of study and should not be employed in clinical or medicolegal contexts. The momentum of this up-and-coming field is strong, and the clinical EEG community has key contributions to make.

Funding Sources:

Dr. Ewen’s effort was funded by NIH grants R01MH113652 and P50 HD103538.

Footnotes

Conflict of Interests: The authors report no conflicts of interest.

This paper has not previously been presented at a meeting.

Bibliography

1. Nuwer M. Assessment of digital EEG, quantitative EEG, and EEG brain mapping: report of the American Academy of Neurology and the American Clinical Neurophysiology Society. Neurology. 1997 Jul;49(1):277–92.
2. Ewen JB. The eternal promise of EEG-based biomarkers: Getting closer? Neurology. 2016 Nov 29;87(22):2288–9.
3. Sahin M, Jones S, Sweeney J, Berry-Kravis E, Connors B, Ewen J, et al. Discovering translational biomarkers in neurodevelopmental disorders. Nat Rev Drug Discov. In press.
4. Berger H. On the Electroencephalogram of Man: Twelfth Report. Electroencephalogr Clin Neurophysiol. 1969;Suppl. 28:267–87.
5. Gloor P. Hans Berger and the discovery of the electroencephalogram. Electroencephalogr Clin Neurophysiol. 1969;Suppl. 28:1–36.
6. Fisher M, Lowenbach H. Aktionsströme des Zentralnervensystems unter der Einwirkung von Krampfgiften, 1. Mitteilung: Strychnin und Pikrotoxin. Arch f Exp Pathol und Pharmakol. 1934;174:357–82.
7. Walter WG, Cooper R, Aldridge VJ, McCallum WC, Winter AL. Contingent Negative Variation: An Electric Sign of Sensorimotor Association and Expectancy in the Human Brain. Nature. 1964 Jul 25;203:380–4.
8. Lachaux JP, Axmacher N, Mormann F, Halgren E, Crone NE. High-frequency neural activity and human cognition: past, present and possible future of intracranial EEG research. Prog Neurobiol. 2012 Sep;98(3):279–301.
9. Adamek JH, Luo Y, Ewen JB. Using Connectivity to Explain Neuropsychiatric Conditions: The Example of Autism. In: Thakor NV, editor. Springer Handbook of Neuroengineering. Springer; in press.
10. Canolty RT, Knight RT. The functional role of cross-frequency coupling. Trends Cogn Sci. 2010 Nov;14(11):506–15.
11. Kang E, Keifer CM, Levy EJ, Foss-Feig JH, McPartland JC, Lerner MD. Atypicality of the N170 Event-Related Potential in Autism Spectrum Disorder: A Meta-analysis. Biol Psychiatry Cogn Neurosci Neuroimaging. 2018 Aug;3(8):657–66.
12. LeBlanc JJ, DeGregorio G, Centofante E, Vogel-Farley VK, Barnes K, Kaufmann WE, et al. Visual evoked potentials detect cortical processing deficits in Rett syndrome. Ann Neurol. 2015 Nov;78(5):775–86.
13. Durand S, Patrizi A, Quast KB, Hachigian L, Pavlyuk R, Saxena A, et al. NMDA receptor regulation prevents regression of visual cortical function in the absence of Mecp2. Neuron. 2012 Dec 20;76(6):1078–90.
14. Ethridge LE, White SP, Mosconi MW, Wang J, Byerly MJ, Sweeney JA. Reduced habituation of auditory evoked potentials indicate cortical hyper-excitability in Fragile X Syndrome. Transl Psychiatry. 2016 Apr 19;6:e787.
15. Ethridge LE, White SP, Mosconi MW, Wang J, Pedapati EV, Erickson CA, et al. Neural synchronization deficits linked to cortical hyper-excitability and auditory hypersensitivity in fragile X syndrome. Mol Autism. 2017;8:22.
16. Sinclair D, Oranje B, Razak KA, Siegel SJ, Schmid S. Sensory processing in autism spectrum disorders and Fragile X syndrome-From the clinic to animal models. Neurosci Biobehav Rev. 2017 May;76(Pt B):235–53.
17. Ewen J, Beniczky S. Validating biomarkers and diagnostic tests in clinical neurophysiology: Developing strong experimental designs and recognizing confounds. In: Schomer DL, Lopes da Silva FH, editors. Niedermeyer’s Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. 7th ed. New York: Oxford University Press; 2018. p. 229–46.
18. Ewen JB, Sweeney JA, Potter WZ. Conceptual, Regulatory and Strategic Imperatives in the Early Days of EEG-Based Biomarker Validation for Neurodevelopmental Disabilities. Front Integr Neurosci. 2019;13:45.
19. Sinha SR, Sullivan L, Sabau D, San-Juan D, Dombrowski KE, Halford JJ, et al. American Clinical Neurophysiology Society Guideline 1: Minimum Technical Requirements for Performing Clinical Electroencephalography. J Clin Neurophysiol. 2016 Aug;33(4):303–7.
20. Picton TW, Bentin S, Berg P, Donchin E, Hillyard SA, Johnson R Jr., et al. Guidelines for using human event-related potentials to study cognition: recording standards and publication criteria. Psychophysiology. 2000 Mar;37(2):127–52.
21. Webb SJ, Bernier R, Henderson HA, Johnson MH, Jones EJ, Lerner MD, et al. Guidelines and best practices for electrophysiological data collection, analysis and reporting in autism. J Autism Dev Disord. 2015 Feb;45(2):425–43.
22. Luck S. An Introduction to the Event-Related Potential Technique. 2nd ed. Cambridge: MIT Press; 2014.
23. Jung TP, Makeig S, Humphries C, Lee TW, McKeown MJ, Iragui V, et al. Removing electroencephalographic artifacts by blind source separation. Psychophysiology. 2000 Mar;37(2):163–78.
24. Cohen M. Analyzing Neural Time Series Data: Theory and Practice. Cambridge: MIT Press; 2014.
25. Gabard-Durnam LJ, Mendez Leal AS, Wilkinson CL, Levin AR. The Harvard Automated Processing Pipeline for Electroencephalography (HAPPE): Standardized Processing Software for Developmental and High-Artifact Data. Front Neurosci. 2018;12:97.
26. Webb SJ, Shic F, Murias M, Sugar CA, Naples AJ, Barney E, et al. Biomarker Acquisition and Quality Control for Multi-Site Studies: The Autism Biomarkers Consortium for Clinical Trials. Front Integr Neurosci. 2019;13:71.
27. Key AP, Jones D, Peters S, Dold C. Feasibility of using auditory event-related potentials to investigate learning and memory in nonverbal individuals with Angelman syndrome. Brain Cogn. 2018 Dec;128:73–9.
28. Ewen JB, Kossoff EH, Crone NE, Lin DD, Lakshmanan BM, Ferenc LM, et al. Use of quantitative EEG in infants with port-wine birthmark to assess for Sturge-Weber brain involvement. Clin Neurophysiol. 2009 Aug;120(8):1433–40.
29. Hatfield LA, Crone NE, Kossoff EH, Ewen JB, Pyzik PL, Lin DD, et al. Quantitative EEG asymmetry correlates with clinical severity in unilateral Sturge-Weber syndrome. Epilepsia. 2007 Jan;48(1):191–5.
30. Ewen JB, Potter WZ, Sweeney JA. Biomarkers and Neurobehavioral Diagnosis. Biomarkers in Neuropsychiatry. In press.
31. McPartland JC, Bernier RA, Jeste SS, Dawson G, Nelson CA, Chawarska K, et al. The Autism Biomarkers Consortium for Clinical Trials (ABC-CT): Scientific Context, Study Design, and Progress Toward Biomarker Qualification. Front Integr Neurosci. 2020;14:16.
32. Parker D, Trotti R, McDowell JE, Keedy S, Sweeney JA, Gershon E, et al. Auditory and Visual EEG Validators of Psychosis Biotypes, Findings From Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) Consortium. Biol Psychiatry. 2018;83(9, Suppl):S60–S1.
33. FDA-NIH Biomarker Working Group. BEST (Biomarkers, EndpointS, and other Tools) Resource. Silver Spring (MD): Food and Drug Administration (US); 2016.
34. Engel AK, Fries P. Beta-band oscillations—signalling the status quo? Curr Opin Neurobiol. 2010 Apr;20(2):156–65.
35. Sahin M, Jones SR, Sweeney JA, Berry-Kravis E, Connors BW, Ewen JB, et al. Discovering translational biomarkers in neurodevelopmental disorders. Nat Rev Drug Discov. 2018 Dec 20.
36. Amur S, LaVange L, Zineh I, Buckman-Garner S, Woodcock J. Biomarker Qualification: Toward a Multiple Stakeholder Framework for Biomarker Development, Regulatory Acceptance, and Utilization. Clin Pharmacol Ther. 2015 Jul;98(1):34–46.
37. Ewen JB, Beniczky S. Validating biomarkers and diagnostic tests in clinical neurophysiology: Developing strong experimental designs and recognizing confounds. In: Schomer DL, Lopes da Silva FH, editors. Niedermeyer’s Electroencephalography. 7th ed. New York: Oxford University Press; 2018.
