SUMMARY
Human brains share a broadly similar functional organization with consequential individual variation. This duality in brain function has primarily been observed when using techniques that consider the spatial organization of the brain, such as MRI. Here, we ask whether these common and unique signals of cognition are also present in temporally sensitive, but spatially insensitive, neural signals. To address this question, we compiled EEG data from individuals of both sexes while they performed multiple working memory tasks at two different data-collection sites (ns = 171 and 165). Results revealed that trial-averaged EEG activity exhibited inter-electrode correlations that were stable within individuals and unique across individuals. Furthermore, models based on these inter-electrode correlations generalized across datasets to predict participants’ working memory capacity and general fluid intelligence. Thus, inter-electrode correlation patterns measured with EEG provide a signature of working memory and fluid intelligence in humans and a new framework for characterizing individual differences in cognitive abilities.
Keywords: Individual differences, working memory, fluid intelligence, prediction, EEG
eTOC BLURB
Hakim et al. use EEG-based signals to identify individuals from a group and predict individual working memory capacity and general fluid intelligence. The work provides evidence that common and unique signals of cognition are present in spatially sparse EEG activity.
INTRODUCTION
Human brains share a common template of functional organization. Nearly every person, for example, shows a retinotopic map in primary visual cortex 1 and a face-sensitive region in inferior temporal cortex 2. Electroencephalogram (EEG) activity signatures of the number of items in working memory are reliable enough to be detected at the single-subject level 3. Even in the absence of an explicit task, individuals show synchronous activity in a stereotyped set of brain networks, such as the default mode 4 and frontoparietal 5 networks.
Atop this shared organizational template is significant individual idiosyncrasy in brain structure and function 6. For example, functional MRI studies have revealed that each person has a unique pattern of functional connectivity, or correlated activity between spatially distinct brain regions, that distinguishes them from others and remains stable across cognitive states 7–9. Furthermore, these unique functional connectivity patterns appear cognitively meaningful, predicting individual differences in behaviors including fluid intelligence 7,10 and attention 11–14.
Brain-based predictive models rely on this interesting duality in brain function: broadly similar organization with consequential individual variation. That is, a neural system common across individuals is necessary to build brain-based biomarkers because if every individual relied on a different neural system to achieve a particular behavior, predictive models would fail to generalize across people. However, systematic idiosyncrasies in common neural systems are what allow models to predict each person’s unique set of cognitive abilities. Without these differences, predictive models would fail to differentiate individuals. This duality in brain function has primarily been observed when using techniques, such as MRI, that consider the spatial organization of the brain. In the current work, we seek to address the open theoretical question of whether these common and unique signals of cognition are also present in temporally sensitive, but spatially insensitive, neural signals, such as EEG.
Work in cognitive and network neuroscience has argued that cognition relies on coordinated activity in large-scale, high-density brain networks 11–16. This may suggest that high-density, spatially resolved neural signals, such as those measured with fMRI, are uniquely informative of behavior. At the same time, studies relating cognitive abilities to EEG and magnetoencephalography (MEG) provide evidence that cognitively meaningful variability in brain function may be sampled using spatially sparse, high-frequency neural signals 21–26. In this paper, we directly test the hypothesis that dense functional networks are uniquely informative of behavior. To do so, we ask whether inter-electrode correlations measured using EEG can 1) identify individuals and 2) predict trait-like cognitive abilities across individuals from completely independent datasets. Although these inter-electrode correlation patterns, based on trial-averaged EEG signals, likely capture different aspects of brain activity than do functional connectivity patterns measured with fMRI, they may nonetheless be stable within individuals, unique across individuals, and robustly predictive of trait-like aspects of behavior.
In addition to open theoretical questions about brain function and cognition, obstacles to widespread adoption of existing brain-based models of behavior—the majority of which are based on MRI data—remain. First, the majority of these models have not tested the generalizability of their predictions to unseen individuals and datasets 17. This limits our ability to draw conclusions about their robustness and replicability 18. Second, work has suggested that, in some cases, confounds such as head motion can influence observed relationships between these predictive models and behavior 19. Finally, the costs of MRI for researchers, clinicians, and participants have so far limited translation to real-world settings.
Here we address these open theoretical questions and practical challenges by using a direct measure of neural activity that is easy and affordable to implement: EEG. Across two EEG datasets, each with 165+ individuals, collected at different universities with different EEG systems (passive vs. active), we show that inter-electrode correlations between trial-evoked ERPs are unique across individuals and stable within individuals. We next demonstrate that models based on these sparse inter-electrode correlations generalize across individuals and independent datasets to predict individual differences in working memory capacity, a critical cognitive ability. Finally, we show that the same set of inter-electrode correlations that predicts working memory capacity predicts general fluid intelligence in novel individuals. Thus, sparse trial-evoked EEG correlation patterns reveal a signature of trait-like cognitive abilities in humans and provide a new, more affordable, and accessible approach for predicting cognitive ability from brain function.
RESULTS
EEG fingerprinting
Are patterns of inter-electrode correlations unique across individuals? Many EEG analyses implicitly treat potential individual differences as noise, averaging results across the group or comparing average results from two groups (e.g., individuals with high vs. low working memory capacity; patients vs. controls). Here, we asked whether individuals have both stable and unique trial-evoked patterns of inter-electrode correlations that can reliably distinguish them from others. To test this possibility, we applied a “fingerprinting”-style analysis approach, developed using fMRI functional connectivity data 7,9, to two independent EEG datasets. Data were collected as participants performed variants of a lateralized change detection task at the University of Oregon (n = 171) and the University of Chicago (n = 165; see STAR Methods). Fingerprinting analyses were restricted to individuals who participated in at least two separate tasks at one of the sites (Oregon n = 171; Chicago n = 45).
For each participant and task, we calculated a whole-scalp inter-electrode correlation pattern using the 17 electrodes that overlapped between the two datasets. We correlated the trial-averaged time courses (i.e., event-related potentials or ERPs) between all pairs of these 17 electrodes for each task (Figure 1). We then correlated every individual’s task A inter-electrode correlation pattern with all possible task B patterns and vice versa. Individuals were considered correctly identified when the correlation between their task A and their task B patterns was larger than the correlation between their task A and all other task B patterns (and vice versa). Identification accuracy is the number of correct identifications divided by twice the number of participants. Non-parametric statistical significance was determined with 10,000 permutation tests (see STAR Methods).
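To make the identification procedure concrete, the following sketch computes inter-electrode correlation patterns from trial-averaged ERPs and the identification accuracy defined above. It is illustrative only: array names and shapes are assumptions, and Spearman rank correlation is used for the similarity step because the within/across-task comparisons below report Spearman values; the correlation type used for identification itself is an assumption.

```python
# Minimal fingerprinting sketch. Assumptions: erps_a and erps_b are NumPy
# arrays of shape (n_subjects, 17, n_timepoints) holding each participant's
# trial-averaged ERPs for tasks A and B at the 17 shared electrodes.
import numpy as np
from scipy.stats import spearmanr

def correlation_pattern(erp):
    """Vectorize the unique off-diagonal entries of the 17 x 17
    inter-electrode correlation matrix for one participant."""
    r = np.corrcoef(erp)                   # electrode x electrode correlations
    iu = np.triu_indices_from(r, k=1)      # 136 unique electrode pairs
    return r[iu]

def identification_accuracy(erps_a, erps_b):
    """Identify each subject's task-B pattern from their task-A pattern
    (and vice versa); an identification is correct when self-similarity
    exceeds similarity to every other subject."""
    pats_a = np.array([correlation_pattern(e) for e in erps_a])
    pats_b = np.array([correlation_pattern(e) for e in erps_b])
    n = len(pats_a)
    sim = np.array([[spearmanr(a, b)[0] for b in pats_b] for a in pats_a])
    correct = np.sum(np.argmax(sim, axis=1) == np.arange(n))   # A -> B
    correct += np.sum(np.argmax(sim, axis=0) == np.arange(n))  # B -> A
    return correct / (2 * n)   # accuracy as defined in the text
```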
Figure 1 |. Visual depiction of our analysis pipeline.

First, (A) we calculated ERP time courses for each electrode by averaging data across all trials. Here, we have depicted two trial-averaged ERP time courses from an example participant (participant #1). Then, we correlated these trial-averaged ERP time courses between all pairs of electrodes; for example, we correlated the green and blue lines from (A). Correlating all electrode pairs for each individual and task resulted in one inter-electrode correlation matrix per participant and task, which represents the relationship over time of each electrode with every other electrode. For the EEG fingerprinting analyses, we compared each individual’s inter-electrode correlation matrix across tasks. For the behavioral prediction models, we averaged each individual’s inter-electrode correlation matrix across all tasks. For visual comparison, we have plotted (B, C) the average inter-electrode correlation matrices across all participants and tasks for the Oregon-site and Chicago-site data separately. As these panels show, the averaged Oregon-site and Chicago-site inter-electrode correlation matrices are very similar to each other despite differences in EEG systems, experimental designs, data collection sites, and subject pools.
We ran an even stronger test of individual uniqueness by comparing the effects of subject vs. task. To this end, we correlated each individual’s task A and task B inter-electrode correlation patterns (“across-task correlations”) and all pairs of individuals’ patterns for each task (“within-task correlations”). If individuals look most like themselves, regardless of task context, across-task correlations should be higher than within-task correlations.
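Under the same assumptions as the previous sketch, this comparison reduces to contrasting each subject's self-similarity across tasks with similarities between different subjects within each task:

```python
# Sketch of the subject-vs-task comparison, reusing correlation_pattern()
# and the pattern arrays (pats_a, pats_b) from the previous sketch.
import numpy as np
from scipy.stats import spearmanr

def across_vs_within(pats_a, pats_b):
    n = len(pats_a)
    # Across-task, within-subject: each person compared with themselves.
    across = [spearmanr(pats_a[i], pats_b[i])[0] for i in range(n)]
    # Within-task, across-subject: all pairs of different people.
    within = [spearmanr(p[i], p[j])[0]
              for p in (pats_a, pats_b)
              for i in range(n) for j in range(i + 1, n)]
    return np.mean(across), np.mean(within)
```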
University of Oregon sample.
Participants in the University of Oregon sample completed two lateralized change detection tasks in which they were instructed to remember items’ shape or color as part of a single study. EEG fingerprinting analyses revealed that individuals had stable and unique patterns of inter-electrode correlations across these two tasks that could reliably distinguish them from other individuals: color:shape task accuracy = 82%, p < 0.0001; shape:color task accuracy = 81%, p < 0.0001 (chance = 1/171, or 0.58%). In other words, individuals’ patterns of inter-electrode correlations were distinct from the group and stable across tasks. Furthermore, individuals’ patterns of inter-electrode correlations looked most like themselves, regardless of task context (within-subject, across-task average Spearman rank correlation = 0.98+/−0.02; across-subject, within-task average matrix correlation = 0.87+/−0.09; p < 0.0001). These results were also not driven by differences in skull thickness (Laplacian-transformed identification: color:shape task accuracy = 77.78%, p < 0.001; shape:color task accuracy = 80.12%, p < 0.001; chance = 1/171, or 0.58%). This aligns with previous research finding that differences in brain activity contribute much more variance to surface EEG than do variations in skull thickness 26.
Are patterns of inter-electrode correlations unique in their ability to identify individuals, or are trial-averaged amplitude patterns stable and distinguishable across individuals as well? To address this question, we repeated the fingerprinting analyses using trial-averaged amplitude at each electrode. Results revealed that trial-evoked amplitude significantly identified individuals: color:shape task accuracy = 51%, p < 0.0001; shape:color task accuracy = 51%, p < 0.0001 (chance = 1/171, or 0.58%). However, identification accuracy was lower than that achieved with patterns of inter-electrode correlations (color:shape task accuracy = 82%; shape:color task accuracy = 81%).
University of Chicago sample.
The University of Chicago sample included data from twelve experiments collected as part of six independent studies. Data from all studies were collected on different days in different sessions. For this analysis, we only consider data from the subset of individuals who participated in multiple experiments (see STAR Methods). Replicating results from the Oregon sample, individuals in the Chicago dataset showed stable and unique patterns of inter-electrode correlations, which could be used to reliably distinguish them from other individuals: task A:B accuracy = 29%, p < 0.0001; task B:A accuracy = 34%, p < 0.0001 (chance = 1/19, or 5.26%). Additionally, individuals’ patterns of inter-electrode correlations looked most like themselves, regardless of task context (within-subject, across-task average Spearman rank correlation = 0.90+/−0.10; across-subject, within-task average correlation = 0.85+/−0.11; p < 0.0001). Once again, identification accuracy was not driven by differences in skull thickness (Laplacian-transformed identification: task A:B accuracy = 37.62%, p < 0.001; task B:A accuracy = 36.00%, p < 0.001; chance = 1/19, or 5.26%). Although trial-evoked amplitude significantly identified individuals (task A:B accuracy = 17%, p < 0.0001; task B:A accuracy = 18%, p < 0.0001; chance = 1/19, or 5.26%), identification accuracy was lower than that achieved with patterns of inter-electrode correlations.
In both the Chicago and Oregon datasets, we identified individuals based on their unique patterns of inter-electrode correlations, suggesting that these patterns are robust enough to differentiate individuals from one another. These findings complement existing work that has, for example, used ERPs to identify individuals 20–22, and provide further evidence that EEG-based correlations can identify individuals and generalize across tasks, sites, and EEG systems.
To test whether individual differences in these patterns reflect individual differences in cognitive abilities and behavior, we next asked whether we could use these patterns to predict a central cognitive ability: working memory capacity.
Predicting working memory: Within-site validation
To determine whether patterns of inter-electrode correlations during lateralized change detection tasks predicted working memory capacity in novel individuals, we trained and tested connectome-based predictive models 26,33 using balanced 5-fold cross-validation. For each participant, we calculated working memory capacity (K score) based on their change detection task performance 23,24 and a single inter-electrode correlation pattern from all available task data. For all studies, we included data from time point 0 (the onset of the memory array) to 1000 ms (see STAR Methods for more details). In each cross-validation fold, we identified the inter-electrode correlations (“features”) significantly related to K score in the training set, averaged the strength of these features for each participant, and related mean feature-strength values to K scores with a linear model (see STAR Methods). We applied this model to predict the left-out set of participants’ K scores based on the strength of their inter-electrode correlations. To assess predictive power, we calculated the Spearman correlation between observed and predicted K scores. This correlation assesses the degree to which model predictions capture participants’ rank-order K scores—that is, whether models predict which participants had relatively higher and lower working memory capacities. As a complementary measure of model performance, we calculated mean square error (mse), which reflects the numeric accuracy of predictions. P-values for rs and mse values were calculated using permutation tests.
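A condensed sketch of this cross-validated pipeline appears below. The selection threshold, the combination of positive and negative feature sets into a single strength score, and the fold assignment are simplifications of the procedure described in the STAR Methods, not a definitive implementation.

```python
# Connectome-style predictive modeling sketch. Assumptions: X is a
# (n_subjects, n_features) array of inter-electrode correlations, y holds
# K scores; alpha = 0.05 is an illustrative selection threshold, and the
# "balanced" fold assignment described in the text is replaced by simple
# shuffled folds for brevity.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.model_selection import KFold

def cpm_cross_validate(X, y, alpha=0.05, n_splits=5, seed=0):
    preds = np.zeros(len(y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Feature selection within the training set only.
        stats = [pearsonr(X[train, j], y[train]) for j in range(X.shape[1])]
        r = np.array([s[0] for s in stats])
        p = np.array([s[1] for s in stats])
        pos, neg = (r > 0) & (p < alpha), (r < 0) & (p < alpha)
        # Mean feature strength: positive-set mean minus negative-set mean.
        strength = X[:, pos].mean(axis=1) - X[:, neg].mean(axis=1)
        coef = np.polyfit(strength[train], y[train], 1)   # linear model
        preds[test] = np.polyval(coef, strength[test])
    rs = spearmanr(preds, y)[0]                # rank-order accuracy
    mse = np.mean((preds - y) ** 2)            # numeric accuracy
    return preds, rs, mse
```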
University of Oregon sample.
Models based on inter-electrode correlations averaged across the color and shape change detection tasks significantly predicted novel individuals’ working memory capacity: mean correlation between predicted and observed K score rs = 0.22, p = 0.002; mse = 0.24, p < 0.0001; Figure 2. Control analyses revealed that models based on EEG signal amplitude, rather than inter-electrode correlations of amplitude, did not predict capacity: rs = 0.02, p = 0.43; mse = 0.26, p = 0.80.
Figure 2 |. Within-site validation results.

Histogram of the correlation between observed and predicted working memory capacity (K) using 5-fold cross-validation over 10,000 true model iterations (purple) and 10,000 null model permutations (gray) for (A) the model trained within the Oregon-site data and (B) the model trained within the Chicago-site data. The vertical black lines represent the iteration and fold with the mean r value. Scatter plots of the correlation between observed and predicted K scores from the iteration and fold with the mean r value for (C) the model trained within the Oregon-site data and (D) the model trained within the Chicago-site data. Gray dots represent individuals’ observed and predicted K scores from one fold. The black line is the best fit line. Also see Figure S4.
Training and testing models for the color and shape tasks separately revealed that observed and predicted K scores were significantly correlated in the shape task, but not the color task: shape task, mean rs = 0.21, p = 0.006; mse = 0.34, p = 0.0004; color task, mean rs = 0.09, p = 0.12; mse = 0.23, p = 0.05.
We investigated whether models generalized across tasks by applying models trained on the color task to shape-task data from held-out individuals and vice versa. Models trained on color-task data significantly predicted K scores from shape-task data: mean rs = 0.28, p = 0.0002; mse = 0.43, p < 0.0001. Models trained on shape-task data significantly predicted K scores from color-task data as evaluated by rs (mean rs = 0.25, p = 0.0007), but not as evaluated by mse (mean mse = 0.38, p = 0.07). Performance in the color and shape tasks was highly correlated (r = 0.73, p = 2.60e-30; mse = 0.22). However, the variability (color: variance = 0.23; shape: variance = 0.36) and reliability (color: reliability = 0.85; shape: reliability = 0.77) of performance differed between the color and shape tasks. Thus, it is possible that the shape-task model did not fully generalize to the color task because there was less behavioral variance in the color task for the model to capture during testing.
The above models were built using correlations of trial-averaged ERPs from electrodes aligned based on the side of the screen that participants attended on each trial as described in the Electrode organization section of the STAR Methods. When electrodes were not aligned in this way, models did not predict working memory capacity: mean rs = −0.04; p = 0.67, mse = 0.27, p = 0.73. Similarly, models trained on correlation patterns calculated from EEG data concatenated rather than averaged across trials did not predict capacity (see Figure S1).
Of note, the range of model predictions was condensed relative to the range of observed K scores. This is common in machine learning models and is an active area of research 25. One reason for this restriction of range is that data in the tails of a distribution are less well represented in the training data than closer-to-average values. Restricted prediction ranges are also observed and discussed in work building fMRI functional connectivity models 7,26.
University of Chicago sample.
For the Chicago dataset, we collapsed data across multiple lateralized change detection experiments, which had different memoranda and sample durations. Inter-electrode correlations and K for each participant were measured using all the studies that each participant completed. Replicating findings from the Oregon sample, observed and predicted K scores were significantly correlated: mean rs = 0.26, p < 0.0001; mse = 0.33, p = 0.002; Figure 2. Thus, inter-electrode correlations predicted working memory capacity across individuals.
Predicting working memory: Across-site validation
Although these results provide the first evidence that models based on EEG data predict working memory capacity across individuals, there are many reasons that internal (i.e., within-dataset) validation may overestimate effect sizes, including idiosyncrasies in task context, EEG systems, and participant populations. Therefore, an even more powerful demonstration of the robustness of predictive models is to externally validate them—that is, to apply them to data from a completely independent sample. External validation allows us to better approximate a model’s population-level generalizability and better understand its predictive boundaries.
To test the cross-dataset generalizability of models predicting working memory capacity, we trained models on the full Oregon sample and applied them to the full Chicago sample and vice versa. Importantly, these two datasets were collected by different experimenters in different locations using different EEG systems.
Predicting Chicago K scores from Oregon model.
To generalize the Oregon working memory model to the Chicago dataset, we trained a model using the Oregon dataset and applied it, completely unchanged, to the Chicago sample. Predictions from this model were significantly correlated with observed K scores, demonstrating that the model trained on the Oregon dataset predicted working memory capacity in the Chicago dataset: rs = 0.29, p < 0.0001; mse = 1.008, p = 0.0003; Figure 3. Cross-site prediction performance was robust to variations in the feature-selection threshold applied during model building. Thresholds ranging from the top 5% to the top 20% of features positively and negatively correlated with behavior resulted in model performance values between r = 0.19 – 0.31, p < 0.01, mse = 0.95 – 1.08, p < 0.0001.
Figure 3 |. Across-site validation results.

(A) Scatter plot of the correlation between observed and predicted working memory capacity (K) from a model trained on all the Chicago data and tested on all the Oregon data. Gray dots represent individuals’ observed and predicted K scores. The black line is the best fit line. (B) Scatter plot of the correlation between observed and predicted working memory capacity (K) from a model trained on all the Oregon data and tested on all the Chicago data. Individual colored lines represent the correlation between observed and predicted working memory capacity for each experiment plotted separately and demonstrate that these results are not driven by a small subset of experiments. The black line represents the fit across all individual participants. (C) Predictive features (i.e., inter-electrode correlations) included in the model trained on the Oregon dataset. Features that positively predicted working memory capacity are depicted in orange, and features that negatively predicted working memory capacity are depicted in blue. (D) Predictive features included in the model trained on the Chicago dataset. Also see Figure S4.
We additionally investigated whether stimulus-evoked activity drove our ability to predict working memory capacity across sites. To do this, we re-calculated inter-electrode correlations using data from stimulus offset to the end of the retention interval (see STAR Methods). We were still able to significantly predict working memory capacity in the Chicago dataset using a model trained on the Oregon dataset: rs = 0.26, p = 0.0006; mse = 1.00, p = 0.002. Though stimulus-evoked activity does not necessarily end at stimulus offset, these results provide initial evidence that our results are robust across various time windows.
Predicting Oregon K scores from Chicago model.
We next applied a model defined using Chicago data to predict working memory in the Oregon sample. Predictions from this model were significantly correlated with observed K scores: rs = 0.30, p < 0.0001; mse = 0.92, p = 0.0002; Figure 3. Again, cross-site prediction performance was robust to variations in feature-selection threshold. Thresholds ranging from the top 5% to the top 20% of features positively and negatively correlated with behavior resulted in model performance values between r = 0.18 – 0.28, p < 0.02, mse = 0.95 – 1.08, p < 0.0001. Predictions remained significant when calculating inter-electrode correlations from data collected after memory array offset: mean rs = 0.23, p = 0.003; mse = 0.887, p = 0.0003.
Across-site validation demonstrates that models predicting working memory capacity not only generalized across EEG session and stimuli, but also across data acquisition sites (Oregon vs. Chicago) and EEG systems (passive vs. active). This suggests that inter-electrode correlations are a robust, reproducible, and generalizable predictor of working memory capacity.
Overlap of Oregon and Chicago models.
Next, we asked whether the two externally validated models predicted working memory from a common set of inter-electrode correlations. Indeed, there was significant overlap between the predictive features in the Chicago and Oregon models: 6 overlapping features (26% of the features in both models; five features negatively predicting behavior in both samples and one feature positively predicting behavior in both samples), p = 7.68e-7. This suggests that a common set of inter-electrode correlations predicts working memory capacity in both datasets.
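The text does not specify how overlap significance was computed; one standard choice is a hypergeometric test against the pool of all 136 candidate electrode pairs, sketched below with hypothetical per-model feature counts.

```python
# Hypergeometric overlap test sketch. n_total = 136 candidate features
# (17 choose 2); the per-model feature counts below are hypothetical and
# chosen only so that 6 overlapping features is ~26% of each model.
from scipy.stats import hypergeom

def overlap_pvalue(n_total, n_feats_a, n_feats_b, n_overlap):
    """P(overlap >= n_overlap) if model B's features were chosen at random."""
    return hypergeom.sf(n_overlap - 1, n_total, n_feats_a, n_feats_b)

p = overlap_pvalue(n_total=136, n_feats_a=23, n_feats_b=23, n_overlap=6)
```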
The features that overlapped between the Oregon and Chicago models involved posterior and occipital electrodes: O1-O2, PO8-PO7, P7-P8, PO7-P8, P4-P3, PO8-Pz. Most of these correlations were between cross-hemisphere electrodes, which could be due to the lateralized nature of this task. The involvement of cross-hemisphere inter-electrode correlations in these models emphasizes that both contralateral and ipsilateral neural activity critically contribute to our ability to predict behavior.
Relationship to the CDA.
Interestingly, the predictive inter-electrode correlations in our models include electrodes that are typically used to calculate the contralateral delay activity (CDA). This overlap between our models’ predictive feature set and CDA electrodes raises the question of whether our models are independent of the CDA. To answer this question, we ran the same prediction analyses described above using CDA amplitude to predict working memory capacity. We calculated the CDA by taking the difference in amplitude between contralateral and ipsilateral posterior/occipital electrodes (P7/P8, P3/P4, PO7/PO8, PO3/PO4, O1/O2) from 0–1000 ms (the time range included in our analyses). We then trained a linear model to predict working memory capacity from the CDA using the full Oregon-site dataset and tested it on the full Chicago-site dataset (and vice versa). The model trained on the Oregon-site dataset did not significantly predict working memory capacity: rs = 0.04, p = 0.29; mse = 1.24, p = 0.57. Surprisingly, the model trained on the Chicago-site dataset negatively predicted working memory capacity: rs = −0.32, p < 0.0001; mse = 0.98, p < 0.0001. This negative prediction is difficult to interpret, especially because this model predicted K score values that ranged only from 2.50 to 2.52, values close to the mean of the Chicago-site dataset (mean K = 2.50).
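For reference, the CDA control measure can be computed along these lines. This is a sketch: the epoch array layout, variable names, and time indices are assumptions, while the electrode pairs and the contralateral-minus-ipsilateral definition follow the text.

```python
# CDA sketch. Assumptions: epochs has shape (n_trials, n_channels, n_times),
# chan_idx maps channel names to rows, cued_left is a boolean array flagging
# trials cued to the left hemifield, and t0:t1 spans 0-1000 ms post-onset.
import numpy as np

CDA_PAIRS = [("P7", "P8"), ("P3", "P4"), ("PO7", "PO8"),
             ("PO3", "PO4"), ("O1", "O2")]

def cda_amplitude(epochs, cued_left, chan_idx, t0, t1):
    """Mean contralateral-minus-ipsilateral amplitude across the
    posterior/occipital pairs listed in the text."""
    diffs = []
    for left_ch, right_ch in CDA_PAIRS:
        left = epochs[:, chan_idx[left_ch], t0:t1]
        right = epochs[:, chan_idx[right_ch], t0:t1]
        contra = np.where(cued_left[:, None], right, left)  # opposite cued side
        ipsi = np.where(cued_left[:, None], left, right)    # same as cued side
        diffs.append((contra - ipsi).mean())
    return float(np.mean(diffs))
```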
Past work has shown that CDA amplitude in the Oregon-site dataset is significantly correlated with working memory capacity 27. However, in the Chicago-site data, the correlation between CDA amplitude and working memory capacity was not significant: r = −0.04, p = 0.61. One possible reason for the lack of a significant correlation in this dataset is that the set sizes used to calculate K scores are smaller than set sizes typically used to observe this correlation. Smaller, variable set sizes could affect estimates of working memory capacity and, thus, the relationship between K scores and CDA amplitude. Nevertheless, the inter-electrode correlation approach seems to be more sensitive than the CDA to the relationship between neural activity and behavior because the inter-electrode correlation approach was able to predict behavior in both datasets despite being calculated using a more limited behavioral range in the Chicago-site dataset.
To further investigate whether the CDA explains variance beyond that captured by inter-electrode correlations, we trained models to predict working memory capacity from the CDA and inter-electrode correlations combined. These models significantly predicted behavior: train Oregon, test Chicago rs = 0.25, p < 0.0001, mse = 1.33, p = 0.012; train Chicago, test Oregon rs = 0.27, p < 0.0001, mse = 1.05, p < 0.0001. However, the CDA coefficient was only significant in the model trained on the Oregon data: train Oregon CDA coefficient p = 0.02; train Chicago CDA coefficient p = 0.61. These results provide further evidence that our inter-electrode correlation models do not simply track the CDA. Instead, they predict behavior from neural signals that are distinct from the CDA.
EEG fingerprinting using predictive features.
Are the inter-electrode correlation patterns that predict working memory capacity across individuals reliable enough to distinguish individuals from a group? To address this question, we applied the same analysis described in the EEG fingerprinting section above. However, instead of using the whole-scalp pattern of inter-electrode correlations, we only included the inter-electrode correlations that significantly predicted behavior across individuals. This feature set included only those features that significantly predicted behavior in both the Chicago and Oregon models. We then compared the results from this predictive feature set to those from the whole-scalp inter-electrode correlation pattern.
University of Oregon sample.
Features predicting working memory were sufficient to identify individuals: color:shape task accuracy = 40%, p < 0.0001; shape:color task accuracy = 39%, p < 0.0001 (chance = 1/171). In other words, the inter-electrode correlations that predicted working memory capacity across individuals also reliably distinguished individuals. However, identification accuracy using only predictive features was lower than when we included the whole-scalp pattern. To determine whether this reduction in identification accuracy was due to downsampling the number of features, we compared the identification accuracy of the predictive features to that of an equal number of randomly selected features that were not predictive of working memory capacity. A random subset of features also identified individuals: color:shape task accuracy = 39%, p < 0.0001; shape:color task accuracy = 33%, p < 0.0001 (chance = 1/171). There was no significant difference between identification accuracy using the predictive features and the random features: color:shape p = 0.167; shape:color p = 0.272. These results suggest that inter-electrode correlations that significantly predict working memory capacity are not necessarily better at identifying individuals than a random subset of inter-electrode correlations.
University of Chicago sample.
Replicating results from the Oregon sample, identification using only predictive features was successful: task A:B accuracy = 26%, p < 0.0001; task B:A accuracy = 23%, p < 0.0001 (chance = 1/19). Once again, this identification accuracy was worse than that achieved with whole-scalp inter-electrode correlation patterns, and random same-size feature sets also identified individuals: task A:B accuracy = 13%, p < 0.0001; task B:A accuracy = 13%, p < 0.0001 (chance = 1/19). Unlike in the Oregon dataset, however, predictive features more accurately identified individuals than random features: task A:B p < 0.0001; task B:A p < 0.0001. These results suggest that features that significantly predict working memory capacity may more accurately identify individuals than a random subset of features. Given the inconsistency of this result across the Chicago and Oregon samples, further research is needed to determine whether an individual’s most identifiable inter-electrode correlation features are the same features that best predict their behavior.
Relationship between inter-electrode correlations and general fluid intelligence
Our results demonstrate that patterns of inter-electrode correlations observed during a working memory task are a robust and reliable predictor of working memory capacity. Do patterns of inter-electrode correlations observed in this context predict other cognitive abilities, such as general fluid intelligence? In the Oregon dataset, participants performed a series of cognitive tasks outside the EEG booth, including three measures that allowed us to calculate general fluid intelligence (see STAR Methods). Therefore, we used this dataset to investigate the relationship between patterns of inter-electrode correlations and general fluid intelligence. We also investigated the relationship between working memory capacity predicted by our models and all other behavioral measures from the Oregon dataset (see STAR Methods).
Predicting fluid intelligence.
Although we have shown that patterns of inter-electrode correlations are a robust predictor of working memory capacity, one concern is that these correlations are not predicting working memory per se, but instead some other aspect of function that could support performance in the working memory task. For instance, inter-electrode correlations might reflect the degree of response bias in an individual, which could in turn affect change detection performance. To address this possibility, we tested whether the inter-electrode correlations that predict working memory capacity also predict general fluid intelligence (gF), a closely related ability 28–30. If our EEG-based models predict gF, this would provide converging evidence that inter-electrode correlations capture a purer working memory capacity construct, because the predicted variance is shared with a very different task that is known to rely on working memory.
To this end, for each participant in the Oregon sample, we calculated gF scores from a combination of performance on three tasks: Raven’s Advanced Progressive Matrices, number series, and Cattell’s culture fair test, as reported in previous work 27. We also calculated the mean strength of the inter-electrode correlations that predicted working memory in the Chicago sample. Using 5-fold cross-validation, we trained and tested models predicting gF from these values. Models significantly predicted novel individuals’ gF scores: rs = 0.19, p = 0.02; mse = 6.79, p = 0.002; Figure 4. These results suggest that inter-electrode correlation patterns tap into the same aspects of working memory capacity that are critical for fluid intelligence, alleviating concerns that the EEG models were instead tapping into some other idiosyncratic factor influencing performance on the change detection task used to measure working memory ability in our study. Thus, although the EEG data used here were collected as participants performed a working memory task, our EEG models generalized to predict fluid intelligence (see STAR Methods).
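Because the gF model reuses features defined in the Chicago working memory model, no feature selection is needed inside the cross-validation loop. A sketch, with hypothetical variable names and with positive and negative features pooled for brevity:

```python
# gF prediction sketch. Assumptions: X holds Oregon participants'
# inter-electrode correlations, gf their fluid intelligence scores, and
# wm_features indexes the correlations that predicted working memory in
# the Chicago model (positive and negative sets pooled for simplicity).
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import KFold

def predict_gf(X, gf, wm_features, n_splits=5, seed=0):
    # Mean strength of the a priori predictive features; no selection here,
    # so this step does not leak test-set information.
    strength = X[:, wm_features].mean(axis=1)
    preds = np.zeros(len(gf))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        coef = np.polyfit(strength[train], gf[train], 1)
        preds[test] = np.polyval(coef, strength[test])
    return spearmanr(preds, gf)[0]
```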
Figure 4 |. Prediction of general fluid intelligence.

(A) Histogram of the correlation between observed and predicted general fluid intelligence (gF) using 5-fold cross-validation over 10,000 true model iterations (purple) and 10,000 null model permutations (gray). The vertical black line represents the iteration and fold with the mean r value. Models were trained and tested within the Oregon dataset using the predictive working memory features that were included in both the Oregon and Chicago working memory models. (B) Scatter plot of the correlation between observed gF and predicted gF from the iteration and fold with the mean r and p values. Of note, these data are from an example fold and iteration; in some folds, the numeric outlier in this figure was not included in the data. Gray dots represent individuals’ observed and predicted gF. The black line is the best fit line.
DISCUSSION
Here, we demonstrate that inter-electrode correlations measured using EEG are idiosyncratic to each person and can be used to identify individuals from a group. These correlation patterns are unique and stable over time, just like a fingerprint. Furthermore, individual differences in these patterns are cognitively meaningful, predicting general fluid intelligence across individuals and working memory capacity across independent datasets. Together, these results demonstrate that individual differences in critical cognitive abilities are reflected in individuals’ unique, idiosyncratic expression of ERP correlation patterns. Furthermore, patterns of inter-electrode correlations are a generalizable and accessible approach for predicting individual differences in other abilities and behaviors from brain data.
Predicting trait-like behavior
Previous research has used EEG to track moment-to-moment fluctuations in working memory storage. For example, multivariate pattern classification techniques have used the topography of raw EEG amplitude to decode the amount of information maintained in working memory on a single trial 31. Another EEG signal, the contralateral delay activity (CDA), scales with the number of items held in working memory; its amplitude asymptotes when working memory is full and correlates with working memory capacity 27. Previous research has also found that ERP components, including the P200, N200, and P300, are strongly associated with general fluid intelligence 32. Thus, the presence of EEG signals that track working memory storage, capacity, and intelligence is well established. However, no previous work has generalized models of working memory capacity and general fluid intelligence across completely independent individuals from different datasets using direct measures of brain function such as EEG.
Brain-based models of behavior are most theoretically informative and practically useful when they generalize to novel data. However, there are many reasons why brain-based models might not generalize. For example, internally validated model results could be driven by similarities between people at a particular site or idiosyncrasies of a particular task design or experimental context. Testing models on independent datasets (i.e., external validation) is a powerful way to reduce these and other biases 17,18. Although significant work has emphasized the importance of external validation 18, it is still relatively uncommon 17. In fact, one paper found that only 9% of the neuroscience studies that they surveyed tested models on one or more independent datasets 17. Additionally, none of these studies analyzed EEG data. Here, we externally validated our EEG-based predictive models. We illustrate that our results are robust to differences in task design, experimental context, EEG system, and participant population. By externally validating our results, we provide a powerful demonstration that our models of working memory are both robust and generalizable.
Interestingly, we were able to predict general fluid intelligence using the same EEG features that predicted working memory capacity. This suggests that the variance in working memory capacity that our predictive model explains is shared with general fluid intelligence. With our supplemental cross-correlation analysis, we also found that the relationship between predicted working memory capacity and other cognitive abilities was analogous to the relationship between observed working memory capacity and other cognitive abilities. This is another example of how our model predictions capture variance in working memory that is shared with related cognitive abilities. Overall, the EEG-based model that we identified is not idiosyncratic to a particular working memory task. Rather, it seems to track trait-like cognitive abilities, including general fluid intelligence, more generally.
Sparse cognitive networks
Previous work in cognitive and network neuroscience has suggested that cognitive abilities, such as working memory and attention, emerge from interactions between dozens or hundreds of brain regions in complex, large-scale brain networks 11–14,33. Significant research, typically using fMRI, has utilized these networks to describe and predict behavior 14,43–45,50. Interestingly, using EEG, we were able to predict working memory capacity from fewer than 15 inter-electrode correlations and fluid intelligence from only 6 correlations between occipital and parietal electrodes. Thus, although cognitive abilities may involve interactions between large numbers of disparate brain regions, they can also be summarized using a relatively sparse EEG “network”. Future work can investigate whether this sparse EEG network and more spatially specific measures of cognition explain unique or overlapping variance in cognitive abilities.
The predictive network that overlapped between the Oregon and Chicago models involved posterior and occipital electrodes. Considering previous work that has found robust associations between frontoparietal brain regions and working memory, it may be surprising that our working memory models do not include correlations between frontal electrodes as features. However, the lack of frontal electrode correlations in our models does not necessarily imply a lack of involvement of frontoparietal regions in working memory processes. EEG’s lack of spatial specificity makes it very difficult, if not impossible, to localize where the signal that we are measuring was generated. Thus, it is possible that the activity in posterior electrodes that predicted working memory capacity reflected contributions from anterior brain regions that have previously been shown to be involved in working memory. Furthermore, many robust EEG signals that track working memory are measured at posterior electrodes. For example, the contralateral delay activity (CDA)—which is measured using posterior-occipital electrodes—scales with working memory load and tracks individual differences in working memory capacity 3. The distractor positivity (PD) tracks suppression of items from working memory and is also measured using posterior electrodes 55. Considering these and other posterior EEG signals that track working memory processes, the lack of frontal electrode correlations in our models is less surprising.
Tracking brain function
Our results suggest that common and unique signals of cognition are present in temporally precise, but spatially insensitive, EEG signals. This suggests that millisecond-by-millisecond fluctuations in neural processing are unique across individuals and cognitively meaningful. Previous work has shown that analogous signals are present in spatially sensitive fMRI measures, including functional connectivity networks. These two methods track different neural signals but, considered together, provide converging evidence that methods measuring the statistical dependence between two regions’ signal time series track meaningful variation in brain function.
Despite the robustness of our EEG- and previous fMRI-based fingerprinting results and predictive models, there are certain obstacles that need to be addressed before we can fully accept that these methods track brain function. For example, previous work using fMRI functional connectivity has been criticized because it measures blood oxygenation, which is an indirect measure of neural activity. Due to this, critics have suggested that fMRI functional connectivity fingerprinting could be driven by individual differences in brain structure or vasculature, rather than meaningful differences in brain function 34,35. They additionally suggest that identification and behavioral prediction could be driven by trait-like head motion, which is a challenging confound in fMRI 19,36,37. Our EEG models complement this work by controlling for and ruling out these potential confounds.
EEG—like MRI and all other measures of human brain activity—is also influenced by brain structure. Unlike MRI, however, patterns of inter-electrode correlations in the current work measure the topography of electrical brain activity, which is a direct measure of neural activity. EEG is also less influenced by head motion because EEG trials are typically short, and trials contaminated by head motion are removed from analyses. Therefore, these factors are unlikely to influence our ability to identify individuals and predict behavior using inter-electrode correlations. Nevertheless, our EEG analyses come with their own unique challenges. For example, identification of individuals could be driven by differences in skull thickness across participants. Skull thickness could influence the conduction of neural signals on the scalp, leading to idiosyncratic patterns of electrical activity across people that are unrelated to brain function. We addressed this potential issue of volume conduction by applying a Laplacian transformation, which reduces the adverse effects of volume conduction 38. Even when we account for potential differences in volume conduction on the scalp, we are still able to identify individuals. Overall, our results provide strong evidence that sparse ERP correlation patterns can robustly predict cognitive ability and that inter-electrode correlations measured using EEG track meaningful variation in neural activity related to individual differences in cognition.
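As one concrete route to this control, MNE-Python's surface Laplacian (current source density) transform can be applied before recomputing correlation patterns; whether the original analyses used this particular implementation is an assumption.

```python
# Surface Laplacian / current source density sketch using MNE-Python.
# `evoked` is assumed to be an mne.Evoked object with a montage set, e.g.
# loaded from a hypothetical file: evoked = mne.read_evokeds("sub01-ave.fif")[0]
import mne
import numpy as np

def laplacian_pattern(evoked):
    """Recompute the inter-electrode correlation pattern after reducing
    volume-conduction effects with the CSD transform."""
    evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
    r = np.corrcoef(evoked_csd.data)           # channels x channels
    return r[np.triu_indices_from(r, k=1)]
```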
Limitations and future directions
In the current work, we analyze the correlation between two electrodes’ trial-evoked EEG signal over time. These analyses do not necessarily provide evidence for communication or causal relationships between, or shared processing in, different electrodes or brain regions. These are separate—but important—questions that should be addressed by future research. One way to begin addressing these questions, which has been utilized in the fMRI literature, would be to characterize inter-electrode correlations in resting-state (i.e., task-free) EEG data and evaluate their utility for identifying individuals and predicting behavior.
We analyzed EEG data while participants performed variants of a lateralized change detection task. Therefore, it is not clear whether predictions of working memory and general fluid intelligence rely on activity evoked during this specific task state, or whether models would generalize to predict behavior from data collected in other contexts. For example, are behaviorally relevant signals present in intrinsic inter-electrode correlations—that is, EEG activity observed at rest? Are individual differences in a cognitive process best predicted by inter-electrode correlations observed during a task that taxes that process (cf. 50)? To address these questions, future research can use EEG data collected during rest and different tasks to predict individual differences in other cognitive abilities, such as attentional control or long-term memory. Future research should also utilize the high temporal precision of EEG to address questions about the temporal dynamics of different cognitive processes.
One complication with analyzing resting-state EEG data is that our inter-electrode correlation measure relies on time-locked data averaged over many trials. We averaged time-locked EEG data because single-trial data were noisy. Resting-state EEG data, in contrast, are typically collected continuously for a set amount of time. They do not have discrete trials, and it is therefore unclear how to overcome low signal-to-noise ratios with continuous resting-state EEG data. Nevertheless, it would be theoretically impactful to determine whether behaviorally relevant EEG correlation patterns are present at rest.
Conclusions
We demonstrate that a temporally precise, but sparse pattern of electrical activity identifies individuals and predicts cognitive abilities. Thus, it is not necessary to measure dense functional networks with relatively high spatial resolution to predict important individual variability in cognition. Instead, cognitively meaningful variability in functional brain organization is also reflected in sparse, high-frequency neural signals. In sum, our analyses, which are temporally sensitive and easy and affordable to implement, provide a new arena in which we can better track and understand important individual differences in cognitive abilities.
STAR METHODS
Resource availability.
Lead contact.
Further information and requests for resources can be directed to either Nicole Hakim (nhakim2@stanford.edu) or Monica Rosenberg (mdrosenberg@uchicago.edu).
Materials availability.
This study did not generate any new or unique reagents.
Data and code availability.
Data, analysis scripts, and a predictive model trained on the combination of the Oregon-site and Chicago-site datasets will be available online upon publication at the following URL: https://osf.io/dbmuc/?view_only=6d76f387774241fdb40dcfe68f66ec93. See Figure S3 for more information about the number of testing participants and model performance, and see Figure S2 for more information on model performance as a function of the number of predictive edges. Data were collected at two different study sites from individuals of both sexes: the University of Oregon in Eugene and the University of Chicago. Whereas data from the University of Oregon were collected as part of a single study, data from the University of Chicago were compiled from multiple studies. Participants were recruited from the respective university networks and surrounding communities. Some of these samples are described in previous publications, while others are unpublished. All data are described below.
Method details
Participant information.
University of Oregon.
Data collected at Oregon were part of one study, and results from this dataset were previously published 27. All participants (107 female) gave written informed consent according to procedures approved by the University of Oregon institutional review board. Participants were compensated for participation with course credit or monetary payment ($8/hr for behavior, $10/hr for EEG). Our analyses include data from 171 individuals. The number of participants included here differs from the number included in the original study because we only required that participants have usable EEG and change detection data, whereas the prior analyses required participants to have completed all tasks.
University of Chicago.
Data from the University of Chicago were collected by multiple experimenters for multiple independent studies. Experiments were selected for inclusion based on whether they included a lateralized change detection task. All lateralized change detection EEG experiments that have been run by the Awh/Vogel lab at the University of Chicago were included.
Experimental procedures were approved by The University of Chicago Institutional Review Board. All participants gave informed consent and were compensated for their participation with cash payment ($15 per hour); participants reported normal color vision and normal or corrected-to-normal visual acuity. For the current analyses, University of Chicago data were combined into one large sample. Of note, some individuals participated in more than one University of Chicago experiment. For these individuals, behavioral data and EEG signal time-series for all trials from all of the experiments in which they participated were averaged. Across all of the Chicago studies, there were 165 unique individuals. For a subset of the Chicago experiments, gender information is currently inaccessible due to Covid-19 limitations. Therefore, only the published studies contain information about gender.
Chicago study #1 was previously published 39 and includes data from four separate experiments: 28 participants (13 female) in experiment 1a, 20 participants (10 female) in experiment 1b, 20 participants (10 female) in experiment 2a, and 29 participants (13 female) in experiment 2b. Chicago study #2 was previously published 40 and includes data from two separate experiments: 20 participants (8 female) in experiment 1 and 20 participants (9 female) in experiment 2. Chicago study #3 was previously published 41. This study contains data from two separate experiments; however, we only analyzed data from experiment 1, which contained data from 21 participants (15 female), because this was the only experiment that included a lateralized change detection task. Chicago study #4 is not published and contains data from one experiment (20 participants). Chicago study #5 is not published and contains data from two experiments (20 participants in experiment 1 and 19 participants in experiment 2). Chicago study #6 is not yet published and contains data from two experiments (25 participants in experiment 1 and 19 participants in experiment 2).
Experimental Design.
In all experiments, participants performed a lateralized change detection task. At the beginning of each trial, a central cue was presented to indicate which side (left or right) of the screen to attend (Chicago study #6 did not have a cue on a subset of trials; see below for further information). Following the cue, a memory array appeared, which consisted of a series of objects on both sides of the screen. Participants were instructed to remember the objects presented on the cued side of the screen while ignoring the objects on the other side. Following the memory array, there was a blank retention interval whose exact duration varied across experiments. The response screen then appeared, which consisted of one object on each side of the screen. Participants had to indicate whether the object on the cued side was identical to the original object presented at that location (Chicago study #5 had a two-alternative forced-choice response; see below for further information). Exact durations and stimulus parameters varied across experiments. For published datasets (University of Chicago studies 1–3), these details can be found in the original publications 39–41. For unpublished datasets (University of Chicago studies 4–6), details are described below.
University of Chicago study #4.
Stimuli were presented on a 24-in. LCD monitor (BenQ XL2430T; 120-Hz refresh rate) connected to a Dell Optiplex 9020 computer. Participants were seated with their heads on a chin rest 74 cm from the screen. Each trial began with a blank inter-trial interval (750 ms), followed by a diamond cue (500 ms) indicating the relevant side of the screen (right or left). This diamond cue (maximum width = 0.65°, maximum height = 0.65°) was centered 0.65° above the fixation dot and was half green (RGB value: 74, 183, 72) and half pink (RGB value: 183, 73, 177). Half of the participants were instructed to attend to the green side, and the other half were instructed to attend to the pink side. After the cue, colored squares (1.1° × 1.1°) briefly appeared in each hemifield (150 ms) with a minimum of 2.10° (1.5 objects) between each square. Four colored squares appeared on each side of the screen and then disappeared for 2000 ms. Squares could appear within a subset of the display subtending 3.1° to the left or right of fixation and 3.5° above and below fixation. Colors for the squares were selected randomly from a set of nine possible colors (RGB values: red = 255, 0, 0; green = 0, 255, 0; blue = 0, 0, 255; yellow = 255, 255, 0; magenta = 255, 0, 255; cyan = 0, 255, 255; orange = 255, 128, 0; white = 255, 255, 255; black = 1, 1, 1). Colors were chosen without replacement within each hemifield, and colors could be repeated across, but not within, hemifields. After the retention interval, the response screen appeared, which consisted of one object on each side of the screen. Participants had to report whether the object on the cued side had changed color. On a subset of trials, a series of colored squares appeared on the midline during the retention interval; these trials were excluded from analysis.
University of Chicago study #5.
The experimental parameters replicate those from Chicago study #4, except for the following changes. In Chicago study #5 experiment 1, the memory array remained on the screen throughout the delay on a subset of trials; these trials were excluded from analysis. Because the memory array remained on the screen throughout the delay, the response screen differed from that of all other experiments: one object with two colors appeared on each side of the screen 42, and participants had to report which of the two presented colors matched the original memory item. In Chicago study #5 experiment 2, the memory array consisted of two or four colored squares on each side of the screen, and following the memory array, the screen remained blank for 1,650 ms.
University of Chicago study #6.
This study included two experiments. Stimuli in both experiments were presented on a 24-in. LCD monitor (BenQ XL2430T; 120-Hz refresh rate) driven by a Dell Optiplex 9020 computer. Participants were seated with their heads on a chin rest 74 cm from the screen. Each trial began with a blank inter-trial interval (1,000 ms), followed by a gray cross cue (600 ms) that either pointed to the attended location (50% of trials) or was uninformative (50% of trials). Following the cue, the memory array appeared (200 ms), consisting of two (50% of trials) or four (50% of trials) colored target squares presented to the left, right, above, or below fixation. Four colored distractor circles also appeared in a position adjacent to the target squares. The display was visually balanced with four gray (RGB value: 128, 128, 128) circles across from both the target squares and the distractor circles; the gray matched the average luminance of the colors. Participants were told to remember the colors of the squares over the delay and to ignore the circles. Squares had a side length of 0.9° of visual angle and circles had a diameter of 1.0° (squares and circles covered the same area, viz., 3,600 pixels). Following the memory array, the screen remained blank for 1,000 ms. A response screen then appeared, consisting of one colored square, and participants reported whether it was the same color as the original square at that location. Colors for the target squares and distractor circles were selected randomly from a set of nine possible colors (RGB values: red = 255, 0, 0; green = 0, 255, 0; blue = 0, 0, 255; yellow = 255, 255, 0; pink = 255, 0, 255; cyan = 0, 255, 255; purple = 128, 0, 255; dark green = 4, 150, 60; orange = 255, 128, 0). Colors were chosen without replacement.
EEG acquisition and artifact rejection.
University of Oregon.
EEG was recorded from 22 standard electrode sites in an elastic cap (ElectroCap International, Eaton, OH) spanning the scalp, including International 10/20 sites F3, Fz, F4, T3, C3, Cz, C4, T4, P3, Pz, P4, T5, T6, O1, and O2, along with nonstandard sites OL, OR, PO3, PO4, and POz. Two additional electrodes were positioned on the left and right mastoids. All sites were recorded with a left-mastoid reference, and the data were re-referenced offline to the algebraic average of the left and right mastoids. To detect blinks, vertical electrooculogram (EOG) was recorded from an electrode mounted beneath the left eye and referenced to the left mastoid. The EEG and EOG signals were amplified with an SA Instrumentation amplifier (Fife, Scotland) with a bandpass of 0.01–80 Hz and were digitized at 250 Hz in LabVIEW 6.1 running on a PC. Offline, data were low-pass filtered at 50 Hz to eliminate 60-Hz noise from the CRT monitor. Eye movements (>1°), blinks, blocking, drift, and muscle artifacts were detected by applying automatic criteria. This pipeline differs from the pipeline used in the original paper 27, which is why the current analyses include more participants. A sliding-window step function was used to check for eye movements in the EOG channels, using a split-half sliding-window approach (window size = 100 ms, step size = 50 ms, vertical threshold = 75 μV, horizontal threshold = 15 μV).
University of Chicago.
EEG was recorded from 30 active Ag/AgCl electrodes (actiCHamp, Brain Products, Munich, Germany) mounted in an elastic cap positioned according to the International 10/20 system (Fp1, Fp2, F7, F8, F3, F4, Fz, FC5, FC6, FC1, FC2, C3, C4, Cz, CP5, CP6, CP1, CP2, P7, P8, P3, P4, Pz, PO7, PO8, PO3, PO4, O1, O2, Oz). Two additional electrodes were affixed with stickers to the left and right mastoids, and a ground electrode was placed in the elastic cap at position Fpz. Data were referenced online to the right mastoid. For Chicago studies #1–5, data were re-referenced offline to the algebraic average of the left and right mastoids, and incoming data were filtered (low cutoff = 0.01 Hz, high cutoff = 80 Hz; slope from low to high cutoff = 12 dB/octave) and recorded with a 500-Hz sampling rate. For Chicago study #6, data were re-referenced offline to the average of all electrodes, and incoming data were filtered (low cutoff = 0.01 Hz, high cutoff = 250 Hz; slope from low to high cutoff = 12 dB/octave) and recorded with a 1,000-Hz sampling rate. For all datasets, impedance values were kept below 10 kΩ. Eye movements and blinks were monitored using electrooculogram (EOG) activity and eye tracking. EOG data were collected with five passive Ag/AgCl electrodes (two vertical EOG electrodes placed above and below the right eye, two horizontal EOG electrodes placed ~1 cm from the outer canthi, and one ground electrode placed on the left cheek). Eye-tracking data were collected using a desk-mounted EyeLink 1000 Plus eye-tracking camera (SR Research, Ontario, Canada) sampling at 1,000 Hz. For a comparison of electrode reliability across the Oregon-site and Chicago-site data, see Figure S5.
For Chicago studies #1–5, eye movements, blinks, blocking, drift, and muscle artifacts were first detected by applying automatic criteria. After automatic detection, trials were manually inspected to confirm that detection thresholds were working as expected. For automatic eye movement detection, a sliding-window step function was used to check for eye movements in the horizontal EOG (HEOG) and the eye-tracking gaze coordinates. For HEOG rejection, we used a split-half sliding-window approach (window size = 100 ms, step size = 10 ms, threshold = 20 μV); HEOG rejection was used only if the eye-tracking data were bad for that trial epoch. We slid a 100-ms time window in steps of 10 ms from the beginning to the end of the trial; if the change in voltage from the first half to the second half of the window was greater than 20 μV, the trial was marked as containing an eye movement and rejected. For eye-tracking rejection, we applied the same sliding-window analysis to the x- and y-gaze coordinates (window size = 100 ms, step size = 10 ms, threshold = 0.5° of visual angle). We additionally used a sliding-window step function to check for blinks in the vertical EOG (window size = 80 ms, step size = 10 ms, threshold = 30 μV), and we checked the eye-tracking data for trial segments with missing data points (no position data are recorded when the eye is closed). We checked for drift (e.g., skin potentials) by comparing the absolute change in voltage from the first quarter of the trial to the last quarter; if the change in voltage exceeded 100 μV, the trial was rejected for drift. In addition to slow drift, we checked for sudden step-like changes in voltage with a sliding window (window size = 100 ms, step size = 10 ms, threshold = 100 μV). We excluded trials for muscle artifacts if any electrode had peak-to-peak amplitude greater than 200 μV within a 15-ms time window, and we excluded trials for blocking if any electrode had at least 30 time points in any given 200-ms time window that were within 1 μV of each other.
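To make the split-half step criterion concrete, the following is a minimal Python sketch of a detector of this kind. It is our reconstruction, not the authors' code; the function name and defaults are ours, and the same routine can be applied to HEOG (threshold = 20 μV), gaze coordinates (threshold = 0.5°), or vertical EOG (threshold = 30 μV) by changing the parameters.

```python
import numpy as np

def step_artifact(trace, srate, win_ms=100, step_ms=10, threshold=20.0):
    """Split-half sliding-window step criterion (illustrative sketch).

    Slides a win_ms window along `trace` in step_ms increments and flags
    the trial if the mean of the window's second half differs from the
    mean of its first half by more than `threshold`.
    """
    trace = np.asarray(trace, dtype=float)
    win = int(win_ms * srate / 1000)
    step = int(step_ms * srate / 1000)
    half = win // 2
    for start in range(0, len(trace) - win + 1, step):
        first = trace[start:start + half].mean()
        second = trace[start + half:start + win].mean()
        if abs(second - first) > threshold:
            return True  # step-like change detected -> reject this trial
    return False

# Example: flag an HEOG eye movement at the Chicago 500-Hz sampling rate.
# reject = step_artifact(heog_trial, srate=500, threshold=20.0)
```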
For Chicago study #6, eye movements, blinks, blocking, drift, and muscle artifacts were detected by applying automatic criteria only. To identify eye-related artifacts, eye-tracking data were first baselined identically to the EEG data (i.e., subtraction of the mean amplitude of the x and y coordinates from −200 to 0 ms). Then, the Euclidean distance from the fixation cross was calculated from the baselined data. Saccades were identified with a step criterion of 0.6° (comparing the mean position in the first half of a 50-ms window with the mean position in the second half; the window moved in 20-ms steps). Drifts were identified by eye-tracking data indicating a distance from fixation of >1°. Both eyes had to indicate an eye-related artifact for a trial to be excluded from analysis. In addition, trials in which any EEG channel showed a voltage of more than 100 μV or less than −100 μV were rejected.
Behavioral analysis.
Working memory capacity (K score) was used to measure task performance 23,24. Capacity was calculated with the formula K = S(H − F), where K is working memory capacity, S is the size of the array, H is the observed hit rate, and F is the false alarm rate. K scores were calculated with different set sizes within the Oregon- and Chicago-site data. Within the Oregon-site data, K scores were calculated from a separate working memory task, collected outside the EEG booth, that included set sizes 2 and 6 for all participants. For the Chicago-site participants, K score was calculated using different set sizes, depending on which experiment(s) the participant completed; Chicago-site set sizes included 2, 3, 4, and 6 items. In the Chicago sample, there was no significant relationship between the number of trials that a participant completed and K score: rs = 0.13, p = 0.10, mse = 1.65e6 (see STAR Methods).
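As a worked example, the capacity formula can be computed directly. This brief Python snippet is ours, illustrating K = S(H − F):

```python
def k_score(set_size, hit_rate, false_alarm_rate):
    """Cowan/Pashler capacity estimate: K = S * (H - F)."""
    return set_size * (hit_rate - false_alarm_rate)

# Example: set size 6 with an 80% hit rate and a 20% false alarm rate
# gives K = 6 * (0.80 - 0.20) = 3.6 items (up to floating-point rounding).
# k_score(6, 0.80, 0.20)
```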
Calculation of inter-electrode correlations using EEG
Electrode organization.
We analyzed EEG data collected while participants performed a lateralized change detection task. In this type of task, participants attend to and maintain information on either the left or right side of the screen on any given trial. This results in lateralized neural activity that is contralateral to the remembered items 3. For example, if stimuli are presented on the left side of the screen, this lateralized neural activity is present in electrodes O2, PO8, PO4, etc., whereas if stimuli are presented on the right side of the screen, it is present in electrodes O1, PO7, PO3, etc. We accounted for this lateralized neural activity by aligning neural activity based on which side of the screen participants were attending on each trial. For example, activity for electrode number 10 in the matrix included data from PO8 on "attend-left" trials and data from PO7 on "attend-right" trials. Central electrodes (Fz, Cz, and Pz) were unaffected by this organization. This method of alignment is analogous to how contralateral delay activity (CDA) analyses account for the contralateral organization of the visual system 3.
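This alignment can be illustrated with a short sketch. The following Python snippet is our reconstruction, not the authors' code; it assumes trial data stored as a channels × time array and the 17 electrodes shared across sites (listed in the next subsection).

```python
import numpy as np

# Mirror-image pairs among the 17 shared electrodes; midline sites
# (Fz, Cz, Pz) map to themselves.
MIRROR = {
    'F3': 'F4', 'F4': 'F3', 'C3': 'C4', 'C4': 'C3',
    'P7': 'P8', 'P8': 'P7', 'P3': 'P4', 'P4': 'P3',
    'PO7': 'PO8', 'PO8': 'PO7', 'PO3': 'PO4', 'PO4': 'PO3',
    'O1': 'O2', 'O2': 'O1', 'Fz': 'Fz', 'Cz': 'Cz', 'Pz': 'Pz',
}

def align_trial(trial, channels, attend_left):
    """Swap left/right electrode rows on attend-right trials so that a
    given row always carries activity contralateral to the attended
    hemifield. trial: (n_channels, n_timepoints); channels: channel names.
    """
    if attend_left:
        return trial  # right-hemisphere rows are already contralateral
    order = [channels.index(MIRROR[ch]) for ch in channels]
    return trial[np.asarray(order)]
```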
Inter-electrode correlations of ERP activity.
The Chicago and Oregon datasets had different numbers of electrodes. Therefore, in our analyses we only included the 17 electrodes that overlapped between the two datasets: F3, Fz, F4, C3, Cz, C4, P7, P3, Pz, P4, P8, PO8, PO4, PO3, PO7, O1, and O2.
Previous fMRI connectome-based models have concatenated task data across trials or runs and then correlated these concatenated time series across all pairwise nodes (here, electrodes). Because EEG data are noisy when concatenated across trials, we instead averaged the raw amplitude at each of these 17 electrodes across all trials from timepoint 0 (the onset of the memory array) to 1,000 ms. This is analogous to calculating one ERP time course for each electrode. The shortest experiment (the Oregon-site experiment) presented stimuli for 100 ms and had a retention interval of 900 ms. Therefore, timepoint 0 corresponds to the onset of the memory array in all experiments, and timepoint 1,000 corresponds to the end of the shortest retention interval (i.e., 100-ms stimulus + 900-ms retention interval). The duration of the memory array varied across experiments, so stimulus offset varied within the analyzed time window. We analyzed data from the same time window for all experiments to match the number of timepoints included in the analysis across experiments. We computed the Pearson correlation of this trial-averaged EEG activity for all electrode pairs for each participant separately. For each participant, this resulted in a 17 × 17 matrix of correlations between the time courses of all pairs of electrodes. We Fisher z-transformed the correlation coefficients and submitted the resulting ERP correlation matrices to the analyses described below.
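A minimal sketch of this computation, assuming side-aligned epochs in a NumPy array (the function name is ours):

```python
import numpy as np

def erp_correlation_matrix(epochs):
    """epochs: (n_trials, 17, n_timepoints) array of side-aligned EEG
    spanning 0-1,000 ms after memory-array onset.

    Returns the Fisher z-transformed 17 x 17 inter-electrode correlation
    matrix of the trial-averaged (ERP) time courses."""
    erps = epochs.mean(axis=0)    # one ERP time course per electrode
    r = np.corrcoef(erps)         # Pearson r for all electrode pairs
    np.fill_diagonal(r, 0.0)      # avoid arctanh(1) = inf on the diagonal
    return np.arctanh(r)          # Fisher z-transform
```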
To investigate whether stimulus-evoked activity influenced our predictive models, we also calculated inter-electrode correlation matrices based on the offset of the stimuli in each study. For example, in Chicago dataset #5, the stimuli remained on the screen for 200 ms; we therefore calculated ERPs for each electrode from stimulus offset (200 ms) until the end of the retention interval (1,000 ms). The Oregon-site stimuli remained on the screen for 100 ms, so for that experiment we calculated ERPs for each electrode from 100–900 ms to match the total amount of time included in our analyses across experiments. We then calculated our inter-electrode correlation matrices as described above.
Statistical analyses.
We tested all analyses within the Oregon dataset, whenever this was possible. We then replicated these analyses and externally validated predictive models in the compiled Chicago dataset.
EEG fingerprinting.
This analysis investigates whether individuals’ inter-electrode correlation patterns are unique and stable enough to distinguish them from a group. These methods were adopted from a previous paper that investigated functional connectome fingerprinting in fMRI 7.
To identify individuals in the Oregon dataset, we compared participants' vectorized shape-task and color-task inter-electrode correlation matrices. Specifically, for each individual, we correlated their "target" color-task vector of inter-electrode correlations with a "database" of all shape-task vectors of inter-electrode correlations. An individual was considered accurately identified if the maximum correlation was with their own shape-task data. We repeated this analysis using shape-task vectors as the "targets" and color-task vectors as the "database". Finally, we characterized EEG fingerprinting accuracy as the number of correct identifications divided by twice the number of participants.
For the Chicago study, we analyzed data from the subset of 45 individuals who participated in more than one experiment. For each person in this sample, we first calculated and vectorized an inter-electrode correlation matrix separately for each experiment in which they participated. Next, we selected one of these vectors to serve as the "target". We compared this target to a database of 19 individuals' inter-electrode correlation vectors from another task in which the target individual also participated (the target individual's own vector was among these 19). We chose 19 because this was the minimum number of participants in any individual study, and we wanted to equate chance levels across experiments. To predict subject identity, we computed the similarity between the target vector of inter-electrode correlations and each database vector; the predicted identity was the one with the maximal similarity score. Similarity was defined as the Pearson correlation between vectors.
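The identification step reduces to an argmax over correlations, as in this short sketch (our illustration, with hypothetical variable names):

```python
import numpy as np

def identify(target, database):
    """Predict identity as the database row most correlated with the target.

    target: vectorized inter-electrode correlation matrix from one task;
    database: (n_subjects, n_features) vectors from a different task.
    """
    sims = np.array([np.corrcoef(target, row)[0, 1] for row in database])
    return int(np.argmax(sims))

# Fingerprinting accuracy: the fraction of targets whose most-similar
# database entry is their own data from the other task, e.g.:
# correct = sum(identify(targets[s], database) == s for s in range(n))
# accuracy = correct / n
```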
To ask whether the specific inter-electrode correlations ("features") that predict behavior contribute to identification more than expected by chance, we reran these analyses including only those features that significantly predicted behavior (see EEG-based predictive modeling below). To determine whether these results were due to downsampling the number of features included in the ERP correlation matrix, we compared the results of the significant-feature analysis to an analysis with the same number of randomly selected features.
To account for potential differences in skull thickness across individuals that could affect inter-electrode correlations and result in an overestimation of identification accuracy, we applied a Laplacian transformation to the time series data and then re-ran identification analyses.
To assess the statistical significance of identification accuracy, we performed non-parametric permutation tests. We ran the same analyses as above, except that we shuffled the subject labels on each iteration so that they randomly aligned with the ERP correlation data. This shuffling of labels was repeated 10,000 times.
EEG-based predictive modeling.
Prediction methods were adopted from previous work using fMRI functional connectivity to predict behavior 14,33. We first separated data into training and testing sets. For internally validated models, we used 5-fold cross-validation and permutation testing to determine whether the model significantly predicted behavior. For external validation analyses, we trained models using data collected in Oregon and tested them using data collected in Chicago, and vice versa.
To identify the features that were significantly related to behavior in the training sample, we correlated each value in the matrix of inter-electrode correlations with K score using Spearman's correlation. This approach replicated the connectome-based predictive modeling pipeline applied to fMRI data 33 and allowed us to compare the anatomy of the inter-electrode correlation feature sets ("networks") positively and negatively correlated with behavior. For the internally validated models and the externally validated model trained on the Chicago dataset, we selected the top 5% of features that most strongly predicted behavior in the positive and negative directions. For the externally validated model trained on all of the Oregon data, we identified the 5% of features most positively and the 5% of features most negatively correlated with working memory capacity (10% of total features) in the color and shape tasks separately. To obtain the most robust predictive feature set, we defined predictive features as those included in the top 5% of positively or negatively predictive features in both tasks. We did not perform this overlap analysis for the externally validated model trained on the Chicago dataset because each task had a relatively small sample size (ns = 19–29), which could result in unreliable features.
Using these selected features, we then calculated single-subject summary values for all training subjects. To do this, we summed the correlation-strength values for each individual for the positively and negatively predictive feature sets separately and then took the difference between them. We trained a linear model relating these summary features to K scores in the training set. We then used this model to predict K score in the testing set: we calculated summary features for test-set participants and entered these summary scores into the model defined in the training sample to generate a predicted K score for each test-set subject. To determine whether our models significantly predicted performance, we compared predicted K scores to observed K scores using Spearman rank correlation (rs) values, mean squared error (mse), and non-parametric p-values for both.
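A minimal sketch of the training and prediction steps, assuming vectorized Fisher-z correlation matrices as input. This is our reconstruction, not the authors' code; feature selection here takes the top and bottom 5% of Spearman correlations by rank order, approximating the selection described above.

```python
import numpy as np
from scipy.stats import spearmanr

def train_model(train_feats, train_k, pct=5):
    """Select the pct% most positively and most negatively K-correlated
    features, then fit a line from the summary score (pos sum - neg sum)
    to K in the training set."""
    rho = np.array([spearmanr(train_feats[:, j], train_k).correlation
                    for j in range(train_feats.shape[1])])
    n_sel = max(1, int(round(train_feats.shape[1] * pct / 100)))
    pos = np.argsort(rho)[-n_sel:]        # most positive features
    neg = np.argsort(rho)[:n_sel]         # most negative features
    summary = train_feats[:, pos].sum(1) - train_feats[:, neg].sum(1)
    slope, intercept = np.polyfit(summary, train_k, 1)
    return pos, neg, slope, intercept

def predict_k(test_feats, pos, neg, slope, intercept):
    """Apply the training-set feature masks and linear model to new data."""
    summary = test_feats[:, pos].sum(1) - test_feats[:, neg].sum(1)
    return slope * summary + intercept
```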
For the models validated within-site, we used 5-fold cross-validation, which we iterated 10,000 times. We calculated rs and mse within each fold. We then Fisher z-transformed the rs values, resulting in one z_obs value per fold, and averaged the z_obs and mse_obs values across the folds of each iteration. We additionally calculated null z_null and mse_null values within each fold by shuffling the behavioral labels. This resulted in 10,000 observed and 10,000 null z and mse values. We took the mean of the 10,000 observed z_obs and mse_obs values and transformed the mean z_obs value back to rs, yielding one observed rs and one mse_obs value across all folds and iterations. We then calculated non-parametric p-values for both rs and mse.
To determine whether the observed rs value was significantly above chance, we compared the null rs distribution to the mean observed rs using the following formula:

\[ p = \frac{1}{n_{\mathrm{Iter}}} \sum_{i=1}^{n_{\mathrm{Iter}}} \mathbf{1}\left( r_{\mathrm{null},i} \geq \bar{r}_{\mathrm{obs}} \right) \]

where r_null is the null rs distribution; r̄_obs is the mean observed rs value; n_Iter is the number of iterations; and p is the non-parametric p-value for rs.
We calculated the significance of mse using a very similar formula:

\[ p = \frac{1}{n_{\mathrm{Iter}}} \sum_{i=1}^{n_{\mathrm{Iter}}} \mathbf{1}\left( mse_{\mathrm{null},i} \leq \overline{mse}_{\mathrm{obs}} \right) \]

where mse_null is the null mse distribution; mse_obs is the mean of the observed mse distribution; n_Iter is the number of iterations; and p is the non-parametric p-value for mse. Here, we count the values from the null distribution that are less than or equal to the mean of the observed mse distribution because we expect mse to be smaller when the model fits the data better.
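Both p-values follow the same pattern, which a small helper makes explicit (our illustration; the function name is ours):

```python
import numpy as np

def perm_p(null_dist, observed, tail):
    """Non-parametric p-value from a permutation null distribution.

    tail='upper' counts null >= observed (used for r_s, where larger is
    better); tail='lower' counts null <= observed (used for mse, where
    smaller is better)."""
    null_dist = np.asarray(null_dist)
    if tail == 'upper':
        return np.mean(null_dist >= observed)
    return np.mean(null_dist <= observed)
```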
For our externally validated models, we calculated one Fisher z-transformed rs (z_obs) value and one mse_obs value across all data points in the testing dataset. We additionally calculated 10,000 null Fisher z-transformed rs and mse values by shuffling the behavioral labels 10,000 times. We then compared the observed values to their respective null distributions using the same formulas as for the internally validated models; in this case, there was only one observed rs and one observed mse value, and these were compared to the r_null and mse_null distributions.
Relationship between number of trials and working memory capacity.
The Chicago-site data are a compilation of 12 different experiments, and a given participant could have completed any number of these studies. One concern is that participants who completed multiple studies could differ from participants who completed only one study. For example, participants with higher working memory capacity may be more compliant test subjects and thus could have completed more studies than those with lower working memory capacity. If so, our inter-electrode correlation matrices would include more trials for higher- than for lower-capacity participants. To investigate this possibility, we correlated working memory capacity with the number of trials included in our analyses and found no significant relationship (rs = 0.13, p = 0.10, mse = 1.65e6). Therefore, our ability to predict working memory capacity across individuals in the Chicago-site dataset was not contingent on differences in the number of trials between high- and low-capacity individuals. This is not an issue in the Oregon-site data because all participants completed the same number of trials.
Of note, predictions for the Chicago-site data are numerically worse than those for the Oregon-site data for both within-site and across-site validation. This could be due to the greater procedural variability in the Chicago-site data (e.g., different set sizes, stimuli, and experimental procedures) compared to the Oregon-site data. Furthermore, the Chicago-site behavioral scores were calculated using a variety of set sizes, whereas the Oregon-site behavioral scores were calculated using only set sizes 2 and 6. These differences in set size across the two datasets could drive differences in the reliability of measured working memory capacity. For example, if someone's true working memory capacity is 4 items but their capacity is estimated using set size 3 only, the calculated value will underestimate their true capacity. Notably, however, these differences between the two datasets underscore that our models are robust, reproducible, and generalizable predictors: despite the differences between the datasets, we were still able to significantly predict working memory capacity.
Model overlap.
We assessed whether the predictive features from the Oregon and Chicago models significantly overlapped. We determined the significance of feature-set overlap using the hypergeometric cumulative distribution function in MATLAB. The formula was as follows: p = 1 − hygecdf(x, M, K, N), where x is the number of overlapping features, M is the total number of features in the matrix, K is the number of features in the Oregon model, and N is the number of features in the Chicago model.
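For readers working outside MATLAB, the same test can be expressed with SciPy's hypergeometric survival function; this equivalent is our addition, not part of the original pipeline.

```python
from scipy.stats import hypergeom

def overlap_p(x, M, K, N):
    """SciPy equivalent of the MATLAB call p = 1 - hygecdf(x, M, K, N):
    the probability of more than x overlapping features when K Oregon-model
    features and N Chicago-model features are drawn from M total features."""
    return hypergeom.sf(x, M, K, N)  # survival function = 1 - CDF
```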
Interestingly, the set of predictive features included in both the Oregon-site and Chicago-site models did not include any frontal electrodes. One possible explanation is that frontal electrodes tend to be noisier than posterior electrodes, potentially even after artifact rejection. Given this, we sought to determine whether frontal electrodes were less reliable than other electrodes and whether that reliability was correlated with behavioral relevance. To do this, we calculated the split-half reliability of each electrode (see STAR Methods for further detail). Then, we compared each electrode's reliability score with its behavioral relevance, measured as the correlation between inter-electrode correlation strength and behavior, averaged for each electrode. We found no significant relationship between electrode reliability and predictability (Oregon, r = −0.13, p = 0.62; Chicago, r = 0.40, p = 0.12). Therefore, the lack of features (i.e., inter-electrode correlations) involving frontal electrodes in our working memory models is unlikely to be due to greater noise in those features relative to others. Another possible explanation for the lack of frontal edges in our predictive network is that signals from frontal electrodes could be more consistent across people than signals from posterior and occipital electrodes. This would make them less informative about individual variation and could explain why they are not included in our individual-differences-based models.
Relationships between predicted working memory and fluid intelligence.
Finally, we asked whether models that predicted working memory capacity across individuals also predicted fluid intelligence (gF). We ran these analyses only with the Oregon dataset because these participants completed behavioral tasks assessing a range of cognitive abilities in addition to the change detection tasks used to measure K. These analyses included the 138 participants who completed all cognitive tasks.
We were specifically interested in the relationship between inter-electrode correlations and gF. To investigate this relationship, for each subject in the Oregon dataset we calculated network strength in the working memory feature set defined in the Chicago sample. Then, using 5-fold cross-validation, we fit a linear model relating this strength value to gF and used it to predict the left-out participants' gF scores. Previous research has shown that gF and K score are highly correlated 27,28. Given this relationship, we calculated a theoretical ceiling for these models' performance as the correlation between observed gF and K scores.
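A sketch of this cross-validated mapping using scikit-learn. It assumes the strength scores have already been computed from the Chicago-defined feature set; the shuffled fold assignment is our assumption, as the authors' exact fold construction is not specified here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

def predict_gf(strength, gf, n_splits=5):
    """Cross-validated linear mapping from WM-network strength to gF.

    strength: (n_subjects,) summary scores in the Chicago-defined feature
    set; gf: (n_subjects,) fluid intelligence scores. Each subject's gF is
    predicted by a model fit on the remaining folds."""
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    return cross_val_predict(LinearRegression(),
                             np.asarray(strength).reshape(-1, 1),
                             gf, cv=cv)
```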
In the Predicting fluid intelligence section of the main text, we demonstrated that our working memory models generalized to predict individual differences in gF. Of note, this gF modeling approach is similar to correlating predicted K with observed gF, except that the linear model has a gF-specific coefficient that changes in every fold. Both observed and predicted K scores were correlated with gF (observed K: rs = 0.41, p = 5.98e−7; predicted K: rs = 0.18, p = 0.038), suggesting that retaining the model is not critical.
As a secondary analysis, we also investigated the relationship between predicted K scores and performance on all other behavioral tasks (Table S1). To do this, we computed the Spearman correlation between 1) observed K score and all the other tasks and 2) predicted K score and all the other tasks. Then, we correlated these two columns of data to determine whether there was a significant relationship between performance on all tasks and the measured and predicted K scores. We did this separately for the color and the shape tasks.
Supplementary Material
Key resource table
| REAGENT or RESOURCE | SOURCE | IDENTIFIER |
|---|---|---|
| Software and Algorithms | ||
| MATLAB 2016b | MathWorks | mathworks.com/products/matlab.html |
| EEGLAB | Swartz Center for Computational Neuroscience | https://sccn.ucsd.edu/eeglab/index.php |
| Code and data availability | Open Science Framework | https://osf.io/dbmuc/?view_only=6d76f387774241fdb40dcfe68f66ec93 |
Highlights.
Individuals show stable and unique inter-electrode correlations of EEG activity.
Inter-electrode correlation patterns can be used to identify individuals.
Models based on these patterns predict working memory and fluid intelligence.
Common and unique signals of cognition are present in sparse EEG activity.
ACKNOWLEDGMENTS
Research was supported by NIMH grant R01MH087214 and Office of Naval Research grant N00014-12-1-0972. We thank Tobias Feldmann-Wüstefeld for allowing us to use his dataset in our analyses and Kirsten C.S. Adam for organizing and pre-processing the Oregon-site data.
DECLARATION OF INTERESTS
The authors declare no competing interests.
REFERENCES
1. Engel S, Glover G, and Wandell BA (1997). Retinotopic organization in human visual cortex and the spatial precision of functional MRI. Cereb. Cortex 7, 181–192.
2. Kanwisher N, McDermott J, and Chun MM (1997). The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception. J. Neurosci. 17, 4302–4311.
3. Vogel EK, and Machizawa MG (2004). Neural activity predicts individual differences in visual working memory capacity. Nature 428, 748–751.
4. Damoiseaux JS, Rombouts SARB, Barkhof F, Scheltens P, Stam CJ, Smith SM, and Beckmann CF (2006). Consistent resting-state networks across healthy subjects. Proc. Natl. Acad. Sci. 103, 13848–13853.
5. Duncan J, and Owen AM (2000). Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends Neurosci. 23, 475–483.
6. Charest I, Kievit RA, Schmitz TW, Deca D, and Kriegeskorte N (2014). Unique semantic space in the brain of each beholder predicts perceived similarity. Proc. Natl. Acad. Sci. 111, 14565–14570.
7. Finn ES, Shen X, Scheinost D, Rosenberg MD, Huang J, Chun MM, Papademetris X, and Constable RT (2015). Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nat. Neurosci. 18, 1664–1671.
8. Gratton C, Laumann TO, Nielsen AN, Greene DJ, Gordon EM, Gilmore AW, Nelson SM, Coalson RS, Snyder AZ, Schlaggar BL, et al. (2018). Functional Brain Networks Are Dominated by Stable Group and Individual Factors, Not Cognitive or Daily Variation. Neuron.
9. Miranda-Dominguez O, Mills BD, Carpenter SD, Grant KA, Kroenke CD, Nigg JT, and Fair DA (2014). Connectotyping: Model Based Fingerprinting of the Functional Connectome. PLoS ONE 9, e111048.
10. Greene DJ, Koller JM, Hampton JM, Wesevich V, Van AN, Nguyen AL, Hoyt CR, McIntyre L, Earl EA, Klein RL, et al. (2018). Behavioral interventions for reducing head motion during MRI scans in children. NeuroImage.
11. Barbey AK (2018). Network Neuroscience Theory of Human Intelligence. Trends Cogn. Sci. 22.
12. Bressler SL, and Menon V (2010). Large-scale brain networks in cognition: emerging methods and principles. Trends Cogn. Sci. 14, 277–290.
13. Duncan J (2010). The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends Cogn. Sci. 14, 172–179.
14. Jung RE, and Haier RJ (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behav. Brain Sci. 30, 135–154.
15. Medaglia JD, Lynall M-E, and Bassett DS (2015). Cognitive Network Neuroscience. J. Cogn. Neurosci. 27, 1471–1491.
16. Park H-J, and Friston K (2013). Structural and Functional Brain Networks: From Connections to Cognition. Science 342, 1238411.
17. Woo C-W, Chang LJ, Lindquist MA, and Wager TD (2017). Building better biomarkers: brain models in translational neuroimaging. Nat. Neurosci. 20, 365–377.
18. Poldrack RA, Huckins G, and Varoquaux G (2020). Establishment of best practices for evidence for prediction: a review. JAMA Psychiatry.
19. Siegel JS, Mitra A, Laumann TO, Seitzman BA, Raichle M, Corbetta M, and Snyder AZ (2017). Data Quality Influences Observed Links Between Functional Connectivity and Behavior. Cereb. Cortex 27, 4492–4502.
20. Jayarathne I, Cohen M, and Amarakeerthi S (2020). Person identification from EEG using various machine learning techniques with inter-hemispheric amplitude ratio. PLOS ONE 15, e0238872.
21. Ma L, Minett JW, Blu T, and Wang WS-Y (2015). Resting State EEG-based biometrics for individual identification using convolutional neural networks. In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2848–2851.
22. Valizadeh SA, Riener R, Elmer S, and Jäncke L (2019). Decrypting the electrophysiological individuality of the human brain: Identification of individuals based on resting-state EEG activity. NeuroImage 197, 470–481.
23. Cowan N (2001). The magical number 4 in short-term memory: a reconsideration of mental storage capacity. Behav. Brain Sci. 24, 87–114.
24. Pashler H (1988). Familiarity and visual change detection. Percept. Psychophys. 44, 369–378.
25. MacNamee B, Cunningham P, Byrne S, and Corrigan OI (2002). The problem of bias in training data in regression problems in medical decision support. Artif. Intell. Med. 24, 51–70.
26. Rosenberg MD, Finn ES, Scheinost D, Papademetris X, Shen X, Constable RT, and Chun MM (2016). A neuromarker of sustained attention from whole-brain functional connectivity. Nat. Neurosci. 19, 165–171.
27. Unsworth N, Fukuda K, Awh E, and Vogel EK (2015). Working Memory Delay Activity Predicts Individual Differences in Cognitive Abilities. J. Cogn. Neurosci. 27, 853–865.
28. Conway ARA, Cowan N, Bunting MF, Therriault DJ, and Minkoff SRB (2002). A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. Intelligence 30, 163–183.
29. Engle RW, Laughlin JE, Tuholski SW, and Conway ARA (1999). Working Memory, Short-Term Memory, and General Fluid Intelligence: A Latent-Variable Approach. J. Exp. Psychol. Gen. 128, 309–331.
30. Fukuda K, Vogel E, Mayr U, and Awh E (2010). Quantity, not quality: the relationship between fluid intelligence and working memory capacity. Psychon. Bull. Rev. 17, 673–679.
31. Adam KCS, Vogel EK, and Awh E (2020). Multivariate analysis reveals a generalizable human electrophysiological signature of working memory load. bioRxiv.
32. Schubert AL, Hagemann D, and Frischkorn GT (2017). Is general intelligence little more than the speed of higher-order processing? J. Exp. Psychol. Gen.
33. Rosenberg MD, Finn ES, Scheinost D, Constable RT, and Chun MM (2017). Characterizing attention with predictive network models. Trends Cogn. Sci. 21, 290–302.
34. Dubois J, and Adolphs R (2016). Building a Science of Individual Differences from fMRI. Trends Cogn. Sci. 20, 425–443.
35. Llera A, Wolfers T, Mulders P, and Beckmann CF (2019). Inter-individual differences in human brain structure and morphology link to variation in demographics and behavior. eLife 8, e44443.
36. Nentwich M, Ai L, Madsen J, Telesford QK, Haufe S, Milham MP, and Parra LC (2020). Functional connectivity of EEG is subject-specific, associated with phenotype, and different from fMRI. NeuroImage.
37. Xifra-Porxas A, Kassinopoulos M, and Mitsis GD (2020). Physiological and head motion signatures in static and time-varying functional connectivity and their subject discriminability. bioRxiv.
38. Kayser J, and Tenke CE (2015). On the benefits of using surface Laplacian (Current Source Density) methodology in electrophysiology. Int. J. Psychophysiol. 97, 171–173.
39. Hakim N, Adam KCS, Gunseli E, Awh E, and Vogel EK (2019). Dissecting the Neural Focus of Attention Reveals Distinct Processes for Spatial Attention and Object-Based Storage in Visual Working Memory. Psychol. Sci. 30, 526–540.
40. Hakim N, Feldmann-Wüstefeld T, Awh E, and Vogel EK (2019). Perturbing Neural Representations of Working Memory with Task-irrelevant Interruption. J. Cogn. Neurosci. 32, 558–569.
41. Hakim N, Feldmann-Wüstefeld T, Awh E, and Vogel EK (2021). Controlling the Flow of Distracting Information in Working Memory. Cereb. Cortex, bhab013.
42. Tsubomi H, Fukuda K, Watanabe K, and Vogel EK (2013). Neural Limits to Representing Objects Still within View. J. Neurosci. 33, 8257–8263.
Data Availability Statement
Data, analysis scripts, and a predictive model trained on the combination of the Oregon-site and Chicago-site datasets will be available online upon publication at the following URL: https://osf.io/dbmuc/?view_only=6d76f387774241fdb40dcfe68f66ec93. See Figure S3 for more information about the number of testing participants and model performance, and see Figure S2 for more information on model performance as a function of the number of predictive edges. Data were collected at two different study sites, the University of Oregon (Eugene, OR) and the University of Chicago, from individuals of both sexes. Whereas data from the University of Oregon were collected as part of a single study, data from the University of Chicago were compiled from multiple studies. Participants were recruited from the respective university networks and their surrounding communities. Some of these samples are described in previous publications, while others are unpublished. All data are described in the STAR Methods above.
