Author manuscript; available in PMC 2021 Aug 15.
Published in final edited form as: Neuroimage. 2021 May 14;237:118165. doi: 10.1016/j.neuroimage.2021.118165

Electrophysiological Decoding of Spatial and Color Processing in Human Prefrontal Cortex

Byoung-Kyong Min a,b,*, Hyun-Seok Kim c, Wonjun Ko a, Min-Hee Ahn d, Heung-Il Suk a,b, Dimitrios Pantazis e, Robert T Knight f
PMCID: PMC8344402  NIHMSID: NIHMS1723782  PMID: 34000400

Abstract

The prefrontal cortex (PFC) plays a pivotal role in goal-directed cognition, yet its representational code remains an open problem, with decoding techniques often ineffective at disentangling task-relevant variables in PFC. Here we applied regularized linear discriminant analysis to human scalp EEG data and were able to distinguish a mental-rotation task from a color-perception task with 87% decoding accuracy. Dorsal and ventral areas in lateral PFC provided the dominant features dissociating the two tasks. Our findings show that EEG can reliably decode two independent task states from PFC and emphasize the dorsal/ventral functional specificity of PFC in processing the "where" rotation task versus the "what" color task.

Keywords: brain-machine interface, cognition, electroencephalography, prefrontal cortex

1. Introduction

The prefrontal cortex (PFC) is central to goal-directed cognition and has been implicated in encoding both stable and dynamic representations of task-relevant variables such as goals, rules, and rewards (Duncan, 2001; Fuster, 2013; Miller and Cohen, 2001; Rougier et al., 2005). Therefore, characterizing the representational code of PFC holds promise to open important new avenues in the study of decision-making, cognitive control, planning, and reasoning. Multivariate pattern analysis (MVPA) methods are crucial to achieving this goal (Norman et al., 2006). These methods, instead of focusing on individual signals (e.g. voxels), search for reproducible spatial patterns of activity that differentiate across experimental conditions. This is accomplished through the use of powerful machine learning classifiers that decode the information that is represented in activity patterns.

A growing number of studies with machine learning classifiers report reliable decoding of neural representations from multiple areas in the brain (Brouwer and Heeger, 2009; Harrison and Tong, 2009; Horikawa et al., 2013; Kamitani and Tong, 2005; Lemm et al., 2011; Muller et al., 2008; Wolpaw and Wolpaw, 2012). However, decoding PFC signals remains a challenge for both fMRI and electrophysiological non-invasive EEG measurements (Bhandari et al., 2018). Here, we assessed whether EEG can reliably measure representational information in PFC. Specifically, we investigated whether engagement in two distinct tasks, a mental-rotation and a color-perception task, could be predicted by EEG PFC features.

The mental-rotation and color-perception tasks used the same set of stimulus images, which displayed objects of varying spatial orientation and color, to control for sensory effects. We hypothesized that visual information from the primary visual cortex would propagate through the dorsal pathway processing “where” information preferentially for the rotation task (spatial manipulation) (Goodale and Milner, 1992; Hesse et al., 2014) and the ventral pathway processing “what” information preferentially for the color task (visual perception) (Kravitz et al., 2013; Mishkin et al., 1983), with subsequent representational patterns reflecting this dorsal/ventral organization decodable in PFC. To assess whether EEG signals could resolve these patterns, we localized EEG signals across major cortical areas spanning both PFC and non-PFC regions. We then extracted EEG features across canonical frequency bands and trained a regularized linear discriminant analysis classifier to discriminate between the two tasks. A systematic relationship between PFC signals and task performance would provide evidence for the role of PFC EEG features in encoding task-relevant information. In addition, the activation patterns detected by the classifier would provide insights into the representational format of the PFC.

2. Materials and methods

2.1. Participants

Twenty healthy individuals (mean age: 22.8 ± 2.6 years; 10 men and 10 women) participated in this study. The study was conducted in accordance with the ethical guidelines established by the Institutional Review Board of Korea University (No. 1040548-KU-IRB-14–28-A-2) and the Declaration of Helsinki (World Medical Association, 2013). Participants had normal or corrected-to-normal vision, and none were color-blind, as determined by the Ishihara color test. Participants provided informed consent prior to the study.

2.2. Materials and procedure

Pairs of stimuli randomly drawn from a set of red or green objects (Fig. 1) were presented bilaterally on a black background at an eccentricity of 5° of visual angle on a computer monitor placed 65 cm in front of the participant. A small gray fixation cross was presented at the center of the monitor. Each stimulus spanned 5° of visual angle and was presented for 5 sec. Stimulus presentation was followed by a variable inter-stimulus interval ranging from 2.2 to 2.8 sec (mean 2.5 sec). All stimulus types appeared pseudo-randomly with equal probability.

Fig. 1. Mental-rotation and color-perception tasks.

Upper panel: Example stimuli of the mental-rotation task, with object rotations of 0°, 45°, 90°, and 135° clockwise, which progressively increase task difficulty (difficulty 1 to 4, respectively). Lower panel: Example stimuli of the color-perception task, in which the saturation values of red and green were 100%, 50%, 10%, and 5% (difficulty 1 to 4, respectively). Participants performed a two-alternative forced-choice task, reporting whether the shapes of the two objects were identical when rotated (mental-rotation task) or whether the colors of the two objects were the same (color-perception task).

Using the same stimulus set, the present study consisted of two tasks, a mental-rotation and a color-perception task, which were counterbalanced in presentation order and were designed to activate different PFC areas. An essential function of the lateral PFC [LPFC; including both dorsolateral PFC (DLPFC) and ventrolateral PFC (VLPFC)] is executive control, such as volitional responses to achieve an intended goal (Fuster, 2008; Royall et al., 2002; Sarazin et al., 1998). Dorsal route activation (DLPFC; processing "where" information) was predicted during the mental-rotation task (Shepard and Metzler, 1971). In contrast, ventral route activation (VLPFC; processing "what" information) was predicted during the color-perception task. In the mental-rotation task, participants were instructed to determine whether the shapes of the two images matched when rotated (see Fig. 1). In the color-perception task, participants were instructed to determine whether the colors of the two presented objects were the same. During both tasks, participants were instructed to fixate the cross presented at the center of the monitor. Participants were also instructed to respond by pressing a button with one hand as quickly as possible whenever the two images were identical in the task-relevant feature (shape or color, respectively) and to otherwise press another button with the opposite hand. Response hands were counterbalanced across participants. Each task comprised 4 blocks of 80 trials with a short break in between. In each block, 4 levels of task difficulty and 2 types of object pair (identical vs. mirror-reflected), drawn from 10 different object shapes, were presented in random order. Participants underwent a training session to become familiar with the task before the experimental session.

Four levels of task difficulty were generated for each task. For the rotation task, the object rotations were 0°, 45°, 90°, and 135° clockwise, in order of increasing task difficulty (difficulty 1 to 4, respectively; see the top panel of Fig. 1 for examples); the greater the rotation, the more difficult the identification. For the color-perception task, the saturation values of red and green were 100%, 50%, 10%, and 5% (difficulty 1 to 4, respectively; see the bottom panel of Fig. 1 for examples). Isoluminant RGB values for red and green were used in the experiment. As the difference in saturation between the two presented objects decreased, identification became more difficult. Since an easy task would not effectively induce differences in brain activity between the two tasks, the easiest level (difficulty 1) was excluded from further analysis. Reaction times and task-performance accuracies at each difficulty level were measured for the behavioral analysis. Reaction-time values outside mean ± 1.98 SD for each individual were considered outliers and discarded from further analyses.
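As a minimal, hedged illustration of this outlier criterion (the function name and sample values below are ours, not from the authors' code), the per-participant filtering can be written as:

```python
import numpy as np

def reject_rt_outliers(rts, n_sd=1.98):
    """Keep reaction times within mean +/- n_sd * SD for one participant."""
    rts = np.asarray(rts, dtype=float)
    mu, sd = rts.mean(), rts.std(ddof=1)
    mask = np.abs(rts - mu) <= n_sd * sd
    return rts[mask]

# Illustrative values (ms): the 3900-ms trial falls outside the criterion.
clean_rts = reject_rt_outliers([812, 954, 1020, 3900, 880, 1105])
```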

2.3. EEG Acquisition

The EEG was measured using a BrainAmp DC amplifier (Brain Products, Germany) with 64 Ag/AgCl electrodes in an actiCAP (Brain Products, Germany) arranged according to the international 10–10 system. An electrode on the tip of the nose served as reference, and a ground electrode was placed at AFz. Electrode impedances were maintained below 5 kΩ prior to data acquisition. The EEG was recorded at 500 Hz. For further analyses, EEG data were epoched from 500 ms prestimulus to 7000 ms poststimulus. Eye movements were monitored with an EOG electrode placed below the left eye; vertical and horizontal electro-ocular activity was computed from two electrode pairs (Fp1 and the infraorbital EOG electrode for vertical EOG; F7 and F8 for horizontal EOG). All epochs were visually inspected for artifacts, and epochs containing eye movements or other artifacts (maximum amplitude ± 100 μV; maximal gradient 50 μV/ms) were automatically rejected from further analyses. Only trials with correct responses were retained. Three participants were excluded from further analyses because of poor data quality.
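The two automatic rejection criteria translate into a simple check; the sketch below (function name ours) assumes an epoch stored as a channels × samples array in microvolts:

```python
import numpy as np

def epoch_has_artifact(epoch_uv, fs=500, max_amp=100.0, max_grad=50.0):
    """Flag an epoch (channels x samples, microvolts) that exceeds either
    the +/-100 uV amplitude limit or the 50 uV/ms gradient limit."""
    if np.any(np.abs(epoch_uv) > max_amp):
        return True
    # At fs = 500 Hz one sample spans 2 ms, so 50 uV/ms allows a
    # 100-uV step between consecutive samples.
    allowed_step = max_grad * (1000.0 / fs)
    return bool(np.any(np.abs(np.diff(epoch_uv, axis=-1)) > allowed_step))
```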

2.4. Data Analysis

We assessed whether a decoder could detect task engagement in humans using source-level EEG signals from 15 Brodmann areas (BAs) comprising cortical areas in both dorsal and ventral visual routes up to PFC. Figure 2 depicts a flowchart of the analytic procedure.

Fig. 2. Flowchart of the analysis procedure.

Following source reconstruction, signals from 15 BAs were extracted by IDF-bandpass filtering, aggregated across voxels, Hilbert-transformed, and aggregated across time. Last, an rLDA classifier was used to compute decoding accuracies and activation patterns. BA: Brodmann area; IDF: individual dominant frequency; rLDA: regularized linear discriminant analysis.

2.4.1. Source reconstruction

First, source-level cortical activity was estimated from the scalp EEG signals using exact low-resolution brain electromagnetic tomography (eLORETA) (Pascual-Marqui, 1999; Pascual-Marqui, 2007; Pascual-Marqui et al., 2002; van der Loo et al., 2011). eLORETA outputs were modeled as a set of 2,394 voxels of 7 × 7 × 7 mm forming a 3D description of the cortex. Source localization relied on a brain model registered to the Talairach coordinate system via the MNI-305 anatomical atlas (Collins et al., 1994; Evans et al., 1993; Oakes et al., 2004; Talairach and Tournoux, 1988), together with electrode positions from the international 10–10 system. eLORETA provides more precise and accurate localization than sLORETA owing to its higher spatial resolution (Jatoi et al., 2014).
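The authors performed source reconstruction in the LORETA software suite. For readers who work in Python, MNE-Python offers an eLORETA inverse solver; the sketch below uses MNE's bundled sample dataset rather than the present study's data, so it only illustrates the call pattern, not the paper's pipeline:

```python
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Load MNE's demo evoked response, forward model, and noise covariance.
data_path = sample.data_path()
evoked = mne.read_evokeds(
    data_path / 'MEG' / 'sample' / 'sample_audvis-ave.fif', condition=0)
fwd = mne.read_forward_solution(
    data_path / 'MEG' / 'sample' / 'sample_audvis-meg-eeg-oct-6-fwd.fif')
noise_cov = mne.read_cov(
    data_path / 'MEG' / 'sample' / 'sample_audvis-cov.fif')

inv = make_inverse_operator(evoked.info, fwd, noise_cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method='eLORETA')
```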

2.4.2. Extraction of signals in brain regions of interest

To investigate which brain areas encode task-related information, we selected the following 12 regions of interest (ROIs), which together spanned 15 BAs (Fig. 3). For lateral and anterior PFC, the DLPFC (BA 9 and 46), VLPFC (BA 45 and 47), and anterior prefrontal cortex (BA 10) were chosen. The orbitofrontal cortex (BA 11) and anterior cingulate cortex (BA 24) were selected for the orbital and medial frontal cortex, respectively. BA 5 and 40 were selected for the posterior parietal cortex (PPC) (Aflalo et al., 2015). The primary motor cortex (BA 4) and premotor area (BA 6) were selected to assess motor activity, while the primary visual cortex (BA 17), secondary visual cortex (BA 18), associative visual cortex (BA 19), and inferior temporal gyrus (BA 20) were analyzed for primary and extrastriate visual processing. All of these anatomical areas were delineated based on the atlas embedded in the LORETA software (Pascual-Marqui et al., 2002).

Fig. 3. Cortical locations of 15 Brodmann areas.

The viewpoints of the cortical maps are left-lateral (A) and left-medial (B) (asterisks indicate the anterior direction). For the right hemisphere, the corresponding areas are symmetrically placed.

The following five canonical frequency bands were used in the present study: delta (0.5 to 4 Hz), theta (4 to 8 Hz), alpha (8 to 13 Hz), beta (13 to 30 Hz), and gamma (30 to 50 Hz) (Buzsaki and Draguhn, 2004). Because the dominant peak frequency within each band varied between participants, and a clear peak could not be identified in every band for most individuals, we estimated individual dominant frequencies (IDF) for each participant and frequency band separately, based on the power spectrum averaged across all training trials and all voxels belonging to the 15 selected BAs. In this way we aimed to fine-tune the selection of frequency bands, improve discrimination between them, and assess their contribution to decoding performance. The IDF for each frequency band was defined as the frequency of maximum power spectral density within that band over the entire 5-sec poststimulus period.
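A minimal sketch of this IDF selection, assuming x is the trial- and voxel-averaged source signal for the 5-sec poststimulus period (function and variable names are ours):

```python
import numpy as np
from scipy.signal import welch

BANDS = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13),
         'beta': (13, 30), 'gamma': (30, 50)}

def individual_dominant_frequencies(x, fs=500):
    """Pick the frequency of maximum power spectral density per band."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)  # 0.5-Hz resolution
    idf = {}
    for band, (lo, hi) in BANDS.items():
        sel = (freqs >= lo) & (freqs < hi)
        idf[band] = float(freqs[sel][np.argmax(psd[sel])])
    return idf

# Toy call on random data standing in for the averaged 5-s source signal.
idf = individual_dominant_frequencies(np.random.randn(5 * 500), fs=500)
```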

Following the determination of IDFs, the source-level cortical activities were bandpass filtered within a range of IDF ± 1 Hz for each frequency band. The source-level IDF-bandpass-filtered time series were then averaged across the corresponding voxels of each BA. Signals were averaged across voxels because the limited spatial resolution of non-invasive EEG recordings does not permit inferences at the voxel level, and we focused on differences in the information encoded between BAs. Last, the envelopes of the source-level IDF-bandpass-filtered EEG time series of the 15 BAs for each frequency band were computed using the Hilbert transform. To reduce feature dimensionality, the extracted envelopes were averaged within non-overlapping 50-ms windows, yielding down-sampled time series over the entire 5-sec poststimulus period.
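The per-band feature extraction can be sketched as follows; this is a simplified stand-in (our function name, a standard Butterworth filter) for the IDF ± 1 Hz bandpass, Hilbert envelope, and 50-ms window averaging described above:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def idf_envelope_features(x, idf_hz, fs=500, win_ms=50):
    """Bandpass a BA-averaged signal at IDF +/- 1 Hz, take the Hilbert
    envelope, and average it within non-overlapping 50-ms windows."""
    lo = max(idf_hz - 1.0, 0.1)                   # keep the band above 0 Hz
    b, a = butter(4, [lo, idf_hz + 1.0], btype='bandpass', fs=fs)
    env = np.abs(hilbert(filtfilt(b, a, x)))      # analytic-signal envelope
    win = int(fs * win_ms / 1000)                 # 25 samples at 500 Hz
    n = (len(env) // win) * win
    return env[:n].reshape(-1, win).mean(axis=1)  # 100 features per 5 s

features = idf_envelope_features(np.random.randn(5 * 500), idf_hz=10.0)
```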

2.4.3. Classification using regularized LDA

The averaged envelope of the source-level IDF-bandpass-filtered EEG time series in each ROI over the entire 5-sec poststimulus period served as the input features to the classifier. With this feature representation, we applied linear discriminant analysis (LDA) (Duda et al., 2001) to extract a class-discriminative feature $\tilde{f}$ and built a Gaussian classifier. Given a test sample $\tilde{X}$, we used the following decision rule:

$$\tilde{C} = \begin{cases} C_1, & \text{if } p(\tilde{f} \mid C_1) > p(\tilde{f} \mid C_2) \\ C_2, & \text{otherwise} \end{cases} \tag{1}$$

where $p(\tilde{f} \mid C_i)$ is the likelihood of feature $\tilde{f}$ conditioned on class $C_i$.

Classical LDA is optimal in the sense that it minimizes the risk of misclassification for new samples drawn from known Gaussian distributions (Duda et al., 2001). Regularized LDA (rLDA), in particular, is a powerful machine learning technique that yields excellent results for single-trial event-related potential classification, superior to classical LDA when the ratio of features to trials is high (Blankertz et al., 2011; Lemm et al., 2011; Tomioka and Müller, 2010). Thus, in the present study, rLDA with shrinkage was used as the classification algorithm. To handle a singular sample covariance matrix $\hat{S}$ estimated from the EEG signal features, we used the regularized covariance $\tilde{S} = (1 - \lambda)\hat{S} + \mu \lambda I_p$, where $I_p$ denotes the $p \times p$ identity matrix, $\mu$ is the mean of the eigenvalues of $\hat{S}$ (i.e., $\mu = \sum_{i=1}^{p} \hat{S}_{ii} / p$), and $\lambda \in [0, 1]$ is a 'shrinkage' hyperparameter, computed analytically using the Ledoit-Wolf estimator (Ledoit and Wolf, 2004). The rLDA uses this regularized sample covariance in place of the original sample covariance $\hat{S}$ (Friedman, 1989).
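For reference, a classifier of this form is available off the shelf in scikit-learn: with solver='lsqr' and shrinkage='auto', the covariance is regularized exactly as above, with λ set by the Ledoit-Wolf estimator. This is a stand-in for, not necessarily the authors', implementation:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# shrinkage='auto' applies the Ledoit-Wolf estimate of lambda, giving the
# regularized covariance (1 - lambda) * S_hat + lambda * mu * I.
rlda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
```

Calling rlda.fit(X_train, y_train) followed by rlda.predict(X_test) then realizes the decision rule in Eq. (1) under the shared-covariance Gaussian assumption.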

After fixing the parameters of the rLDA with shrinkage on the training data, the calibrated classifier was used for out-of-sample prediction, i.e., decoding of novel, unseen EEG trials. We performed 5-fold cross-validation (Lemm et al., 2011) to obtain out-of-sample classification performance. Thus, of the 640 trials per participant (320 per task) remaining after artifact rejection, 512 trials were designated for training and the remaining 128 for testing in each fold. Model (hyper-)parameters were chosen during the cross-validation process, and this procedure was iterated 5 times to provide different combinations of training and test sets; the resulting decoding accuracies were averaged. This decoding procedure was performed separately for each participant. The decoded signals were evaluated in terms of whether the task information was successfully reconstructed (i.e., whether the task that the participant performed was correctly decoded), and the rates of successful classification of the test data were compared to evaluate decoding performance. Decoding accuracy was computed from the estimated activities of all 15 BAs (with activities averaged across right and left hemispheres). Given that PFC signals are weak and we had no prior hypothesis on hemispheric laterality, we averaged signals across hemispheres to increase the signal-to-noise ratio. To enhance performance, we applied a filter-bank method (Ang et al., 2008) that concatenates the features of all frequency bands (delta, theta, alpha, beta, and gamma) as input features for the rLDA.
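The cross-validated, filter-bank pipeline can be sketched end to end as follows; the feature matrices here are synthetic placeholders (random numbers, at reduced dimensionality), so only the structure, not the reported accuracy, carries over:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials = 640                        # 320 trials per task, as in the study
bands = ['delta', 'theta', 'alpha', 'beta', 'gamma']
# Synthetic stand-ins for per-band feature matrices (trials x features);
# the real features were 15 BAs x 100 time windows per band.
feats = {b: rng.standard_normal((n_trials, 15 * 20)) for b in bands}
y = np.repeat([0, 1], n_trials // 2)  # 0 = color task, 1 = rotation task

# Filter-bank method: concatenate the features of all bands column-wise.
X = np.concatenate([feats[b] for b in bands], axis=1)

rlda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(rlda, X, y, cv=cv, scoring='accuracy').mean()
```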

2.4.4. Computation of activation pattern

Regarding the direction of classification models (i.e., forward or backward), our classification model corresponds to a backward model: given observed EEG signals, it infers the task label that presumably induced the observations. The learned parameters of linear classifiers such as rLDA (i.e., their weight vectors) cannot be interpreted with respect to the origin of the signal of interest, because the parameters are a function of both the task-relevant signal and the task-uninformative signals (i.e., noise) (Blankertz et al., 2011; Haufe et al., 2014; Lemm et al., 2011). Therefore, to visualize how the extracted signal is encoded in the features used by the classifier, a so-called 'activation pattern' has to be computed (Dähne et al., 2015; Haufe et al., 2014), and this approach was adopted in the present study. Owing to linearity and the independence of the vectors in the weight matrix $\hat{W}$, its forward-model counterpart $A = (\hat{W}^{-1})^\top$ is straightforward to obtain, where each column of $A$ is an 'activation pattern'; the observed EEG signals can then be understood as a linear combination of the activation patterns in $A$. To assess the impact of different cortical areas, we converted the weight vectors of the rLDA classifier into activation patterns, which provide neurophysiologically interpretable values (Haufe et al., 2014).

Assuming that the task-relevant and task-uninformative signals are uncorrelated, the activation pattern is given by the covariance between the classifier output and the input features to the classifier (Haufe et al., 2014). To compare classification contributions across the 15 BAs, the input features were normalized by dividing by their maximum value in each trial. In practice, we estimated an activation pattern spanning all 15 BAs by multiplying the covariance of the normalized input features with the classifier weights:

$$\hat{A} = \Sigma_X \hat{W} \tag{2}$$

where $\Sigma_X$ denotes the covariance matrix of the normalized input features of the 15 BAs and $\hat{W}$ the classifier weight vectors.
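Under these assumptions, Eq. (2) reduces to a covariance-times-weights product. A minimal sketch for a fitted binary rLDA follows; the function name, and the use of the absolute per-trial maximum for normalization, are our reading of the description above:

```python
import numpy as np

def activation_pattern(X, weights):
    """Haufe-style activation pattern: A_hat = Sigma_X @ w, with the input
    features first normalized by their per-trial maximum (absolute value)."""
    Xn = X / np.abs(X).max(axis=1, keepdims=True)  # per-trial normalization
    sigma_x = np.cov(Xn, rowvar=False)             # feature covariance
    return sigma_x @ weights

# For the scikit-learn rLDA above, the weight vector is rlda.coef_.ravel():
# a_hat = activation_pattern(X, rlda.coef_.ravel())
```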

2.4.5. Interpretation of activation pattern

The activation patterns based on the rLDA provide neurophysiological insight related to the class label (i.e., task type in this study). The sign of the activation pattern is directly related to the direction of the classification. As a linear classifier was used, the sign of the pattern depends on how the classes were coded: here, +1 for the mental-rotation task and −1 for the color-perception task. Thus, a positive sign in the activation pattern means that the corresponding feature has larger values for the class coded as +1, the mental-rotation task; similarly, a negative sign represents the contribution of the class coded as −1, the color-perception task. To extract neurophysiologically interpretable brain-machine interfacing (BMI) features decisive for decoding performance, the activation pattern values of DLPFC were compared with those of VLPFC.

2.4.6. Comparison of PFC contribution to decoding performance

To compare the contribution (i.e., feature strength) of each BA to decoding accuracy, the activation patterns of each BA (normalized by the mean of the absolute signals of all 15 BAs within each participant) were compared. Because positive DLPFC and negative VLPFC activities dominated, in contrast to each other, throughout the entire 5 sec in the grand average, the activation patterns of all 15 BAs were compared within an individually determined time window: for each participant, the window in which the gap between positive DLPFC and negative VLPFC activity was maximal, restricted to the period shared between stimulus onset and button press across both the rotation and the color task. To compute the LPFC contribution to decoding performance, the mean absolute activation-pattern values of LPFC (i.e., BAs 9, 45, 46, and 47) were compared with those of the remaining non-LPFC regions. Similarly, the PFC versus non-PFC comparison was computed using the ratio of the absolute activation-pattern values of PFC and non-PFC.

To arrive at a single comparable activation pattern per condition that could be visualized as a 3D topographical map, the grand-averaged activation patterns of each frequency band were normalized by the mean of the absolute signals of all 15 BAs within each participant. The activation patterns thus provide neurophysiological insight into which input channels (i.e., Brodmann areas) drive the classification. This is an important aspect of activation-pattern analysis: the contribution of PFC to the decoding of EEG signals for BMIs, reflected in the activation-pattern values, is interpretable in neurophysiological terms.

2.4.7. Statistical analyses

All measures were analyzed using two-tailed paired-sample t-tests, and a one-sample t-test was used to examine whether classification accuracies were significantly higher than chance (50%). A false discovery rate (FDR) of q < 0.05 (Benjamini and Hochberg, 1995) was used to correct for multiple comparisons. To statistically assess whether DLPFC activity fluctuated primarily in the positive domain of the activation pattern and VLPFC activity mostly in the negative domain, we computed the 95% confidence interval of each mean over the entire 5 sec of task performance in each frequency band. All analyses were performed using MATLAB (ver. R2018b, MathWorks, USA) or Python (Python Software Foundation, https://www.python.org).
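These tests map directly onto SciPy and statsmodels; in the sketch below the per-participant values are illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_1samp, ttest_rel
from statsmodels.stats.multitest import multipletests

# Placeholder per-participant values (the study had n = 17 after exclusions).
accs  = np.array([82.1, 79.4, 85.0, 88.2, 76.3, 84.6, 81.0])  # accuracy, %
dlpfc = np.array([0.64, 0.71, 0.58, 0.80, 0.66, 0.73, 0.61])
vlpfc = np.array([-0.45, -0.52, -0.38, -0.61, -0.50, -0.47, -0.55])

t1, p_chance = ttest_1samp(accs, 50.0)   # one-sample test vs. 50% chance
t2, p_region = ttest_rel(dlpfc, vlpfc)   # paired DLPFC vs. VLPFC test

# Benjamini-Hochberg FDR correction across the family of tests.
reject, q_vals, _, _ = multipletests([p_chance, p_region],
                                     alpha=0.05, method='fdr_bh')
```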

3. Results

3.1. Behavioral performance in mental-rotation and color-perception tasks

We observed significant differences in behavioral responses across levels of task difficulty (Fig. 4). Reaction times in both tasks increased consistently with task difficulty (mental-rotation task: difficulty 1: 1136.1 ms, difficulty 2: 1520.8 ms, difficulty 3: 1793.2 ms, difficulty 4: 1964.6 ms; color-perception task: difficulty 1: 733.4 ms, difficulty 2: 777.8 ms, difficulty 3: 1032.6 ms, difficulty 4: 1416.7 ms). Similarly, performance accuracy declined with higher task difficulty, though not all pairwise differences were statistically significant (mental-rotation task: difficulty 1: 98.8%, difficulty 2: 98.2%, difficulty 3: 96.0%, difficulty 4: 92.1%; color-perception task: difficulty 1: 98.5%, difficulty 2: 98.5%, difficulty 3: 97.7%, difficulty 4: 89.0%).

Fig. 4. Behavioral performance.

Reaction times and performance accuracy in the mental-rotation task (A) and color-perception task (B). Error bars indicate standard errors of the mean. Note a significant systematic increase in reaction times at higher task difficulties. *q < 0.05 and NS = non-significant, FDR-corrected.

3.2. Decoding task engagement from EEG time courses of 15 Brodmann areas

The beta band provided the highest decoding accuracy, 83.72% (t(16) = 18.711, q < 0.05, FDR-corrected), although accuracy did not differ significantly between bands: all other frequency bands also yielded decoding accuracies well above chance level (delta 74.26%: t(16) = 11.009, q < 0.05, FDR-corrected; theta 80.03%: t(16) = 16.868, q < 0.05, FDR-corrected; alpha 80.50%: t(16) = 17.943, q < 0.05, FDR-corrected; gamma 83.45%: t(16) = 20.756, q < 0.05, FDR-corrected) (Fig. 5). A filter-bank method (Ang et al., 2008) using the combined features of all frequency bands increased decoding accuracy to 86.99% (t(16) = 23.347, q < 0.05, FDR-corrected).

Fig. 5. Decoding task engagement (mental-rotation vs. color-perception) using spectral features of 15 BAs across different frequency bands.

Decoding accuracy is significant in all frequency bands. A filter-bank method combining features from all frequency bands yielded the highest decoding accuracy of 87%. * q < 0.05, FDR-corrected. Error bars indicate standard errors of the mean.

3.3. Contribution of subregions of LPFC to decoding performance

Distinct task-dependent activation patterns were observed between DLPFC and VLPFC (Fig. 6). DLPFC activity was prominently observed in the positive domain of the activation patterns (95% confidence intervals), supporting the mental-rotation task (delta [0.200, 0.517], theta [0.011, 0.368], alpha [0.082, 0.438], beta [0.024, 0.350], gamma [0.153, 0.481]), whereas VLPFC activity was predominantly detected in the negative domain, indicating greater feature strength for the color-perception task (delta [−0.132, 0.098], theta [−0.364, −0.118], alpha [−0.420, −0.185], beta [−0.369, −0.139], gamma [−0.339, −0.089]), throughout the entire 5 sec of task performance.

Fig. 6. Dissociative activation patterns between DLPFC and VLPFC.

Time courses of activation patterns (feature strength) for DLPFC (red curves) and VLPFC (blue curves) in the (A) delta, (B) theta, (C) alpha, (D) beta, and (E) gamma bands. The scales were normalized by the mean value of the 15 BA absolute signals within each individual. Error bands indicate standard errors of the mean; the vertical black error bars represent 95% confidence intervals of mean activities over the entire 5 sec of task performance. DLPFC patterns are primarily in the positive domain, whereas VLPFC patterns are in the negative domain. Black lines below the curves represent the stimulation period; the vertical blue dotted line represents the grand-averaged response onset for the color-perception task, and the vertical red dotted line the grand-averaged response onset for the mental-rotation task.

Activation patterns within each BA, averaged across participants, are shown in Figure 7. Overall, the activation pattern values of the DLPFC were significantly higher than those of the VLPFC in all frequency bands: delta (t(16) = 3.763, q < 0.05, FDR-corrected; DLPFC: 0.642 vs. VLPFC: −0.447), theta (t(16) = 6.654, q < 0.05, FDR-corrected; DLPFC: 0.671 vs. VLPFC: −0.544), alpha (t(16) = 9.410, q < 0.05, FDR-corrected; DLPFC: 0.923 vs. VLPFC: −0.655), beta (t(16) = 2.345, q < 0.05, FDR-corrected; DLPFC: 0.902 vs. VLPFC: −0.064), and gamma (t(16) = 4.545, q < 0.05, FDR-corrected; DLPFC: 0.922 vs. VLPFC: −0.600). The sign of the activation pattern (feature strength) indicates the class label (i.e., task type), with positive values relevant to the mental-rotation task and negative values relevant to the color-perception task. Since the DLPFC consistently exhibited positive feature strength while the VLPFC exhibited negative feature strength (based on the 95% confidence intervals of their mean activities during task performance), DLPFC features reflected the mental-rotation task and VLPFC features were linked to the color-perception task. These observations are consistent with the dorsal (rotation task) and ventral (color task) stream processing model. Furthermore, the averages of the absolute LPFC activation patterns were higher than those of the non-LPFC regions in all frequency bands: delta (t(16) = 2.284, q < 0.05, FDR-corrected; LPFC: 1.205 vs. non-LPFC: 0.926), theta (t(16) = 2.803, q < 0.05, FDR-corrected; LPFC: 1.221 vs. non-LPFC: 0.920), alpha (t(16) = 2.135, q < 0.05, FDR-corrected; LPFC: 1.247 vs. non-LPFC: 0.910), beta (t(16) = 3.917, q < 0.05, FDR-corrected; LPFC: 1.420 vs. non-LPFC: 0.847), and gamma (t(16) = 4.144, q < 0.05, FDR-corrected; LPFC: 1.374 vs. non-LPFC: 0.864). These findings demonstrate that the most dominant features for distinguishing the two tasks were located in the LPFC.

Fig. 7. Grand-averaged activation patterns in the 15 BAs.

Activation patterns across 15 different BAs were averaged across participants in the (A) delta, (B) theta, (C) alpha, (D) beta, and (E) gamma bands. The scales were normalized by the mean value of the 15 BA absolute signals within each individual. An upward direction (positive domain) of the activation pattern (red bars) represents a classification feature relevant to the mental-rotation task, while a downward direction (negative domain, blue bars) indicates relevance to the color-perception task. Error bars indicate standard errors of the mean. The 15 BAs on the x-axis are ordered by their relative anatomical locations, starting from frontal and ending at occipital regions. The mean absolute activation-pattern values of LPFC (BA 9, 46, 45, and 47) were statistically compared with those of the remaining non-LPFC regions. Note that the LPFC region yielded significant contributions to decoding accuracy as compared with the non-LPFC region. * q < 0.05 and NS = non-significant, FDR-corrected two-tailed paired t-tests between LPFC and non-LPFC regions as well as between DLPFC (i.e., BA 9 and 46) and VLPFC (i.e., BA 45 and 47).

The grand-averaged activation patterns of all frequency bands are displayed on the cortex (Fig. 8). It is noteworthy that the DLPFC-centered dominant positive value was observed across all frequency bands. In contrast, the color-task-relevant features of negative activation patterns were evident in the VLPFC and occipital visual processing areas.

Fig. 8. Cortical maps of activation patterns.

Cortical maps of grand-averaged activation patterns of 15 BAs are shown in the (A) delta, (B) theta, (C) alpha, (D) beta, and (E) gamma bands. In each set of cortical maps, the viewpoints are left-lateral, superior, right-lateral, left-medial, right-medial, anterior, inferior, and posterior (asterisks indicate the anterior direction). The scales were normalized by the mean value of 15 BA absolute signals within each individual. The maximum of the color-coding scale is set to the maximum value across all frequency bands. Red regions represent the most dominant features for classifying the rotation task, whereas blue regions indicate the most dominant features for classifying the color-perception task. Within the PFC, DLPFC exhibits red activation, and VLPFC displays blue activation.

The LPFC contribution to decoding accuracy in the activation patterns exceeded the non-LPFC contribution in the theta, beta, and gamma bands (Fig. 9): delta (t(16) = 1.984, n.s.), theta (t(16) = 2.597, q < 0.05, FDR-corrected; LPFC: 56.16% vs. non-LPFC: 43.84%), alpha (t(16) = 1.872, n.s.), beta (t(16) = 3.898, q < 0.05, FDR-corrected; LPFC: 61.26% vs. non-LPFC: 38.74%), and gamma (t(16) = 3.978, q < 0.05, FDR-corrected; LPFC: 60.33% vs. non-LPFC: 39.67%). The averaged LPFC contribution to decoding accuracy was 57.8% across all frequency bands, and 59.2% for the theta, beta, and gamma bands.

Fig. 9. LPFC versus non-LPFC contributions (%) in activation patterns of each frequency band.

*q < 0.05, FDR-corrected two-tailed paired t-tests between LPFC (red) and non-LPFC (blue) regions. Error bars indicate standard errors of the mean.

4. Discussion

EEG activation pattern analysis using source-reconstructed data distinguished the mental-rotation from the color-perception task. The most salient features selected by the rLDA classifier were concentrated in LPFC, a region central to virtually all cognitive tasks, including working memory and decision-making (Fuster, 2008; Royall et al., 2002; Sarazin et al., 1998). The mental-rotation task required continuous spatial information processing and mental manipulation, and the color-perception task required holding information online to make a successful match; both processes are known to engage LPFC. We observed that EEG signals distinguishing the two tasks were maximal in LPFC.

Differential activation patterns between DLPFC and VLPFC demonstrated that LPFC signals can categorize each task. The sign of the activation pattern predicts the direction of classification (positive for the mental-rotation task and negative for the color-perception task) (Haufe et al., 2014). Positive classification features for mental rotation were reliably observed in DLPFC regions, while negative classification features for color perception were detected in VLPFC regions. These findings are in accord with proposals that DLPFC is the end point of the dorsal stream (i.e., the "where" pathway for the mental-rotation task) (Goodale and Milner, 1992; Hesse et al., 2014) and VLPFC is the end point of the ventral stream (i.e., the "what" pathway for the color-perception task) (Kravitz et al., 2013; Mishkin et al., 1983). We observed that spatial information processing and mental manipulation during object rotation engaged more DLPFC resources than the color-perception task, which was weighted toward VLPFC and posterior visual cortices. Notably, the beta and gamma bands (Fig. 7D, E) had positive activation patterns for the mental-rotation task in BA 5 and 40, which belong to the dorsal stream. In contrast, negative activation-pattern values indicating relevance to the color-perception task were observed particularly along the occipito-temporal visual cortices, such as BA 17, 18, 19, and 20, encompassing the ventral stream (Fig. 7). These observations provide a neurophysiological signature for the dissociation of the dorsal "where" route for the mental-rotation task and the ventral "what" route for the color-perception task.

The selective contribution of LPFC to decoding accuracy was most pronounced in the theta, beta, and gamma bands (Fig. 9). We identified frequency-specific effects contributing to LPFC decoding accuracy, but we did not observe differential decoder performance across frequency bands. All five canonical bands supported task discrimination (Fig. 5), and their activation patterns shared numerous spatial similarities (Fig. 8). Thus, it is reasonable to expect that a corresponding analysis with a broadband signal would achieve discrimination results similar to those shown in Figure 5. PFC-based decoding performance could be improved using different combinations of analytic approaches (e.g., wide-band filters), and optimal methods for decoding should be explored in future work.

The entire PFC (including not only LPFC but also BA 10 and 11) provided a further enhanced contribution to decoding accuracy compared with the non-PFC region in the alpha, beta, and gamma bands (Fig. S1). The role of neural oscillations in PFC-mediated mental processing has been highlighted in recent studies in humans (Johnson et al., 2018) and monkeys (Miller et al., 2018). PFC-dependent top-down processing integrates and controls bottom-up visual information from occipital cortices (Barcelo et al., 2000). Reciprocal connections between the PFC and the occipital cortices (Hwang and Luna, 2013; Wakana et al., 2004) through the fronto-occipital fasciculi provide a neuroanatomical substrate for this control. The extensive and reciprocal connections between the PFC and other brain regions provide further neuroanatomical substrates for PFC control of diverse cognitive processes (Barbas, 2000). Functional interactions between the PFC and posterior cortical regions have been consistently reported during executive control (Gazzaley et al., 2004; McIntosh et al., 1996). For instance, electrocorticography studies report phase coupling between fronto-parietal cortices during spatial attention control (Szczepanski et al., 2014) and over the posterior cortex during visual tasks (Voytek et al., 2010).

Taken together, our observations provide evidence that PFC activity can be reliably decoded to identify intended goals in the mental-rotation/color-perception paradigm. Both PFC and PPC have been linked to goal-directed movement planning (Brandi et al., 2014; Fincham et al., 2002; Gremel and Costa, 2013; Lindner et al., 2010; Niedermeyer, 1998; Pereira et al., 2017; Petzschner and Kruger, 2012; Rosenberg-Katz et al., 2012). PFC signals provided enhanced decoding performance in the present tasks compared with sensorimotor and parietal regions. For instance, LPFC features were more robust than sensorimotor features in the mental-rotation paradigm (Figs. 7 and 8). DLPFC signals exhibited dominant features of mental-rotation processing over these motor-related cortical areas across all frequency bands. Presumably, the mental-rotation task did not require sensorimotor activity for efficient performance: pure mental manipulation of a visualized object need not induce efferent commands to peripheral muscles, which are engaged in motor-imagery BMI paradigms that typically require imagining movements of the limbs. The current findings reveal that higher-order PFC-based cognitive brain signals, which encode goal-directed intentions rather than details of how to perform reaching, enlarge the repertoire of BMI control signals.

Activation patterns for the color-perception task were enhanced in both VLPFC and occipital areas, likely reflecting the dominant visual processing of color features in occipital cortices (Hadjikhani et al., 1998; Tootell et al., 2004). These results indicate that neural features depend on the characteristics of the assigned cognitive task. This suggests that, depending on task type, the brain regions inducing the most dominant control features can be selectively engaged, overcoming limitations of classical BMI task paradigms.

PFC provides a rich signal space for cognitive BMI applications, yet despite its prominent role in goal-directed cognition, PFC has largely been ignored in BMI research (Min et al., 2017). Advances in BMI technology have been applied to scenarios ranging from therapeutic approaches to consumer applications (Dornhege et al., 2007; Min et al., 2010; Wolpaw and Wolpaw, 2012). For example, BMI has been used to assist patients with motor deficits, such as amyotrophic lateral sclerosis (McCane et al., 2014), as well as by healthy individuals for lifestyle enhancement including biofeedback and other relaxation metrics (Blankertz et al., 2010; Millán and Carmena, 2010; Millán et al., 2010; Muller et al., 2008; Tan and Nijholt, 2010). The majority of non-invasive BMIs utilize primary sensorimotor cortex activity, including mu and beta rhythms focused on motor control (Muller-Putz et al., 2005; Pfurtscheller and Neuper, 1997), posterior parietal scalp potentials including the P300 (Kleih and Kubler, 2013; Sellers and Donchin, 2006), or SSVEP paradigms (Muller-Putz and Pfurtscheller, 2008). Recent studies report that direct invasive recordings from the human posterior parietal cortex can decode higher-level aspects of movement and effectively control a robotic arm in a tetraplegic patient with implanted electrodes (Aflalo et al., 2015; Hauschild et al., 2012; Lindner et al., 2010), suggesting that association cortex provides reliable control signals. However, it is noteworthy that LPFC contributed more to decoding performance than the posterior parietal cortex (BA 5 and 40) in the present study. This suggests that PFC provides a novel signal source for goal-directed intention applicable to BMI (Carlson and Millan, 2013; Iturrate et al., 2015). The current findings provide evidence for the use of prefrontal brain activity as non-invasive cognitive BMI control signals that are goal-directed and task-specific, spanning attention, memory implementation, and domain-general decision-making (Fuster, 1997). This technology can be applied to various higher-order PFC-dependent cognitive functions, such as planning and performance monitoring, independent of whether actions are executed through button pressing, speech, or eye movement.

PFC-based decoding approaches have recently received growing attention in both EEG and MEG studies. For example, using a deep-learning technique, more than 85% accuracy was achieved when decoding imagined vowels from frontal EEG signals (Parhi and Tewfik, 2021). In an MEG study, classifiers were trained to distinguish the MEG field patterns evoked by two probabilistic outcomes (reward, loss) and then applied to decode such patterns during deliberation (Castegnetti et al., 2020). Decodable outcome representations, captured predominantly in the PFC during probabilistic decision-making, predicted subsequent action with a classification accuracy of 70%. Taken together, these findings support decoding cognitive signatures of goal-directed tasks for the development of reliable and robust PFC-mediated EEG/MEG-based BMIs.


Acknowledgements

We are thankful to Dr. Sung Chan Jun, Dr. Marco Congedo, Dr. Donghyeon Kim, Dr. Kwangyeol Baek, Dr. Bumhee Park, Dr. Sung Young Park, Dr. David W. Gow, and Dr. Tom Sgouros for their valuable comments and support of this study. This work was supported by the Convergent Technology R&D Program for Human Augmentation (grant number 2020M3C1B8081319 to B.-K.M.), the Information Technology Research Center (ITRC) Support Program (grant number IITP-2020–2016-0–00464 to B.-K.M.), the Institute for Information & Communications Technology Promotion (IITP) Grant (Department of Artificial Intelligence, Korea University; grant number 2019–0-00079 to H.-I.S.), and the Basic Science Research Program (grant number 2019R1I1A1A01061545 to M.-H.A.), which are funded by the Korean government (MSICT) through the National Research Foundation of Korea; and the National Institute of Health (grant number R37NS21135 to R.T.K.). The authors declare no competing interests.

Footnotes

Data and code availability

Data are not shared in a public repository due to the privacy rights of human subjects. However, the data and analysis tools used in the current study are available from the corresponding author upon reasonable request.

Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.neuroimage.2021.118165.

References

1. Aflalo T, Kellis S, Klaes C, Lee B, Shi Y, Pejsa K, Shanfield K, Hayes-Jackson S, Aisen M, Heck C, Liu C, Andersen RA, 2015. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science 348, 906–910.
2. Ang KK, Chin ZY, Zhang H, Guan C, 2008. Filter bank common spatial pattern (FBCSP) in brain-computer interface. In: 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pp. 2390–2397. IEEE.
3. Barbas H, 2000. Connections underlying the synthesis of cognition, memory, and emotion in primate prefrontal cortices. Brain Res Bull 52, 319–330.
4. Barcelo F, Suwazono S, Knight RT, 2000. Prefrontal modulation of visual processing in humans. Nature Neuroscience 3, 399–403.
5. Benjamini Y, Hochberg Y, 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological) 57, 289–300.
6. Bhandari A, Gagne C, Badre D, 2018. Just above chance: is it harder to decode information from prefrontal cortex hemodynamic activity patterns? Journal of Cognitive Neuroscience 30, 1473–1498.
7. Blankertz B, Lemm S, Treder M, Haufe S, Müller K-R, 2011. Single-trial analysis and classification of ERP components—a tutorial. Neuroimage 56, 814–825.
8. Blankertz B, Tangermann M, Vidaurre C, Fazli S, Sannelli C, Haufe S, Maeder C, Ramsey L, Sturm I, Curio G, Muller KR, 2010. The Berlin Brain-Computer Interface: non-medical uses of BCI technology. Front Neurosci 4, 198.
9. Brandi ML, Wohlschlager A, Sorg C, Hermsdorfer J, 2014. The neural correlates of planning and executing actual tool use. J Neurosci 34, 13183–13194.
10. Brouwer GJ, Heeger DJ, 2009. Decoding and reconstructing color from responses in human visual cortex. Journal of Neuroscience 29, 13992–14003.
11. Buzsaki G, Draguhn A, 2004. Neuronal oscillations in cortical networks. Science 304, 1926–1929.
12. Carlson T, Millan J.d.R., 2013. Brain-controlled wheelchairs: a robotic architecture. IEEE Robotics and Automation Magazine 20, 65–73.
13. Castegnetti G, Tzovara A, Khemka S, Melinščak F, Barnes GR, Dolan RJ, Bach DR, 2020. Representation of probabilistic outcomes during risky decision-making. Nature Communications 11, 1–11.
14. Collins DL, Neelin P, Peters TM, Evans AC, 1994. Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. Journal of Computer Assisted Tomography 18, 192–205.
15. Dähne S, Bießmann F, Samek W, Haufe S, Goltz D, Gundlach C, Villringer A, Fazli S, Müller K-R, 2015. Multivariate machine learning methods for fusing multimodal functional neuroimaging data. Proceedings of the IEEE 103, 1507–1530.
16. Dornhege G, Millán J.d.R., Hinterberger T, McFarland D, Müller K, 2007. Towards Brain-Computer Interfacing. The MIT Press, Cambridge, MA.
17. Duda RO, Hart PE, Stork DG, 2001. Pattern Classification, 2nd ed. Wiley & Sons.
18. Duncan J, 2001. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience 2, 820–829.
19. Evans AC, Collins DL, Mills S, Brown E, Kelly R, Peters TM, 1993. 3D statistical neuroanatomical models from 305 MRI volumes. IEEE, pp. 1813–1817.
20. Fincham JM, Carter CS, van Veen V, Stenger VA, Anderson JR, 2002. Neural mechanisms of planning: a computational analysis using event-related fMRI. Proc Natl Acad Sci U S A 99, 3346–3351.
21. Friedman JH, 1989. Regularized discriminant analysis. Journal of the American Statistical Association 84, 165–175.
22. Fuster JM, 1997. The Prefrontal Cortex: Anatomy, Physiology, and Neuropsychology of the Frontal Lobe, 3rd ed. Lippincott-Raven, Philadelphia.
23. Fuster JM, 2008. The Prefrontal Cortex, 4th ed. Academic Press.
24. Fuster JM, 2013. Cognitive Functions of the Prefrontal Cortex, 2nd ed. Oxford University Press, New York.
25. Gazzaley A, Rissman J, D'Esposito M, 2004. Functional connectivity during working memory maintenance. Cogn Affect Behav Neurosci 4, 580–599.
26. Goodale MA, Milner AD, 1992. Separate visual pathways for perception and action. Trends in Neurosciences 15, 20–25.
27. Gremel CM, Costa RM, 2013. Premotor cortex is critical for goal-directed actions. Front Comput Neurosci 7, 110.
28. Hadjikhani N, Liu AK, Dale AM, Cavanagh P, Tootell RB, 1998. Retinotopy and color sensitivity in human visual cortical area V8. Nat Neurosci 1, 235–241.
29. Harrison SA, Tong F, 2009. Decoding reveals the contents of visual working memory in early visual areas. Nature 458, 632–635.
30. Haufe S, Meinecke F, Görgen K, Dähne S, Haynes J-D, Blankertz B, Bießmann F, 2014. On the interpretation of weight vectors of linear models in multivariate neuroimaging. Neuroimage 87, 96–110.
31. Hauschild M, Mulliken GH, Fineman I, Loeb GE, Andersen RA, 2012. Cognitive signals for brain-machine interfaces in posterior parietal cortex include continuous 3D trajectory commands. Proc Natl Acad Sci U S A 109, 17075–17080.
32. Hesse C, Ball K, Schenk T, 2014. Pointing in visual periphery: is DF's dorsal stream intact? PLoS ONE 9, e91420.
33. Horikawa T, Tamaki M, Miyawaki Y, Kamitani Y, 2013. Neural decoding of visual imagery during sleep. Science 340, 639–642.
34. Hwang K, Luna B, 2013. The development of brain connectivity supporting prefrontal cortical functions. In: Stuss DT, Knight RT (Eds.), Principles of Frontal Lobe Function. Oxford University Press, New York, pp. 164–184.
35. Iturrate I, Chavarriaga R, Montesano L, Minguez J, Millan JD, 2015. Teaching brain-machine interfaces as an alternative paradigm to neuroprosthetics control. Scientific Reports 5, 13893.
36. Jatoi MA, Kamel N, Malik AS, Faye I, 2014. EEG based brain source localization comparison of sLORETA and eLORETA. Australasian Physical & Engineering Sciences in Medicine 37, 713–721.
37. Johnson EL, Adams JN, Solbakk AK, Endestad T, Larsson PG, Ivanovic J, Meling TR, Lin JJ, Knight RT, 2018. Dynamic frontotemporal systems process space and time in working memory. PLoS Biology 16.
38. Kamitani Y, Tong F, 2005. Decoding the visual and subjective contents of the human brain. Nature Neuroscience 8, 679–685.
39. Kleih SC, Kubler A, 2013. Empathy, motivation, and P300-BCI performance. Front Hum Neurosci 7.
40. Kravitz DJ, Saleem KS, Baker CI, Ungerleider LG, Mishkin M, 2013. The ventral visual pathway: an expanded neural framework for the processing of object quality. Trends in Cognitive Sciences 17, 26–49.
41. Ledoit O, Wolf M, 2004. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis 88, 365–411.
42. Lemm S, Blankertz B, Dickhaus T, Müller K-R, 2011. Introduction to machine learning for brain imaging. Neuroimage 56, 387–399.
43. Lindner A, Iyer A, Kagan I, Andersen RA, 2010. Human posterior parietal cortex plans where to reach and what to avoid. J Neurosci 30, 11715–11725.
44. McCane LM, Sellers EW, McFarland DJ, Mak JN, Carmack CS, Zeitlin D, Wolpaw JR, Vaughan TM, 2014. Brain-computer interface (BCI) evaluation in people with amyotrophic lateral sclerosis. Amyotroph Lateral Scler Frontotemporal Degener.
45. McIntosh AR, Grady CL, Haxby JV, Ungerleider LG, Horwitz B, 1996. Changes in limbic and prefrontal functional interactions in a working memory task for faces. Cereb Cortex 6, 571–584.
46. Millán J.d.R., Carmena JM, 2010. Invasive or noninvasive: understanding brain-machine interface technology. IEEE Eng Med Biol Mag 29, 16–22.
47. Millán J, Rupp R, Muller-Putz GR, Murray-Smith R, Giugliemma C, Tangermann M, Vidaurre C, Cincotti F, Kubler A, Leeb R, Neuper C, Muller KR, Mattia D, 2010. Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges. Front Neurosci 4.
48. Miller EK, Cohen JD, 2001. An integrative theory of prefrontal cortex function. Annual Review of Neuroscience 24, 167–202.
49. Miller EK, Lundqvist M, Bastos AM, 2018. Working Memory 2.0. Neuron 100, 463–475.
50. Min BK, Chavarriaga R, Millan JDR, 2017. Harnessing prefrontal cognitive signals for brain-machine interfaces. Trends Biotechnol 35, 585–597.
51. Min BK, Marzelli MJ, Yoo SS, 2010. Neuroimaging-based approaches in the brain-computer interface. Trends Biotechnol 28, 552–560.
52. Mishkin M, Ungerleider LG, Macko KA, 1983. Object vision and spatial vision: two cortical pathways. Trends in Neurosciences 6, 414–417.
53. Muller-Putz GR, Pfurtscheller G, 2008. Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans Biomed Eng 55, 361–364.
54. Muller-Putz GR, Scherer R, Brauneis C, Pfurtscheller G, 2005. Steady-state visual evoked potential (SSVEP)-based communication: impact of harmonic frequency components. J Neural Eng 2, 123–130.
55. Muller KR, Tangermann M, Dornhege G, Krauledat M, Curio G, Blankertz B, 2008. Machine learning for real-time single-trial EEG-analysis: from brain-computer interfacing to mental state monitoring. J Neurosci Methods 167, 82–90.
56. Niedermeyer E, 1998. Frontal lobe functions and dysfunctions. Clin Electroencephalogr 29, 79–90.
57. Norman KA, Polyn SM, Detre GJ, Haxby JV, 2006. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences 10, 424–430.
58. Oakes TR, Pizzagalli DA, Hendrick AM, Horras KA, Larson CL, Abercrombie HC, Schaefer SM, Koger JV, Davidson RJ, 2004. Functional coupling of simultaneous electrical and metabolic activity in the human brain. Human Brain Mapping 21, 257–270.
59. Parhi M, Tewfik AH, 2021. Classifying imaginary vowels from frontal lobe EEG via deep learning. In: 2020 28th European Signal Processing Conference (EUSIPCO). IEEE, pp. 1195–1199.
60. Pascual-Marqui RD, 1999. Review of methods for solving the EEG inverse problem. International Journal of Bioelectromagnetism 1, 75–86.
61. Pascual-Marqui RD, 2007. Discrete, 3D distributed, linear imaging methods of electric neuronal activity. Part 1: exact, zero error localization. arXiv preprint arXiv:0710.3341.
62. Pascual-Marqui RD, Esslen M, Kochi K, Lehmann D, 2002. Functional imaging with low-resolution brain electromagnetic tomography (LORETA): a review. Methods and Findings in Experimental and Clinical Pharmacology 24, 91–95.
63. Pereira J, Ofner P, Schwarz A, Sburlea AI, Muller-Putz GR, 2017. EEG neural correlates of goal-directed movement intention. Neuroimage 149, 129–140.
64. Petzschner FH, Kruger M, 2012. How to reach: movement planning in the posterior parietal cortex. Journal of Neuroscience 32, 4703–4704.
65. Pfurtscheller G, Neuper C, 1997. Motor imagery activates primary sensorimotor area in humans. Neurosci Lett 239, 65–68.
66. Rosenberg-Katz K, Jamshy S, Singer N, Podlipsky I, Kipervasser S, Andelman F, Neufeld MY, Intrator N, Fried I, Hendler T, 2012. Enhanced functional synchronization of medial and lateral PFC underlies internally-guided action planning. Front Hum Neurosci 6, 79.
67. Rougier NP, Noelle DC, Braver TS, Cohen JD, O'Reilly RC, 2005. Prefrontal cortex and flexible cognitive control: rules without symbols. Proceedings of the National Academy of Sciences 102, 7338–7343.
68. Royall DR, Lauterbach EC, Cummings JL, Reeve A, Rummans TA, Kaufer DI, LaFrance WC Jr., Coffey CE, 2002. Executive control function: a review of its promise and challenges for clinical research. A report from the Committee on Research of the American Neuropsychiatric Association. J Neuropsychiatry Clin Neurosci 14, 377–405.
69. Sarazin M, Pillon B, Giannakopoulos P, Rancurel G, Samson Y, Dubois B, 1998. Clinicometabolic dissociation of cognitive functions and social behavior in frontal lobe lesions. Neurology 51, 142–148.
70. Sellers EW, Donchin E, 2006. A P300-based brain-computer interface: initial tests by ALS patients. Clin Neurophysiol 117, 538–548.
71. Shepard RN, Metzler J, 1971. Mental rotation of three-dimensional objects. Science 171, 701–703.
72. Szczepanski SM, Crone NE, Kuperman RA, Auguste KI, Parvizi J, Knight RT, 2014. Dynamic changes in phase-amplitude coupling facilitate spatial attention control in fronto-parietal cortex. PLoS Biology 12, e1001936.
73. Talairach J, Tournoux P, 1988. Co-planar Stereotaxic Atlas of the Human Brain. 3-Dimensional Proportional System: An Approach to Cerebral Imaging. Thieme Medical Publishers, Stuttgart, New York.
74. Tan DS, Nijholt A, 2010. Brain-Computer Interfaces: Applying Our Minds to Human-Computer Interaction. Springer, London.
75. Tomioka R, Müller K-R, 2010. A regularized discriminative framework for EEG analysis with application to brain-computer interface. Neuroimage 49, 415–432.
76. Tootell RB, Nelissen K, Vanduffel W, Orban GA, 2004. Search for color 'center(s)' in macaque visual cortex. Cereb Cortex 14, 353–363.
77. van der Loo E, Congedo M, Vanneste S, Van De Heyning P, De Ridder D, 2011. Insular lateralization in tinnitus distress. Autonomic Neuroscience: Basic & Clinical 165, 191–194.
78. Voytek B, Canolty RT, Shestyuk A, Crone NE, Parvizi J, Knight RT, 2010. Shifts in gamma phase-amplitude coupling frequency from theta to alpha over posterior cortex during visual tasks. Front Hum Neurosci 4, 191.
79. Wakana S, Jiang H, Nagae-Poetscher LM, van Zijl PC, Mori S, 2004. Fiber tract-based atlas of human white matter anatomy. Radiology 230, 77–87.
80. Wolpaw J, Wolpaw EW, 2012. Brain-Computer Interfaces: Principles and Practice. Oxford University Press, New York, NY, USA.
