Health Information Science and Systems. 2021 Mar 13;9(1):13. doi: 10.1007/s13755-021-00142-y

Crosstalk disrupts the production of motor imagery brain signals in brain–computer interfaces

Phoebe S-H Neo 1,2, Terence Mayne 1,2, Xiping Fu 1, Zhiyi Huang 1, Elizabeth A Franz 2,3
PMCID: PMC7956071  PMID: 33786162

Abstract

Brain–computer interfaces (BCIs) target specific brain activity for neuropsychological rehabilitation, and also allow patients with motor disabilities to control mobility and communication devices. Motor imagery of single-handed actions is used in BCIs, but many users cannot control BCIs effectively, limiting applications in health systems. Crosstalk consists of unintended brain activations that interfere with bimanual actions and could also occur during motor imagery. To test whether crosstalk impaired BCI user performance, we recorded EEG in 46 participants while they imagined movements in four experimental conditions using motor imagery: left hand (L), right hand (R), tongue (T) and feet (F). Pairwise classification accuracies of the tasks (LR, LF, LT, RF, RT, FT) were compared using common spatio-spectral filters and linear discriminant analysis. We hypothesized that, due to crosstalk, LR classification accuracy would be lower than that of every other combination that included hand imagery. As predicted, classification accuracy for LR (58%) was reliably the lowest. Interestingly, many participants who showed poor LR classification nonetheless demonstrated at least one good TR, TL, FR or FL classification; and good LR classification was detected in 16% of the participants. For the first time, we showed that crosstalk occurs in motor imagery and negatively affects BCI performance. Such effects are effector-sensitive regardless of the BCI methods used, and are likely not apparent to the user or the BCI developer. This means that task choice is crucial when designing BCIs. Critically, the effects of crosstalk appear mitigatable. We conclude that understanding crosstalk mitigation is important for improving BCI applicability.

Supplementary Information

The online version of this article contains supplementary material available at 10.1007/s13755-021-00142-y.

Keywords: Rehabilitation, Motor imagery, BCI, EEG, Machine learning, Crosstalk

Introduction

Brain–computer interfaces (BCIs) can replace physical controls by using recognition of brain activity to control electrical devices, offering important implications for therapies such as stroke rehabilitation [1–4]. Many BCIs use sensorimotor electroencephalography (EEG) activity [5–8] because voluntary imagination of movements [9, 10] shows classifiable EEG features [11]. In particular, imagination of left- and right-hand movements produces distinct EEG activity over the respective contralateral sensorimotor areas [12]. Training patients to control these signals enables those with severe motor disabilities to use their brain signals to supplement impaired muscle control, operate a limb orthosis, or use a word processor [13, 14].

Viable BCIs depend on reliable and accurate classification. Some BCIs can already classify two different motor imagery tasks to 100% accuracy in an individual [15–17]. However, such performance appears attainable only in a small number of people, in a highly controlled environment, and after considerable practice [18, 19]. It has been estimated that about one third of people can easily use their minds to control devices; another third can do so with training; and the remaining third cannot seem to master control of BCIs even with training [18, 20].

Current research focuses on more effective computation and analysis of the brain signals [21–23], but we also need more effective user training [18, 20, 23]. Neuropsychological factors that might mitigate poor classification accuracy require directed study. If users cannot consistently produce distinguishable signals, no signal-processing algorithm can classify them. However, we must also appreciate that some factors are not under the direct control of users, and any consistency in those might allow for at least some classification success if appropriate experimental procedures can pinpoint them. Of those less easy to control, we know that motor complexity [24], feedback [25], realistic settings [26, 27], learning mechanisms [28] and motivational states [21] affect user performance.

Crosstalk, which we define as 'unintended or unanticipated brain activation that is not likely to be directly controlled by a user', might produce consistent patterns that directly affect classification accuracy; this possibility has not been investigated. The firm research base on crosstalk makes it a strong candidate as a mitigating factor on classification accuracy. Below we first define, and give examples of, the well-known form of crosstalk explored in bimanual tasks. We then present the primary hypothesis stemming from that work and, finally, our experimental methods and results.

In a paradigmatic bimanual task, continuous drawing of an intended line with one hand and a circle with the other leads to more elliptical line trajectories and more linear circles, effects which occur even when participants think they are drawing accurately [29–31]. Similar effects of spatial coupling generalise to other bimanual tasks [28]. The disrupted trajectories are due to unintended spatial coupling of the planning processes associated with the output of each hand, often referred to as crosstalk. Although separate corticospinal pathways transmit efferent signals to the motor systems of the two hands, networks involved in movement planning, selection, and even prediction of actions are connected between the two sides of the brain via the corpus callosum [29, 31–37]. It follows that unintended disruptions, or crosstalk, occur when signals from the involved circuits in one hemisphere overflow into the other.

Crosstalk is more likely to occur between neural circuits that are functionally linked [38], such as those of our two hands, because the hands interact functionally in the large majority of manual tasks performed in daily life [28]. Consequently, single-handed movements can also activate the ipsilateral sensorimotor areas [39–42], probably as a result of coupling of the separate brain areas during bimanual actions. This leads to highly similar neural representations of left- and right-hand movements even during single-handed movements. Imagined and actual movements rely on similar processes [43, 44], including cognitive constraints [45]. Thus, if crosstalk also occurs during motor imagery, it should lead to more similar neural representations of the left and right hands even when tasks are performed (or imagined) singly for the separate hands. A clear prediction emerges from that logic: BCI classification accuracy should be low when the machine learning classifier attempts to distinguish between the activations associated with that particular pair of tasks. Thus, the particular pair of tasks being classified should be of key interest in studies aimed at examining the efficacy of BCIs.

Four motor imagery tasks have been the focus of BCI research: those involving the left hand, right hand, tongue, or feet. Out of the pairwise combinations of those four, the primary prediction is that crosstalk should most strongly affect, and hence lower the classification accuracy of, the left- versus right-hand pair, more so than any other task pair that includes a hand. The rank order of classification accuracy for the remaining task combinations is not as clear; the tongue and feet, in particular, are not well studied compared to the hands. However, the manner in which the other pairs of tasks are ordered with respect to classification accuracy might further inform us about their tendency for similar representations in the brain during motor imagery. In addition to testing that primary hypothesis and its key prediction, we were also able to examine additional factors that could influence BCI performance, including real-time classification across multiple sessions. For efficiency of space and clarity of the main study, the details and findings associated with these are reported in the supplementary materials.

Methods

This study included three experiments. Experiments 1 and 2 were intended as small-sample preliminary tests of the experimental protocols for Experiment 3. Methods were successively adjusted across the three experiments to improve the protocols, with careful attention paid to the instructions given to participants and to possible mitigating factors that can and cannot be directly controlled. The main modifications across the series of experiments did not affect the results, however, so crosstalk (which was partially controllable through our pairing of tasks across the experiments) remained the primary focus. Minor changes in protocol across the series of experiments are reported in the supplementary materials. Below, we report the core methods common to all experiments.

Participants

Forty-six participants (36 females, 10 males; the female majority reflects availability in recruitment) were recruited from the Department of Psychology at the University of Otago. Ages ranged from 17 to 29 years (mean = 21; SD = 3). Based on self-reports, 44 participants were right-handed, one was left-handed and one was ambidextrous. Procedures were approved by the University of Otago Human Ethics Committee. Depending on the experiment, participants received either extra credit in their coursework or a $20 petrol voucher in partial compensation for their time.

Data acquisition

EEG was recorded from 32 channels (Waveguard cap, ANT B.V., Enschede, The Netherlands, www.ant-neuro.com) with an Advanced Neuro Technology (ANT) amplifier. Electrode impedances were tested with ANT software (asa version 4.7) and kept below 5 kΩ. Recording electrodes were FP1, FPZ, FP2, F7, F8, F3, FZ, F4, FC5, FC1, FC2, FC6, T7, T8, C3, CZ, C4, CP5, CP1, CP2, CP6, P7, P3, PZ, P4, P8, POZ, O1, OZ, O2, M1 and M2. EEG was sampled at 256 Hz, referenced to AFz, recorded with the OpenVibe Acquisition Server [46] and the Lab Streaming Layer (LSL) 1.10.2 [47], and stored with Lab Recorder 1.10 [47] in XDF format. Markers signifying events in the motor imagery tasks were embedded in the XDF file during recording.
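The paper does not give its marker-embedding code, but the LSL tooling it names supports a simple marker stream. The following is a minimal, illustrative MATLAB sketch assuming the standard liblsl-Matlab bindings; the stream name, source id and marker labels are hypothetical, not taken from the study.

```matlab
% Hypothetical sketch: pushing task-event markers over LSL so that they
% are stored alongside the EEG in the XDF file (liblsl-Matlab bindings).
lib    = lsl_loadlib();                                    % load the LSL library
info   = lsl_streaminfo(lib, 'TaskMarkers', 'Markers', ...
                        1, 0, 'cf_string', 'imagery01');   % 1 string channel, irregular rate
outlet = lsl_outlet(info);

outlet.push_sample({'arrow_left'});   % e.g., sent at arrow onset
outlet.push_sample({'imagery_on'});   % e.g., sent at the second fixation cross
```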

Imagery task stimuli

Figure 1 shows the sequence and the associated time intervals in which the stimuli were presented. The task began with a blank screen (2000 ms), followed by the simultaneous onset (1000 ms) of a cross and a beep. An arrow in one of four possible directions (left, right, up or down) was then presented (1500 ms). This was followed by a cross (4000 ms), after which feedback questions were presented. A blank screen was then presented for a variable interval (500 to 2500 ms) before the next trial began. The feedback questions were: “In the last trial, did you (1) feel the movement; (2) visualize any images in first person; (3) visualize any images in third person; (4) imagine the wrong action; (5) move?”. Eight practice trials and six blocks of 40 imagery trials were presented. In each block, the sequence of left (L), right (R), up (U) and down (D) arrows was randomised every four trials, e.g., LRUD DULR LRUD LRDU, etc.; a sketch of this scheme follows.
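As a concrete illustration of that randomisation scheme (a sketch under our reading of the text, not the authors' stimulus code), each consecutive group of four trials contains every arrow direction exactly once, in random order:

```matlab
% Illustrative sketch of the trial sequence: within every group of four
% trials, the four arrow directions each appear once in random order.
directions = {'L', 'R', 'U', 'D'};      % left, right, up (tongue), down (feet)
nBlocks = 6;  trialsPerBlock = 40;      % i.e., 10 groups of four per block
sequence = cell(nBlocks, trialsPerBlock);
for b = 1:nBlocks
    for g = 1:trialsPerBlock/4
        cols = (g-1)*4 + (1:4);
        sequence(b, cols) = directions(randperm(4));  % shuffle each group of four
    end
end
```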

Fig. 1

Schematic sequence of a trial in the motor imagery task used in all experiments. Durations are indicated in milliseconds under each screen. A trial began with a blank screen, followed by the concurrent presentation of a beep and a fixation cross, and then the onset of an arrow in one of four possible directions. The direction of the arrow indicated the body part (left arrow: left hand; right arrow: right hand; down arrow: feet; up arrow: tongue) to imagine moving. The subsequent cross signalled participants to begin imagining tapping movements with the indicated body part. A trial ended with a blank screen, with a variable inter-trial interval ranging from 500 to 2500 ms

Imagery task procedures

Participants were instructed to imagine tapping their left hand, right hand, tongue or feet according to the direction of the arrow (respectively: left, right, up and down) on the screen. The experimenter demonstrated the hand actions to the participants (tapping all the fingers on the desk) and verbally explained that they should move their tongue up and down, and tap their feet on the floor. Participants were told to start imagining the action at the onset of the second cross and to continue for the full 4000 ms. It was emphasized that they should 'feel' rather than visualize the actions [48]. After the practice trials, the experimenter asked the participants if they had any questions. The experimenter sat next to the participant throughout the session. Participants gave a verbal response to the feedback questions presented at the end of each trial. The experimenter marked a trial for exclusion with a key press if the participant reported that they (1) could not feel the motor imagery; (2) pictured instead of felt the action; (3) imagined the wrong action; or (4) actually moved. Between blocks of trials, participants were offered 5-min rest breaks.

EEG pre-processing

The EEG was re-referenced to the average of the left and right mastoids (M1 and M2 electrodes) and resampled to 128 Hz. A finite impulse response (FIR) bandpass filter for 7–30 Hz was then applied (the transition windows were 6–8 Hz and 28–32 Hz for the lower and upper edges, respectively). EEG artefacts were removed using the Artefact Subspace Reconstruction (ASR) algorithm [49]. The algorithm first assembled clean reference sections of data by removing sections containing spectral power values falling outside 3.5 to 5 standard deviations of the power distribution across the channels. All the data were then compared with the cleaned reference data, and if a subspace of the data fell outside 15 standard deviations of the reference data, it was reconstructed using a mixing matrix derived from the reference data. Four-second epochs of EEG from the imagery period were then extracted from the continuous EEG for each action. Erroneous epochs, identified from participants' feedback (such as self-report of actually moving), were excluded. EEG features were extracted from each epoch using spatial filters (see 'Spatial filtering' below) and then classified pairwise using discriminant analysis (see 'Linear discriminant analysis' below).
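To make the numeric steps concrete, here is a minimal hand-rolled MATLAB sketch of the re-referencing, resampling, band-pass filtering and epoching stages (the study itself used EEGLAB and plugin functions, listed under 'Implementation of procedures' below; the variable names and the marker sample onset are hypothetical):

```matlab
% Illustrative pre-processing sketch; eeg is [channels x samples] at 256 Hz,
% with rows 31 and 32 assumed to hold the M1/M2 mastoid channels.
fsIn = 256;  fsOut = 128;
eeg = eeg - repmat(mean(eeg(31:32, :), 1), size(eeg, 1), 1);  % mastoid-average reference
eeg = resample(eeg', fsOut, fsIn)';                           % anti-aliased downsampling

b   = fir1(256, [7 30] / (fsOut/2), 'bandpass');   % linear-phase FIR, 7-30 Hz passband
eeg = filtfilt(b, 1, eeg')';                       % zero-phase filtering

% Extract one 4-s imagery epoch starting at the (hypothetical) marker sample.
epoch = eeg(:, onset : onset + 4*fsOut - 1);
```

The ASR step is omitted here; the study used the Clean Raw Data EEGLAB plugin for that stage.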

Spatial filtering

Common spatial filters, more commonly referred to as common spatial patterns (CSP) [50], are a supervised data-reduction technique (common spatial patterns are, strictly, the visualization plots of the filters). The technique uses invariant filters to select combinations of recording channels that maximise the variance of EEG spectral power in one task while minimising it in the other [50]. Many of the successful algorithms in BCI competition III used CSP-type techniques [51]. We used a variant of CSP that adds spectral filters selecting a frequency combination for each channel combination; details on the common spatio-spectral pattern (CSSP) are described in [50]. CSSP was chosen because it had previously been employed successfully in real-time classification and was relatively easy to implement [52, 53], which allowed for consistency in methods across our experiments (see the report on the real-time experiment in the supplementary section). For each participant and every pairwise combination of the imagery tasks, 30 spatial filters were produced (30 combinations of recording channels) and ranked according to their eigenvalues. The eigenvalues indicated the size of the variance maximized/minimized in one task relative to the other in the pair. Only the top-ranked filter for each action in a combination was applied, i.e. two filters were selected: one filter maximized the variance of, e.g., activity from left-hand imagery (keeping that of the right hand relatively constant), and vice versa for the other filter.
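For readers unfamiliar with CSP, the following plain-MATLAB sketch shows the core computation for the simpler CSP case (not the BCILAB CSSP implementation actually used, which additionally learns per-channel spectral filters); X1 and X2 are assumed variables holding the band-passed epochs of the two imagery tasks:

```matlab
% Illustrative CSP sketch: X1, X2 are cell arrays of [channels x samples]
% epochs for the two imagery tasks in a pair (e.g., left vs right hand).
C1 = zeros(size(X1{1}, 1));  C2 = C1;
for k = 1:numel(X1), E = X1{k}; C1 = C1 + (E*E') / trace(E*E'); end
for k = 1:numel(X2), E = X2{k}; C2 = C2 + (E*E') / trace(E*E'); end
C1 = C1 / numel(X1);  C2 = C2 / numel(X2);     % average normalized covariances

[W, D] = eig(C1, C1 + C2);                     % generalized eigendecomposition
[~, order] = sort(diag(D), 'descend');
W = W(:, order);
filters = W(:, [1 end]);                       % top-ranked filter for each task

% Log-variance features of one epoch, as input to the classifier.
feat = log(var((filters' * X1{1})'));
```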

Linear discriminant analysis, tenfold cross-validation and classification model

Spatial filtering was followed by pattern learning and classification of EEG epochs with linear discriminant analysis (LDA) [54]. Tenfold cross-validation [55] procedures were used to estimate the performance of LDA (see supplementary materials section S.2.4 for details).
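A minimal sketch of this stage in plain MATLAB (the study used BCILAB's ml_trainlda.m and utl_crossval.m; feats and labels are hypothetical variables holding the CSP log-variance features and class labels per epoch):

```matlab
% Illustrative LDA with tenfold cross-validation.
% feats: [epochs x features]; labels: cell array of class labels per epoch.
cv  = cvpartition(labels, 'KFold', 10);   % stratified tenfold partition
acc = zeros(cv.NumTestSets, 1);
for f = 1:cv.NumTestSets
    mdl    = fitcdiscr(feats(training(cv, f), :), labels(training(cv, f)));
    pred   = predict(mdl, feats(test(cv, f), :));
    acc(f) = mean(strcmp(pred, labels(test(cv, f))));
end
fprintf('Mean cross-validated accuracy: %.1f%%\n', 100 * mean(acc));
```

Note that in a full pipeline the spatial filters would also be re-estimated within each training fold to avoid leakage into the test folds; frameworks such as BCILAB are designed to handle this by re-running the whole approach per fold.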

Implementation of procedures

We created MATLAB 2014a scripts and used BCILAB [47] and EEGLAB 13.4.4b [56] functions to implement the EEG pre-processing, CSSP and LDA algorithms. We used EEGLAB to import the EEG (using the Load XDF EEGLAB plugin), re-reference the data (pop_reref.m), resample (pop_resample.m), and remove artefacts (with the plugin Clean Raw Data 0.31). We used BCILAB functions to implement the spatial filtering, classification and performance-estimation procedures (bci_train.m, ParadigmCSP.m, ml_trainlda.m, utl_crossval.m and ml_predictlda.m).

Analyses of results

All statistical analyses were conducted with analysis of variance (ANOVA) in SPSS. Mauchly's test was used to assess the assumption of sphericity for within-subject factors.

To test the hypothesis that left- versus right-hand imagery would produce the worst classification accuracy, we extracted the pairwise classification accuracy for each of the six possible combinations (see Table S2 in the supplementary materials for the individual values that went into the statistical analyses): Tongue versus Left hand (TL); Left hand versus Right hand (LR); Tongue versus Right hand (TR); Feet versus Right hand (FR); Feet versus Left hand (FL) and Feet versus Tongue (FT).

A within-subject factor, 'pairs', with six levels (TL, LR, TR, FR, FL, FT), was created to test for an omnibus effect, i.e. whether at least one of the combinations differed significantly from another. Post hoc Bonferroni pairwise comparisons were conducted to assess which pairs of imagined actions differed significantly from each other.
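The analyses were run in SPSS; as a rough MATLAB equivalent (illustrative only; acc is a hypothetical participants-by-pairs matrix of accuracies), the same omnibus test, sphericity check and Bonferroni post hocs could be expressed as:

```matlab
% Illustrative repeated-measures ANOVA on the six pairwise accuracies.
% acc: [participants x 6], columns ordered TL, LR, TR, FR, FL, FT.
t      = array2table(acc, 'VariableNames', {'TL','LR','TR','FR','FL','FT'});
within = table(categorical({'TL';'LR';'TR';'FR';'FL';'FT'}), ...
               'VariableNames', {'Pair'});
rm = fitrm(t, 'TL-FT ~ 1', 'WithinDesign', within);

mauchly(rm)                                               % sphericity test
ranova(rm)                                                % omnibus effect (GG-corrected p reported)
multcompare(rm, 'Pair', 'ComparisonType', 'bonferroni')   % post hoc pairwise comparisons
```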

Additionally, each participant's common spatial patterns for the left- and/or right-hand tasks in each of the pairwise combinations (LR, TR, TL, FR and FL) were ranked according to LR classification accuracy. We then visually inspected the patterns to identify detectable trends.

Results

Classification accuracies

The classification accuracies across experiments showed similar trends and were therefore combined in the following analyses. Sphericity was violated according to Mauchly's test, χ²(14) = 41.87, p < 0.0001, so Greenhouse–Geisser corrected tests are reported (ε = 0.75).

There was a significant effect of the factor 'pairs', i.e. classification accuracies differed significantly across the task combinations, F(3.8, 97.04) = 16.16, p < 0.0001. As shown in Fig. 2, LR showed the lowest classification accuracy at 58% (SD = 10, 95% CI [55, 61]). This was nonetheless reliably above chance level, t(42) = 5.16, p < 0.0001 (mean difference from the 50% chance level = 8.45, SD = 11, 95% CI [5.16, 11.78]). The results of the post hoc tests are summarized with brackets in Fig. 2 and show that LR was significantly lower than every other combination except FT. Although FT (M = 61%, SD = 11, 95% CI [58, 65]) was not significantly different from LR, it was reliably lower than only TL (M = 71%, SD = 14, 95% CI [67, 77]) and TR (M = 69%, SD = 13, 95% CI [65, 73]). TL showed the highest classification accuracy, being significantly higher than LR, FT and FR (M = 64%, SD = 12, 95% CI [60, 67]), but not TR or FL (M = 67%, SD = 13, 95% CI [64, 71]), which showed the next highest classification accuracies.

Fig. 2

Classification accuracy for motor imagery task combinations (all experiments combined)

TL: Tongue versus Left hand; LR: Left hand versus Right hand; TR: Tongue versus Right hand; FR: Feet versus Right hand; FL: Feet versus Left hand; FT: Feet versus Tongue. Post hoc tests were Bonferroni corrected. *p < 0.05; **p < 0.01; ***p < 0.001. Notably, the hypothesis that LR would produce the lowest accuracy was a priori and therefore did not necessitate post hoc corrections, although they were conducted here in the interest of being conservative and thorough.

Good LR classification

Although LR showed the worst classification accuracy overall, some participants showed good LR classification. Seven of them (16% of all participants) showed classification rates above 70%. Their CSPs showed focused contralateral brain activations in the central-posterior regions for the left- and right-hand tasks respectively (for an example, see Fig. 3a, first row). We therefore set 70% as the threshold for 'good classification'. Note that all good LR performers also tended to show good performance in all of the other tasks (for an example, see Fig. 3a, second and third rows). Poor LR performers did not show any distinguishable patterns (for an example, see Fig. 3b, first row). For a summary table of individuals' classification accuracies and common spatial patterns for the hands, see Table S2 and Figure S3 in the supplementary materials.

Fig. 3

Examples of common spatial patterns

Two common spatial patterns were generated for each of the imagery tasks in a pairwise classification. Focusing on the classifications involving the hands, we show the common spatial patterns for the left- and right-hand tasks respectively (see top labels), paired with the opposite hand, the tongue or the feet (see bottom labels: LR, TL, TR, FL and FR). L: left hand; R: right hand; T: tongue; F: feet. S47 and S16 are labels for individual participants. Good LR performance was defined as classification above 70%. The accuracy (in percent) for each pairwise classification is indicated in brackets below each pattern. Note that the colour map indicates relative activity and has no direct association with absolute signed values.

Poor LR but good classification of hand from tongue or feet

Of the 35 remaining participants, 12 were poor imagery task performers who did not show good performance in any of the task combinations. The other 23 (54% of all participants) were poor LR performers but showed at least one good classification (above 70%) from the TL, TR, FL and/or FR combinations. Unlike good LR classifications, a variety of patterns was detected for the left- and right-hand tasks when the other task involved the tongue or feet. Examples from one participant are shown in Fig. 3b (second and third rows).

Discussion

The LR task combination showed significantly lower classification accuracy than the TL, TR, FL and FR task combinations. Slightly more than half of our participants showed good classification of a hand from the tongue or feet even when they showed poor LR classification. These effects are consistent with our hypothesis and the concept of crosstalk. The remaining participants were split between poor imagery task performers who did not do well in any of the combinations, and good LR performers who tended to also show good classification in all of the other task combinations.

Our findings support previous work by Morash et al. [57], which compared classification accuracies of the same task combinations used in the present study. That study did not find significant differences, likely due to a small sample size (n = 6); however, the LR combination showed the lowest classification accuracy there as well. Taken together, the findings suggest a generalisable effect: classification of brain activity using BCIs is disrupted more for task pairs involving the two hands than for pairs combining a hand with other effectors such as the tongue or feet.

As indicated above, the key effects of the present study are consistent with the classic effects of crosstalk (unintended brain activation), also commonly referred to as spatial coupling, in which bimanual brain areas are functionally linked due to frequent conjoint use of our hands in daily life [38]. Indeed, unintended brain activity occurs and leads to co-activation of ipsilateral and contralateral sensorimotor areas in single-handed actions [39–41], as well as interference in bimanual actions [29, 31–37, 58].

Motor imagery tasks are commonly used in BCIs because of the presence of distinct EEG features, such as the prominence of sensorimotor EEG over the brain areas contralateral to the hand used in the imagery task. However, people vary in their BCI abilities. Up to two thirds of people cannot produce distinguishable signals, and hence do not show good classification, without extensive practice [18, 20]. Our findings suggest that crosstalk can also occur in motor imagery, and that it impedes the production of distinguishable brain signals in tasks using single-handed imagery.

Perhaps most importantly, it appears that the effects of crosstalk on classification accuracy can be mitigated; this is suggested by the findings of good LR classification at accuracy levels comparable with previous studies [19, 25, 59]. Yet there are other mitigating factors that are more difficult to control. Even though participants were instructed explicitly (and shown visually) how to tap the involved effectors, we cannot be certain that everyone conceptualised or imagined the tasks identically. Nor can we be certain that all participants attempted to 'feel' the movement as instructed (as opposed to visualising it), another potential mitigating factor outside direct experimental control. Importantly, however, despite those factors, classification accuracy was clearly distinct when the two hands were involved in the paired tasks.

It is important to point out that traditional pre-processing and classification methods [50–52] were applied in the present study. While those methods produced high classification rates (up to 98%) in many individual participants, the mean accuracies averaged across our large sample were not as impressive. Methods developed more recently, such as multiscale principal component analysis (MSPCA) [60] used with the empirical wavelet transform (EWT) [61], which have shown good motor imagery classification performance [62, 63], would likely perform better. However, algorithm development studies such as those were not designed to identify the human factors underlying poor classification. Typically, the sample sizes have been very small (e.g., n = 5), often with highly trained subjects; moreover, to facilitate comparison across studies, the same public data have been used. Consequently, improvements in classification results may have addressed unidentified, non-specific effects localized to the small sample, which are not representative of the general population. To develop algorithms that work more generally, including in naturalistic environments, it is critically important to pinpoint the factors that contribute to low classification accuracy.

The present study demonstrates the novel finding that crosstalk is likely to reduce the efficacy of classification using BCIs, regardless of the precise BCI methods used. Thus, the specific imagery task is critically important. Notably, the effects of crosstalk are not likely apparent to the user (see S3.1 in the supplementary section) or to the BCI developer. To demonstrate efficacy in addressing crosstalk, an algorithm would need to generate high within-subject classification accuracies across all the common combinations of imagined actions, such as those included here. Furthermore, future studies must demonstrate the effects of crosstalk and BCI efficacy across a broad range of people, i.e. in even larger, randomly selected samples, to establish true generalisability.

Future directed studies on how users, in addition to algorithms, can mitigate the effects of crosstalk (or spatial coupling in the brain) would also provide valuable insights into brain plasticity and its requirements for control and change. To this end, exploration of methods that can functionally 'unlink' the existing associations between the left and right hands could generate solutions to the current challenge. More generally, unravelling the mechanisms of crosstalk mitigation would improve our understanding of BCI user control and the applicability of BCIs in health systems.

Supplementary Information

Below is the link to the electronic supplementary material.

13755_2021_142_MOESM1_ESM.docx (8.1MB, docx)

Electronic supplementary material 1 (DOCX 8297 kb)

Funding

Dr Phoebe S.-H. Neo was supported by EEGSmart during the data collection phase of this study. She was supported by the Department of Psychology and the Department of Computer Science, University of Otago, during the preparation of the manuscript.

Compliance with ethical standards

Conflict of interest:

The authors declare that they have no conflict of interest.

Ethical approval:

The procedures of this study were approved by the University of Otago Human Ethics Committee.

Consent to participate:

All the participants gave their consent to participate prior to taking part in the study.

Consent for publication:

All the authors gave their consent to publish the study.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. De Vries S, Mulder T. Motor imagery and stroke rehabilitation: a critical discussion. J Rehabil Med. 2007;39(1):5–13. doi: 10.2340/16501977-0020.
2. Page SJ, et al. Cortical plasticity following motor skill learning during mental practice in stroke. Neurorehabil Neural Repair. 2009;23(4):382–388. doi: 10.1177/1545968308326427.
3. Marzbani H, Marateb HR, Mansourian M. Neurofeedback: a comprehensive review on system design, methodology and clinical applications. Basic Clin Neurosci. 2016;7(2):143. doi: 10.15412/J.BCN.03070208.
4. Kübler A, Neumann N. Brain-computer interfaces—the key for the conscious brain locked into a paralyzed body. In: Laureys S, editor. Progress in brain research. Amsterdam: Elsevier; 2005. pp. 513–525.
5. Cincotti F, et al. EEG-based brain-computer interface to support post-stroke motor rehabilitation of the upper limb. In: 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2012.
6. Pichiorri F, et al. Sensorimotor rhythm-based brain-computer interface training: the impact on motor cortical responsiveness. J Neural Eng. 2011;8(2):025020. doi: 10.1088/1741-2560/8/2/025020.
7. Kaiser V, et al. First steps toward a motor imagery based stroke BCI: new strategy to set up a classifier. Front Neurosci. 2011;5:86. doi: 10.3389/fnins.2011.00086.
8. Ang KK, et al. Clinical study of neurorehabilitation in stroke using EEG-based motor imagery brain-computer interface with robotic feedback. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology. IEEE; 2010.
9. Pfurtscheller G, Neuper C. Motor imagery activates primary sensorimotor area in humans. Neurosci Lett. 1997;239(2):65–68. doi: 10.1016/s0304-3940(97)00889-6.
10. Pfurtscheller G, et al. EEG-based discrimination between imagination of right and left hand movement. Electroencephalogr Clin Neurophysiol. 1997;103(6):642–651. doi: 10.1016/s0013-4694(97)00080-1.
11. Pfurtscheller G, et al. Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks. Neuroimage. 2006;31(1):153–159. doi: 10.1016/j.neuroimage.2005.12.003.
12. Pfurtscheller G, Neuper C. Dynamics of sensorimotor oscillations in a motor task. In: Graimann B, Pfurtscheller G, Allison B, editors. Brain-computer interfaces: revolutionizing human-computer interaction. Berlin: Springer; 2010. pp. 47–64.
13. Wolpaw JR, et al. Brain–computer interfaces for communication and control. Clin Neurophysiol. 2002;113(6):767–791. doi: 10.1016/s1388-2457(02)00057-3.
14. Daly JJ, Wolpaw JR. Brain–computer interfaces in neurological rehabilitation. Lancet Neurol. 2008;7(11):1032–1043. doi: 10.1016/S1474-4422(08)70223-0.
15. Krauledat M, et al. Towards zero training for brain-computer interfacing. PLoS ONE. 2008;3(8):e2967. doi: 10.1371/journal.pone.0002967.
16. Müller K-R, et al. Machine learning for real-time single-trial EEG-analysis: from brain–computer interfacing to mental state monitoring. J Neurosci Methods. 2008;167(1):82–90. doi: 10.1016/j.jneumeth.2007.09.022.
17. Tangermann M, et al. Review of the BCI competition IV. Front Neurosci. 2012;6:55. doi: 10.3389/fnins.2012.00055.
18. Allison BZ, Neuper C. Could anyone use a BCI? In: Brain-computer interfaces. New York: Springer; 2010. pp. 35–54.
19. Ahn M, Jun SC. Performance variation in motor imagery brain-computer interface: a brief review. J Neurosci Methods. 2015;243:103–110. doi: 10.1016/j.jneumeth.2015.01.033.
20. Friedrich EVC, et al. Mind over brain, brain over mind: cognitive causes and consequences of controlling brain activity. Front Hum Neurosci. 2014;8:348. doi: 10.3389/fnhum.2014.00348.
21. Lotte F, et al. A review of classification algorithms for EEG-based brain–computer interfaces: a 10 year update. J Neural Eng. 2018;15(3):031005. doi: 10.1088/1741-2552/aab2f2.
22. Hamedi M, Salleh S-H, Noor AM. Electroencephalographic motor imagery brain connectivity analysis for BCI: a review. Neural Comput. 2016;28(6):999–1041. doi: 10.1162/NECO_a_00838.
23. Lotte F, Larrue F, Muhl C. Flaws in current human training protocols for spontaneous brain-computer interfaces: lessons learned from instructional design. Front Hum Neurosci. 2013;7:568. doi: 10.3389/fnhum.2013.00568.
24. Gibson RM, et al. Complexity and familiarity enhance single-trial detectability of imagined movements with electroencephalography. Clin Neurophysiol. 2014;125(8):1556–1567. doi: 10.1016/j.clinph.2013.11.034.
25. Neuper C, et al. Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain-computer interface. Clin Neurophysiol. 2009;120(2):239–247. doi: 10.1016/j.clinph.2008.11.015.
26. Lécuyer A, et al. Brain-computer interfaces, virtual reality, and videogames. Computer. 2008;41(10):66–72.
27. Pfurtscheller G, et al. Walking from thought. Brain Res. 2006;1071(1):145–152. doi: 10.1016/j.brainres.2005.11.083.
28. Kober SE, et al. Learning to modulate one's own brain activity: the effect of spontaneous mental strategies. Front Hum Neurosci. 2013;7:695. doi: 10.3389/fnhum.2013.00695.
29. Franz EA, Zelaznik HN, McCabe G. Spatial topological constraints in a bimanual task. Acta Psychol (Amst). 1991;77(2):137–151. doi: 10.1016/0001-6918(91)90028-x.
30. Franz EA. Spatial coupling in the coordination of complex actions. Q J Exp Psychol A. 1997;50(3):684–704. doi: 10.1080/713755726.
31. Franz EA. Bimanual action representation: a window to human evolution. In: Johnson-Frey SH, editor. Taking action: cognitive neuroscience perspectives on the problem of intentional acts. 2003. pp. 259–288.
32. Nair DG, et al. Cortical and cerebellar activity of the human brain during imagined and executed unimanual and bimanual action sequences: a functional MRI study. Cogn Brain Res. 2003;15(3):250–260. doi: 10.1016/s0926-6410(02)00197-0.
33. Jäncke L, et al. Differential magnetic resonance signal change in human sensorimotor cortex to finger movements of different rate of the dominant and subdominant hand. Cogn Brain Res. 1998;6(4):279–284. doi: 10.1016/s0926-6410(98)00003-2.
34. Grefkes C, et al. Dynamic intra- and interhemispheric interactions during unilateral and bilateral hand movements assessed with fMRI and DCM. Neuroimage. 2008;41(4):1382–1394. doi: 10.1016/j.neuroimage.2008.03.048.
35. Franz EA, et al. Dissociation of spatial and temporal coupling in the bimanual movements of callosotomy patients. Psychol Sci. 1996;7(5):306–310.
36. Franz EA, Waldie KE, Smith MJ. The effect of callosotomy on novel versus familiar bimanual actions: a neural dissociation between controlled and automatic processes? Psychol Sci. 2000;11(1):82–85. doi: 10.1111/1467-9280.00220.
37. Franz EA. The allocation of attention to learning of goal-directed actions: a cognitive neuroscience framework focusing on the basal ganglia. Front Psychol. 2012;3:535. doi: 10.3389/fpsyg.2012.00535.
38. Kinsbourne M. Hemispheric specialization and the growth of human understanding. Am Psychol. 1982;37(4):411. doi: 10.1037//0003-066x.37.4.411.
39. Bai O, et al. Asymmetric spatiotemporal patterns of event-related desynchronization preceding voluntary sequential finger movements: a high-resolution EEG study. Clin Neurophysiol. 2005;116(5):1213–1221. doi: 10.1016/j.clinph.2005.01.006.
40. Begliomini C, et al. Exploring manual asymmetries during grasping: a dynamic causal modeling approach. Front Psychol. 2015;6:167. doi: 10.3389/fpsyg.2015.00167.
41. Serrien DJ, Ivry RB, Swinnen SP. Dynamics of hemispheric specialization and integration in the context of motor control. Nat Rev Neurosci. 2006;7(2):160–166. doi: 10.1038/nrn1849.
42. Volkmann J, et al. Handedness and asymmetry of hand representation in human motor cortex. J Neurophysiol. 1998;79(4):2149–2154. doi: 10.1152/jn.1998.79.4.2149.
43. Lotze M, et al. Activation of cortical and cerebellar motor areas during executed and imagined hand movements: an fMRI study. J Cogn Neurosci. 1999;11(5):491–501. doi: 10.1162/089892999563553.
44. Gerardin E, et al. Partially overlapping neural networks for real and imagined hand movements. Cereb Cortex. 2000;10(11):1093–1104. doi: 10.1093/cercor/10.11.1093.
45. Dahm SF, Rieger M. Cognitive constraints on motor imagery. Psychol Res. 2016;80(2):235–247. doi: 10.1007/s00426-015-0656-y.
46. Renard Y, et al. OpenViBE: an open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments. Presence Teleoper Virtual Environ. 2010;19(1):35–53.
47. Kothe C, Makeig S. BCILAB: a platform for brain–computer interface development. J Neural Eng. 2013;10(5):056014. doi: 10.1088/1741-2560/10/5/056014.
48. Neuper C, et al. Imagery of motor actions: differential effects of kinesthetic and visual-motor mode of imagery in single-trial EEG. Cogn Brain Res. 2005;25(3):668–677. doi: 10.1016/j.cogbrainres.2005.08.014.
49. Chang C-Y, et al. Evaluation of artifact subspace reconstruction for automatic EEG artifact removal. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2018.
50. Blankertz B, et al. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Process Mag. 2008;25(1):41–56.
51. Blankertz B, et al. The BCI competition III: validating alternative approaches to actual BCI problems. IEEE Trans Neural Syst Rehabil Eng. 2006;14(2):153–159. doi: 10.1109/TNSRE.2006.875642.
52. Blankertz B, et al. The non-invasive Berlin brain–computer interface: fast acquisition of effective performance in untrained subjects. Neuroimage. 2007;37(2):539–550. doi: 10.1016/j.neuroimage.2007.01.051.
53. Guger C, Ramoser H, Pfurtscheller G. Real-time EEG analysis with subject-specific spatial patterns for a brain-computer interface (BCI). IEEE Trans Rehabil Eng. 2000;8(4):447–456. doi: 10.1109/86.895947.
54. Vidaurre C, et al. Unsupervised adaptation of the LDA classifier for brain–computer interfaces. In: Proceedings of the 4th International Brain-Computer Interface Workshop and Training Course. 2008.
55. Refaeilzadeh P, Tang L, Liu H. Cross-validation. In: Encyclopedia of database systems. New York: Springer; 2009. pp. 532–538.
56. Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods. 2004;134(1):9–21. doi: 10.1016/j.jneumeth.2003.10.009.
57. Morash V, et al. Classifying EEG signals preceding right hand, left hand, tongue, and right foot movements and motor imageries. Clin Neurophysiol. 2008;119(11):2570–2578. doi: 10.1016/j.clinph.2008.08.013.
58. Franz EA, McCormick R. Conceptual unifying constraints override sensorimotor interference during anticipatory control of bimanual actions. Exp Brain Res. 2010;205(2):273–282. doi: 10.1007/s00221-010-2365-5.
59. Blankertz B, et al. Neurophysiological predictor of SMR-based BCI performance. Neuroimage. 2010;51(4):1303–1309. doi: 10.1016/j.neuroimage.2010.03.022.
60. Sadiq MT, et al. Motor imagery BCI classification based on novel two-dimensional modelling in empirical wavelet transform. Electron Lett. 2020;56(25):1367–1369.
61. Sadiq MT, et al. Motor imagery EEG signals classification based on mode amplitude and frequency components using empirical wavelet transform. IEEE Access. 2019;7:127678–127692.
62. Sadiq MT, et al. Motor imagery EEG signals decoding by multivariate empirical wavelet transform-based framework for robust brain-computer interfaces. IEEE Access. 2019;7:171431–171451.
63. Sadiq MT, Yu X, Yuan Z. Exploiting dimensionality reduction and neural network techniques for the development of expert brain–computer interfaces. Expert Syst Appl. 2021;164:114031.


