Abstract
The way people imagine greatly affects the performance of brain-computer interfaces (BCI) based on motor imagery (MI). The action sequence is a basic unit of imitation, learning, and memory for motor behavior. Whether it influences the MI-BCI is unknown, and this influence is difficult to manifest because MI is a spontaneous brain activity. To investigate the influence of the action sequence, this study proposes a novel paradigm, the action sequences observing and delayed matching task, which uses images and videos to guide people to observe, match, and reinforce the memory of a sequence. Seven subjects’ ERPs and MI performance are analyzed under four levels of complexity and order of the sequence. The results demonstrate that the action sequence, in terms of complexity and sequence order, significantly affects the MI. Complex actions in positive order obtain stronger ERD/ERS and more pronounced MI feature distributions, and yield an MI classification accuracy that is 12.3% higher than complex actions in negative order (p < 0.05). In addition, the ERP amplitudes derived from the supplementary motor area show a positive correlation with the MI. This study offers a new perspective on improving imagery in the MI-BCI by considering the complexity and order of action sequences, and provides a novel index for manifesting MI performance by the ERP.
Keywords: Action observation, Action sequence, Brain-computer interface, Event-related potential, Motor imagery
Introduction
Brain-computer interfaces (BCI) can directly translate brain activities into commands without relying on peripheral nerves or muscles, providing an effective way for individuals to interact with the external world (Ju et al. 2022). The BCI provides a new technological approach to human–computer interaction control, rehabilitation, and assistance for people with disabilities. The electroencephalogram (EEG) signal is widely used in BCI studies due to its high temporal resolution and the high portability of acquisition equipment. Currently, the most common approaches for the EEG-based BCI include motor imagery (MI) (Achanccaray et al. 2021; Zhu et al. 2021), steady-state visual evoked potentials (SSVEP) (Wang et al. 2015), event-related potentials (ERP) (Jin et al. 2014), etc.
The MI is a mental representation of motor behavior and does not rely on external conditioned stimuli (Li et al. 2018). When subjects imagine actions of different limbs (for example, left/right hand), the EEG signal amplitude, usually observed in the mu (8–12 Hz) and beta (13–30 Hz) bands, is enhanced or diminished in the sensorimotor cortex; these phenomena are called event-related desynchronization (ERD) and event-related synchronization (ERS) (Xu et al. 2021). The MI-BCI can decode the subject’s motor intention by extracting the features of the sensorimotor rhythm and transforming them into external control commands. In addition, recent studies have shown that MI-BCI-based feedback training can enhance the plasticity of the central nervous system. Therefore, the MI-BCI has important application prospects for medical rehabilitation, assistive technology, and motor learning (Guillot et al. 2009; Zhang et al. 2016). However, the imagination process of the MI is subjective and abstract, so subjects have difficulty imagining the motor behavior and inducing obvious sensorimotor cortical activation, which limits accurate recognition of motor intention (Ahn and Jun 2015; Zuo et al. 2020). Moreover, approximately 20% of BCI users are unable to achieve the average level of MI-BCI accuracy, a phenomenon called "BCI inefficiency" (Marins et al. 2019). How to effectively train individuals to yield better MI-BCI performance is one of the bottlenecks of BCI development.
Task complexity has been demonstrated to have a significant effect on MI performance (Bian et al. 2018). Specifically, a complex imagery task induces stronger activation, which evokes stronger ERD/ERS and improves MI-BCI performance. The classification accuracy of the BCI plays a crucial role in practical applications, and the way people imagine greatly affects the classification accuracy of the MI-BCI. The action sequence, defined as the basic unit of imitation, learning, and memory for motor behavior, is one of the factors that may further influence MI-BCI accuracy. For example, a study designed a paradigm for imagining finger actions at different angles and showed that finger abduction–adduction over an angular displacement of 60–95° produces stronger sensorimotor cortical activation than a 0–35° displacement (Bufalari et al. 2010). Reference (Ahn et al. 2014) also shows that subjects performed better on MI when imagining and executing more complex finger-tapping sequences than simple ones, suggesting that the activation of mental motor representations might be optimized by task complexity. Furthermore, it has been shown previously that MI tasks guided by complex actions yield improved ERD and classification accuracy (Boiten et al. 1992). Since increasing task complexity can enhance ERD activation, taking task complexity into consideration is assumed to be effective for boosting MI-BCI performance.
Action observation (AO) has been widely used to boost MI-BCI performance. It is the perception of the action of others and, like the MI, is regarded as a simulation of action (Kaneko et al. 2021). Previous research has established a functional association between the AO and the MI: both can activate the sensorimotor cortex, and the activated cortical areas for the same action pattern overlap (Grezes and Decety 2001). On this basis, many researchers have studied the AO-MI hybrid paradigm. A recent study compared subjects’ performances under the AO, the MI, and the combined AO-MI, and found that the AO-MI produced stronger ERD than the AO or the MI alone (Romano Smith et al. 2019). Participants practicing a manual aiming task with either simultaneous or alternate AO-MI showed better performance than with the MI or the AO alone (Romano-Smith et al. 2018). The improvement of the AO-MI might be attributed to greater cortical-motor activation compared to the AO or the MI only (Bunno and Suzuki 2020; Zhang et al. 2021). Therefore, the hybrid AO-MI paradigm is an effective way to enhance MI-BCI performance.
Current studies have demonstrated that both a complex MI task and a hybrid paradigm contribute to boosting ERD activation and MI-BCI performance. Among the factors that affect complexity, the action sequence is an important one. From the view of psychology, the action sequence is defined as the basic unit of imitation, learning, and memory for motor behavior (Dezfouli and Balleine 2012), and learning and sequencing discrete action sequences is necessary for people to manipulate motor behaviors. It should be emphasized that the difficulty of an action sequence task differs with the subjects' familiarity, because familiarity contributes to memory, training, and learning. Therefore, it is reasonable to introduce the action sequence into BCI training. (Höhne and Tangermann 2014) first proposed using positive sequences instead of random sequences for stimulation, which reduced workload and enhanced comfort. Another study also found better memory learning performance with a positive sequence than with a random sequence (Garr 2019). In addition, the MI is a complex neurocognitive process that is influenced by multiple factors and their interaction. Multifactorial tasks induce complex neurophysiological responses, sometimes enhancing and sometimes inhibiting brain region activation. However, few reports have investigated the combined influence of sequence order and action difficulty on MI performance. We assume that action sequences have an influence on the MI, and explore this issue using an MI paradigm with action sequences.
Studies show that the processes of the MI are spontaneous, and the role of action sequences in MI activation is difficult to separate and analyze (Monaco et al. 2020). How to manifest the MI processes is therefore a challenge for analyzing MI performance. Event-related potentials (ERP) have been widely used as an objective index of high-level cognitive activation. Recent studies revealed that the ERP is correlated with the MI. It has been previously shown that the MI evokes significant activation of the supplementary motor area (SMA), which points to a common training mechanism between the ERP and the MI (Halder et al. 2019). Taking this correlation into consideration, using the ERP to manifest the MI is feasible. This is because the ERP is an evoked potential whose evoking process depends on the external stimulus conditions; by assigning certain factors to the stimulus, the ERP can be used to observe the relationship between cognitive activity and those factors. Therefore, the ERP can act as a complementary process for manifesting the MI due to the correlation between the two. This study proposes to use the ERP to analyze the MI process and to explore the influence of the complexity of the action sequence on the MI process and the MI-BCI performance.
This study proposes a novel paradigm named the action sequences observing and delayed matching task (AS-ODMT) to investigate the influence of the action sequence on the MI. Four types of sequences with different levels of complexity (simple actions in positive sequence, simple actions in negative sequence, complex actions in positive sequence, and complex actions in negative sequence) are compared in the AS-ODMT for MI tasks of imagining the left hand and the right hand. Seven subjects are required to observe and memorize the sequence and judge it by its order. The EEG data and behavioral data are collected during this process. The results show that the action sequences significantly influence the ERD/ERS and the MI classification accuracy. In addition, the ERP amplitude is affected by this difference and is positively correlated with the MI classification accuracy, which extends previous findings (Zich et al. 2015; Marchesotti et al. 2017). The influence of the action sequence on the MI is revealed by designing the AS-ODMT paradigm, which not only provides a novel way to explore the MI process but also gives guidance for boosting MI-BCI performance through the action sequence in the future.
The remainder of this paper is arranged as follows. Materials and methods introduces the AS-ODMT experiment, including participants, paradigm, ERP and MI feature extraction, and classification. Result presents the influence of the action sequence in terms of the ERP, the MI, the BCI performance, and the behavioral analysis. Discussion addresses the reasons for the influence of the action sequence and the correlation between the ERP and the MI. Finally, the conclusion summarizes the influence of the action sequence and its helpfulness for the MI-BCI.
Materials and methods
Participants
This study was approved by the Biomedical Ethics Review Committee of the Hebei University of Technology and followed the Declaration of Helsinki. Seven healthy adults (mean ± standard deviation = 24 ± 1.2 years old, range = 22–26 years old) with no prior experience with MI-based BCIs participated in this study (referred to as S1–S7). All participants had normal or corrected-to-normal vision, no reported use of drugs affecting the central nervous system, and no neurological disease. All participants provided written informed consent before the experiment.
Experiment description
Action sequences of the AS-ODMT
Four types of action sequences are applied in the AS-ODMT to explore the influence of action sequences on MI performance. The action sequences are presented from a first-person perspective: simple action in positive sequence (S-P), simple action in negative sequence (S-N), complex action in positive sequence (C-P), and complex action in negative sequence (C-N). Action difficulty and sequence order are established as the two factors of the action sequence. Action difficulty represents the standard level of the hand action; a standard action matches subjects’ habits and is easily completed. This study classifies hand actions into simple actions (S) and complex actions (C) based on the standard level. S refers to an action with a higher standard level that people are accustomed to using and that is easy to complete. C has a lower standard level and is more difficult to complete due to unfamiliarity. The sequence order refers to the manipulation of motor behavior through the ordered orchestration of discrete action sequences. This study arranges discrete actions chronologically according to people's habits, and such a sequence is defined as the positive sequence (P). The order of discrete actions in the negative sequence (N) is the chronological reverse of the positive sequence.
This study introduces sketch drawing as the experimental material to investigate the influence of action sequences on MI performance; videos and images of a hand grasping a pen to draw sketches on white paper serve as the visual materials. The drawing content consists of 9 weather forecast icons: "sunny", "cloudy", "overcast", "moon", "rain", "thunder", "snow", "high temperature", and "tornado". The weather forecast icons are composed of simple strokes, which are moderately difficult for the subjects and can effectively guide the subjects' imagination. For example, the "moon" visual material represents clear nighttime weather; subjects complete its sketch by drawing two curves with their heads and tails joined. In the materials, the hand wears a white glove against a black background. The materials contain nine left-hand actions and nine right-hand actions, each played four times within 6 s. Table 1 lists the four combinations of the different action difficulties and sequence orders using these actions.
Table 1.
Action sequences in the experiment
| Action sequences | Action difficulty | Sequence order |
|---|---|---|
| S-P | simple | positive |
| S-N | simple | negative |
| C-P | complex | positive |
| C-N | complex | negative |
S-P: draws the weather icons with a simple pencil grip in a positive sequence. The experimental material shows a standard action of holding the pencil with the thumb, index finger, and middle finger. This study prescribes a customary drawing order as the positive sequence.
S-N: draws the weather icons with a simple pencil grip in a negative sequence. The S-N keeps exactly the same drawing content and pencil grip as the S-P but displays them in reverse chronological order, ensuring that the S-P and S-N differ only in sequence order.
C-P: draws the weather icons with a complex pencil grip in a positive sequence. The sequence is the same as in the S-P group, but the experimental material shows a non-standard action of holding the pen with the bent index and middle fingers.
C-N: draws the weather icons with a complex pencil grip in a negative sequence. The C-N keeps exactly the same drawing content and pencil grip as the C-P but displays them in reverse chronological order, ensuring that the C-P and C-N differ only in sequence order.
The AS-ODMT
The AS-ODMT consists of a memory phase, a matching phase, and a reinforcement phase, as shown in Fig. 1. The three phases help the subject to observe, memorize and match the motor action using sequence information, which further reinforces the MI during the reinforcement phase.
Fig. 1.
The timing of the AS-ODMT paradigm using the S-P, C-P, S-N, and C-N action sequences. The AS-ODMT paradigm contains three phases: the memory phase, the matching phase, and the reinforcement phase. Each subject performs the AS-ODMT once with each of S-P, C-P, S-N, and C-N. The videos and images are shown in the center of the screen for memorizing and matching action sequences during the memory and matching phases. Then the subject is instructed to imagine the correct direction and motor action guided by the videos and arrows during the reinforcement phase
The memory phase provides videos of the action sequence for the subject to memorize. This phase randomly plays 6 videos, including 3 left-hand actions and 3 right-hand actions. Each video is shown only once for 6 s, with an interval of 3 s between two consecutive videos. The subject is asked to complete the AO for the six videos and to memorize as many of these action sequences as possible. A total of 6 trials constitute this phase.
The matching phase presents 25 visual image stimuli to be matched against the memorized action-sequence videos. Each image is a screenshot taken from one of the 6 action videos shown in the memory phase. The subject is required to respond to the image by pressing keys to indicate a match or a mismatch between the image and the video held in memory. At the beginning (t = 0 s), a white cross is displayed in the center of the screen for 3 s (t = 0–3 s) to remind the subject to prepare. Then, a visual image stimulus is presented for the next 6 s (t = 3–9 s) to allow the subject to make a judgment: if the visual image stimulus matches the video of the action sequence, the subject presses “Y”; if it does not match, the subject presses “N”. If the subject gives no response within 6 s, the trial is treated as a missed judgment. After that (t = 9–12 s), the next trial begins with a white cross appearing. 25 trials constitute this phase, in which each image is shown once. During this phase, subjects need to try their best to retrieve information from memory to match the shown image to the videos played in the previous phase.
The reinforcement phase plays the action-sequence video to reinforce the subject’s MI through AO. At the beginning (t = 0–1 s), a white cross is shown for 1 s for preparation. Then the screen shows the arrow and the video that was played during the memory phase for 6 s (t = 1–7 s). The arrow indicates which hand to imagine, while the video provides dynamic guidance for the subject's specific imagined actions. The subject performs AO by observing the action sequence in the video and simultaneously performs MI of the same actions. Then the subject has a short rest of 2 s (t = 7–9 s). A total of 80 trials constitute this phase.
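For reference, the timing of one reinforcement-phase trial can be written down as a simple schedule; this is only an illustrative sketch that mirrors the timings stated above and is not the authors' stimulus-presentation code:

```python
# One reinforcement-phase trial: (event, duration in seconds), per the timings above
REINFORCEMENT_TRIAL = [
    ("white cross (prepare)", 1.0),     # t = 0-1 s
    ("arrow + video: AO and MI", 6.0),  # t = 1-7 s
    ("rest", 2.0),                      # t = 7-9 s
]

def trial_onsets(schedule):
    """Return (onset, event) pairs and the total trial length (9 s here)."""
    t, onsets = 0.0, []
    for event, duration in schedule:
        onsets.append((t, event))
        t += duration
    return onsets, t
```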
Data collection and processing
The AS-ODMT is performed 4 times by each subject to complete the MI of S-P, C-P, S-N, and C-N, so everyone conducts 320 MI trials in total. The order of the four action sequences is counterbalanced across subjects, with a 15 min rest between action sequences. After the experiment, the subject scores the four action sequences from 1 to 5 in terms of experimental difficulty; a higher score represents a higher level of subjective difficulty.
Each subject was seated on a comfortable chair 80 cm away from a 19-inch LED monitor (60 Hz refresh rate, 1920 × 1080 screen resolution) in a quiet room. The monitor was adjusted so that the interface was in the center of the visual field. The subject was asked to relax and avoid unnecessary actions, such as clenching of teeth. EEG data from 64 channels covering the whole brain were acquired using a Neuroscan SynAmps2 amplifier at a sampling rate of 1000 Hz. The channels were arranged according to the international 10–20 system. The 21 channels located near the supplementary motor area (FC5, FC3, FC1, FCz, FC2, FC4, FC6, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz, CP2, CP4, CP6) were selected to analyze the subjects' ERP and MI. Reference channels were placed at the binaural mastoids A1 and A2. All impedances were kept below 5 kΩ.
All subjects' EEG signals are processed and visualized using Python 3.7.12, and all statistical analyses are performed using R Statistical Software (v4.1.2; R Core Team 2021). For the matching phase, this study uses a 1.2 s time window to extract the EEG data from 0.2 s before to 1 s after the image stimulus to analyze the subject's ERP. The extracted data are band-pass filtered with a third-order zero-phase Butterworth filter from 0.1 to 30 Hz, and the average of the first 0.2 s is then subtracted for baseline correction. This study adopts the mean-amplitude measure for ERP quantification to minimize the interference of background noise in the data (Clayson et al. 2013). According to the grand average waveforms of the groups and previous studies, the mean amplitude over 180–250 ms is identified as the P300, and the mean amplitude over 280–350 ms is identified as the N400.
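As a minimal sketch of this preprocessing chain (assuming the continuous EEG is available as a NumPy array of shape channels × samples at 1000 Hz and that the stimulus-onset sample indices are known; the function and variable names below are hypothetical), the zero-phase band-pass filtering, baseline correction, and mean-amplitude extraction could look like this:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sampling rate (Hz)

def extract_erp_epochs(eeg, stim_samples, fs=FS):
    """Cut -0.2..1.0 s epochs around each image onset, band-pass filter, baseline-correct."""
    # 3rd-order Butterworth, 0.1-30 Hz; filtfilt applies it forward and backward (zero phase)
    b, a = butter(3, [0.1 / (fs / 2), 30 / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, eeg, axis=-1)
    pre, post = int(0.2 * fs), int(1.0 * fs)
    epochs = np.stack([filtered[:, s - pre:s + post] for s in stim_samples])
    baseline = epochs[:, :, :pre].mean(axis=-1, keepdims=True)
    return epochs - baseline                       # (trials, channels, 1200 samples)

def mean_amplitude(epochs, window, fs=FS, pre=0.2):
    """Mean amplitude in a latency window (s), e.g. (0.18, 0.25) for P300, (0.28, 0.35) for N400."""
    i0, i1 = int((pre + window[0]) * fs), int((pre + window[1]) * fs)
    return epochs[:, :, i0:i1].mean(axis=(0, -1))  # averaged over trials and window, per channel
```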
For the reinforcement phase, a 3–5 s data segment of each MI trial is extracted from each channel and filtered from 8 to 30 Hz with a fifth-order Butterworth filter. Each data segment is downsampled to 250 Hz, resulting in a 500-dimensional feature vector per channel ((5000 − 3000) × 250/1000 = 500). The selected 21 channels are used as feature channels, so a 10,500-dimensional feature vector (500 × 21 = 10,500) is obtained per trial, and a total of 80 feature vectors are obtained from the 80 MI trials.
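A minimal sketch of this segmentation, assuming a single-trial array of shape 21 channels × samples sampled at 1000 Hz with the trial clock starting at the white cross (names are hypothetical):

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def mi_feature_vector(eeg_trial, fs=1000, fs_out=250):
    """3-5 s window, 8-30 Hz 5th-order Butterworth, downsample to 250 Hz, flatten to 10,500 values."""
    b, a = butter(5, [8 / (fs / 2), 30 / (fs / 2)], btype="bandpass")
    seg = filtfilt(b, a, eeg_trial[:, 3 * fs:5 * fs], axis=-1)   # 21 x 2000 samples
    seg = decimate(seg, fs // fs_out, axis=-1, zero_phase=True)  # 21 x 500 samples
    return seg.reshape(-1)                                       # 10,500-dimensional vector
```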
Feature extraction
CSP and FBCSP algorithms are adopted to extract features from the EEG signals of right and left hand MI in the reinforcement phase to obtain the feature matrix.
Common spatial pattern (CSP)
Common spatial pattern (CSP), a widely used spatial-filtering feature extraction algorithm for the MI-BCI (Ang et al. 2009; Qiu et al. 2017), is used to extract spatial distribution components. The CSP determines a projection matrix $W$ to obtain the spatially filtered signal as follows:

$$Z_j = W^{\mathrm{T}} X_j \qquad (1)$$

where $X_j$ and $Z_j$ represent the j-th trial before and after filtering, respectively, and $j = 1, 2, \ldots, n$, where $n$ refers to the number of trials in the data set. The spatial filter $W_{\mathrm{CSP}}$ is taken from the first two columns and the last two columns of $W$. Feature extraction is performed by $f_j = \log\bigl(\operatorname{var}(W_{\mathrm{CSP}}^{\mathrm{T}} X_j)\bigr)$, which reduces the dimensionality of a single trial from 4 × 500 to 4 × 1, so as to extract the most discriminative and non-redundant information from the EEG signal.
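The CSP filters can be computed from the two class covariance matrices by a generalized eigendecomposition. Below is a minimal sketch consistent with the description above (inputs: trials of shape trials × channels × samples and binary labels; all names are hypothetical, and the normalized log-variance feature is one common choice):

```python
import numpy as np
from scipy.linalg import eigh

def fit_csp(X, y, n_pairs=2):
    """Return 2*n_pairs spatial filters (channels x 4) estimated from two-class trials."""
    covs = []
    for c in (0, 1):
        trials = X[y == c]
        # normalized spatial covariance, averaged over the trials of this class
        covs.append(np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0))
    # generalized eigenvalue problem: C0 w = lambda (C0 + C1) w
    eigvals, eigvecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(eigvals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]   # first two and last two filters
    return eigvecs[:, picks]                           # channels x 4

def csp_features(X, W):
    """Normalized log-variance of the spatially filtered trials: trials x 4."""
    Z = np.einsum("cf,tcs->tfs", W, X)                 # apply the filters to every trial
    var = Z.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))
```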
Filter bank common spatial pattern (FBCSP)
The FBCSP algorithm consists of 4 stages: decomposing the EEG signal into multiple passbands, calculating the feature vectors using CSP (Ang et al. 2008), selecting among the extracted features with the mutual-information-based best individual feature (MIBIF) algorithm, and performing classification with a classifier. The mutual information between a feature $f_i$ and the class label $c$ is given by

$$I(f_i; c) = H(c) - H(c \mid f_i) \qquad (2)$$

where $H(c)$ denotes the entropy and $H(c \mid f_i)$ denotes the conditional entropy (Ang et al. 2011). After feature selection, the feature-selected training data are denoted as $\bar{X}_d$, where $d$ ranges from 4 to 8.
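A compact sketch of the FBCSP pipeline, reusing the fit_csp and csp_features helpers from the CSP sketch above. The band edges are assumptions (the paper does not list them), and scikit-learn's mutual_info_classif is used here only as a stand-in for the MIBIF ranking described in (Ang et al. 2008):

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.feature_selection import mutual_info_classif

BANDS = [(8, 12), (12, 16), (16, 20), (20, 24), (24, 30)]   # assumed filter bank over 8-30 Hz

def fbcsp_features(X, y, fs=250, n_keep=4):
    """Band-pass each trial, run CSP per band, then rank the pooled features by mutual information."""
    feats = []
    for lo, hi in BANDS:
        b, a = butter(5, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        Xb = filtfilt(b, a, X, axis=-1)
        W = fit_csp(Xb, y)                        # CSP filters for this passband
        feats.append(csp_features(Xb, W))
    F = np.hstack(feats)                          # trials x (4 * number of bands)
    mi = mutual_info_classif(F, y)                # stand-in for the MIBIF criterion
    selected = np.argsort(mi)[::-1][:n_keep]      # keep the d best features (d = 4..8)
    return F[:, selected], selected
```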
Classification
SVM and DNN algorithms are used as binary classifiers to categorize the feature matrices and predict left or right hand in the reinforcement phase. The classifiers are trained in a subject-dependent mode. Tenfold cross-validation is used to estimate the accuracy of the MI-BCI, with the testing set kept unseen during training (Müller et al. 2004). Each subject's data set is equally divided into 10 subsample sets: 9 sets form the training set (72 trials), and the remaining set is the testing set (8 trials). The feature extraction algorithm and the classifier are first trained using the training set and then evaluated on the testing set. The process is repeated until every set has been tested, and the average of all fold accuracies is the final accuracy.
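A minimal sketch of this evaluation loop using scikit-learn's KFold and the CSP helpers from the sketch above; whether the folds were shuffled or stratified is not stated in the paper, so the split settings here are assumptions:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC

def tenfold_accuracy(X, y, n_splits=10, seed=0):
    """Fit CSP + SVM on 9 folds (72 trials) and test on the held-out fold (8 trials)."""
    accs = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        W = fit_csp(X[train_idx], y[train_idx])    # feature extractor fitted on training folds only
        clf = SVC(kernel="rbf").fit(csp_features(X[train_idx], W), y[train_idx])
        accs.append(clf.score(csp_features(X[test_idx], W), y[test_idx]))
    return float(np.mean(accs))                    # averaged fold accuracy = ACC_MI
```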
Support vector machine (SVM)
SVM has good generalization properties and is insensitive to the curse of dimensionality. We follow the subject-dependent strategy to train and test the SVM. Its decision function is given as follows:

$$f(x) = \operatorname{sign}\Bigl(\sum_{i} \alpha_i y_i K(x_i, x) + b\Bigr) \qquad (3)$$

where $x_i$ ($i = 1, 2, 3, \ldots$) stands for the training vectors, $y_i$ represents the class labels, and the parameters $\alpha_i$ and $b$ can be calculated using the equations in (Liu et al. 2010).
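For illustration, a fitted scikit-learn SVC exposes the quantities appearing in Eq. (3) (the dual coefficients α_i·y_i, the support vectors x_i, and the intercept b), so the decision function can be reproduced by hand. The feature matrix below is a random placeholder, and gamma is fixed only to keep the manual kernel consistent with the fitted model:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
F_train = rng.normal(size=(72, 4))        # placeholder CSP feature matrix (72 training trials)
y_train = rng.integers(0, 2, size=72)     # placeholder left/right labels

clf = SVC(kernel="rbf", gamma=0.1).fit(F_train, y_train)

def manual_decision(clf, x, gamma=0.1):
    """Evaluate Eq. (3): sum_i alpha_i * y_i * K(x_i, x) + b from the fitted model's attributes."""
    K = np.exp(-gamma * np.sum((clf.support_vectors_ - x) ** 2, axis=1))  # RBF kernel values
    return (clf.dual_coef_ @ K + clf.intercept_)[0]

x0 = F_train[0]
assert np.sign(manual_decision(clf, x0)) == np.sign(clf.decision_function([x0])[0])
```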
Deep neural networks (DNN)
The DNN trains neuron weights through repeated forward propagation, with multiple neurons arranged in different layers, and calibrates the weights through back propagation. The output of the entire network is therefore a cascade of nonlinear transformations of the input data, expressed as

$$\hat{y} = f_L\bigl(f_{L-1}(\cdots f_1(x; \theta_1)\cdots; \theta_{L-1}); \theta_L\bigr) \qquad (4)$$

where $L$ represents the number of layers of the neural network and $\theta = \{\theta_1, \ldots, \theta_L\}$ represents all parameters of the learning network.
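The paper does not specify the network architecture, so the following is only a placeholder sketch with an assumed two-hidden-layer multilayer perceptron (scikit-learn's MLPClassifier) evaluated with tenfold cross-validation; the feature matrix is random placeholder data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
F = rng.normal(size=(80, 24))        # placeholder feature matrix (80 trials)
y = rng.integers(0, 2, size=80)      # placeholder left/right labels

# hidden-layer sizes, activation, and solver settings are assumptions
dnn = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=2000, random_state=0)
acc_mi = cross_val_score(dnn, F, y, cv=10).mean()   # tenfold cross-validated accuracy
print(f"DNN ACC_MI = {acc_mi:.2%}")
```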
Evaluation indices
The classification accuracy of the MI (ACCMI) is used to evaluate the overall MI-BCI performance. It is the ratio of the trials that are classified correctly to the total number of trials, estimated with tenfold cross-validation:

$$\mathrm{ACC}_{\mathrm{MI}} = \frac{N_{\mathrm{correct}}}{N_{\mathrm{total}}} \times 100\% \qquad (5)$$

where $N_{\mathrm{correct}}$ indicates the number of trials classified correctly, and $N_{\mathrm{total}}$ indicates the total number of trials.
The accuracy of behavioral judgments (ACCbehavior) in the matching phase is used to evaluate the subject's memory of the action sequences. It is the ratio of the number of correct judgments to the total number of judgments:

$$\mathrm{ACC}_{\mathrm{behavior}} = \frac{N_{\mathrm{correct}}}{N_{\mathrm{correct}} + N_{\mathrm{wrong}} + N_{\mathrm{miss}}} \times 100\% \qquad (6)$$

where $N_{\mathrm{correct}}$ indicates the number of correct judgments, and $N_{\mathrm{wrong}}$ and $N_{\mathrm{miss}}$ represent the numbers of wrong and missed judgments, respectively.
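As a worked example of Eqs. (5) and (6) with illustrative counts (not the study's data):

```python
def acc_mi(n_correct, n_total):
    """Eq. (5): percentage of correctly classified MI trials."""
    return 100.0 * n_correct / n_total

def acc_behavior(n_correct, n_wrong, n_miss):
    """Eq. (6): percentage of correct judgments among all matching judgments."""
    return 100.0 * n_correct / (n_correct + n_wrong + n_miss)

print(acc_mi(67, 80))           # e.g. 67 of 80 MI trials correct -> 83.75
print(acc_behavior(20, 4, 1))   # e.g. 20 correct, 4 wrong, 1 missed -> 80.0
```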
Statistical analysis
In this study, the Shapiro–Wilk (S–W) test (p > 0.05) is applied to verify normality. For data that conform to a normal distribution, the paired t-test with Bonferroni correction is adopted for statistical analysis. For data that do not meet the normal distribution, the Friedman test is used to assess overall differences among groups, and the Wilcoxon signed-rank test with Bonferroni correction is used to further compare differences between pairs of groups. The significance level is 5%.
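Although the statistics were run in R, the same decision chain can be sketched with SciPy; the grouping of the per-subject values into the four conditions is assumed, and the simple p-value multiplication below is one common way to apply the Bonferroni correction:

```python
import numpy as np
from scipy import stats

def compare_conditions(groups, alpha=0.05):
    """groups: dict mapping condition name -> per-subject values, paired across conditions."""
    data = {k: np.asarray(v, dtype=float) for k, v in groups.items()}
    normal = all(stats.shapiro(v).pvalue > alpha for v in data.values())  # Shapiro-Wilk check
    keys = list(data)
    n_pairs = len(keys) * (len(keys) - 1) // 2
    if not normal:
        print("Friedman test:", stats.friedmanchisquare(*data.values()))
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            a, b = data[keys[i]], data[keys[j]]
            res = stats.ttest_rel(a, b) if normal else stats.wilcoxon(a, b)
            p_corr = min(1.0, res.pvalue * n_pairs)   # Bonferroni-corrected p-value
            print(f"{keys[i]} vs {keys[j]}: corrected p = {p_corr:.3f}")
```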
After completing each action sequence, each subject is asked a question about the paradigm: “Was this paradigm hard?” The question could be answered on a 1–5 Likert scale, indicating strong disagreement, moderate disagreement, neutrality, moderate agreement, or strong agreement.
Result
ERP analysis
To investigate whether the action sequence influences the subjects' EEG, Fig. 2a shows all subjects' ERPs derived from channel CZ in the matching phase. The moment when the image appears in the matching phase serves as 0 s for the ERP, and the EEG epochs of the 25 images are averaged for each action sequence. All four action sequences evoke significant P300 and N400 components in the parietal region. The C-P evokes larger P300 and N400 potentials in the parietal region than the S-P, S-N, and C-N (χ2 = 15.696, df = 3, P = 0.006) (χ2 = 19.157, df = 3, P = 0.025). The statistical analysis presented in Fig. 2c reveals that the P300 amplitude reaches 3.54 μV for the C-P, 1.65 μV (Z = − 2.967, p = 0.025) higher than that of the S-P (1.98 μV) and 1.85 μV (Z = 3.105, p = 0.011) higher than that of the C-N (1.69 μV). The P300–N400 peak difference reaches 7.84 μV for the C-P, 2.47 μV (Z = 4.105, p = 0.041) higher than that of the S-P (5.37 μV) and 3.45 μV (Z = − 4.235, p = 0.009) higher than that of the C-N (4.39 μV). The P300 and peak difference of the C-N and S-N are not significantly different (Z = − 0.518, p = 0.913) (Z = 0.701, p = 0.552). This indicates that the C-P group has the highest mean amplitude, S-P the second highest, followed by C-N and S-N.
Fig. 2.
The ERP analysis of subjects in S-P, S-N, C-P, and C-N. a The superimposed ERPs of the matching phase for all subjects at channel CZ. b The brain topography maps at the P300 and N400 moments. c The results of the statistical analysis of the subjects' ERPs. Figure 2 compares the effects of the action sequences on the subjects' ERP in the time window from −200 ms to 1000 ms. The main activated brain regions and the EEG amplitude at channel CZ are strongly influenced by the action sequence
The P300 latency of the S-P is 184 ms (Z = − 2.691, p = 0.043), which differs significantly from that of the C-P (210 ms), while there is no significant difference among the S-P, the S-N, and the C-N (Z = 0.679, p = 0.903) (Z = − 1.449, p = 0.884). Thus, the action sequence significantly influences the activation of the ERP in the parietal region (χ2 = 12.086, df = 3, P = 0.017).
The brain topographic maps visualize the spatial differences of the grand-average ERP amplitudes at the P300 and N400 moments, as shown in Fig. 2b. The C-P elicits a stronger positive activation at the P300 moment, and in combination with the ERP waveform this positive change lasts about 50 ms. Over time, this positive activation gradually changes to negative EEG activation (the N400 moment). Compared with the S-P, the S-N, and the C-N, the EEG activation in the C-P group is more intense and more widespread (χ2 = 24.186, df = 3, P = 0.009); in addition, the differences are mainly concentrated in the frontal and parietal regions and are larger there than in the occipital region. The brain topographic maps suggest that the action sequences might affect the processing of visual information in the frontal and parietal regions of the brain.
ERD/ERS analysis
To illustrate the impact of the action sequence on the MI in the reinforcement phase, Fig. 3 uses the PSD and heat maps to evaluate the ERD/ERS. The PSD reflects signal power as a function of frequency and can be used to show the ERD/ERS induced by the MI. Figure 3b presents the averaged PSD across the 7 subjects at C3 and C4. All four action sequences can activate ERD/ERS, since the PSD at C4 or C3 is lower than that at the contralateral channel when the subject imagines the corresponding hand posture. The C-P evokes the largest ERD/ERS difference, mainly in 8–12 Hz, followed by the S-P (t = − 6.133, df = 6, P = 0.001), the S-N, and the C-N (t = 2.154, df = 6, P = 0.015), while no significant difference between S-N and C-N is found (t = − 0.199, df = 6, P = 0.849). The AS-ODMT paradigm is thus demonstrated to elicit MI activation, so it is reliable to apply the AS-ODMT to explore the influence of action sequences on the MI.
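A minimal sketch of the PSD comparison at C3 and C4 (Welch's method is an assumption; the paper does not state the estimator). The trial array below is a random placeholder standing in for the band-passed reinforcement-phase segments:

```python
import numpy as np
from scipy.signal import welch

def mu_band_psd(segments, ch_index, fs=250, band=(8, 12)):
    """Trial-averaged Welch PSD of one channel, restricted to the mu band."""
    f, pxx = welch(segments[:, ch_index, :], fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return f[mask], pxx[:, mask].mean(axis=0)

left_trials = np.random.default_rng(0).normal(size=(40, 21, 500))  # placeholder left-hand trials
f, psd_c3 = mu_band_psd(left_trials, ch_index=8)    # C3 (index follows the 21-channel list above)
_, psd_c4 = mu_band_psd(left_trials, ch_index=12)   # C4; a lower contralateral PSD indicates ERD
```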
Fig. 3.
The PSD curves and heat maps of S-P, S-N, C-P, and C-N in the reinforcement phase. a The heat maps for the selected ten channels. The blue/red color indicates the occurrence of the ERD/ERS phenomena. The horizontal axis is the MI trial number (1–40 for left-hand imagery and 41–80 for right-hand imagery) and the vertical axis is the channel name. b The average PSD curves for C3 and C4 (blue: right-hand, black: left-hand). Lines in the map have been smoothed
To further analyze the differences in MI due to the action sequences, Fig. 3a depicts the average heat maps of ten channels (including C3 and C4) in the sensorimotor region. The horizontal axis is the MI trial number (1–40 for left-hand imagery and 41–80 for right-hand imagery) and the vertical axis is the channel name. The colors blue and red indicate the occurrence of ERD and ERS, respectively. When subjects imagine a certain action in the C-P, the ipsilateral brain regions show a more distinct red color, with blue occupying more of the contralateral brain regions, which indicates that the C-P is able to activate more pronounced ERD/ERS. The heat map of the S-P shows the same pattern, but the degree of activation is overall lower than in the C-P. The S-N and C-N only show ERS activation on the contralateral side, and ipsilateral activation is not obvious. Furthermore, the heat maps and PSD of the four action sequences show similar between-group differences (t = 4.007, df = 2, P = 0.013). This indicates that there are differences in the way the brain's sensorimotor regions process information when imagining different action sequences.
Classification accuracy of the MI-BCI
This study compares the accuracies and feature distributions to explore the influence of the action sequence on the MI-BCI. The action sequence has a significant effect on the ACCMI (χ2 = 11.914, df = 3, P = 0.008), as shown in Fig. 4a. The average ACCMI reaches 83.43% for the C-P, 7.99% (Z = − 2.070, p = 0.049) higher than that for the S-P (75.46%) and 10.04% (Z = − 3.312, p = 0.006) higher than that for the S-N. There is no significant difference between the ACCMI of the C-N and the S-N (Z = − 0.828, p = 1.000). Among all 7 subjects, S6 achieves the best classification performance: the ACCMI reaches 90% for the C-P, 3.25% higher than that for the S-P and 15% higher than that for the C-N. These results are consistent with the ERD/ERS results: the C-P, with the highest complexity, has the best MI performance among the four action sequences. This demonstrates that the action sequence can significantly affect the classification accuracy of the MI and thus the performance of the MI-BCI.
Fig. 4.

Offline classification accuracy across all subjects using tenfold cross-validation. a The classification accuracy of the S-P, S-N, C-P and C-N. b The feature distribution of S5. The red cross represents the left-hand and the blue circle represents the right-hand
Figure 4b visualizes the distribution of the left- and right-hand samples after feature extraction. The interval between the two types of sample points is larger in the C-P than in the S-P, C-N, and S-N, indicating that the action sequence significantly changes the distribution of the subjects' MI features. Moreover, the C-P has the highest ACCMI, which indicates that a well-separated sample point distribution will potentially improve the accuracy of the classifier.
We add the FBCSP and DNN algorithms to validate the influence of the action sequence on subjects' ACCMI obtained with the CSP-SVM algorithm. The ACCMI of the four action sequences under the four algorithm combinations is shown in Fig. 5. All the algorithms show a similar trend: the C-P group always has the highest ACCMI, followed by S-P and C-N, and the lowest is S-N. Taking the FBCSP-DNN algorithm with the most significant difference as an example, the ACCMI reaches 83.43% for the C-P, 7.00% (Z = − 4.321, p = 0.017) higher than that for the S-P (77.46%) and 15.00% (Z = 5.081, p = 0.033) higher than that for the S-N (69.45%). The accuracy results of the FBCSP-DNN algorithm match those of the CSP-SVM. All four algorithms therefore effectively demonstrate that the C-P group has the highest MI classification accuracy, which supports the reliability of the conclusion that the action sequence has a significant influence on the subjects' ACCMI (χ2 = 15.014, df = 3, P = 0.001).
Fig. 5.

Bar plot comparing the BCI performance according to four action sequences and algorithms. (* p < 0.05, ** p < 0.01)
The correlation between ERP and MI
Figure 6 shows the correlation between the ERP amplitude and the MI classification accuracy. Figure 6a presents the temporal heat map of the MI, Fig. 6b depicts the correlation between the ERP amplitude and ACCMI of subjects S1–S7, and Fig. 6c shows the temporal waveforms of the ERP. The horizontal axis of the correlation plot is the ERP amplitude and the vertical axis is the ACCMI. The squares, circles, and triangles in Fig. 6b represent the subjects' P300 mean amplitude, N400 mean amplitude, and P300–N400 peak difference, respectively. The straight line is a linear fit with a Pearson test across the seven subjects. The four action sequences are arranged according to ACCMI from high to low. As the ACCMI decreases, the ERP amplitude decreases significantly, and the correlation coefficient between the ERP amplitude and ACCMI also decreases significantly. This indicates that the action sequence influences both the MI and the ERP (χ2 = 16.894, df = 3, P = 0.048).
Fig. 6.
The correlation between the ERP and the MI. a The temporal heat map of the MI. b The correlation plot between the ERP amplitude and ACCMI of subjects S1–S7. c The temporal waveforms of the ERPs. The four action sequences are arranged according to ACCMI from high to low. There is a positive correlation between the MI and the ERP: as the ERP amplitude decreases, the classification accuracy of the MI decreases
Figure 6 shows a positive correlation between the ERP amplitude and ACCMI. The overall trend is that the higher the ERP evoked in the SMA, the higher the MI classification accuracy. In the S-P and C-P groups, the ERP amplitude is higher and the heat map shows an obvious change when the subject starts imagining at the 3rd second. At the same time, the correlation between the ERP amplitude and the ACCMI shows a strong linear relationship: the linear correlation coefficients (r) for the S-P are 0.78 (N = 7, P = 0.007), 0.39 (N = 7, P = 0.015), and 0.63 (N = 7, P = 0.000), and for the C-P they are 0.79 (N = 7, P = 0.008), 0.65 (N = 7, P = 0.044), and 0.78 (N = 7, P = 0.000). In the C-N group, by contrast, the ERP amplitude is lower and the ERP activation becomes weak in the temporal and spatial domains; the linearity of the correlation between the ERP amplitude and the ACCMI is substantially weakened: r = 0.23 (N = 7, P = 0.059), r = 0.49 (N = 7, P = 0.035), and r = 0.36 (N = 7, P = 0.044). This indicates that the level of linearity across action sequences depends on the ERP amplitude.
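The linear fits in Fig. 6b correspond to Pearson correlations across the seven subjects; a minimal sketch with placeholder per-subject values (not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

p300_amp = np.array([3.1, 2.7, 4.0, 3.6, 2.9, 4.4, 3.2])       # placeholder P300 mean amplitudes (uV)
acc_mi = np.array([81.0, 76.0, 88.0, 84.0, 79.0, 90.0, 82.0])   # placeholder ACC_MI values (%)

r, p = pearsonr(p300_amp, acc_mi)                   # correlation coefficient and p-value
slope, intercept = np.polyfit(p300_amp, acc_mi, 1)  # the straight-line fit drawn in Fig. 6b
print(f"r = {r:.2f}, p = {p:.3f}")
```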
Behavioral analyses
Table 2 shows the extent of the subjects' memory of the action sequences during the matching phase. The highest ACCbehavior reaches 86.86% for the S-N, 7.43% (Z = − 13.081, p = 0.003) higher than that for the S-P (79.43%) and 14.29% (Z = 7.785, p = 0.013) higher than that for the C-P (72.57%). This indicates that most subjects show significant differences in memory performance across the four action sequences: the highest for S-N, the second for S-P, and the lowest for C-P and C-N (χ2 = 11.894, df = 3, P = 0.007).
Table 2.
Behavioral accuracy of all 7 subjects
| | S1 | S2 | S3 | S4 | S5 | S6 | S7 | Ave (%) | SD |
|---|---|---|---|---|---|---|---|---|---|
| S-P | 80 | 80 | 72 | 84 | 88 | 84 | 68 | 79.43 | 7.09 |
| S-N | 84 | 96 | 80 | 100 | 96 | 92 | 60 | 86.86 | 13.80 |
| C-P | 64 | 84 | 52 | 92 | 80 | 84 | 52 | 72.57 | 16.40 |
| C-N | 63 | 65 | 70 | 96 | 78 | 74 | 60 | 70.86 | 17.26 |
The 7 subjects' subjective difficulty scores are shown in Table 3. A higher score indicates that the subject regards the MI with this action sequence as more difficult and more fatiguing. The score of the S-N reaches 4.00, 27.39% (Z = − 2.226, p = 0.047) higher than that for the C-P (mean score = 3.14) and 57.90% (Z = 3.081, p = 0.033) higher than that for the S-P (mean score = 2.28). Almost all subjects agreed that the complex imagery tasks of the C-P increased the difficulty of the imagery process, and subjects showed more fatigue at the end of the imagery, which greatly reduced their comfort during the experiment. In addition, subjects in the S-N even indicated that they could not complete the MI within the time limit, which is consistent with the high score of the S-N. This implies that both the negative sequence and the difficult action reduce the imagery comfort level to some extent and affect the subjects' experimental experience. In addition, most subjects could not adapt to MI in negative order.
Table 3.
Likert scale
| | S1 | S2 | S3 | S4 | S5 | S6 | S7 | Ave | SD |
|---|---|---|---|---|---|---|---|---|---|
| S-P | 3 | 3 | 2 | 2 | 2 | 1 | 3 | 2.28 | 0.75 |
| S-N | 4 | 3 | 4 | 5 | 5 | 3 | 4 | 4.00 | 0.81 |
| C-P | 3 | 3 | 3 | 4 | 2 | 4 | 3 | 3.14 | 0.69 |
| C-N | 5 | 4 | 4 | 4 | 5 | 4 | 3 | 4.14 | 0.54 |
Discussion
The AS-ODMT paradigm
This study designs the fusion paradigm AS-ODMT to improve the ERD/S and classification performance based on four action sequences. The results described in this paper demonstrate that both the complex task and the positive sequence can significantly improve ERD/S activation and yield higher classification accuracy compared to the traditional MI-BCI paradigm. The AS-ODMT paradigm proposed in this study provides a larger improvement in classification results than the hybrid paradigms combining only visual guidance or AO reported in the current literature (Shu et al. 2019). Furthermore, for those who are insufficiently proficient in operating the MI-BCI, the three phases of the AS-ODMT paradigm allow subjects to reinforce MI through observation, memorization, and matching of sequence information, which provides the potential for improving MI classification accuracy. This combination is expected to expand the applications of the MI-BCI.
Influence of the action sequence on ERP
It can be seen that the action sequence has a significant influence on the ERP. The ERP result in this study is consistent with our previous study (Li et al. 2020), in which the visual stimulus content influenced the ERP amplitude and latency. Subjects' psychological expectations might be one reason for this difference, and this assumption is supported by our study: the comparison of the ERP results between the S-P and C-P groups reveals that the experimental paradigm incorporating complex actions is able to evoke a higher P300 amplitude. The amount of subjects' mental resources devoted to observational memory may be influenced by the specific content of the visual stimuli (Balconi and Pozzoli 2008), and the addition of complex actions allows subjects to increase their mental expectations within an acceptable range, so the C-P group has a higher ERP amplitude.
The influence of action sequences on subjects' ERP performance is investigated in this study, so the measurement of the ERP is important for the analysis of the results. Filtering can attenuate noisy signal components and improve the SNR, thereby enhancing the visibility and reliability of the ERP components. The third-order zero-phase Butterworth filter used in this study avoids or minimizes signal distortion and filtering artifacts (Widmann et al. 2015). Therefore, the plotted ERP waveforms are reliable and stable, so that the effects of different action sequences on the ERP waveforms can be explored more accurately. At present, ERP quantification measures mainly include the peak amplitude, the mean amplitude, and the adaptive mean. Among them, the mean amplitude can minimize the interference of noise in the ERP data and is considered the optimal ERP amplitude quantification method, whereas the peak amplitude exhibits a large noise error relative to the true signal (Clayson et al. 2013). This study analyzed the ERP amplitude (P300) of the four action sequences using these two methods, as shown in Table 4; a minimal computational sketch of both quantification methods follows the table. There is a significant difference between the quantification results of the two methods (χ2 = 10.157, df = 2, P = 0.035), with the peak amplitude averaging 0.42 μV (Z = − 0.518, p = 0.013) higher than the mean amplitude, and the S-N group shows the largest difference of 0.58 μV (Z = − 0.298, p = 0.043; Z = 0.301, p = 0.052). The main reason is that the peak amplitude only selects the subject's maximum value at the P300 point in each action sequence, and the noise superimposed on the true signal further exaggerates the amplitude of the peaks. In contrast, the mean amplitude quantifies the average potential between two fixed time points, effectively averaging out the random noise in the EEG background and yielding a more accurate ERP mean (Luck 2013). Thus the mean amplitude in our study achieves a lower but more accurate ERP amplitude quantification result. In addition, the C-P group achieved the highest amplitude quantification results of 3.97 μV and 3.54 μV under both methods, followed by S-P (2.11 μV and 1.98 μV) and S-N (1.93 μV and 1.35 μV) (Z = − 0.309, p = 0.003; Z = 0.411, p = 0.032). In summary, both methods are effective in demonstrating that the C-P group has the highest ERP amplitude, which supports the reliability of this study's conclusion that the action sequence has a significant influence on the ERP. Moreover, statistical analysis on the topographic maps could reduce the expression of redundant information about the activation of brain regions under different action sequences. This approach could help to provide more information about the significance and localization of the influences of the action sequence in future work.
Table 4.
The peak amplitude and the mean amplitude of the P300
| | Peak amplitude (μV) | Mean amplitude (μV) |
|---|---|---|
| S-P | 2.21 | 1.98 |
| S-N | 1.83 | 1.35 |
| C-P | 3.97 | 3.54 |
| C-N | 2.07 | 1.92 |
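As referenced above, a minimal sketch of the two quantification methods applied to a single trial-averaged waveform (assuming a waveform at 1000 Hz with a 0.2 s baseline, as produced by the earlier epoching sketch; the window limits follow the P300 definition above):

```python
import numpy as np

def p300_amplitudes(erp, fs=1000, pre=0.2, window=(0.18, 0.25)):
    """Return (peak, mean) amplitude of the P300 window for one averaged ERP waveform."""
    i0, i1 = int((pre + window[0]) * fs), int((pre + window[1]) * fs)
    segment = erp[i0:i1]
    return segment.max(), segment.mean()   # peak amplitude vs. mean amplitude
```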
Influence of the action sequence on ERD/S
Furthermore, this study shows that both the complex action and the positive sequence can significantly improve ERD/S activation and produce higher MI classification accuracy. There may be two reasons for this. First, at high complexity, subjects need higher concentration to complete the MI. According to the subjects' self-reports, most of them pay more attention when imagining the C-P within an acceptable range, and most subjects indicate that they are unable to concentrate enough to complete the imagery in the negative sequences. Second, in the high-complexity mode, the tempo of the motor imagery task is accelerated, requiring a greater degree of neural activation and thus recruiting more neural resources for the motor imagery task. Previous studies using functional magnetic resonance imaging and functional near-infrared spectroscopy have also found that complex mental imagery tasks can produce greater hemodynamic changes in the brain (Holper and Wolf 2011; Vasilyev et al. 2017). In addition, Fig. 5 shows that all the algorithms exhibit a similar trend of ACCMI: the C-P group always has the highest ACCMI, followed by S-P, and the lowest are C-N and S-N. This indicates that classifier differences do not significantly influence the between-group comparisons of the four action sequences. However, more advanced classifiers can improve the accuracy, which can help to further expand the application of the MI-BCI system. We will consider classifier variability in subjects' MI performance in future work.
This study explores the differences in memory levels and MI performance across action sequence tasks through the AS-ODMT paradigm. The results show that most subjects have the lowest level of memory for the complex action sequences of the C-P, but achieve the best MI-BCI performance in the C-P. However, the influence of visual memory levels on the MI-BCI system is not directly represented in this study. Researching the correlation between the two will help to optimize the MI-BCI training paradigm through participant screening and cognitive skills training, and we are interested in exploring this issue in future studies. Moreover, a subject-dependent classifier is applied to yield the accuracy (Nath et al. 2020). We will consider using a subject-independent classifier in future work to reduce the number of experimental trials, thereby using more subject data to support the rigor of our conclusions.
Furthermore, the statistical results also show that the combination of the two factors does not show a simple additive influence over a single factor. Multifactorial tasks generally produce more complicated neural physiological responses, which would contribute to subjects' MI performance in some cases and cause confusion in subjects' MI processes in other cases. We find that the combination of the two factors (C-P) could improve the classification accuracy compared to simple action and positive sequences alone. However, we also find that when negative order sequences are added to the MI tasks, most subjects did not show differences in MI performance between complex and simple actions. Given this combined influence of multiple factors on subjects' MI performance, we consider that the best combination of the action sequence should be optimized individually.
However, blindly increasing the complexity of the imagery task may cause more fatigue and discomfort to the participants and decrease their MI performance. Therefore, we will further investigate the optimal level of complexity that augments activation without compromising participant comfort, to facilitate the use of action sequences in the MI-BCI system.
The correlation between ERP and MI
Many studies have demonstrated the association between the ERP and the MI in terms of training mechanisms (Ono et al. 2018). The essence of the MI is the recognition and recall of behaviors already formed in memory. During the MI, the brain frequently recalls, retrieves, and re-outputs the information of the actions. In particular, for motor-related regions such as the SMA, repeated MI activation stimulates this region more frequently; the more frequently it is stimulated, the higher the ERP amplitude evoked when recognizing similar behaviors, which in turn reinforces the MI of those behaviors. Combined with the results of this study, this shows a significant correlation between the subjects' ERP activation sensitivity in the SMA and the subjects' ability to activate the MI.
Conclusion
This study proposes the AS-ODMT paradigm to demonstrate the influence of the action sequence on the MI using four action sequences, and to explore the correlation between the ERP and the MI. The paradigm uses visual materials related to the ERP and the MI to recall and memorize the motor action sequences. The action sequence significantly affects the ERP amplitudes and the ERD/ERS activation, which influences the MI-BCI performance. Complex actions and positive sequences yield higher ERP amplitudes and MI classification accuracy. The ERP amplitude is positively related to the MI classification accuracy, indicating that the ERP can act as a complementary process for manifesting the MI. This study provides insight into the action sequence factors affecting the MI and BCI performance, which has the potential to further improve BCI systems.
Acknowledgements
This work is supported in part by the Natural Science Foundation of Hebei Province (Grant Nos. F2021202003), the Technology Nova of Hebei University of Technology (Grant Nos. JBKYXX2007), the National Natural Science Foundation of China (Grant Nos. 51977060 and 62176090), and the Key Research and Development Foundation of Hebei (Grant Nos. 19277752D and 21372002D), and the STI 2030-major projects (Grant Nos. 2022ZD0208900), and the Shanghai Municipal Science and Technology Major Project (Grant No. 2021SHZDZX), and the Program of Introducing Talents of Discipline to Universities through the 111 Project (Grant Nos. B17017), and the Shu Guang Project (Grant Nos. 19SG25), and the Ministry of Education and Science of the Russian Federation (Grant Nos. 14.756.31.0001), and the Polish National Science Center (Grant Nos. UMO-2016/20/W/NZ4/00354), and the National Government Guided Special Funds for Local Science and Technology Development (Shenzhen, China) (Grant Nos. 2021Szvup043), and the Project of Jiangsu Province Science and Technology Plan Special Fund in 2022 (Grant Nos. BE2022064-1).
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Achanccaray D, Izumi S-I, Hayashibe M (2021) Visual-electrotactile stimulation feedback to improve immersive brain-computer interface based on hand motor imagery. Comput Intell Neurosci 2021:1–13 10.1155/2021/8832686 [DOI] [Google Scholar]
- Ahn M, Jun SC (2015) Performance variation in motor imagery brain–computer interface: a brief review. J Neurosci Methods 243:103–110 10.1016/j.jneumeth.2015.01.033 [DOI] [PubMed] [Google Scholar]
- Ahn S, Ahn M, Cho H, Chan Jun S (2014) Achieving a hybrid brain–computer interface with tactile selective attention and motor imagery. J Neural Eng 11:066004 10.1088/1741-2560/11/6/066004 [DOI] [PubMed] [Google Scholar]
- Ang KK, Chin ZY, Zhang H, Guan C (2012) Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b. Front Neurosci 6:39 10.3389/fnins.2012.00039 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ang KK, Chin ZY, Zhang H, Guan C (2009) Robust filter bank common spatial pattern (RFBCSP) in motor-imagery-based brain-computer interface. In: 2009 annual international conference of the IEEE engineering in medicine and biology society. IEEE, Minneapolis, MN, pp 578–581 [DOI] [PubMed]
- Balconi M, Pozzoli U (2008) Event-related oscillations (ERO) and event-related potentials (ERP) in emotional face recognition: a regression analysis. Int J Neurosci 118:1412–1424 10.1080/00207450601047119 [DOI] [PubMed] [Google Scholar]
- Bian Y, Qi H, Zhao L et al (2018) Improvements in event-related desynchronization and classification performance of motor imagery using instructive dynamic guidance and complex tasks. Comput Biol Med 96:266–273 10.1016/j.compbiomed.2018.03.018 [DOI] [PubMed] [Google Scholar]
- Boiten F, Sergeant J, Geuze R (1992) Event-related desynchronization: the effects of energetic and computational demands. Electroencephalogr Clin Neurophys 82:302–309 10.1016/0013-4694(92)90110-4 [DOI] [PubMed] [Google Scholar]
- Bufalari I, Sforza A, Cesari P et al (2010) Motor imagery beyond the joint limits: a transcranial magnetic stimulation study. Biol Psychol 85:283–290 10.1016/j.biopsycho.2010.07.015 [DOI] [PubMed] [Google Scholar]
- Bunno Y, Suzuki T (2020) Motor imagery while viewing self-finger movements facilitates the excitability of spinal motor neurons. Exp Brain Res 238:2077–2086 10.1007/s00221-020-05870-3 [DOI] [PubMed] [Google Scholar]
- Clayson PE, Baldwin SA, Larson MJ (2012) How does noise affect amplitude and latency measurement of event-related potentials (ERPs)? A methodological critique and simulation study. Psychophysiology 50(2):174–186 10.1111/psyp.12001 [DOI] [PubMed] [Google Scholar]
- Dezfouli A, Balleine BW (2012) Habits, action sequences and reinforcement learning: habits and action sequences. Eur J Neurosci 35:1036–1051 10.1111/j.1460-9568.2012.08050.x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Garr E (2019) Contributions of the basal ganglia to action sequence learning and performance. Neurosci Biobehav Rev 107:279–295 10.1016/j.neubiorev.2019.09.017 [DOI] [PubMed] [Google Scholar]
- Grezes J, Decety J (2001) Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis. Hum Brain Mapp 12:1–19 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Guillot A, Collet C, Nguyen VA et al (2009) Brain activity during visual versus kinesthetic imagery: an fMRI study. Hum Brain Mapp 30:2157–2172 10.1002/hbm.20658 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Halder S, Leinfelder T, Schulz SM, Kübler A (2019) Neural mechanisms of training an auditory event-related potential task in a brain–computer interface context. Hum Brain Mapp 40:2399–2412 10.1002/hbm.24531 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Höhne J, Tangermann M (2014) Towards user-friendly spelling with an auditory brain-computer interface: the charstreamer paradigm. PLoS ONE 9:e98322 10.1371/journal.pone.0098322 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Holper L, Wolf M (2011) Single-trial classification of motor imagery differing in task complexity: a functional near-infrared spectroscopy study. J NeuroEng Rehabil 8:34 10.1186/1743-0003-8-34 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jin J, Daly I, Zhang Y et al (2014) An optimized ERP brain–computer interface based on facial expression changes. J Neural Eng 11:036004 10.1088/1741-2560/11/3/036004 [DOI] [PubMed] [Google Scholar]
- Ju J, Feleke AG, Luo L, Fan X (2022) Recognition of drivers’ hard and soft braking intentions based on hybrid brain-computer interfaces. Cyborg Bionic Syst 2022:1–13 10.34133/2022/9847652 [DOI] [Google Scholar]
- Kaneko N, Yokoyama H, Masugi Y et al (2021) Phase dependent modulation of cortical activity during action observation and motor imagery of walking: an EEG study. Neuroimage 225:117486 10.1016/j.neuroimage.2020.117486 [DOI] [PubMed] [Google Scholar]
- Li W, Duan F, Sheng S et al (2018) A human-vehicle collaborative simulated driving system based on hybrid brain-computer interfaces and computer vision. IEEE Trans Cogn Dev Syst 10:810–822 10.1109/TCDS.2017.2766258 [DOI] [Google Scholar]
- Li M, Yang G, Xu G (2020) The effect of the graphic structures of humanoid robot on N200 and P300 potentials. IEEE Trans Neural Syst Rehabil Eng 28:1944–1954 10.1109/TNSRE.2020.3010250 [DOI] [PubMed] [Google Scholar]
- Li M, Zuo H, Zhou H, Xu G, Qi E (2023) A study of action difference on motor imagery based on delayed matching posture task. J Neural Eng. 10.1088/1741-2552/acb386 10.1088/1741-2552/acb386 [DOI] [PubMed] [Google Scholar]
- Liu C, Zhao H, Li C, Wang H (2010) Classification of ECoG motor imagery tasks based on CSP and SVM. In: 2010 3rd international conference on biomedical engineering and informatics. IEEE, Yantai, China, pp 804–807
- Lotte F, Guan C (2011) Regularizing common spatial patterns to improve BCI designs: unified theory and new algorithms. IEEE Trans Biomed Eng 58:355–362 10.1109/TBME.2010.2082539 [DOI] [PubMed] [Google Scholar]
- Marchesotti S, Martuzzi R, Schurger A et al (2017) Cortical and subcortical mechanisms of brain-machine interfaces: neural correlates of BMI-control. Hum Brain Mapp 38:2971–2989 10.1002/hbm.23566 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Monaco S, Malfatti G, Culham JC et al (2020) Decoding motor imagery and action planning in the early visual cortex: overlapping but distinct neural mechanisms. Neuroimage 218:116981 10.1016/j.neuroimage.2020.116981 [DOI] [PubMed] [Google Scholar]
- Müller K-R, Krauledat M, Dornhege G et al (2004) Machine learning techniques for brain-computer interfaces. Biomed Tech 49 [Google Scholar]
- Ono Y, Wada K, Kurata M, Seki N (2018) Enhancement of motor-imagery ability via combined action observation and motor-imagery training with proprioceptive neurofeedback. Neuropsychologia 114:134–142 10.1016/j.neuropsychologia.2018.04.016 [DOI] [PubMed] [Google Scholar]
- Qiu Z, Allison BZ, Jin J et al (2017) Optimized motor imagery paradigm based on imagining Chinese characters writing movement. IEEE Trans Neural Syst Rehabil Eng 25:1009–1017 10.1109/TNSRE.2017.2655542 [DOI] [PubMed] [Google Scholar]
- Romano Smith S, Wood G, Coyles G et al (2019) The effect of action observation and motor imagery combinations on upper limb kinematics and EMG during dart-throwing. Scand J Med Sci Sports 29:1917–1929 10.1111/sms.13534 [DOI] [PubMed] [Google Scholar]
- Romano-Smith S, Wood G, Wright DJ, Wakefield CJ (2018) Simultaneous and alternate action observation and motor imagery combinations improve aiming performance. Psychol Sport Exerc 38:100–106 10.1016/j.psychsport.2018.06.003 [DOI] [Google Scholar]
- Shu X, Chen S, Meng J et al (2019) Tactile stimulation improves sensorimotor rhythm-based BCI performance in stroke patients. IEEE Trans Biomed Eng 66:1987–1995 10.1109/TBME.2018.2882075 [DOI] [PubMed] [Google Scholar]
- Vasilyev A, Liburkina S, Yakovlev L et al (2017) Assessing motor imagery in brain-computer interface training: psychological and neurophysiological correlates. Neuropsychologia 97:56–65 10.1016/j.neuropsychologia.2017.02.005 [DOI] [PubMed] [Google Scholar]
- Wang M, Daly I, Allison BZ et al (2015) A new hybrid BCI paradigm based on P300 and SSVEP. J Neurosci Methods 244:16–25 10.1016/j.jneumeth.2014.06.003 [DOI] [PubMed] [Google Scholar]
- Widmann A, Schröger E, Maess B (2015) Digital filter design for electrophysiological data: a practical approach. J Neurosci Methods 250:34–46 10.1016/j.jneumeth.2014.08.002 [DOI] [PubMed] [Google Scholar]
- Xu L, Xu M, Jung T-P, Ming D (2021) Review of brain encoding and decoding mechanisms for EEG-based brain–computer interface. Cogn Neurodyn 15:569–584 10.1007/s11571-021-09676-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhang R, Li Y, Yan Y et al (2016) Control of a wheelchair in an indoor environment based on a brain-computer interface and automated navigation. IEEE Trans Neural Syst Rehabil Eng 24:128–139 10.1109/TNSRE.2015.2439298 [DOI] [PubMed] [Google Scholar]
- Zhang X, Hou W, Wu X et al (2021) A novel online action observation-based brain-computer interface that enhances event-related desynchronization. IEEE Trans Neural Syst Rehabil Eng 29:2605–2614 10.1109/TNSRE.2021.3133853 [DOI] [PubMed] [Google Scholar]
- Zhu Y, Li C, Jin H, Sun L (2021) Classifying motion intention of step length and synchronous walking speed by functional near-infrared spectroscopy. Cyborg Bionic Syst 2021:1–11 10.34133/2021/9821787 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zich C, Debener S, Kranczioch C et al (2015) Real-time EEG feedback during simultaneous EEG–fMRI identifies the cortical signature of motor imagery. Neuroimage 114:438–447 10.1016/j.neuroimage.2015.04.020 [DOI] [PubMed] [Google Scholar]
- Zuo C, Jin J, Yin E et al (2020) Novel hybrid brain–computer interface system based on motor imagery and P300. Cogn Neurodyn 14:253–265 10.1007/s11571-019-09560-x [DOI] [PMC free article] [PubMed] [Google Scholar]




