Cognitive Neurodynamics
. 2022 Jan 24;16(5):1123–1133. doi: 10.1007/s11571-021-09779-7

A multi-modal brain–computer interface based on threshold discrimination and its application in wheelchair control

Enzeng Dong 1, Haoran Zhang 1, Lin Zhu 2, Shengzhi Du 3, Jigang Tong 1
PMCID: PMC9508306  PMID: 36237403

Abstract

In this study, we propose a novel multi-modal brain–computer interface (BCI) system based on threshold discrimination, which is used for the first time to distinguish between SSVEP and MI potentials. The system combines these two heterogeneous signals to increase the number of control commands and to improve the performance of asynchronous control of external devices. As an example application, an electric wheelchair is controlled. The user can continuously steer the wheelchair left or right through motor imagery (MI) of left/right-hand movement, and can generate another six wheelchair commands by focusing on the SSVEP stimulation panel. Ten subjects participated in an MI training session, and eight of them completed a mobile obstacle-avoidance experiment in a complex environment that required high control accuracy for successful manipulation. Compared with single-modal BCI-controlled wheelchair systems, the results demonstrate that the proposed multi-modal method is effective, providing more satisfactory control accuracy, and they show the potential of BCI-controlled systems to be applied in complex daily tasks.

Keywords: Brain–computer interface (BCI), Threshold discrimination, Multi-modal EEG signals, Motor imagery (MI), Steady-state visual evoked potential (SSVEP), BCI-controlled wheelchair

Introduction

A brain–computer interface (BCI) directly transmits instructions from the human brain to designated machine terminals by recognizing the electrical activity of the nervous system. This technology provides humans with a communication channel that requires no physical contact with the controlled objects, which is more efficient and convenient for assistive systems, and it shows great innovative significance and application value in the field of human–robot communication. BCIs have been widely used in fields such as rehabilitation, disaster relief, entertainment, and improving the quality of life for disabled people (Fouad et al. 2015; Mak and Wolpaw 2009; Lebedev and Nicolelis 2006).

In the field of rehabilitation engineering, electroencephalogram (EEG)-based BCIs for wheelchair control have attracted great attention because of their convenience, non-invasiveness, and low cost (Rebsamen et al. 2010; Wang et al. 2014; Kim et al. 2018; Li et al. 2013). Common modalities used in EEG-based BCIs include steady-state visual evoked potentials (SSVEP) (Cheng et al. 2002; Franois et al. 2010), event-related potentials (ERPs) (Fazel-Reza et al. 2012; Jin et al. 2017), and the event-related desynchronization/synchronization (ERD/ERS) potentials produced by motor imagery (MI) (Lafleur et al. 2013; Pfurtscheller and Pfurtscheller 2001). Several purely MI-based and SSVEP-based BCIs have been developed to realize basic control of external devices (Dong et al. 2018, 2017; Gao et al. 2019). Although SSVEP-based BCIs have a higher information transfer rate (ITR) and can provide more classification commands, they deliver only discrete commands. MI-based BCIs can generate nearly continuous output in real time but suffer from the limited number of distinguishable MI tasks (Ma et al. 2017; Ko et al. 2019).

An asynchronous SSVEP-based BCI for wheelchair control was proposed by Zhang et al. (2020). Eight healthy subjects controlled the wheelchair online to complete simple tasks with an average control accuracy of 93%; allowing continuous turning angles between 0° and 90° was identified as a possible further improvement.

To overcome the limitations of single-modal paradigms, many multi-modal BCIs have been reported in recent years (Xu et al. 2021, 2020; Han et al. 2020; Zuo et al. 2020; Yan and Xu 2020; Lamti et al. 2019; Lee et al. 2018). The combination of BCIs for the control of an application is often called a "hybrid" or "multi-modal" BCI: a system composed essentially of two or more BCIs that are operated sequentially or simultaneously. Multi-modal BCIs combine EEG signals of different modalities and detect at least two brain patterns, simultaneously or sequentially, to generate control commands (Li et al. 2013; Cao et al. 2014; Huang et al. 2019; Pfurtscheller et al. 2010; Horki et al. 2011; Duan et al. 2019). For example, a multi-modal BCI that simultaneously combines P300 and SSVEP was proposed to improve the performance of asynchronous control (Li et al. 2013). That system detected the control/idle states and recognized the target by determining whether both P300 and SSVEP occurred in the same group of buttons, which was used to produce "go/stop" commands in wheelchair control (Li et al. 2013). Huang et al. proposed a multi-modal BCI based on MI-EEG and EOG signals to control an integrated system (Huang et al. 2019). An MI-based switch was used to turn an SSVEP-based BCI on and off (Pfurtscheller et al. 2010), and multi-modal BCIs based on MI and SSVEP signals have been developed for control engineering (Horki et al. 2011; Duan et al. 2019). However, most of these multi-modal BCIs were not applied to multi-dimensional control; controlling devices that require highly accurate and continuous control remains a challenge for BCI systems.

In this paper, a multi-modal BCI system is proposed based on a threshold discrimination method. The system accurately determines the modality of the current signal, so that different EEG modalities can be used to increase the number of control commands and improve the performance of asynchronous control. Users can continuously steer the wheelchair left or right by imagining left/right-hand movement, and can command the wheelchair to start, move forward, move back, stop, turn the lights on/off, and whistle by focusing on the SSVEP stimulation panel. The threshold is set based on data analysis from offline experiments. In the online experiments, the largest correlation coefficient between the current EEG and the SSVEP reference signals is calculated by canonical correlation analysis (CCA) (Lin et al. 2006). When this coefficient is less than the threshold, MI classification is performed; otherwise, the user is considered to be focusing on the flashing-light panel and SSVEP classification is performed. Ten healthy subjects participated in an MI training session, and the eight whose accuracy exceeded 75% qualified to complete a mobile obstacle-avoidance experiment using the proposed system. The results demonstrate the feasibility of the proposed method.

Methodology

System components

As shown in Fig. 1, the system is composed of a signal acquisition device, an SSVEP stimulation panel, an EEG signal processing module, a wheelchair control module, and an electric wheelchair. A g.GAMMAsys device (g.tec medical engineering, Austria) was used to capture scalp-surface EEG signals. Each user wears an EEG cap (LT 16) with Ag–AgCl electrodes, as shown in Fig. 2. The reference and ground electrodes are mounted on an ear lobe and at the AFz position, respectively. The EEG signals used for analysis are recorded from 12 electrodes (Pz, PO5, PO3, POz, PO4, PO6, O1, Oz, O2, C3, Cz, and C4), marked in red in Fig. 2 (Kuhlman 1978; Middendorf et al. 2000). The EEG signals are amplified by a g.USBamp RESEARCH amplifier at a sampling rate of 256 Hz, and the impedance between the scalp and each electrode is kept below 5 kΩ. On the SSVEP stimulation panel, the stimuli are delivered through 6 groups of LEDs flashing at different frequencies (25–30 Hz) with a 50% duty cycle. The EEG signal processing algorithms run on a Lenovo computer (Intel(R) Core(TM) i5-4590 CPU, 3.30 GHz) in MATLAB (2017a). The main control chip of the wheelchair control module is a single-chip microcomputer (STC89C51) combined with a Bluetooth module (BT04-A, following the Bluetooth V4.0 BLE specification), which provides the electric wheelchair with a communication channel replacing the mechanical joystick control. The signal transmission rate is 1150 kbps, and the command transmission delay is kept within 500 ms.

Fig. 1.

Fig. 1

Hardware system composition

Fig. 2.

Fig. 2

Electrode names and distribution. The electrodes used for analysis are marked in red

Control strategy

The system provides 8 commands: start, stop, forward, back, turn on/off lights, whistle, turn left, and turn right, as shown in Table 1. These commands construct effective control of the wheelchair.

Table 1.

The control commands and their corresponding evoking tasks

Control commands Evoking tasks
Start Focusing on the button flashing at 25 Hz
Stop Focusing on the button flashing at 26 Hz
Forward Focusing on the button flashing at 27 Hz
Back Focusing on the button flashing at 28 Hz
Turn on/off lights Focusing on the button flashing at 29 Hz
Whistle Focusing on the button flashing at 30 Hz
Turn left Imagining left-hand movement
Turn right Imagining right-hand movement
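As an illustration of the command set in Table 1, a minimal mapping from decoder outputs to wheelchair commands might look as follows; the label names and command strings are hypothetical, not taken from the authors' implementation:

```python
# Hypothetical mapping from decoder outputs to the 8 wheelchair commands.
SSVEP_COMMANDS = {                 # stimulation frequency (Hz) -> command
    25: "start", 26: "stop", 27: "forward",
    28: "back", 29: "toggle_lights", 30: "whistle",
}
MI_COMMANDS = {"left_hand": "turn_left", "right_hand": "turn_right"}

def to_command(modality, label):
    """Translate a decoded label into a wheelchair command string."""
    table = SSVEP_COMMANDS if modality == "SSVEP" else MI_COMMANDS
    return table[label]
```

A decoded SSVEP target at 27 Hz would thus map to the "forward" command, and a decoded left-hand MI trial to "turn_left".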

Threshold discrimination and classification algorithm

A threshold strategy is used to distinguish between the different modalities of EEG signals before SSVEP- or MI-based classification is performed. Figure 3 shows the procedure for SSVEP and MI detection. The EEG signals are collected in real time and band-pass filtered between 8 and 40 Hz. Canonical correlation analysis (CCA) is then performed against the six constructed reference signals (25–30 Hz) to obtain the largest correlation coefficient R:

$$R(X,Y)=\frac{\operatorname{cov}(X,Y)}{\sqrt{D(X)\,D(Y)}} \tag{1}$$

where X is the multi-channel EEG signal and Y is the reference signal; cov(X, Y) is the covariance of X and Y, and D(X) and D(Y) are the variances of X and Y, respectively. The correlation coefficient R lies in [−1, 1], and an absolute value of R closer to 1 indicates a higher linear correlation between X and Y. Sinusoidal signals are used as the reference signals Yf:

$$Y_f=\begin{pmatrix}\sin(2\pi f t)\\ \cos(2\pi f t)\\ \vdots\\ \sin(2\pi N_h f t)\\ \cos(2\pi N_h f t)\end{pmatrix} \tag{2}$$

where f is the stimulation frequency and Nh is the number of harmonics.
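A minimal sketch of Eqs. (1)–(2), assuming a NumPy implementation and a synthetic 12-channel window with a dominant 27 Hz response; all names and the data here are illustrative, not the authors' code:

```python
import numpy as np

np.random.seed(0)

def make_reference(f, t, n_harmonics=2):
    """Sinusoidal reference signals Y_f (Eq. 2): sin/cos pairs per harmonic."""
    rows = []
    for h in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * h * f * t))
        rows.append(np.cos(2 * np.pi * h * f * t))
    return np.vstack(rows)                      # (2*n_harmonics, n_samples)

def max_canonical_corr(X, Y, reg=1e-8):
    """Largest canonical correlation between X and Y (rows = channels)."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    def inv_sqrt(C):                            # C^{-1/2} for symmetric PSD C
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    Cxx = X @ X.T + reg * np.eye(X.shape[0])
    Cyy = Y @ Y.T + reg * np.eye(Y.shape[0])
    # singular values of the whitened cross-covariance are the canonical corrs
    K = inv_sqrt(Cxx) @ (X @ Y.T) @ inv_sqrt(Cyy)
    return np.linalg.svd(K, compute_uv=False)[0]

fs = 256
t = np.arange(fs) / fs                          # assumed 1 s analysis window
# toy 12-channel window dominated by a 27 Hz response plus noise
X = np.sin(2 * np.pi * 27 * t) + 0.1 * np.random.randn(12, fs)
R = max(max_canonical_corr(X, make_reference(f, t)) for f in range(25, 31))
```

The resulting R is the largest coefficient over the six candidate frequencies; in the proposed system it would then be compared against the threshold described below.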

Fig. 3.

Fig. 3

A flow chart for the method used in the system

Figure 3 shows the flow chart of the proposed method, where R is the maximal correlation coefficient and I is the threshold. R is compared with the threshold I. If R < I, then the user is considered performing MI, and MI classification is performed. If R ≥ I, the user is considered focusing on the flashing lights, then the SSVEP classification is performed. The threshold will be determined later in this paper.
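The routing logic of Fig. 3 can be sketched as follows; the three stage functions are injected hypothetical stand-ins for the CCA, FBCCA, and DFBCSP + HKF-RVM components described in this section:

```python
THRESHOLD = 0.5  # value set from the offline analysis (see "Threshold setting")

def route(eeg_window, max_corr, ssvep_classify, mi_classify, threshold=THRESHOLD):
    """Threshold discrimination: pick the modality, then classify.

    max_corr, ssvep_classify and mi_classify are hypothetical stand-ins
    for the CCA, FBCCA and DFBCSP+HKF-RVM stages, respectively.
    """
    R = max_corr(eeg_window)          # largest CCA coefficient (Eq. 1)
    if R >= threshold:                # SSVEP assumed present
        return "SSVEP", ssvep_classify(eeg_window)
    return "MI", mi_classify(eeg_window)

# toy usage with stub stages: R = 0.72 >= 0.5, so the SSVEP branch is taken
modality, cmd = route(
    eeg_window=None,
    max_corr=lambda w: 0.72,
    ssvep_classify=lambda w: "forward",
    mi_classify=lambda w: "turn_left",
)
```

Injecting the stages as arguments keeps the discrimination rule itself independent of how each classifier is implemented.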

SSVEP classification

When the current signal is identified as SSVEP after threshold discrimination, SSVEP classification is performed. As a training-free method, filter bank canonical correlation analysis (FBCCA) enhances CCA-based frequency detection of SSVEP signals and shows good performance in extracting task-related components; it is therefore used for SSVEP classification in the proposed system. Figure 4 shows the flowchart of FBCCA, which consists of three major procedures: (1) filter bank analysis; (2) CCA between the SSVEP sub-band components and the sinusoidal reference signals; and (3) target identification (Chen et al. 2015). In the first step, the filter bank analysis decomposes the signal into sub-bands using multiple filters with different passbands. The standard CCA process is then applied to each sub-band component X_{SB_n} (n = 1, 2, …, N) separately, to calculate the correlation coefficients between the sub-band components and the predefined reference signals corresponding to all stimulation frequencies. For the k-th reference signal Y_{f_k} (k = 1, 2, …, 6), its correlation coefficients with the N sub-band components form a 1-by-N correlation vector ρ_k defined as follows:

$$\rho_k=\begin{pmatrix}\rho_k^{1}\\ \rho_k^{2}\\ \vdots\\ \rho_k^{N}\end{pmatrix}=\begin{pmatrix}\rho\!\left(X_{SB_1}^{T}W_X(X_{SB_1},Y_{f_k}),\;Y_{f_k}^{T}W_Y(X_{SB_1},Y_{f_k})\right)\\ \rho\!\left(X_{SB_2}^{T}W_X(X_{SB_2},Y_{f_k}),\;Y_{f_k}^{T}W_Y(X_{SB_2},Y_{f_k})\right)\\ \vdots\\ \rho\!\left(X_{SB_N}^{T}W_X(X_{SB_N},Y_{f_k}),\;Y_{f_k}^{T}W_Y(X_{SB_N},Y_{f_k})\right)\end{pmatrix} \tag{3}$$

where ρ(x, y) denotes the correlation coefficient between x and y. In the frequency detection of SSVEPs, X denotes the multi-channel SSVEP data and Y the reference signals, while x and y are their weighted linear combinations, x = X^T W_X and y = Y^T W_Y, with weight vectors W_X and W_Y. CCA finds the weights that maximize the correlation:

$$\max_{W_X,\,W_Y}\ \rho(x,y)=\frac{E\!\left[W_X^{T} X Y^{T} W_Y\right]}{\sqrt{E\!\left[W_X^{T} X X^{T} W_X\right]\,E\!\left[W_Y^{T} Y Y^{T} W_Y\right]}} \tag{4}$$
Fig. 4.

Fig. 4

Flowchart of the FBCCA method for frequency detection of SSVEPs

A weighted square sum of the correlation coefficients corresponding to all sub-band components, i.e., ρ_k^1, …, ρ_k^N, is defined as the feature for classification:

$$\tilde{\rho}_k=\sum_{n=1}^{N} w(n)\cdot\left(\rho_k^{n}\right)^{2} \tag{5}$$

where n is the index of the sub-band. According to the fact that the signal-to-noise ratio (SNR) of SSVEP harmonics decreases as the response frequency increases, the weights for the sub-band components are defined as follows:

$$w(n)=n^{-a}+b,\qquad n\in[1,N] \tag{6}$$

where a and b are constants that maximize the classification performance. A grid search over offline data was used to determine a, b, and N; from the experimental data we obtained a = 1.75, b = 0.25, and N = 5. Finally, the features corresponding to all SSVEP stimulation frequencies, i.e., ρ̃_1, …, ρ̃_6, are compared, and the frequency of the flashing light the user is focusing on is indicated by the maximal ρ̃_k.
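The weighted feature of Eqs. (5)–(6) and the final argmax decision can be sketched as follows; the correlation values are toy numbers for illustration only:

```python
import numpy as np

def fbcca_feature(rho, a=1.75, b=0.25):
    """Weighted square sum over sub-band correlations (Eqs. 5-6).

    rho: array of shape (n_targets, n_subbands) of CCA coefficients.
    Returns the feature rho~_k for each stimulation frequency.
    """
    n = np.arange(1, rho.shape[1] + 1)
    w = n ** (-a) + b                   # w(n) = n^{-a} + b
    return (w * rho ** 2).sum(axis=1)

# toy correlations for 6 targets x 5 sub-bands; target index 2 dominates
rho = np.array([
    [0.2, 0.1, 0.1, 0.1, 0.1],
    [0.3, 0.2, 0.1, 0.1, 0.1],
    [0.8, 0.6, 0.4, 0.3, 0.2],          # strongest across all sub-bands
    [0.2, 0.2, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1, 0.1],
    [0.2, 0.1, 0.1, 0.1, 0.1],
])
target = int(np.argmax(fbcca_feature(rho)))  # index of the detected frequency
```

The declining weights reflect the fact stated above that the SNR of SSVEP harmonics decreases with frequency, so low-order sub-bands count more.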

MI signals classification

When the current signal is identified as MI signal after threshold discrimination, the MI classification is performed. As shown in Fig. 5, the MI signal classification is divided into the offline model training and the online classification processes. In the offline model training process, each user is required to complete several tasks of imagining left/right-hand movements. For the feature extraction, the discriminative filter bank common spatial pattern (DFBCSP) method (Thomas et al. 2009) is applied. The flow chart of DFBCSP is shown in Fig. 5.

Fig. 5.

Fig. 5

Flowchart of the DFBCSP method for MI

The DFBCSP selects appropriate FIR filter coefficients and divides the original frequency band into multiple sub-bands. If every M-th coefficient of a finite impulse response filter h(n) is kept unchanged and all other coefficients are set to zero, we obtain the masked filter h_M(n):

$$h_M(n)=h(n)\cdot c_M(n) \tag{7}$$

where

$$c_M(n)=\begin{cases}1, & n=kM,\ k=0,1,2,\ldots\\ 0, & \text{otherwise}\end{cases} \tag{8}$$
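The comb masking of Eqs. (7)–(8) amounts to zeroing all but every M-th coefficient; a short NumPy sketch (toy coefficient values, illustrative only):

```python
import numpy as np

def comb_mask(h, M):
    """Eqs. (7)-(8): keep every M-th FIR coefficient, zero the rest."""
    c = np.zeros_like(h)
    c[::M] = 1.0                        # c_M(n) = 1 for n = kM, k = 0, 1, 2, ...
    return h * c

masked = comb_mask(np.arange(6.0), 2)   # toy coefficients 0..5 with M = 2
```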

Find the power of each sub-band:

$$P_{f_i,t}=\frac{1}{T}\sum_{n=1}^{T} x_{t,f}^{2}(n) \tag{9}$$

Calculate the Fisher ratio (FR) for each frequency band, and select the 4 frequency bands with the highest FR scores:

$$FR(f)=\frac{S_B}{S_W} \tag{10}$$

where

$$S_W=\sum_{k=1}^{C}\sum_{t=1}^{n_k}\left(P_t-m_k\right)^{2} \tag{11}$$

and

$$S_B=\sum_{k=1}^{C} n_k\left(m-m_k\right)^{2} \tag{12}$$

are the within-class and between-class variances, respectively; m is the overall mean, m_k is the mean for class k (k = 1, 2), C is the number of classes, and n_k denotes the number of trials in class k.
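Band selection by Fisher ratio (Eqs. 10–12) can be sketched on toy band powers; the band names and numbers below are hypothetical:

```python
import numpy as np

def fisher_ratio(powers, labels):
    """FR = S_B / S_W for one frequency band (Eqs. 10-12).

    powers: (n_trials,) band power per trial; labels: class index per trial.
    """
    m = powers.mean()                                  # overall mean
    classes = np.unique(labels)
    sw = sum(((powers[labels == k] - powers[labels == k].mean()) ** 2).sum()
             for k in classes)                         # within-class variance
    sb = sum((labels == k).sum() * (m - powers[labels == k].mean()) ** 2
             for k in classes)                         # between-class variance
    return sb / sw

# toy data: 20 trials per MI class; only the first band separates the classes
rng = np.random.default_rng(0)
labels = np.array([0] * 20 + [1] * 20)
band_powers = {
    "8-12Hz":  np.concatenate([rng.normal(1.0, 0.1, 20), rng.normal(2.0, 0.1, 20)]),
    "12-16Hz": np.concatenate([rng.normal(1.0, 0.1, 20), rng.normal(1.1, 0.1, 20)]),
}
scores = {band: fisher_ratio(p, labels) for band, p in band_powers.items()}
best = max(scores, key=scores.get)   # band with the highest FR score
```

In the actual DFBCSP pipeline this scoring would be applied to all candidate sub-bands and the top four retained.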

The original data is passed through the four selected frequency-band filters to obtain the CSP features, and the hybrid kernel function relevance vector machine (HKF-RVM) (Dong et al. 2020) is used for classification.

Experiments

Two types of experiments (offline and online) are designed in this paper. Ten healthy subjects (3 females and 7 males, aged between 25 and 27 years) participated. Three subjects (A1, A2, A3) had prior BCI experience, while the other seven did not. All experiments were carried out according to the principles expressed in the Declaration of Helsinki, and before participating, all subjects received a detailed explanation of the purpose and possible consequences of the experiments.

Offline experiments for data collection

MI offline data collection

The workflow is illustrated in Fig. 6. Each subject is seated in a comfortable chair facing a computer screen. At t = 0 s, the experiment starts: a fixation cross appears on the screen and a short beep informs the subject of the start. At t = 2 s, a directional arrow (left, right, up, or down) appears and remains on the screen for 1.25 s, reminding the subject to prepare the motor imagery task indicated by the arrow. EEG data are recorded from t = 3 s, and the motor imagery task continues until t = 6 s.

Fig. 6.

Fig. 6

The paradigm of MI training data collection

The next session is performed after a short interval. These data were used to train the HKF-RVM models offline.

SSVEP offline data collection

The workflow is illustrated in Fig. 7. The subject is asked to relax before the SSVEP stimulation induction program starts. Flickering-target stimulations at the predefined frequencies are repeated, with break periods (no flicker) between them; each stimulation lasts 7 s, separated by 4 s of rest.

Fig. 7.

Fig. 7

The paradigm of SSVEP data collection

Before each stimulation, one of the LEDs is pointed out by a yellow arrow, and the subject must focus on the marked LED for the whole stimulation period. Throughout the experiment, each LED is attended six times in sequence. At the end of the process, a file containing the annotated EEG signals is generated for subsequent data analysis.

Online mobile obstacle avoidance experiment

The purpose of this experiment is to evaluate the participants' wheelchair-manipulation performance and the feasibility of the proposed method. A challenging task is designed in which subjects drive the wheelchair along a predefined trajectory (about 45 m) that includes a narrow straightway, a door entrance, approaching obstacles, rotating around obstacles, and turning around obstacles, all without collision. The layout of the experiment and the tasks are shown in Fig. 8.

Fig. 8.

Fig. 8

Wheelchair control experiment with difficult tasks

To assess the subjects' feeling of control, after the online experiment the subjects filled out a questionnaire with three questions: (1) rate your ability to drive the wheelchair straight by focusing on the flickering lights; (2) rate your ability to control the turning of the wheelchair by motor imagery; (3) rate the control capability of the system compared with a single MI-BCI and a single SSVEP-BCI. Each question is answered on a scale from 1 (low) to 10 (high).

Results

MI training results

In this study, each of the 10 subjects completed 8 MI training sessions. According to binomial test theory, a significance level of 0.05 is commonly taken to indicate a significant difference between the data. With Bonferroni correction over the 8 sessions, the corresponding p value, calculated in Eq. (13), should be less than 0.00625 (0.05/8).

$$p=1-\sum_{i=0}^{m}\binom{n}{i}\,0.5^{\,n}\, I_{\{0,1,\ldots,n\}}(i) \tag{13}$$

In Eq. (13), n is the number of trials in a session (20 in this case) and m is the number of correctly predicted trials in a session; the indicator function I_{\{0,1,…,n\}}(i) restricts i to the values 0, 1, …, n. To ensure that p is less than 0.00625, m must be greater than 14, so the minimum number of correct trials required in a session is 15 (an accuracy of about 75%). Only subjects whose MI classification accuracy exceeded 75% were kept for the further experiments.
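The threshold implied by Eq. (13) with n = 20 trials can be checked with a few lines:

```python
from math import comb

def binomial_p(m, n=20):
    """Eq. (13): p = 1 - sum_{i=0}^{m} C(n, i) * 0.5^n."""
    return 1.0 - sum(comb(n, i) * 0.5 ** n for i in range(m + 1))

# smallest m whose p beats the Bonferroni-corrected level 0.05 / 8 = 0.00625
m_min = next(m for m in range(21) if binomial_p(m) < 0.00625)
```

Evaluating this confirms the paper's cutoff: p(14) ≈ 0.021 is above 0.00625 while p(15) ≈ 0.0059 is below it, so at least 15 of 20 trials (75%) must be correct.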

According to the experimental results, 8 of the 10 subjects achieved an MI classification accuracy above 75%, as shown in Table 2. Among them, the three subjects with MI experience (A1, A2, and A3) had the highest accuracy rates: 95%, 100%, and 95%, respectively. The average MI accuracy and standard deviation of these eight subjects are shown as blue bars in Fig. 9.

Table 2.

The best results of the eight subjects in the MI-/SSVEP-based sessions

Subjects Gender Age MI accuracy (%) SSVEP accuracy (%)
A1 Male 25 95 97.2
A2 Male 24 100 100
A3 Female 23 95 100
A4 Male 23 90 94.4
A5 Female 23 85 97.2
A6 Male 24 90 94.4
A7 Male 23 80 97.2
A8 Male 24 85 94.4
Mean ± SD – – 90 ± 6 96.85 ± 2

Fig. 9.

Fig. 9

The average accuracies and standard deviations of the eight selected subjects in the MI/SSVEP sessions

SSVEP offline data analysis

To verify the reliability of the collected SSVEP signals, data epochs are extracted from each subject's SSVEP EEG recordings, and the fast Fourier transform (FFT) is used to perform spectrum analysis. The frequency response curves of the EEG signal (using subject A3 as an example) under the six stimuli are shown in Fig. 10; the EEG signals show significant peaks at the stimulation frequencies.
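This spectrum check can be sketched with a synthetic epoch; the epoch length, noise level, and single-channel setup are assumptions for illustration:

```python
import numpy as np

fs = 256                                   # amplifier sampling rate
t = np.arange(4 * fs) / fs                 # assumed 4 s epoch
f_stim = 27                                # one of the 25-30 Hz stimuli
rng = np.random.default_rng(1)
# toy single-channel epoch with a response at the stimulation frequency
x = np.sin(2 * np.pi * f_stim * t) + 0.3 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(x))          # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # peak frequency, skipping DC
```

A reliable SSVEP epoch should place this spectral peak at (or at a harmonic of) the attended stimulation frequency, as in Fig. 10.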

Fig. 10.

Fig. 10

Frequency response curves of the EEG signals at the different stimulus frequencies: a 25 Hz, b 26 Hz, c 27 Hz, d 28 Hz, e 29 Hz, f 30 Hz

Threshold setting

To obtain the threshold for detecting the presence of SSVEP, CCA is applied to the MI and SSVEP data with the six types of constructed SSVEP reference signals, respectively. For the MI data, each subject performs 3 trials with 10 comparisons per trial; for the SSVEP data, each subject performs 3 trials with 6 comparisons per trial. The average and standard deviation of the largest correlation coefficient between the data and the reference signals for each subject are shown in Fig. 11. Almost all of the largest correlation coefficients between the MI data and the reference signals are lower than 0.5, except for one trial of subject A6 (0.52). Conversely, for the SSVEP data, almost all of the largest correlation coefficients are higher than 0.5, except for one trial of subject A7 (0.45). Therefore, a threshold of 0.5 on the largest correlation coefficient is selected to determine whether a subject is focusing on the SSVEP stimuli.

Fig. 11.

Fig. 11

The average and standard deviation of the largest correlation coefficients for the MI and SSVEP data of each subject

Motorized wheelchair control experiment results

To validate the proposed system in real applications, it is applied to control a motorized wheelchair. The wheelchair control is divided into the following tasks as shown in Fig. 8.

  • Task 1: proceeding straight ahead in the corridor;

  • Task 2: passing through the entrance after a left turning;

  • Task 3: avoiding obstacles (chairs) in the room;

  • Task 4: rotating around obstacles (a chair and a table).

All subjects completed all four tasks after some trials, with some collisions occurring for some subjects. The experimental results ('O' and 'X' represent 'Success' and 'Fail', and the number indicates the number of collisions) and the ITR for each subject are given in Table 3. Subjects A1 and A2 successfully completed all tasks without any collision. A3, A6, and A8 each had a scratch while either crossing the entrance or avoiding obstacles. A4 successfully completed the first two tasks but scratched when avoiding obstacles and rotating around obstacles. A7 completed tasks 1 and 4, but a steering error during obstacle avoidance caused a collision, which is related to mechanical flaws of the wheelchair steering control system.

Table 3.

The collisions of the real wheelchair control experiment

Subjects Task 1 Task 2 Task 3 Task 4 ITR (bits/min)
A1 O O O O 230
A2 O O O O 210
A3 O X 1 O O 200
A4 O O X 1 X 1 175
A5 O X 1 O X 1 130
A6 O O X 1 O 160
A7 O O X 2 O 140
A8 O O X 1 O 175
Mean 177.5

Subjective measures

The subjects rated their ability to drive the wheelchair straight by focusing on the flickering lights at 9.3 ± 0.7 (on a scale from 1, low, to 10, high). Turning control of the wheelchair by motor imagery was rated 7.5 ± 1.2. Finally, the rating of the system's control capability compared with a single MI-BCI (9.6 ± 0.5) was higher than that compared with a single SSVEP-BCI (9 ± 0.5); the MI-based BCI provides higher control ability owing to its capacity for continuous control.

Comparison with the single modal BCIs

In our previous work, we investigated a brain-controlled wheelchair system based on SSVEP alone (Zhang et al. 2020). When that system was applied to the online brain-controlled wheelchair experiment in this paper, each subject successfully completed Task 1. In Tasks 3 and 4, where turning is controlled continuously, the tasks failed even though the SSVEP recognition accuracy was very high, because the subjects could not control the turning angle well. In the later stage of the experiment, the classification accuracy also decreased due to visual fatigue. These online experiments achieved an average accuracy of 70.94% with an ITR of 130 bits/min.

Discussion and conclusion

A hybrid BCI system is composed essentially of two or more BCIs operated sequentially or simultaneously; the combination of at least one BCI with other assistive technologies also constitutes a hybrid BCI. In this work, we proposed a threshold discrimination method to distinguish between SSVEP and MI signals, forming a multi-modal BCI in which two BCIs are operated to control a wheelchair. The user could continuously steer the wheelchair left or right via the MI-based BCI, while the SSVEP-based BCI provided another six wheelchair commands. The hybrid design presented here allows users to operate both BCIs continuously, with two different purposes serving the common goal of wheelchair control; the method increased the number of control commands and improved control performance. In our experiments, most subjects were able to control the wheelchair by means of the proposed multi-modal BCI and to complete challenging tasks designed after a real living environment. Although completion time varied among subjects, most subjects successfully completed the tasks after some trials; these tasks are not easy to complete even with a joystick.

The subjects' evaluations after use suggest that the BCI system is robust and that all subjects adapted to it well.

Regarding the information transfer rate (ITR), based on the time used in the experiment and the number of control commands issued, the system outputs a control command every 1.5 s. Compared with single-modal BCI systems, this control system therefore has obvious efficiency advantages.

For SSVEP, the stimulation frequency can in principle be selected between 1 and 100 Hz, with significant spectrum peaks found for frequencies in the 4–30 Hz range. The MI-relevant frequencies are relatively low (for instance, the μ rhythm lies in 8–13 Hz and the β rhythm in 13–30 Hz). Therefore, the stimulation frequencies of the system are chosen in 25–30 Hz to obtain clear peaks in the SSVEP power spectrum while minimizing overlap with the MI signal frequencies.

Data analysis showed that, over 240 comparisons with 8 subjects, only one MI trial yielded a largest correlation coefficient with the reference signals above 0.5 (0.52); all the others were well below 0.5. Therefore, the selected threshold of 0.5 is feasible.

The proposed method combined SSVEP and MI signals through an exclusive modality-selection strategy to increase the number of control commands and improve BCI performance. MI provides two separable tasks and SSVEP provides six. If each task generates one control command, a total of 8 control commands is obtained; if one control command is generated on the completion of every two tasks, the command set can be expanded to 24 by a simple coding strategy. Combining the detection of further signal modalities through threshold discrimination to extend the control command set further will be interesting and valuable future work.

Acknowledgements

This work was partially supported by the Natural Science Foundation of Tianjin (No. 18JCYBJC87700) and the New Generation Artificial Intelligence Technology Major Project of Tianjin (No. 18ZXZNSY00270).

Data availability statements

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Enzeng Dong, Email: dongenzeng@163.com.

Haoran Zhang, Email: zhanghaoran2016@qq.com.

Lin Zhu, Email: lyhzl1112@163.com.

Shengzhi Du, Email: dushengzhi@gmail.com.

Jigang Tong, Email: tjgtjut@163.com.

References

  1. Cao L, Li J, Ji H, Jiang C. A hybrid brain computer interface system based on the neurophysiological protocol and brain-actuated switch for wheelchair control. J Neurosci Methods. 2014;229:33–43. doi: 10.1016/j.jneumeth.2014.03.011. [DOI] [PubMed] [Google Scholar]
  2. Chen X, Wang Y, Gao S, Jung TP, Gao X. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain–computer interface. J Neural Eng. 2015;12:046008. doi: 10.1088/1741-2560/12/4/046008. [DOI] [PubMed] [Google Scholar]
  3. Cheng M, Gao X, Gao S, Member S, Xu D. Design and implementation of a brain–computer interface with high transfer rates. IEEE Trans Biomed Eng. 2002;49:1181–1186. doi: 10.1109/tbme.2002.803536. [DOI] [PubMed] [Google Scholar]
  4. Dong E, Li C, Li L, Du S, Belkacem AN, Chen C. Classification of multi-class motor imagery with a novel hierarchical SVM algorithm for brain–computer interfaces. Med Biol Eng Compu. 2017;55:1809–1818. doi: 10.1007/s11517-017-1611-4. [DOI] [PubMed] [Google Scholar]
  5. Dong E, Zhu G, Chen C, Tong J, Jiao Y, Du S. Introducing chaos behavior to kernel relevance vector machine (RVM) for four-class EEG classification. PLoS ONE. 2018;13:e0198786. doi: 10.1371/journal.pone.0198786. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Dong E, Zhou K, Tong J, Du S. A novel hybrid kernel function relevance vector machine for multi-task motor imagery EEG classification. Biomed Signal Process Control. 2020;60:1746–8094. [Google Scholar]
  7. Duan X, Xie S, Xie X, Meng Y, Xu Z. Quadcopter flight control using a non-invasive multi-modal brain computer interface. Front Neurorobot. 2019;13:1662–5218. doi: 10.3389/fnbot.2019.00023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Fazel-Reza R, Allison BZ, Guger C, Sellers EW, Kübler A. P300 brain computing interface: current challenges and emerging trends. Front Neuroeng. 2012;17:5–14. doi: 10.3389/fneng.2012.00014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Fouad MM, Amin KM, El-Bendary N, Hassanien AE. Brain computer interface: a review. Berlin: Springer; 2015. p. 74. [Google Scholar]
  10. Franois-Benot V, Monique M. Steady-state visually evoked potentials: focus on essential paradigms and future perspectives. Prog Neurobiol. 2010;90:418–438. doi: 10.1016/j.pneurobio.2009.11.005. [DOI] [PubMed] [Google Scholar]
  11. Gao Q, Zhang Y, Wang Z, Dong E. Channel projection-based CCA target identification method for an SSVEP-based BCI system of quadrotor helicopter control. Comput Intell Neurosci. 2019;16:2361282. doi: 10.1155/2019/2361282. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Han CH, Muller KR, Hwang HJ. Enhanced performance of a brain switch by simultaneous use of EEG and NIRS data for asynchronous brain–computer interface. IEEE Trans Neural Syst Rehabil Eng. 2020;28:2102–2112. doi: 10.1109/TNSRE.2020.3017167. [DOI] [PubMed] [Google Scholar]
  13. Horki P, Solis Escalante T, Neuper C, Müller Putz G. Combined motor imagery and SSVEP based BCI control of a 2 DoF artificial upper limb. Med Biol Eng Compu. 2011;49:567–577. doi: 10.1007/s11517-011-0750-2. [DOI] [PubMed] [Google Scholar]
  14. Huang Q, Zhang Z, Yu T, He S, Li Y. An EEG-/EOG-based hybrid brain–computer interface: application on controlling an integrated wheelchair robotic arm system. Front Neurosci. 2019;22:1243. doi: 10.3389/fnins.2019.01243. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Jin J, Zhang H, Daly I, Wang X, Cichocki A. An improved P300 pattern in BCI to catch user's attention. J Neural Eng. 2017;14:036001. doi: 10.1088/1741-2552/aa6213. [DOI] [PubMed] [Google Scholar]
  16. Kim KT, Suk HI, Lee SW. Commanding a brain-controlled wheelchair using steady-state somatosensory evoked potentials. IEEE Trans Neural Syst Rehabil Eng. 2018;26:654–665. doi: 10.1109/TNSRE.2016.2597854. [DOI] [PubMed] [Google Scholar]
  17. Ko LW, Komarov O, Lin SC. Enhancing the hybrid BCI performance with the common frequency pattern in dual-channel EEG. IEEE Trans Neural Syst Rehabil Eng. 2019;27:1360–1369. doi: 10.1109/TNSRE.2019.2920748. [DOI] [PubMed] [Google Scholar]
  18. Kuhlman WN. EEG feedback training: enhancement of somatosensory cortical activity. Electroencephalogr Clin Neurophysiol. 1978;45:290–294. doi: 10.1016/0013-4694(78)90014-7. [DOI] [PubMed] [Google Scholar]
  19. Lafleur K, Cassady K, Dou DA, Shades K, Rogin E, He B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. J Neural Eng. 2013;10:046003. doi: 10.1088/1741-2560/10/4/046003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Lamti HA, Khelifa B, Hugel V. Mental fatigue level detection based on event related and visual evoked potentials features fusion in virtual indoor environment. Cogn Neurodyn. 2019;13:271–285. doi: 10.1007/s11571-019-09523-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Lebedev MA, Nicolelis M. Brain-machine interfaces: past, present and future. Trends Neurosci. 2006;29:536–546. doi: 10.1016/j.tins.2006.07.004. [DOI] [PubMed] [Google Scholar]
  22. Lee MH, Williamson J, Won DO, Fazli S, Lee SW. A high performance spelling system based on EEG-EOG signals with visual feedback. IEEE Trans Neural Syst Rehabil Eng. 2018;26:1443–1459. doi: 10.1109/TNSRE.2018.2839116. [DOI] [PubMed] [Google Scholar]
  23. Li Y, Pan J, Wang F, Yu Z. A hybrid BCI system combining P300 and SSVEP and its application to wheelchair control. IEEE Trans Biomed Eng. 2013;60:3156–3166. doi: 10.1109/TBME.2013.2270283. [DOI] [PubMed] [Google Scholar]
  24. Lin Z, Zhang C, Wu W, Gao X. Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs. IEEE Trans Biomed Eng. 2006;53:2610–2614. doi: 10.1109/TBME.2006.886577. [DOI] [PubMed] [Google Scholar]
  25. Ma T, Li H. The hybrid BCI system for movement control by combining motor imagery and moving onset visual evoked potential. J Neural Eng. 2017;14:026015. doi: 10.1088/1741-2552/aa5d5f. [DOI] [PubMed] [Google Scholar]
  26. Mak JN, Wolpaw JR. Clinical applications of brain–computer interfaces: current state and future prospects. IEEE Rev Biomed Eng. 2009;2:187–199. doi: 10.1109/RBME.2009.2035356. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Middendorf M, McMillan G, Calhoun G, Jones KS. Brain–computer interfaces based on the steady-state visual-evoked response. IEEE Trans Rehabil Eng. 2000;8:211–214. doi: 10.1109/86.847819. [DOI] [PubMed] [Google Scholar]
  28. Pfurtscheller G, Neuper C. Motor imagery and direct brain–computer communication. Proc IEEE. 2001;89:1123–1134. [Google Scholar]
  29. Pfurtscheller G, Solis-Escalante T, Ortner R, Linortner P, Muller-Putz GR. Self-paced operation of an SSVEP-based orthosis with and without an imagery-based “brain switch:” a feasibility study towards a hybrid BCI. IEEE Trans Neural Syst Rehabil Eng. 2010;18:409–414. doi: 10.1109/TNSRE.2010.2040837. [DOI] [PubMed] [Google Scholar]
  30. Rebsamen B, Guan C, Zhang H, Wang C, Burdet E. A brain controlled wheelchair to navigate in familiar environments. IEEE Trans Neural Syst Rehabil Eng. 2010;18:590–598. doi: 10.1109/TNSRE.2010.2049862. [DOI] [PubMed] [Google Scholar]
  31. Thomas KP, Guan C, Lau CT, Vinod AP, Kai KA. A new discriminative common spatial pattern method for motor imagery brain–computer interfaces. IEEE Trans Biomed Eng. 2009;56:2730–2733. doi: 10.1109/TBME.2009.2026181. [DOI] [PubMed] [Google Scholar]
  32. Wang H, Li Y, Long J, Yu T, Gu Z. An asynchronous wheelchair control by hybrid EEG–EOG brain–computer interface. Cogn Neurodyn. 2014;8:399–409. doi: 10.1007/s11571-014-9296-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Xu M, Han J, Wang Y, Jung TP, Ming D. Implementing over 100 command codes for a high-speed hybrid brain–computer interface using concurrent P300 and SSVEP features. IEEE Trans Biomed Eng. 2020;67:3073–3082. doi: 10.1109/TBME.2020.2975614. [DOI] [PubMed] [Google Scholar]
  34. Xu L, Xu M, Jung TP, Ming D. Review of brain encoding and decoding mechanisms for EEG-based brain–computer interface. Cogn Neurodyn. 2021;4:1–16. doi: 10.1007/s11571-021-09676-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Yan W, Xu G. Brain–computer interface method based on light-flashing and motion hybrid coding. Cogn Neurodyn. 2020;14:697–708. doi: 10.1007/s11571-020-09616-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Zhang H, Dong E, Zhu L. Brain-controlled wheelchair system based on SSVEP. In: 2020 Chinese Automation Congress (CAC); 2020. [Google Scholar]
  37. Zuo C, Jin J, Yin E. Novel hybrid brain–computer interface system based on motor imagery and P300. Cogn Neurodyn. 2020;14:253–265. doi: 10.1007/s11571-019-09560-x. [DOI] [PMC free article] [PubMed] [Google Scholar]


Data Availability Statement

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.


Articles from Cognitive Neurodynamics are provided here courtesy of Springer Science+Business Media B.V.
