Abstract
Event-related potential (ERP)-based brain–computer interfacing (BCI) is an effective method of basic communication. However, collecting calibration data and training a classifier detract from the time available for online communication. Decreasing calibration time shortens preparation, thereby allowing additional online use, potentially lower fatigue, and improved performance. Previous studies using generic online training models, which avoid offline calibration, afford more time for online spelling. However, such studies have not examined the direct effects of the model on individual performance, and their training sequences exceeded the time reported here.
The first goal of this work is to examine whether one generic model works for all subjects; the second is to characterize the performance of a generic model under an online training strategy for those participants who could use it. The generic model was derived from 10 participants' data. An additional 11 participants were recruited for the current study. Seven of these participants were able to use the generic model during online training. Moreover, the generic model performed as well as models obtained from participant-specific offline data, with a mean training time of less than 2 min. However, four of the participants could not use this generic model, which shows that one generic model is not generic for all subjects. More research on the ERPs of subjects with different characteristics is needed, which would help build generic models for subject groups. These results point to a potentially valuable direction for improving BCI systems.
Keywords: Brain computer interface, P300, Online training, Generic model
1. Introduction
Brain–computer interfaces (BCIs) translate brain activity into command and control signals. Common BCI techniques and inputs include motor imagery (Pfurtscheller and Neuper, 2001), event-related potentials (Farwell and Donchin, 1988; Allison and Pineda, 2003; Hong et al., 2009; Jin et al., 2012; Kaufmann et al., 2011; Zhang et al., 2012), and steady state evoked potentials (Vidal, 1972).
Farwell and Donchin (1988) introduced the P300-based BCI. Today, improving the system’s usability by increasing online accuracy and information transfer rate (ITR) is a high priority for BCI research. Sophisticated calibration methods are paramount for high online accuracy and ITR, and clean EEG, with well-differentiated target and non-target activity, supports training the robust classifier necessary for efficient use of the system. One strategy for improving accuracy entails selecting provocative visual images to elicit pronounced target responses. Manipulating visual stimuli (e.g., motion and images of faces) to enhance the amplitude of evoked potentials affords more descriptive data for classification (Hong et al., 2009; Jin et al., 2012; Kaufmann et al., 2011; Zhang et al., 2012).
In almost all cases, ERP-based BCIs require offline calibration to train a classifier model (Farwell and Donchin, 1988; Hong et al., 2009; Jin et al., 2012; Kaufmann et al., 2011; Zhang et al., 2012). Reducing the duration of offline calibration would increase BCI usability, decrease overall fatigue, and increase the amount of time available for online communication purposes. Rivet et al. (2011) proposed an adaptive training session to diminish time allocated to offline calibration for ERP-based BCIs. Long et al. (2011) reported that online data could be used to improve an offline calibration model. Vidaurre et al. (2011) developed a novel method for online training of a motor imagery BCI based on unsupervised adaptation of LDA classifiers.
Lu et al. (2009) used an online training strategy and a generic model in order to optimize calibration for each individual. The generic model was used to obtain the identity of an online selection, which would then be used to train the online classifier. If the generic model incorrectly labeled a selection, the data provided to the online classifier would label desired selections as undesired, and undesired selections as desired. The erroneously labeled data would add noise to the classifier, decreasing the efficiency of the online system. Moreover, the samples obtained from the online process needed to be saved in memory, requiring additional computational resources. The online training strategy presented in this paper was designed to reduce both the time needed for calibration and the computational resources. Eleven subjects used the generic model to test its online generalizability across participants. In the online training process, participants completed a copy-spelling task which provided correct labels for each selection, since target identity was predetermined. Furthermore, the online model was trained with one sample at a time (Kuncheva and Plumpton, 2008; Vidaurre et al., 2011), eliminating the need to save previous samples in memory.
2. Methods
2.1. Participants
Eleven healthy participants (10 male and 1 female, aged 24–35, mean 29) participated in the study. Subjects' nationalities and ages are presented in Table 1. All subjects were familiar with the Western characters used in the display. Subjects 1, 4, 5, and 11 had prior experience with P300 BCIs; subjects 2, 3, 6, 7, 8, 9, and 10 were naïve BCI users. The target stimuli were alphabetic characters that changed momentarily to a famous face (familiar to all participants). Kaufmann et al. (2011) reported that presenting images of famous faces could evoke the N400 response, improving the classification accuracy of ERP-based BCIs.
2.2. Calibration models
Five calibration models were tested to determine the optimal training method:
Table 1.
Subject information.
| Subject | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | S10 | S11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Age | 27 | 29 | 30 | 35 | 34 | 25 | 35 | 26 | 27 | 34 | 26 |
| Gender | M | M | M | M | M | F | M | M | M | M | M |
| Nationality | C | K | L | C | C | C | V | K | J | B | J |
Subject gender is denoted either M (male) or F (female). Subject nationality is denoted, C (Chinese), K (Korean), L (Lithuanian), V (Vietnamese), J (Japanese), or B (Bengali).
Typical calibration: For each participant a model was derived from three runs, each containing five characters; online training was not utilized. This condition was the gold standard used to compare performance of the other conditions. Offline calibration time was 720 s.
Single run: This model included one run containing five characters per participant; online training was not utilized. This condition tests whether a single run is sufficient to operate the system. Offline calibration time was 240 s.
Generic model: A model derived from ten participants’ data (not enrolled in the current study) was used during online training. This model was used in place of models derived from each participant’s data. Calibration time was 0 s (no offline calibration or online training time).
Online training single run: This condition derived a model from one offline run of data collected from each participant and tests whether one run and the online training strategy can reduce the number of offline runs. Average calibration time was offline calibration time + average online training time: 240 s + 125.1 s = 365.1 s.
Online generic model: The generic model was used in conjunction with the online training strategy. This model tests whether multiple participants’ data can work as well as unique data when the online training strategy is also used. Average online training time was 108.3 s.
2.3. Stimuli and procedure
Participants sat approximately 105 cm in front of a monitor 30 cm tall (visual angle: 16.3 degrees) and 48 cm wide (visual angle: 25.7 degrees). During data acquisition, researchers instructed participants to relax and avoid unnecessary movement. The display portrayed a 6 × 6 matrix of gray English letters and symbols against a black background (see Fig. 1). During a stimulus event, target characters were replaced momentarily with face images, an event referred to as a flash.
Fig. 1.

The display during the online runs contains characters and face images in black and white. To avoid copyright infringement, faces are portrayed with censor boxes. (During the experiment censor boxes were not presented.)
Instead of grouping the flashed characters into rows and columns, we developed an alternative flash pattern approach (described in Jin et al., 2012).
2.4. Experiment set up and offline and online protocols
EEG signals were recorded with a g.USBamp and a g.EEGcap (Guger Technologies, Graz, Austria) with a sensitivity of 100 μV, band pass filtered between 0.1 Hz and 30 Hz, and sampled at 256 Hz. We recorded from EEG electrode positions F3, Fz, F4, Cz, Pz, Oz, P3, P4, P7, P8, O1, and O2 from the extended International 10–20 system. EEG was referenced at the right mastoid and grounded at the front electrode (FPz). Based on the report of Curran and Hancock (2007) electrode locations F3 and F4 were monitored to examine the N400.
A sub-trial is defined as one flash of one of the twelve flash patterns. A trial is complete when all 12 flashes have been presented. A trial block consists of 16 complete trials for offline testing, and target characters are uniform across trials. An offline run consists of five trial blocks. In each paradigm, participants completed three offline runs, with a 5 min break between paradigms in the offline experiment. During online testing, the number of trials per trial block is variable, because the system adjusts the number of trials to optimize performance (see Section 2.8).
The study tested five classification conditions. The two offline methods were performed during session 1 (i.e., typical calibration and single run). During session 2 the participants completed the following three conditions: generic model, online training single run, and online generic model.
For conditions 1, 2 and 4, offline data recorded in this study were used to train the classifier. For conditions 3 and 5, the generic model was used without any offline data recorded in this study. In conditions 1, 2 and 3, participants spelt 20 characters per session in the online experiment; online training was not used. In conditions 4 and 5, online training was used: participants spelt five characters (A, B, C, D, and E; see Fig. 1) to train the classifier online. During the online training stage, the adaptive strategy (see Section 2.6) was applied after five trials had been presented, to provide a more stable data sample for classifier training. If spelling accuracy over the five characters was 80% or higher, the online training stage ended, and participants spelt another 20 characters using a single trial of flashes. If spelling accuracy was below 80%, participants repeated the task until their error rate decreased to 20%, or until online training exceeded 10 min, in which case the task was stopped, no online result was obtained, and it was assumed that the participant would not be able to use the system effectively. The 80% accuracy threshold was selected to ensure that participants could feasibly use the speller system, and had been tested on two subjects.
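The stop rule for the online training stage can be sketched as follows. `run_training_block` is a hypothetical stand-in for one pass of spelling the five training characters, returning the fraction spelled correctly; it is not the authors' code.

```python
import time

def online_training_stage(run_training_block, acc_threshold=0.8, time_limit_s=600):
    """Repeat the five-character training task until accuracy reaches the
    threshold or the 10-min limit expires (sketch; run_training_block is a
    hypothetical callback returning the fraction of correct selections)."""
    start = time.monotonic()
    while time.monotonic() - start < time_limit_s:
        accuracy = run_training_block()
        if accuracy >= acc_threshold:
            return True   # proceed to the 20-character free-spelling task
    return False          # no online result; subject assumed unable to use the system
```

With `time_limit_s=0` the loop body never runs, modeling the case where training time is exhausted.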
2.5. Feature extraction procedure
A third order Butterworth band pass filter was used to filter the EEG between 0.1 Hz and 12 Hz. After filtering, the EEG data were down-sampled from 256 Hz to 36.6 Hz by selecting every seventh sample point. One-thousand ms sub-trials were extracted from the data. The size of the feature vector is 12 × 36 (12 channels by 36 time points).
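Under the parameters above (third-order Butterworth band-pass, 0.1–12 Hz, every seventh sample of a 1000 ms epoch at 256 Hz), the feature extraction can be sketched as below; the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, lfilter

def extract_features(sub_trial, fs=256):
    """sub_trial: array of shape (12, 256) -- 12 channels, 1000 ms at 256 Hz."""
    # 3rd-order Butterworth band-pass, 0.1-12 Hz
    b, a = butter(3, [0.1, 12.0], btype="bandpass", fs=fs)
    filtered = lfilter(b, a, sub_trial, axis=1)
    # keep every 7th sample (256 Hz / 7 = 36.6 Hz), then trim to 36 time points
    downsampled = filtered[:, ::7][:, :36]
    return downsampled.reshape(-1)  # 12 channels x 36 points = 432 features
```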
2.6. Classification scheme
Data acquired offline was used to train the standard linear discriminant analysis (LDA) classifier in conditions 1–3, typical calibration, single run, and generic model. For conditions 4–5 online training single run, and online generic model, the generic data trained the online version of the standard LDA. The online training strategy was based on the inverse of the common covariance matrix, updated with the Sherman–Morrison–Woodbury formula (Kuncheva and Plumpton, 2008; Vidaurre et al., 2011).
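A minimal sketch of such a one-sample-at-a-time LDA update is shown below, assuming an exponentially weighted pooled covariance whose inverse is maintained with the Sherman–Morrison identity; the class structure and update constant `gamma` are illustrative assumptions, not the cited implementations.

```python
import numpy as np

class OnlineLDA:
    """One-sample-at-a-time LDA: running class means plus a rank-one
    Sherman-Morrison update of the inverse pooled covariance (sketch)."""

    def __init__(self, dim, gamma=0.05):
        self.mu = [np.zeros(dim), np.zeros(dim)]  # class means (non-target, target)
        self.n = [0, 0]
        self.cov_inv = np.eye(dim)  # start from a regularized identity
        self.gamma = gamma          # covariance update rate (assumed constant)

    def update(self, x, label):
        self.n[label] += 1
        self.mu[label] += (x - self.mu[label]) / self.n[label]
        d = x - self.mu[label]
        # Sigma_new = (1-g) Sigma + g d d^T, inverted via Sherman-Morrison:
        a_inv = self.cov_inv / (1.0 - self.gamma)
        u = np.sqrt(self.gamma) * d
        au = a_inv @ u
        self.cov_inv = a_inv - np.outer(au, au) / (1.0 + u @ au)

    def decision(self, x):
        w = self.cov_inv @ (self.mu[1] - self.mu[0])
        b = -0.5 * w @ (self.mu[1] + self.mu[0])
        return float(w @ x + b)  # > 0 -> target class
```

Because only the means and the inverse covariance are retained, no previous samples need to be stored, matching the memory-saving motivation described in Section 1.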
2.7. Practical bit rate
The practical bit rate (PBR) and raw bit rate (RBR) describe the speed and volume of character selection in an ERP-BCI. The PBR estimates system speed in a real-world setting by accounting for incorrect selections and for the time between selections. Unless otherwise stated, all analyses in this paper are based on PBR; RBR is presented only to facilitate comparison with other studies. The two measures differ in two ways. First, the PBR incorporates the fact that every error requires two additional selections to correct it (a backspace followed by the correct character): PBR = RBR × (1 − 2P), where RBR is the raw bit rate and P is the online error rate of the system (Townsend et al., 2011). Second, both RBR and PBR incorporate the time between selections (1 s). Raw bit rate calculated with selection time yields the online information transfer rate of P300 BCIs that use other error-correction methods (Dal Seno et al., 2010) to perform a backspace.
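As a sketch, assuming the standard Wolpaw definition of bits per selection as the basis for RBR (the paper does not spell out its RBR formula), the two measures could be computed as:

```python
import math

def bits_per_selection(n_choices, p_err):
    """Wolpaw bits per selection for an n-choice speller (assumed RBR basis)."""
    p = 1.0 - p_err  # accuracy
    bits = math.log2(n_choices)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_choices - 1))
    return bits

def raw_bit_rate(n_choices, p_err, selection_time_s):
    # bits per minute, given the total time per selection (flashing + 1 s gap)
    return bits_per_selection(n_choices, p_err) * 60.0 / selection_time_s

def practical_bit_rate(n_choices, p_err, selection_time_s):
    # Every error costs two extra selections (backspace + correction),
    # hence PBR = RBR * (1 - 2P) (Townsend et al., 2011).
    return raw_bit_rate(n_choices, p_err, selection_time_s) * (1.0 - 2.0 * p_err)
```

For the 6 × 6 matrix used here, `n_choices` would be 36; at zero error rate PBR and RBR coincide, and PBR drops to zero at P = 0.5.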
2.8. Adaptive system settings
The classifier output determines the number of trials per average for each character spelling. Next, the classifier identifies the target character based on data from all trials in the trial block. If the classifier decides on the same character after two successive trials, then additional flashes are not presented. The character selected by the classifier is then presented as feedback to the participant.
For example, assume that the classifier selects the letter “A” based on data from the first trial. The system then presents a second trial; data from the first and second trials are averaged, and the classifier makes a second selection. If the classifier again selects “A”, then “A” is presented to the participant as feedback. If not, another trial begins. This process continues until cha(n) = cha(n − 1) or until 16 trials have elapsed, after which the classifier’s last selection is presented as feedback (Jin et al., 2011).
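The stopping rule above can be sketched as follows; `classify_average` is a hypothetical function returning the classifier's choice from the average of the first n trials.

```python
def adaptive_select(classify_average, max_trials=16):
    """Present trials until two successive averaged decisions agree,
    or max_trials is reached (sketch of the adaptive rule)."""
    prev = None
    for n in range(1, max_trials + 1):
        cha = classify_average(n)  # decision from the average of trials 1..n
        if cha == prev:
            return cha, n          # two successive agreements -> give feedback
        prev = cha
    return prev, max_trials        # fall back to the last selection
```

In the example from the text, decisions “A”, “B”, “B” would stop after the third trial with “B” as feedback.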
3. Results
Table 2 shows classification accuracy, raw bit rate, practical bit rate, mean number of trials averaged, and the time allocated to online training for each subject. Seven of the 11 subjects were able to use the generic model following online training. Although S8 and S9 performed well using their own classifier models, the generic model with the online training strategy did not work for them. S10 and S11 did not perform well using their own models, and their performance with the generic model was similarly poor. S3 reported that the offline calibration process was too long and that he could not focus well on the target character during the offline experiment; he performed poorly using the classifier model derived from his own data, but classification accuracy increased with the online generic model.
Table 2.
Classification accuracy, raw bit rate, practical bit rate, number of trials averaged, and time for online training in each condition.
| Condition | Measure | S1 | S2 | S3 | S4 | S5 | S6 | S7 | Average | S8 | S9 | S10 | S11 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Typical | ACC (%) | 85 | 85 | 55 | 90 | 80 | 75 | 90 | 80 ± 12.2 | 85 | 75 | 60 | 40 |
| | RBR | 28.6 | 20.5 | 10.2 | 33.5 | 22.8 | 13.7 | 37.2 | 23.8 ± 10.0 | 24.9 | 20.9 | 14.6 | 3.2 |
| | PBR | 17.8 | 13.2 | 0.9 | 23.6 | 12.3 | 6.4 | 25.9 | 14.3 ± 8.9 | 15.7 | 9.4 | 2.6 | 0 |
| | AVT | 2.65 | 3.7 | 3.65 | 2.5 | 3 | 4.5 | 2.25 | 3.2 ± 0.8 | 3.05 | 2.95 | 2.95 | 7.1 |
| Single run | ACC (%) | 70 | 55 | 55 | 55 | 50 | 55 | 85 | 60.7 ± 12.4 | 70 | 70 | 60 | 10 |
| | RBR | 12.2 | 9.7 | 9.6 | 9.7 | 9.3 | 8.2 | 23.0 | 11.7 ± 5.1 | 15.3 | 8.7 | 9.2 | 0.4 |
| | PBR | 4.6 | 0.9 | 0.9 | 0.9 | 0 | 0.8 | 14.6 | 3.2 ± 5.2 | 5.6 | 3.2 | 1.7 | 0 |
| | AVT | 4.5 | 3.85 | 3.9 | 3.85 | 3.45 | 4.55 | 3.3 | 5.5 ± 0.5 | 3.6 | 6.35 | 4.65 | 4.5 |
| Generic model | ACC (%) | 55 | 75 | 50 | 85 | 65 | 60 | 75 | 66.4 ± 12.5 | 15 | 5 | 40 | 25 |
| | RBR | 7.4 | 15.6 | 6.5 | 24.9 | 10.6 | 11.3 | 23.7 | 11.3 ± 7.5 | 0.6 | 0 | 3.3 | 1.7 |
| | PBR | 0.7 | 7.2 | 0 | 16.7 | 3.0 | 2.1 | 10.5 | 5.7 ± 6.1 | 0 | 0 | 0 | 0 |
| | AVT | 5.0 | 3.95 | 4.95 | 3.05 | 4.6 | 3.8 | 2.6 | 4.0 ± 0.9 | 6.55 | 6.85 | 6.7 | 6.1 |
| Online single run | ACC (%) | 80 | 55 | 55 | 70 | 55 | 70 | 85 | 67.1 ± 12.5 | 55 | 30 | – | – |
| | RBR | 22.1 | 12.3 | 10.1 | 16.9 | 8.2 | 7.2 | 33.7 | 15.8 ± 9.5 | 8.8 | 3.7 | – | – |
| | PBR | 12.0 | 10.8 | 0.9 | 6.1 | 0.8 | 2.8 | 20.5 | 7.7 ± 7.2 | 0.8 | 0 | – | – |
| | AVT | 3.1 | 3.45 | 4.35 | 3.25 | 4.55 | 7.6 | 2.25 | 4.1 ± 1.7 | 4.25 | 3.75 | – | – |
| | Time (s) | 221 | 92 | 179 | 86 | 92 | 126 | 80 | 125.1 ± 54.6 | 197 | 422 | – | – |
| Online generic model | ACC (%) | 80 | 80 | 70 | 90 | 85 | 85 | 85 | 82.1 ± 6.4 | – | – | – | – |
| | RBR | 26.3 | 19.8 | 15.1 | 25 | 24.9 | 24.1 | 24.1 | 22.8 ± 3.9 | – | – | – | – |
| | PBR | 14.0 | 21.7 | 5.5 | 18.2 | 15.7 | 15.2 | 15.2 | 15.1 ± 4.9 | – | – | – | – |
| | AVT | 2.6 | 3.15 | 3.65 | 3.35 | 3.05 | 3.15 | 3.15 | 3.2 ± 0.3 | – | – | – | – |
| | Time (s) | 80 | 182 | 161 | 86 | 83 | 83 | 83 | 108.3 ± 43.6 | – | – | – | – |
ACC, classification accuracy; RBR, raw bit rate; PBR, practical bit rate (bits/min); AVT, average number of trials used by the adaptive classifier; Time, duration of online training (condition 4 and 5 only); Sn, Subject. Average is the mean result of S1–S7 who achieved 80% accuracy or above using the generic model with online training strategy.
Table 2 shows that participants S8 and S9 could not reach the 80% threshold with the online generic model, so no online results were obtained for them in that condition. Participants S10 and S11 could reach the 80% threshold neither with the online generic model nor with the online single run model, so no online results were obtained for them in either condition. S10 and S11 also performed poorly using their own typical models, suggesting that they are likely illiterate regarding P300-BCIs.
Before statistically comparing classification accuracy and practical bit rates, data were tested for normal distribution (one-sample Kolmogorov–Smirnov test) and sphericity (Mauchly’s test). The alpha level was adjusted according to Bonferroni with α = 0.02 (marginally significant), α = 0.01 (significant) and α = 0.002 (highly significant).
Table 2 displays, for participants 1–7, the results of subjects who achieved 80% accuracy or above using the generic model with the online training strategy. A one-way ANOVA compared classification accuracy (F(30) = 4.61, P < 0.01) and practical bit rate (F(30) = 4.36, P < 0.01) across the five strategies for participants 1–7. T-tests with Bonferroni correction compared practical bit rate and classification accuracy between each pair of strategies. Accuracy with the online generic model was marginally significantly higher than with the online single run (P < 0.02) and than with the generic model alone (P = 0.012); PBR with the online generic model was significantly higher than with the generic model alone (P < 0.01). Coupling the generic model with online training afforded accuracy and PBR equal to those obtained by the classifier built from the subject’s own data: the typical and online generic models were comparable (P = 0.69 for classification accuracy; P = 0.85 for practical bit rate). Accuracy and practical bit rate with the typical model were marginally significantly higher than with the single run (P < 0.02 for both).
Fig. 2 shows the ERP amplitudes of participants 8–10 and the amplitudes from the generic data. Participant 11 performed at a PBR of zero, too low to utilize an ERP-based speller system, so his amplitudes were excluded. The P300 amplitudes averaged from target sub-trials of participants 8–10 were much weaker than those of the generic data at sites P7, P8, O1, Oz and O2. The ERP morphology exhibited by participants 8–10 deviates from that shown in the generic data; consequently, these participants’ speed and accuracy suffered with the generic model.
Fig. 2.
The ERP amplitudes of subject 8–10 compared to the ERP amplitudes of the generic data set.
4. Discussion
Five different strategies tested the applicability of an online training strategy using a generic model in eleven participants. Online results indicated that participant S8 obtained high accuracy and practical bit rate in the typical model condition; however, he could not use the generic model, even after addition of the online training strategy. Participants S10 and S11 demonstrated low accuracies and bit rates with the typical and online generic models. With such low levels of PBR and accuracy, these individuals were unable to utilize the ERP-based BCI with reasonable efficiency. Participants who achieved high accuracy and PBR with the online generic model obtained classification accuracy and PBR comparable to a model generated from their own data runs. In the typical calibration described in this paper, offline calibration took 12 min. The online generic model therefore has the major benefit of saving more than 10 min of training time, significantly shortening the calibration process (P < 0.01). The classifiers derived from three runs of offline data achieved higher accuracy than classifiers derived from a single offline run, as would be expected.
For participants 1–7 in Table 2, accuracy with the online generic model was marginally significantly higher than with the generic model alone (P = 0.012), and PBR with the online generic model was significantly higher than with the generic model alone (P < 0.01). This demonstrates that applying a generic model combined with online training yields significantly higher PBR than the generic model used alone. Coupling the generic model with online training afforded accuracy and PBR equal to those obtained by the classifier built from the subject’s own data: the typical and online generic models were comparable (P = 0.69 for classification accuracy; P = 0.85 for practical bit rate), but the online generic model was trained more quickly. Accuracy and practical bit rate with the typical model were marginally significantly higher than with the single run (P < 0.02 for both), indicating that one run of data was not sufficient to adequately train the classifier model.
Generic models can save time during ERP-based BCI use. However, not all participants benefited from them. Offline analyses indicate that the ERP amplitudes of participants unable to use the generic model differ from those of the generic data, especially at sites P7, P8, O1, Oz and O2 (see Fig. 2). Participants who successfully operated the system with the online generic model had EEG features similar to the generic data set (see Fig. 3). Analyzing compatibility between BCI users and generic data may therefore predict success with a generic model.
Fig. 3.
The black solid line represents the ERP amplitude averaged from generic data, and the dashed line represents the ERP amplitude averaged from subjects 1–7. The ERPs in this figure are more similar to each other than the waveforms in Fig. 2, especially at sites P7, P8, O1, Oz and O2.
In the online experiment, subjects spelt 20 characters continuously without any rest, so accuracy was slightly lower than in our previous work (Jin et al., 2011, 2012), in which subjects rested for several minutes after spelling five characters. However, spelling characters without rest better tests the robustness of the BCI system. In our results, seven of the participants benefited from the generic model and obtained good accuracy. One drawback of our study is the small sample size: we did not obtain enough information to relate subjects' personal characteristics to the generic models, and we do not yet know which subject characteristics determine ERP waveforms. Therefore, we could not select appropriate generic models for subjects with different characteristics. The percentage of BCI illiteracy in our study is high, likely due mainly to the small sample size. We had data from only ten subjects to build the generic model, all Chinese, right-handed, four female and six male, and aged between 20 and 30 years. This generic model is therefore limited: only subjects with similar ERP waveforms could use it (see Fig. 3). From Fig. 2, we can see that S8 and S9 have different ERP waveforms. A larger sample size is necessary for building generic models, including models for subject groups whose ERP waveforms resemble those of S8 or S9.
In future work, we will enlarge our sample size and study the relationship between subjects' personal characteristics and the generic models, further validating the application of generic models, and we will try to build appropriate generic models for different subject groups according to their ERP waveforms. Moreover, users may operate a BCI system for long periods, and performance may degrade over time. We will therefore seek new ways to improve the system, for example by re-training the classifier model online, or by prompting users to rest after a certain period of use or when the average number of trials required by our adaptive system increases. We will study such systems in future work to find new approaches to the online training method and the generic model.
HIGHLIGHTS.
We examine whether one generic model works for all subjects.
We show the performance of a generic model using an online training strategy for participants who could use the generic model.
Four of the subjects could not use this generic model, which shows that one generic model is not generic for all subjects.
When the generic model could be used by a subject, mean training time was less than 2 min.
Acknowledgments
This work was supported in part by the Grant National Natural Science Foundation of China, under Grant Nos. 61074113 and 61203127 and supported part by Shanghai Leading Academic Discipline Project, Project Number: B504, NIBIB & NINDS, NIH (EB00856), NIDCD, NIH (1 R21 DC010470-01), NIDCD, NIH (1 R15 DC011002-01), and Fundamental Research Funds for the Central Universities (WH1114038).
References
- Allison BZ, Pineda JA. ERPs evoked by different matrix sizes: implications for a brain computer interface (BCI) system. IEEE Trans Neural Syst Rehabil Eng. 2003;11:110–3. doi: 10.1109/TNSRE.2003.814448.
- Curran T, Hancock J. The FN400 indexes familiarity-based recognition of faces. Neuroimage. 2007;36:464–7. doi: 10.1016/j.neuroimage.2006.12.016.
- Dal Seno B, Matteucci M, Mainardi L. Online detection of P300 and error potentials in a BCI speller. Comput Intell Neurosci. 2010;2010:307254. doi: 10.1155/2010/307254.
- Farwell LA, Donchin E. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalogr Clin Neurophysiol. 1988;70:510–23. doi: 10.1016/0013-4694(88)90149-6.
- Hong B, Guo F, Liu T, Gao S. N200-speller using motion-onset visual response. Clin Neurophysiol. 2009;120:1658–66. doi: 10.1016/j.clinph.2009.06.026.
- Jin J, Allison BZ, Sellers EW, Brunner C, Horki P, Wang X, et al. An adaptive P300-based control system. J Neural Eng. 2011;8:036006. doi: 10.1088/1741-2560/8/3/036006.
- Jin J, Allison BZ, Wang XY, Neuper C. A combined brain computer interface based on P300 potentials and motion-onset visual evoked potentials. J Neurosci Methods. 2012;205:265–76. doi: 10.1016/j.jneumeth.2012.01.004.
- Kaufmann T, Schulz SM, Grünzinger C, Kübler A. Flashing characters with famous faces improves ERP-based brain–computer interface performance. J Neural Eng. 2011;8:056016. doi: 10.1088/1741-2560/8/5/056016.
- Kuncheva LI, Plumpton CO. Adaptive learning rate for online linear discriminant classifiers. Lect Notes Comput Sci. 2008;5342:510–9.
- Long J, Gu Z, Li Y, Yu T, Li F, Fu M. Semi-supervised joint spatio-temporal feature selection for P300-based BCI speller. Cogn Neurodyn. 2011;5:387–98. doi: 10.1007/s11571-011-9167-8.
- Lu S, Guan C, Zhang H. Unsupervised brain computer interface based on inter-subject information and online adaptation. IEEE Trans Neural Syst Rehabil Eng. 2009;17:135–45. doi: 10.1109/TNSRE.2009.2015197.
- Pfurtscheller G, Neuper C. Motor imagery and direct brain–computer communication. Proc IEEE. 2001;89:1123–34.
- Rivet B, Cecotti H, Perrin M, Maby E, Mattout J. Adaptive training session for a P300 speller brain–computer interface. J Physiol Paris. 2011;105:123–9. doi: 10.1016/j.jphysparis.2011.07.013.
- Townsend G, LaPallo BK, Boulay CB, Krusienski DJ, Frye GE, Hauser CK, et al. A novel P300-based brain–computer interface stimulus presentation paradigm: moving beyond rows and columns. Clin Neurophysiol. 2011;121:1109–20. doi: 10.1016/j.clinph.2010.01.030.
- Vidal JJ. Toward direct brain–computer communication. Annu Rev Biophys Bioeng. 1972;2:157–80. doi: 10.1146/annurev.bb.02.060173.001105.
- Vidaurre C, Kawanabe M, von Bünau P, Blankertz B, Müller KR. Toward unsupervised adaptation of LDA for brain–computer interfaces. IEEE Trans Biomed Eng. 2011;58:587–97. doi: 10.1109/TBME.2010.2093133.
- Zhang Y, Zhao Q, Jin J, Wang X, Cichocki A. A novel BCI based on ERP components sensitive to configural processing of human faces. J Neural Eng. 2012;9:026018. doi: 10.1088/1741-2560/9/2/026018.


