Heliyon. 2024 Sep 3;10(17):e37343. doi: 10.1016/j.heliyon.2024.e37343

Improving inter-session performance via relevant session-transfer for multi-session motor imagery classification

Dong-Jin Sung a,b,1, Keun-Tae Kim a,c,1, Ji-Hyeok Jeong a,d, Laehyun Kim a, Song Joo Lee a,e,⁎⁎, Hyungmin Kim a,e, Seung-Jong Kim b,⁎⁎⁎
PMCID: PMC11409124  PMID: 39296025

Abstract

Motor imagery (MI)-based brain-computer interfaces (BCIs) using electroencephalography (EEG) have found practical applications in external device control. However, the non-stationary nature of EEG signals continues to obstruct BCI performance across multiple sessions, even for the same user. In this study, we aim to address the impact of non-stationarity, also known as inter-session variability, on multi-session MI classification performance by introducing a novel approach, the relevant session-transfer (RST) method. Leveraging the cosine similarity as a benchmark, the RST method transfers relevant EEG data from the previous session to the current one. The effectiveness of the proposed RST method was investigated through performance comparisons with the self-calibrating method, which uses only the data from the current session, and the whole-session transfer method, which utilizes data from all prior sessions. We validated the effectiveness of these methods using two datasets: a large MI public dataset (Shu Dataset) and our own dataset of gait-related MI, which includes both healthy participants and individuals with spinal cord injuries. Our experimental results revealed that the proposed RST method leads to a 2.29 % improvement (p < 0.001) in the Shu Dataset and up to a 6.37 % improvement in our dataset when compared to the self-calibrating method. Moreover, our method surpassed the performance of the recent highest-performing method that utilized the Shu Dataset, providing further support for the efficacy of the RST method in improving multi-session MI classification performance. Consequently, our findings confirm that the proposed RST method can improve classification performance across multiple sessions in practical MI-BCIs.

Keywords: Session-transfer approach, Cosine similarity, Convolutional neural network, Brain-computer interface, Gait-related motor imagery

1. Introduction

Motor imagery (MI)-based brain-computer interfaces (BCIs) have emerged as a promising bridge between brain activity and external devices [1,2]. The MI paradigm involves imagining movement without physical execution, which leads to activations in the motor areas of the brain analogous to those of actual movement [3]. Because there are no gaze restrictions, MI-BCIs have been developed to provide practical interfaces for healthy individuals and those with physical impairments, allowing users to control devices by expressing their movement-related intentions. While various modalities have been employed to develop MI-BCIs, electroencephalography (EEG) is widely used due to its non-invasive, portable, and cost-effective attributes. Representative EEG-based MI-BCI applications include drone control [4], wheelchair navigation [5], and robotic exoskeleton control [6,7].

However, the non-stationarity of EEG signals remains a limitation that can adversely affect performance [8]. Factors such as intra-subject variability (often caused by changes in the user's state) and instrumental artifacts (resulting from electrode position changes or impedance variations) [8,9] contribute to significant shifts in the EEG data distribution between sessions recorded on different days or at different times of day (i.e., inter-session variability), where a session refers to one distinct EEG acquisition process. Even for the same user, these session-specific variations can affect classifier performance. Since classifiers are trained on a given session's data, a new session with a shifted data distribution can render the initially trained model suboptimal, which degrades classification performance across sessions [10]. Therefore, a calibration process in which the classifier is retrained for each new session is required to ensure optimal performance [8].

To address this limitation, research has combined EEG processing algorithms for feature extraction with machine learning techniques for classification. The most widely used feature extraction method is the common spatial pattern (CSP), a spatial filter that maximizes the variance difference between two classes of EEG signals. Various machine learning models have been employed with these CSP features to overcome inter-session variability. However, these traditional machine learning techniques are limited by their generally modest performance [11]. Over the past 2–3 years, advances in computing power driven by graphics processing units (GPUs) have increased interest in deep learning [12,13]. Unlike traditional methods that require manual feature extraction, deep learning can extract complex features from EEG data without prior feature assumptions [14]. Accordingly, deep learning-based domain adaptation techniques have emerged to tackle non-stationary EEG data by calibrating the distribution. These methods leverage annotated EEG data from previous sessions (i.e., the source domain) to enhance deep learning model performance on unlabeled EEG data from new sessions (i.e., the target domain) [15]. However, domain adaptation can be limited by its reliance on domain information, which can blur decision boundaries, and its insufficient handling of relationships and dependencies among samples can restrict its ability to capture overall data variability [16]. Another approach typically used to overcome non-stationarity is the instance transfer technique. In contrast to domain adaptation, this method assigns weights to, or filters, samples of the source domain according to defined criteria [17]. The advantage of the instance transfer method is therefore that it does not alter the inherent features or attributes of the EEG signals [18].

In this context, instance transfer techniques have been used to incorporate labeled trials from previous datasets to boost the informative trials in the target dataset [19]. However, directly transferring source data without selection carries the risk of negative transfer [20]. An instance selection approach based on active learning has been developed to overcome this issue, which selects data similar to the target subject's training data [[21], [22], [23]].

Furthermore, cosine similarity, a metric that compares the orientation of feature vectors independently of their magnitude, has been applied in BCI applications to measure the similarity of EEG features [24]. While previous studies utilizing cosine similarity have shown improvements in inter-subject transfer, there has been limited investigation of its application across multiple sessions. Addressing these variations is crucial for improving MI-BCI systems, as it directly impacts the consistency and reliability of performance over time [25].

To tackle this issue, we propose a cosine similarity-based session-transfer method to mitigate negative transfer and improve multi-session MI classification. We utilize a CNN-based model to extract features from raw EEG data. The cosine similarity function is then employed to compute the similarity between the features of previous sessions' data and those of the current session. Previous data with a similarity coefficient above an empirically chosen threshold are selected as relevant data and transferred to improve multi-session classification performance.

Hence, the contributions of this paper can be summarized as follows: 1) We introduce a relevant session-transfer method based on cosine similarity to improve classification performance by selecting instances for multi-session MI tasks; 2) We validate the effectiveness of our method using a large public MI dataset comprising 25 participants, each with data from 5 sessions; 3) We also demonstrate the feasibility of our method using our own gait-related MI dataset, which includes both healthy participants and individuals with spinal cord injuries (SCI).

The remainder of the paper is structured as follows: In Section 2, we review relevant studies in MI-BCI. Section 3 details our approach to the inter-session scenario using cosine similarity-based session transfer and presents the experimental evaluation of the proposed method. The experimental results are presented in Section 4. In Section 5, we analyze these results, discuss the effectiveness of our method, and explore potential future research directions and limitations of our work. Finally, we summarize the significance of our findings in Section 6.

2. Related works

As the issue of variability due to the non-stationarity of EEG in MI-BCI has gained attention, many research groups have explored widely used CSP features alongside various machine learning methods. For instance, Arvaneh et al. applied CSP filters and a linear discriminant analysis (LDA) classifier, incorporating a Kullback–Leibler-based data space adaptation method to linearly transform EEG data and reduce distribution differences between the source and target spaces [10]. Bamdadian et al. proposed a spatial filter-based adaptive extreme learning machine (ELM) that updates the initial classifier from the calibration session using EEG data from the evaluation session [8]. Moreover, variations of the CSP algorithm, such as the filter bank common spatial pattern (FBCSP), have enhanced robustness by optimizing frequency bands; for example, Zhang et al. used FBCSP with a progressive adaptation model leveraging recordings from previous sessions [26].

With the rapid development of deep learning, various methods have been adapted to reduce distribution discrepancies between source and target domains in MI-BCI systems. While these methods have primarily addressed inter-subject variability in MI [16,[27], [28], [29]], recent studies have also focused on mitigating inter-session variability. For instance, Zhang et al. proposed a Siamese deep domain adaptation framework for cross-session MI classification based on mathematical models from domain adaptation theory to enhance performance across different sessions [14]. Similarly, Hong et al. introduced a dynamic joint domain adaptation network using an adversarial learning strategy to learn domain-invariant feature representations. This approach improved MI task classification in the target domain by effectively leveraging information from the source session [30].

In addition to domain adaptation, instance transfer methods have emerged as effective strategies for addressing non-stationarity between sessions. For example, Zhang et al. progressively adapted a model with session-to-session transfer in an MI-BCI system [26]. To reduce performance deterioration across sessions, a previous study evaluated recalibration strategies for detecting movement intentions and reported a significant boost in performance when combining past sessions' data [31]. This strategy was similarly successful in an offline study for the Cybathlon BCI competition, where previous session data were combined with new session data for robust control [32]. Furthermore, studies have focused on enhancing MI [33] and motor attempt detection [19] for stroke rehabilitation through session-to-session transfer strategies, showing that adaptive EEG data transfer can significantly improve BCI performance in both patients and poor-performing participants.

Building on these approaches, cosine similarity has also been explored as a means to improve MI classification. Xu et al. proposed a regularized CSP algorithm that uses cosine similarity to transfer selected source subjects' covariance matrices, resulting in notable improvements in cross-subject MI classification [34]. Additionally, Li et al. employed cosine similarity in a graph convolutional network (GCN) to update the adjacency matrix, achieving better MI classification performance [35]. Despite their success in cross-subject scenarios, these methods have not been applied to inter-session variability in MI task classification. Therefore, our proposed work utilizes cosine similarity to selectively transfer instances and improve multi-session MI classification.

3. Materials and methods

3.1. Dataset

For this study, a publicly available dataset of left- and right-hand MI tasks was first used for investigation. We then employed our own dataset of gait-related MI, collected from two healthy participants and two individuals with spinal cord injury.

3.1.1. Shu dataset

This public MI-BCI dataset [36] comprises EEG recordings from 25 healthy participants (age 20–24, 12 females). Each participant underwent a total of 5 sessions, with an interval of 2–3 days between sessions. The MI tasks involved 100 trials of two tasks: grasping with the left and right hand. Participants were given video cues to indicate the start of a 4-s imagination period for each respective movement. The trials were recorded using a 32-channel EEG cap (Wuhan Greentech Technology Co., China) with a sampling frequency of 250 Hz, while ensuring that the electrode impedance remained below 20 kΩ. Baseline correction and removal of bad trials were performed to avoid unwanted artifacts. Additionally, the EEG data were band-pass filtered using a finite impulse response (FIR) filter with a frequency range of 0.5–40 Hz. The study for collecting this multiple independent session data received approval from the Shanghai Second Rehabilitation Hospital Ethics Committee (approval number: ECSHSRH 2018-0101) and was in accordance with the Declaration of Helsinki.

3.1.2. Gait-related MI dataset

We collected our gait-related MI EEG dataset, which includes 2 healthy participants (H1 and H2; both male, aged 25 and 29 years) and 2 individuals with spinal cord injuries (SCI) (S1 and S2; both male, aged 46 and 53 years). Both individuals with SCI had complete lower-extremity sensory and motor deficits, classified as grade A according to the American Spinal Injury Association (ASIA) impairment scale. None of the healthy participants had a history of motor or psychological disease. Before the experiment, participants were informed about the protocol approved by the Institutional Review Board (approval number: 2021-046) of the Korea Institute of Science and Technology (KIST), and written consent was obtained from each participant in accordance with the Declaration of Helsinki.

During the experiment, participants were seated in front of a monitor to perform the required tasks, as shown in Fig. 1a. The experimental protocol comprised repeated MI tasks, each referred to as a 'trial'. Fig. 1b depicts the overall procedure for each trial. A thorough description of the MI tasks to be performed in each trial ensured that participants understood the task before starting. First, participants were provided with a mouse and instructed to single-click it when they were ready to begin. Once clicked, a fixation cross was displayed for 3 s, accompanied by a beep. Then, a random visual cue lasting 2 s was presented, indicating one of three gait-related MI tasks (Gait: upward arrow, Rest: box, or Sit: downward arrow). Each MI task consisted of 30 trials, resulting in a total of 90 trials per session. Participants were instructed to mentally imagine the corresponding movement for 5 s. After completing the MI task, a final beep signaled the end of the current trial and prepared participants for the next one.

Fig. 1. (a) Experimental environment of the MI experiment. (b) Experimental protocol used to collect our gait-related MI dataset.

To record the EEG signals, we used a 31-channel electrode arrangement based on the international 10–20 system with an actiCAP electrode cap connected to a Brain Products amplifier (Brain Products, Germany). The reference and ground electrodes were positioned at AFz and FCz, respectively. EEG data were recorded at a sampling rate of 500 Hz, and a 60 Hz notch filter was applied to reduce power-line noise.

3.2. Signal processing

The concept of session transfer involves using data from previous sessions to train the classifier for the target session. As demonstrated in previous literature, adaptively transferring data across sessions has been shown to improve performance [37,38,39]. Building on this idea, we extend the concept by selectively transferring relevant previous data with a high cosine similarity score [40] with respect to the target session, which results in more robust performance. The overall schematic of our proposed method is shown in Fig. 2. Further details of the proposed method, including the preprocessing steps, the feature extraction process, and the comparative evaluation methods, are described below.

Fig. 2. Overall schematic of our proposed RST method. Features are extracted via the MM-CNN from windows of EEG data in the current ($X_c$) and previous ($X_p$) sessions. Each feature $e_{x_p}(k)$ from the previous session is compared with the current session's features, and the similarity values are averaged across all $n$ current features. Samples exceeding the predefined threshold $\alpha$ are then added to the training dataset.

3.2.1. Preprocessing in each dataset

In the Shu Dataset, the data had already been preprocessed with band-pass filtering, and noise rejection above 100 μV had been applied [36]. For our experimental dataset, the EEG data were down-sampled from 500 Hz to 250 Hz. A band-pass filter from 1 to 40 Hz, based on a finite impulse response (FIR) filter, was applied to reject artifacts [41], followed by Z-score normalization. Data augmentation was performed using a sliding-window method for both datasets. Since the MI data were 4 s and 5 s long for the Shu Dataset and our dataset, respectively, we employed a window size of 3 s with a shift of 0.5 s for the Shu Dataset and a window size of 4 s with a shift of 0.5 s for our experimental dataset.
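The sketch below illustrates this preprocessing and augmentation pipeline for our gait-related recordings. It is a minimal example assuming the trials are available as NumPy arrays; the function names, FIR filter order, and array shapes are illustrative rather than the exact implementation used in the study.

```python
# Minimal preprocessing sketch, assuming raw recordings shaped (n_trials, n_channels, n_samples) at 500 Hz.
# Names and the filter order (numtaps) are illustrative assumptions, not the authors' code.
import numpy as np
from scipy.signal import decimate, firwin, filtfilt


def preprocess_trials(raw, fs_in=500, fs_out=250, band=(1.0, 40.0), numtaps=251):
    """Down-sample to 250 Hz, band-pass filter (1-40 Hz, FIR), and z-score each channel per trial."""
    x = decimate(raw, int(fs_in // fs_out), axis=-1, zero_phase=True)   # 500 Hz -> 250 Hz
    taps = firwin(numtaps, band, pass_zero=False, fs=fs_out)            # linear-phase FIR band-pass
    x = filtfilt(taps, [1.0], x, axis=-1)                               # zero-phase filtering
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / std                                             # Z-score normalization


def sliding_windows(trials, labels, fs=250, win_s=4.0, shift_s=0.5):
    """Augment each trial into overlapping windows (e.g., 4 s window with a 0.5 s shift)."""
    win, shift = int(win_s * fs), int(shift_s * fs)
    xs, ys = [], []
    for trial, y in zip(trials, labels):
        for start in range(0, trial.shape[-1] - win + 1, shift):
            xs.append(trial[:, start:start + win])
            ys.append(y)
    return np.stack(xs), np.array(ys)
```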

3.2.2. Design of CNN model for feature extraction

Computing or analyzing raw EEG data is challenging due to its high dimensionality and the presence of unwanted noise (i.e., low signal-to-noise ratio) [42,43]. Therefore, it is necessary to extract representative features from the recorded brain activity. We employed a multi-model (MM)-CNN, which was validated in our previous study [44]. The MM-CNN comprises three parallel, robust architectures that have proven effective in previous works, namely ShallowConvNet [45], DeepConvNet [45], and EEGNet [46]. Each model is trained with the same cross-entropy loss function, and the outputs of each model's fully connected layers are concatenated. This concatenated output is then passed through a fully connected layer followed by a Softmax activation [47] for classification. It is well recognized that the convolutional layers of a CNN provide a rich representation of the given data [48,49]. In our study, we used only the flattened outputs of each model's convolutional layers as features for each EEG window.
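The sketch below illustrates the multi-model idea in PyTorch: several parallel convolutional branches whose flattened convolutional outputs are concatenated and reused as feature vectors. It is a simplified stand-in with generic branch definitions; it does not reproduce the published ShallowConvNet, DeepConvNet, or EEGNet architectures or the exact MM-CNN training setup.

```python
# Structural sketch (PyTorch) of the multi-model feature extraction idea: three parallel
# convolutional branches whose flattened conv outputs are concatenated and used both as
# features and as input to a final classifier. Branch layers are placeholders.
import torch
import torch.nn as nn


class ConvBranch(nn.Module):
    def __init__(self, n_channels, out_maps=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, out_maps, kernel_size=(1, 25), padding=(0, 12)),  # temporal convolution
            nn.Conv2d(out_maps, out_maps, kernel_size=(n_channels, 1)),    # spatial convolution
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
        )

    def forward(self, x):                      # x: (batch, 1, channels, samples)
        return torch.flatten(self.conv(x), 1)  # flattened conv output used as the feature vector


class MultiModelCNN(nn.Module):
    def __init__(self, n_channels, n_samples, n_classes):
        super().__init__()
        self.branches = nn.ModuleList([ConvBranch(n_channels) for _ in range(3)])
        with torch.no_grad():                  # infer the concatenated feature dimension
            dummy = torch.zeros(1, 1, n_channels, n_samples)
            feat_dim = sum(b(dummy).shape[1] for b in self.branches)
        self.classifier = nn.Linear(feat_dim, n_classes)   # Softmax is applied via CrossEntropyLoss

    def extract_features(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

    def forward(self, x):
        return self.classifier(self.extract_features(x))
```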

3.2.3. Cosine similarity computation

From the features extracted for each EEG session, a similarity function can be applied to identify the relevant data. We used the cosine similarity function [40], which measures the similarity between two non-zero vectors $A$ and $B$ in an inner product space:

$$\text{cosine similarity} = \cos(A, B) = \frac{A \cdot B}{\lVert A \rVert\,\lVert B \rVert} \tag{1}$$

The cosine similarity has been used in various fields, such as natural language processing, face recognition, and signal classification [50,51,52]. Higher similarity yields a coefficient closer to 1, while dissimilar data yield values closer to −1. To compute the similarity between previous sessions and the current session's training data, features were extracted from each sliding window of raw data and split by class. Each of the previous session's features was then compared against all of the current session's features within the corresponding class. Therefore, the number of similarity coefficients computed for each previous-session feature equals the total number of features extracted from the current session. Subsequently, the mean of the coefficients for each previous feature was used for comparison, and relevant data were defined as the data corresponding to previous features whose mean cosine similarity exceeded a certain threshold (α).

Given a window of the current session's raw training data $x_c \in X_c$, the MM-CNN learns the embedding function $E_\theta : X \rightarrow \mathbb{R}^d$ that maps a given window sample to its feature $e_{x_c}^{(i)} = E_\theta(x_c)$ for the corresponding $i$th class (i.e., MI task). Features are then extracted from the previous sessions' data $X_p$ with the same function $E_\theta$, obtaining $e_{x_p}^{(i)} = E_\theta(x_p)$, where $x_p$ is a window of raw previous data ($x_p \in X_p$). Each previous feature $e_{x_p}^{(i)}(k)$ is compared against the current session's features of the same class using the similarity function in (1):

$$sim_k = \cos\!\left(e_{x_p}^{(i)}(k),\, e_{x_c}^{(i)}(j)\right), \quad \text{for } j = 1, \dots, n \tag{2}$$

where $sim_k = \{sim_1, sim_2, \dots, sim_n\}$ corresponds to the similarity values for the previous feature $e_{x_p}^{(i)}(k)$, and $n$ denotes the number of samples from the current session of the corresponding class. The raw data corresponding to the $i$th class of a previous feature are selected as relevant when the computed average ($sim_{avg}$) of the cosine similarities against the current features is above a threshold $\alpha$. Therefore, the relevant data can be defined as:

$$R^{(i)} = \left\{\, r \;\middle|\; \frac{1}{n}\sum_{j=1}^{n} \cos\!\left(e_{x_p}^{(i)}(k),\, e_{x_c}^{(i)}(j)\right) > \alpha,\ \text{for } k = 1, \dots, l \ \text{and} \ r \in X_p \right\} \tag{3}$$

where $l$ denotes the number of samples from the previous sessions of the corresponding class. Fig. 2 highlights an example in red: when the average similarity for a previous feature $e_{x_p}$ exceeds the predefined threshold $\alpha$, the corresponding data $x_p(k)$ are deemed relevant and added to the training dataset. As no previous research has reported an optimal threshold for this scenario, we tested $\alpha$ with the values $\{0, 0.1, 0.2, 0.3, 0.4, 0.5\}$. Negative coefficients were excluded, as a negative cosine similarity indicates dissimilarity [40]. The value yielding the best performance was chosen as the empirically defined optimal threshold.
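A minimal NumPy sketch of this selection rule is given below, assuming class-wise feature matrices have already been extracted by the MM-CNN; the function and variable names are illustrative.

```python
# Sketch of the relevant-data selection rule in Eqs. (2)-(3), assuming per-class features:
# prev_feats (l, d) from previous sessions and curr_feats (n, d) from the current session.
import numpy as np


def select_relevant(prev_feats, curr_feats, prev_data, alpha=0.1):
    """Return previous-session samples whose mean cosine similarity to the
    current session's features (within the same class) exceeds alpha."""
    # Normalize rows so that a dot product equals the cosine similarity of Eq. (1).
    p = prev_feats / np.linalg.norm(prev_feats, axis=1, keepdims=True)
    c = curr_feats / np.linalg.norm(curr_feats, axis=1, keepdims=True)
    sim = p @ c.T                       # (l, n): similarities of each previous feature, Eq. (2)
    sim_avg = sim.mean(axis=1)          # average over the n current-session features
    relevant = sim_avg > alpha          # Eq. (3): keep samples above the threshold
    return prev_data[relevant], sim_avg


# Usage per class i (hypothetical variable names): concatenate the selected windows with the
# current session's training set before training the classifier for that session.
# relevant_x, scores = select_relevant(prev_feats_i, curr_feats_i, prev_windows_i, alpha=0.1)
```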

3.2.4. Performance evaluation

Due to the non-stationarity of EEG data, inter-session variability necessitates new training data for calibration in every session [38,53]. Even with this calibration process applied to each new session, unstable performance is still observed across multi-session data. Therefore, we apply a session-transfer technique based on the hypothesis that data from previous sessions will help generalize the classification model. Training and classification were performed with the predefined MM-CNN, including its fully connected layers. The following three training strategies were compared:

  • a) Self-calibrating (SC): Only the data acquired in the current session are used as training data.

  • b) Whole session-transfer (WST): The entirety of the data from previous sessions is used together with the training data from the current session.

  • c) Relevant session-transfer (RST): The relevant data selected by (3) from previous sessions, for each class, are concatenated with the current session's training data.

To validate our proposed method, 10-fold cross-validation was conducted. The EEG data of the target session were randomly divided into 10 non-overlapping subsets. For each evaluation, one subset was used as the test set and the remaining 9 subsets as the training set. This process was repeated 10 times so that every subset served once as the test set. The accuracy, Kappa value, and F1-score across the 10 repetitions were averaged and reported as the measured performance. Model training used the Adam optimizer with a batch size of 64 for up to 300 epochs, with the learning rate set to 1 × 10⁻⁴ and early stopping applied to prevent overfitting. The time taken by each session-transfer method to train the model in each session was also recorded and averaged over the 10-fold cross-validation for comparison.
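This evaluation protocol can be summarized in the following sketch, which assumes the model and window arrays from the earlier sketches; the early-stopping rule shown here is a simple placeholder for whatever criterion was actually used.

```python
# Evaluation sketch under the stated protocol: 10-fold cross-validation, Adam optimizer,
# batch size 64, up to 300 epochs, learning rate 1e-4, and a simple early-stopping placeholder.
# X is assumed shaped (n_windows, 1, n_channels, n_samples); extra_X/extra_y hold transferred data.
import numpy as np
import torch
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score


def evaluate_session(X, y, build_model, extra_X=None, extra_y=None, device="cpu"):
    scores = []
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        X_tr, y_tr = X[train_idx], y[train_idx]
        if extra_X is not None:                       # WST/RST: concatenate transferred data
            X_tr = np.concatenate([X_tr, extra_X])
            y_tr = np.concatenate([y_tr, extra_y])
        model = build_model().to(device)
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = torch.nn.CrossEntropyLoss()
        xb_all = torch.tensor(X_tr, dtype=torch.float32, device=device)
        yb_all = torch.tensor(y_tr, dtype=torch.long, device=device)
        best, patience = np.inf, 0
        for epoch in range(300):
            perm = torch.randperm(len(xb_all))
            epoch_loss = 0.0
            for i in range(0, len(perm), 64):         # mini-batches of 64
                idx = perm[i:i + 64]
                opt.zero_grad()
                loss = loss_fn(model(xb_all[idx]), yb_all[idx])
                loss.backward()
                opt.step()
                epoch_loss += loss.item()
            if epoch_loss < best - 1e-4:
                best, patience = epoch_loss, 0
            else:
                patience += 1
            if patience >= 10:                        # placeholder early stopping
                break
        with torch.no_grad():
            x_te = torch.tensor(X[test_idx], dtype=torch.float32, device=device)
            pred = model(x_te).argmax(1).cpu().numpy()
        scores.append((accuracy_score(y[test_idx], pred),
                       cohen_kappa_score(y[test_idx], pred),
                       f1_score(y[test_idx], pred, average="macro")))
    return np.mean(scores, axis=0)                    # mean accuracy, Kappa, F1 over the 10 folds
```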

Statistical analysis of the repeated measures of classification accuracy and training time for each participant, according to the type of training method, was performed after Shapiro-Wilk normality tests and Levene's test to confirm the normality and homogeneity of variance of the data. Repeated measures (RM) ANOVA was performed to analyze classification accuracy, and Mauchly's test of sphericity was conducted to verify that the variances of the differences between levels of the within-subjects factor were equal. Paired t-tests with Bonferroni correction were performed for post-hoc evaluation. For training time, the paired Wilcoxon signed-rank test was used to compare the session-transfer methods. All statistical analyses were conducted using R (R Core Team, 2022).
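The analysis itself was carried out in R; purely as an illustration, an equivalent sequence of tests can be sketched in Python with scipy and statsmodels, assuming a long-format table of per-subject accuracies (the column names are hypothetical).

```python
# Illustrative Python equivalent of the R analysis (the paper used R), assuming `acc` is a
# long-format DataFrame with columns 'subject', 'method' (SC/WST/RST), and 'accuracy'.
from itertools import combinations
import pandas as pd
from scipy.stats import shapiro, levene, ttest_rel, wilcoxon
from statsmodels.stats.anova import AnovaRM


def analyze(acc: pd.DataFrame):
    # Normality (Shapiro-Wilk) and homogeneity of variance (Levene) per method
    groups = [g["accuracy"].values for _, g in acc.groupby("method")]
    print("Shapiro p-values:", [shapiro(g).pvalue for g in groups])
    print("Levene p-value:", levene(*groups).pvalue)

    # Repeated-measures ANOVA with 'method' as the within-subjects factor
    print(AnovaRM(acc, depvar="accuracy", subject="subject", within=["method"]).fit().summary())

    # Post-hoc paired t-tests with Bonferroni correction
    wide = acc.pivot(index="subject", columns="method", values="accuracy")
    pairs = list(combinations(wide.columns, 2))
    for a, b in pairs:
        p = ttest_rel(wide[a], wide[b]).pvalue * len(pairs)   # Bonferroni-corrected p-value
        print(f"{a} vs {b}: corrected p = {min(p, 1.0):.4f}")


# Training times between WST and RST would be compared with the paired Wilcoxon
# signed-rank test, e.g. wilcoxon(time_wst, time_rst).
```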

4. Results

This section presents the experimental results obtained with our proposed RST method. We first investigated the optimal threshold value, which is the threshold that yields the best accuracy, to define relevant features, and subsequent RST results were based on the empirically defined optimal threshold. We then compared the performances of both session-transfer techniques against the SC method with the available datasets.

4.1. Accuracy for the Shu dataset

To optimize the use of relevant data from past sessions, we evaluated the RST method over the range of defined thresholds. Fig. 3 shows the averaged accuracies and the ratio of relevant data per threshold across the runs of the 10-fold cross-validation. The threshold of 0.1 showed the best performance at 79.47 ± 0.6 %, with a relevant data ratio of 0.62 from previous sessions, and was therefore selected as the optimal threshold for the RST method. Compared to the threshold of 0, which had the largest relevant data ratio of 0.68, this represented a difference of 0.42 % in performance.

Fig. 3. Averaged accuracies and the ratio of relevant data per threshold for the Shu Dataset. The highest accuracy can be found at the threshold of 0.1.

The overall performance of the SC and session-transfer methods was then evaluated. Fig. 4 depicts the averaged accuracy for the three methods. The proposed RST method exhibited the highest performance at 79.67 ± 7.9 %, which was significantly better than the SC method at 77.39 ± 8.8 % (p < 0.001) and the WST method at 78.66 ± 8.6 % (p < 0.01). While the WST method also showed higher accuracies than the SC method, the RST method consistently showed the best performance across all sessions. Table 1 shows the accuracy, Kappa value, and F1-score of the RST method against the SC and WST methods. The largest improvements over the SC and WST methods were observed in Session 4, with increases of 5.48 % and 1.96 %, respectively. These results demonstrate that the RST method delivers robust performance across multiple sessions compared with these baseline methods.

Fig. 4. Results from the Shu Dataset with the averaged accuracy across the 25 participants for comparison between SC, WST, and RST. The error bars plot the standard error for each method. Statistical analysis was conducted using RM ANOVA, followed by a paired t-test with Bonferroni correction for post-hoc evaluation (*p < 0.05, **p < 0.01, ***p < 0.001).

Table 1. Comparison of evaluation metrics per session for the RST method against the SC and WST methods in the Shu dataset. The highest values are highlighted in bold.

| Session | Method | Accuracy (%) | Kappa | F1-score |
| --- | --- | --- | --- | --- |
| 2 | SC | **78.60 (2.5)** | **0.56 (0.05)** | **0.80 (0.04)** |
| 2 | WST | 78.27 (1.5) | **0.56 (0.03)** | 0.78 (0.22) |
| 2 | RST | 78.10 (1.6) | **0.56 (0.03)** | **0.80 (0.03)** |
| 3 | SC | 76.81 (1.8) | 0.53 (0.03) | 0.77 (0.03) |
| 3 | WST | 76.42 (1.9) | 0.52 (0.04) | 0.74 (0.03) |
| 3 | RST | **78.23 (1.3)** | **0.57 (0.03)** | **0.78 (0.02)** |
| 4 | SC | 78.41 (2.1) | 0.56 (0.04) | 0.73 (0.03) |
| 4 | WST | 81.94 (1.1) | 0.63 (0.02) | 0.81 (0.02) |
| 4 | RST | **83.90 (2.2)** | **0.68 (0.05)** | **0.83 (0.02)** |
| 5 | SC | 78.77 (1.4) | 0.57 (0.03) | 0.75 (0.03) |
| 5 | WST | 77.99 (2.2) | 0.56 (0.04) | 0.76 (0.03) |
| 5 | RST | **79.46 (1.2)** | **0.59 (0.03)** | **0.79 (0.02)** |

Furthermore, the time taken to train the models was investigated for the WST and RST methods. The training time for the RST method was consistently shorter than for the WST method (Fig. 5). Statistical analysis showed a significant decrease in training time for our proposed method: the average training time was 65.32 s for the WST method and 50.18 s for the RST method.

Fig. 5. Training time for the Shu dataset according to each session where the session-transfer methods have been applied. Statistical significance was assessed using the paired Wilcoxon signed-rank test (***p < 0.001).

4.2. Accuracy for the gait-related MI dataset

The same process was employed to determine the optimal threshold for our gait-related MI data. Fig. 6 shows the results of our simulations, which again showed the best performance at a threshold of 0.1, consistent with the Shu Dataset. The average accuracy at the 0.1 threshold was 71.41 ± 2.6 %, with a corresponding relevant data ratio of 0.83. In contrast, the threshold of 0 had the highest ratio of 0.86 but achieved a lower accuracy of 69.84 ± 2.5 %.

Fig. 6. Averaged accuracies and the ratio of relevant data per threshold for our gait-related dataset. The overall highest accuracy can also be found at a threshold of 0.1.

A comparison of the three methods on our dataset is shown in Fig. 7. While the SC method showed an average accuracy of 66.20 ± 9.4 %, the WST method achieved 70.49 ± 9.0 % and the RST method reached 72.57 ± 9.8 %. Overall, the WST method achieved an increase of 4.3 %, whereas the RST method outperformed the other methods, with increases of 6.4 % over the SC method and 2.1 % over the WST method. The most notable improvement was observed for S1, where the RST method achieved an increase of up to 10.2 % over the SC method.

Fig. 7. Results from our gait-related dataset for each participant, with the average accuracy comparison between SC, WST, and RST.

Table 3 presents the overall average accuracy, Kappa value, and F1-score per session for our proposed method compared with the SC and WST methods. The largest improvements across all evaluation metrics occurred in Session 4 against the SC method and in Session 3 against the WST method. These results confirm that the RST method can be effectively applied in a multi-class classification scenario, even with participants who have SCI.

Table 3. Comparison of evaluation metrics per session for the RST method against the SC and WST methods in our gait-related dataset. The highest values are highlighted in bold.

| Session | Method | Accuracy (%) | Kappa | F1-score |
| --- | --- | --- | --- | --- |
| 2 | SC | 59.26 (11.7) | 0.39 (0.18) | 0.57 (0.15) |
| 2 | WST | **70.37 (13.9)** | **0.56 (0.21)** | **0.67 (0.17)** |
| 2 | RST | 66.20 (10.9) | 0.49 (0.16) | 0.65 (0.13) |
| 3 | SC | 74.07 (8.0) | 0.61 (12.0) | 0.73 (0.08) |
| 3 | WST | 68.52 (12.7) | 0.53 (0.2) | 0.66 (0.15) |
| 3 | RST | **77.31 (15.6)** | **0.66 (0.23)** | **0.76 (0.17)** |
| 4 | SC | 66.67 (20.3) | 0.50 (0.30) | 0.61 (0.25) |
| 4 | WST | 73.15 (15.2) | 0.60 (0.23) | 0.70 (0.19) |
| 4 | RST | **79.17 (14.5)** | **0.69 (0.22)** | **0.78 (0.15)** |

The averaged training time for the WST and RST methods was also investigated, as shown in Fig. 8. The RST method required a shorter training time in every session except Session 2, where the WST method was faster by 1.4 s.

Fig. 8. Training time for our gait-related dataset according to each session where the session-transfer methods have been applied.

5. Discussion

In this study, we proposed the Relevant Session-Transfer (RST) method to address inter-session variability and improve classification accuracy in multiple-session MI-based BCIs. Our hypothesis was that selecting relevant data could improve classification accuracy compared to both the conventional Self-Calibrating (SC) method and the Whole-Session-Transfer (WST) method. We evaluated the RST method using the Shu open dataset, which consists of 25 participants across 5 sessions, and observed statistically significant improvements of 2.29 % in accuracy, achieving 79.67 ± 7.9 %. Furthermore, the proposed method was also applied to our gait-related MI dataset spanning 4 sessions, where it demonstrated an improvement of 6.37 % in performance with an average accuracy of 72.57 ± 9.8 %.

To validate the effectiveness of the RST method, we first applied it to the Shu dataset. Comparative results for this dataset are shown in Table 2, where the state-of-the-art method, a fusion of wavelet-time and image-scattering-based support vector machines (WTIS-SVM), achieved an accuracy of 75.68 % [54]. Using the previously proposed MM-CNN in our method, the accuracy was 77.39 % even with the self-calibrating approach. This result aligns with our previous study [44], in which the MM-CNN exhibited better performance on open MI datasets, and validates its suitability for session-transfer approaches.

Table 2. Comparative results on the Shu dataset, showing the performance of recent approaches and our MM-CNN model using the SC, WST, and RST methods.

| Group | Method | Accuracy (%) |
| --- | --- | --- |
| J. Ma et al. [36] | CSP | 57.33 |
| | FBCSP | 64.44 |
| | EEGNet | 68.85 |
| | deepConvNet | 65.03 |
| Pham [54] | LSTM | 59.66 |
| | Bi-LSTM | 61.83 |
| | WIS-SVM | 50.66 |
| | WTS-SVM | 65.51 |
| | WTIS-SVM | 75.68 |
| Proposed | MM-CNN | 77.39 |
| | MM-CNN (WST) | 78.66 |
| | MM-CNN (RST) | 79.67 |

In both datasets, the instance transfer methods (WST and RST) demonstrated superior performance compared to the SC method (Fig. 4, Fig. 7). Despite recalibration for each new session, the SC method exhibited lower performance, which can be attributed to non-stationarity leading to distribution discrepancies across sessions [55]. Our findings indicate that incorporating data from previous sessions, as done in the WST method, clearly demonstrates the benefit of session transfer. This aligns with previous research showing that larger volumes of training data can improve MI classification.

However, our results also underscore the importance of data selection strategies. While using all available data can generally improve classifier performance, our study suggests that directly transferring all available data may introduce adverse effects on the training process. This is consistent with previous research that attempted to alleviate negative transfer, which occurs due to differences between source and target data, by transferring only similar source data in an inter-subject scenario [21]. The observed adverse effects could be explained by significant discrepancies between feature domains across sessions, even for a single participant, which can compromise performance when a continuous distribution across the data is assumed [56]. Therefore, we hypothesized that employing a similarity function to screen the data, which reduces the amount of training data, could strike an optimal trade-off between data quantity and quality and produce meaningful improvements in multi-session MI classification.

As a result, the proposed RST method significantly enhanced classification performance (p < 0.01) on the Shu dataset and was also the highest-performing method on our gait-related dataset, even with less data at a relevancy threshold of 0.1, which yielded selected data ratios of 0.62 and 0.83 for the two datasets (Fig. 3, Fig. 6). Session-wise analysis of both datasets (Table 1, Table 3) revealed that while the WST method showed immediate improvements in the second session, the proposed RST method ultimately delivered the best results as the sessions progressed. The superior performance of the RST method corroborates previous research highlighting the essential role of data similarity in instance transfer learning [23]. This improvement with similar data can also be attributed to a greater likelihood of generalizing to the target, in our case the current session of interest, as opposed to retaining features that diverge from the target, which can result in inferior performance [56].

It was also notable that the optimal similarity threshold was 0.1 for both the Shu and gait-related datasets. One possible explanation is that both the quantity and quality of the data were crucial for ensuring optimal results. While data quantity affects training, a properly reduced amount of high-quality data can improve performance [57]. This is in line with studies reporting that accuracy can be comparable or even significantly increased when training data are limited, in both healthy and impaired groups [33,58]. Based on these findings, we believe the empirically defined threshold of 0.1 provided an optimal balance between data quantity and quality for both datasets, enhancing classification performance. In addition, the overall training time for the RST method was shorter than for the WST method in both datasets (Fig. 5, Fig. 8). These results are promising, as reduced training time alongside stable performance is required for the long-term practical use of BCIs [59].

Moreover, the results obtained from individuals with SCI (Fig. 7) support the feasibility of extending our session-transfer method from healthy participants to those with impairments. This is particularly significant because individuals with SCI are among the groups that can benefit most from MI-BCI technology. Although studies [60,61] have shown that these individuals can exhibit different brain states during MI, often leading to performance degradation, our results demonstrate that the RST method successfully improved multiple-session classification for this group. Based on our overall findings, the proposed session-transfer method could therefore help overcome obstacles hindering the real-world applicability of BCIs [62].

Multiple types of distance and similarity functions could be utilized for data selection. While distance is intuitively a complementary concept to similarity, it is unclear how each distance metric correlates with the cosine similarity used here across different applications [63,64]. Moreover, it is important to consider the data distribution when applying distance and similarity metrics [65]. As the objective of our study was to validate the feasibility of cosine similarity for the inter-session problem of MI-BCI, comparing various metrics for data selection is a natural next step of our research. We therefore plan a deeper investigation of various distance and similarity metrics and of EEG data distributions for data selection in the field of BCI in future studies. Also, as the size of our gait-related dataset was limited, we plan to expand our experiments with a larger population of individuals with SCI, increase the number of sessions, and consider factors such as gender and age to further validate our methods. Future investigations will also explore the efficacy of other similarity functions, address intra-session variability, and apply our method to the cross-dataset problem [66] to advance the practical use of multi-session MI-BCI.

6. Conclusion

This study demonstrated the effectiveness of the proposed RST method in improving MI classification across multiple sessions. The results underscored that transferring selected relevant data can improve classification performance even with a shorter training time compared to baseline methods. Moreover, our method exhibited applicability to individuals with SCI. By ensuring consistent performance over time, the RST method enhances the feasibility of MI-based BCIs in real-world scenarios. This promises to benefit a broad range of users, promoting independence and improving the quality of life for impaired individuals while assisting the healthy population.

CRediT authorship contribution statement

Dong-Jin Sung: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Writing – original draft. Keun-Tae Kim: Conceptualization, Funding acquisition, Investigation, Methodology, Validation. Ji-Hyeok Jeong: Data curation, Investigation, Validation. Laehyun Kim: Funding acquisition. Song Joo Lee: Project administration, Supervision, Writing – review & editing. Hyungmin Kim: Investigation, Project administration, Supervision, Writing – review & editing. Seung-Jong Kim: Supervision, Writing – review & editing.

Declaration of competing interest

We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.

We confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed. We further confirm that the order of authors listed in the manuscript has been approved by all of us.

We confirm that we have given due consideration to the protection of intellectual property associated with this work and that there are no impediments to publication, including the timing of publication, with respect to intellectual property. In so doing we confirm that we have followed the regulations of our institutions concerning intellectual property.

We further confirm that any aspect of the work covered in this manuscript that has involved either experimental animals or human patients has been conducted with the ethical approval of all relevant bodies and that such approvals are acknowledged within the manuscript.

We understand that the Corresponding Author is the sole contact for the Editorial process (including Editorial Manager and direct communications with the office). He/she is responsible for communicating with the other authors about progress, submissions of revisions and final approval of proofs. We confirm that we have provided a current, correct email address which is accessible by the Corresponding Author and which has been configured to accept email from hk@kist.re.kr.

Acknowledgments

This work was supported in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean Government (Development of Non-Invasive Integrated BCI SW Platform to Control Home Appliances and External Devices by User's Thought via AR/VR Interface) under Grant 2017-0-00432, and in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1C1C2008446).

Contributor Information

Song Joo Lee, Email: songjoolee@kist.re.kr.

Hyungmin Kim, Email: hk@kist.re.kr.

Seung-Jong Kim, Email: sjkim586@korea.ac.kr.

References

1. Meng J., Zhang S., Bekyo A., Olsoe J., Baxter B., He B. Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks. Sci. Rep. 2016;6. doi: 10.1038/srep38565.
2. Tiwari N., Edla D.R., Dodia S., Bablani A. Brain computer interface: a comprehensive survey. Biologically Inspired Cognitive Architectures. 2018;26:118–129.
3. Papaxanthis C., Schieppati M., Gentili R., Pozzo T. Imagined and actual arm movements have similar durations when performed under different conditions of direction and mass. Exp. Brain Res. 2002;143:447–452. doi: 10.1007/s00221-002-1012-1.
4. LaFleur K., Cassady K., Doud A., Shades K., Rogin E., He B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. J. Neural. Eng. 2013;10. doi: 10.1088/1741-2560/10/4/046003.
5. Saeedi S., Chavarriaga R., Millán J.d.R. Long-term stable control of motor-imagery BCI by a locked-in user through adaptive assistance. IEEE Trans. Neural Syst. Rehabil. Eng. 2016;25:380–391. doi: 10.1109/TNSRE.2016.2645681.
6. Frisoli A., Loconsole C., Leonardis D., Banno F., Barsotti M., Chisari C., Bergamasco M. A new gaze-BCI-driven control of an upper limb exoskeleton for rehabilitation in real-world tasks. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews). 2012;42:1169–1179.
7. Choi J., Kim K.T., Jeong J.H., Kim L., Lee S.J., Kim H. Developing a motor imagery-based real-time asynchronous hybrid BCI controller for a lower-limb exoskeleton. Sensors. 2020;20:7309. doi: 10.3390/s20247309.
8. Bamdadian A., Guan C., Ang K.K., Xu J. 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2013. Improving session-to-session transfer performance of motor imagery-based BCI using adaptive extreme learning machine; pp. 2188–2191.
9. Kaplan A.Y., Fingelkurts A.A., Fingelkurts A.A., Borisov S.V., Darkhovsky B.S. Nonstationary nature of the brain activity as revealed by EEG/MEG: methodological, practical and conceptual challenges. Signal Process. 2005;85:2190–2212.
10. Arvaneh M., Guan C., Ang K.K., Quek C. EEG data space adaptation to reduce intersession nonstationarity in brain-computer interface. Neural Comput. 2013;25:2146–2171. doi: 10.1162/NECO_a_00474.
11. Perez-Velasco S., Santamaria-Vazquez E., Martinez-Cagigal V., Marcos-Martinez D., Hornero R. EEGSym: overcoming inter-subject variability in motor imagery based BCIs with deep learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2022;30:1766–1775. doi: 10.1109/TNSRE.2022.3186442.
12. Al-Saegh A., Dawwd S.A., Abdul-Jabbar J.M. Deep learning for motor imagery EEG-based classification: a review. Biomed. Signal Process Control. 2021;63.
13. Craik A., He Y., Contreras-Vidal J.L. Deep learning for electroencephalogram (EEG) classification tasks: a review. J. Neural. Eng. 2019;16. doi: 10.1088/1741-2552/ab0ab5.
14. Zhang X., Miao Z., Menon C., Zheng Y., Zhao M., Ming D. Priming cross-session motor imagery classification with a universal deep domain adaptation framework. Neurocomputing. 2023;556.
15. Azab A.M., Toth J., Mihaylova L.S., Arvaneh M. A review on transfer learning approaches in brain–computer interface. Signal Processing and Machine Learning for Brain-Machine Interfaces. 2018:81–98.
16. Zhang D., Li H., Xie J., Li D. MI-DAGSC: a domain adaptation approach incorporating comprehensive information from MI-EEG signals. Neural Network. 2023;167:183–198. doi: 10.1016/j.neunet.2023.08.008.
17. Hui Q., Liu X., Li Y., Xu S., Zhang S., Sun Y., Wang S., Chen X., Zheng D. Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems. 2022. Riemannian geometric instance filtering for transfer learning in brain-computer interfaces; pp. 1162–1167.
18. Zhang K., Xu G., Chen L., Tian P., Han C., Zhang S., Duan N. Instance transfer subject-dependent strategy for motor imagery signal classification using deep convolutional neural networks. Comput. Math. Methods Med. 2020;2020. doi: 10.1155/2020/1683013.
19. Cao L., Chen S., Jia J., Fan C., Wang H., Xu Z. An inter- and intra-subject transfer calibration scheme for improving feedback performance of sensorimotor rhythm-based BCI rehabilitation. Front. Neurosci. 2020;14. doi: 10.3389/fnins.2020.629572.
20. Rosenstein M.T., Marx Z., Kaelbling L.P., Dietterich T.G. 2005. To Transfer or Not to Transfer, NIPS 2005 Workshop on Transfer Learning.
21. Hossain I., Khosravi A., Nahavandhi S. 2016 International Joint Conference on Neural Networks (IJCNN). IEEE; 2016. Active transfer learning and selective instance transfer with active learning for motor imagery based BCI; pp. 4048–4055.
22. Hossain I., Khosravi A., Hettiarachchi I., Nahavandi S. Multiclass informative instance transfer learning framework for motor imagery-based brain-computer interface. Comput. Intell. Neurosci. 2018:2018. doi: 10.1155/2018/6323414.
23. Liang Z., Zheng Z., Chen W., Pei Z., Wang J., Chen J. Manifold embedded instance selection to suppress negative transfer in motor imagery-based brain–computer interface. Biomed. Signal Process Control. 2024;88.
24. Garima N. Goel, Rathee N. Modified multidimensional scaling on EEG signals for emotion classification. Multimed. Tool. Appl. 2023:1–22.
25. Abu-Rmileh A., Zakkay E., Shmuelof L., Shriki O. Co-adaptive training improves efficacy of a multi-day EEG-based motor imagery BCI training. Front. Hum. Neurosci. 2019;13:362. doi: 10.3389/fnhum.2019.00362.
26. Zhang Z., Foong R., Phua K.S., Wang C., Ang K.K. Modeling EEG-based motor imagery with session to session online adaptation. Annu Int Conf IEEE Eng Med Biol Soc. 2018;2018:1988–1991. doi: 10.1109/EMBC.2018.8512706.
27. Wei F., Xu X., Jia T., Zhang D., Wu X. IEEE Trans Neural Syst Rehabil Eng; 2023. A Multi-Source Transfer Joint Matching Method for Inter-subject Motor Imagery Decoding.
28. Zhang D., Li H., Xie J. MI-CAT: a transformer-based domain adaptation network for motor imagery classification. Neural Network. 2023;165:451–462. doi: 10.1016/j.neunet.2023.06.005.
29. Li H., Zhang D., Xie J. MI-DABAN: a dual-attention-based adversarial network for motor imagery classification. Comput. Biol. Med. 2023;152. doi: 10.1016/j.compbiomed.2022.106420.
30. Hong X., Zheng Q., Liu L., Chen P., Ma K., Gao Z., Zheng Y. Dynamic joint domain adaptation network for motor imagery classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2021;29:556–565. doi: 10.1109/TNSRE.2021.3059166.
31. Lopez-Larraz E., Ibanez J., Trincado-Alonso F., Monge-Pereira E., Pons J.L., Montesano L. Comparing recalibration strategies for electroencephalography-based decoders of movement intention in neurological patients with motor disability. Int. J. Neural Syst. 2018;28. doi: 10.1142/S0129065717500605.
32. Hehenberger L., Kobler R.J., Lopes-Dias C., Srisrisawang N., Tumfart P., Uroko J.B., Torke P.R., Muller-Putz G.R. Long-term mutual training for the CYBATHLON BCI race with a tetraplegic pilot: a case study on inter-session transfer and intra-session adaptation. Front. Hum. Neurosci. 2021;15. doi: 10.3389/fnhum.2021.635777.
33. Ang K.K., Guan C. EEG-based strategies to detect motor imagery for control and rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2017;25:392–401. doi: 10.1109/TNSRE.2016.2646763.
34. Xu Y., Wei Q., Zhang H., Hu R., Liu J., Hua J., Guo F. Transfer learning based on regularized common spatial patterns using cosine similarities of spatial filters for motor-imagery BCI. J. Circ. Syst. Comput. 2019;28.
35. Li Y., Zhong N., Taniar D., Zhang H. MCGNet(+): an improved motor imagery classification based on cosine similarity. Brain Inform. 2022;9:3. doi: 10.1186/s40708-021-00151-3.
36. Ma J., Yang B., Qiu W., Li Y., Gao S., Xia X. A large EEG dataset for studying cross-session variability in motor imagery brain-computer interface. Sci. Data. 2022;9:531. doi: 10.1038/s41597-022-01647-1.
37. Li Y., Kambara H., Koike Y., Sugiyama M. Application of covariate shift adaptation techniques in brain-computer interfaces. IEEE Trans. Biomed. Eng. 2010;57:1318–1324. doi: 10.1109/TBME.2009.2039997.
38. Zhang S., Zheng D., Tang N., Chew E., Lim R.Y., Ang K.K., Guan C. 2022 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE; 2022. Online adaptive CNN: a session-to-session transfer learning approach for non-stationary EEG; pp. 164–170.
39. Seeland A., Tabie M., Kim S.K., Kirchner F., Kirchner E.A. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE; 2017. Adaptive multimodal biosignal control for exoskeleton supported stroke rehabilitation; pp. 2431–2436.
40. Xia P., Zhang L., Li F. Learning similarity with cosine similarity ensemble. Inf. Sci. 2015;307:39–52.
41. Zapala D., Zabielska-Mendyk E., Augustynowicz P., Cudo A., Jaskiewicz M., Szewczyk M., Kopis N., Francuz P. The effects of handedness on sensorimotor rhythm desynchronization and motor-imagery BCI control. Sci. Rep. 2020;10:2087. doi: 10.1038/s41598-020-59222-w.
42. McFarland D.J., Wolpaw J.R. EEG-based brain–computer interfaces. Current Opinion in Biomedical Engineering. 2017;4:194–200. doi: 10.1016/j.cobme.2017.11.004.
43. Jeng P.Y., Wei C.S., Jung T.P., Wang L.C. Low-dimensional subject representation-based transfer learning in EEG decoding. IEEE J Biomed Health Inform. 2021;25:1915–1925. doi: 10.1109/JBHI.2020.3025865.
44. Jeong J.-H., Sung D.-J., Kim K.-T., Lee S.J., Kim D.-J., Kim H. 2023 11th International Winter Conference on Brain-Computer Interface (BCI). IEEE; 2023. Subject-transfer with subject-specific fine-tuning based on multi-model CNN for motor imagery brain-computer interface; pp. 1–3.
45. Schirrmeister R.T., Springenberg J.T., Fiederer L.D.J., Glasstetter M., Eggensperger K., Tangermann M., Hutter F., Burgard W., Ball T. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 2017;38:5391–5420. doi: 10.1002/hbm.23730.
46. Lawhern V.J., Solon A.J., Waytowich N.R., Gordon S.M., Hung C.P., Lance B.J. EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces. J. Neural. Eng. 2018;15. doi: 10.1088/1741-2552/aace8c.
47. Agarap A.F. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375. 2018.
48. Wu X., He R., Sun Z., Tan T. A light CNN for deep face representation with noisy labels. IEEE Trans. Inf. Forensics Secur. 2018;13:2884–2896.
49. Lun X., Yu Z., Chen T., Wang F., Hou Y. A simplified CNN classification method for MI-EEG via the electrode pairs signals. Front. Hum. Neurosci. 2020;14:338. doi: 10.3389/fnhum.2020.00338.
50. Zhang T., Kishore V., Wu F., Weinberger K.Q., Artzi Y. BERTScore: evaluating text generation with BERT. arXiv preprint arXiv:1904.09675. 2019.
51. Nguyen H.V., Bai L. Asian Conference on Computer Vision. Springer; 2010. Cosine similarity metric learning for face verification; pp. 709–720.
52. Freedberg M., Reeves J.A., Hussain S.J., Zaghloul K.A., Wassermann E.M. Identifying site- and stimulation-specific TMS-evoked EEG potentials using a quantitative cosine similarity metric. PLoS One. 2020;15. doi: 10.1371/journal.pone.0216185.
53. Sburlea A.I., Montesano L., Minguez J. Continuous detection of the self-initiated walking pre-movement state from EEG correlates without session-to-session recalibration. J. Neural. Eng. 2015;12. doi: 10.1088/1741-2560/12/3/036007.
54. Pham T.D. IEEE Trans Neural Syst Rehabil Eng; 2023. Classification of Motor-Imagery Tasks Using a Large EEG Dataset by Fusing Classifiers Learning on Wavelet-Scattering Features.
55. Shenoy P., Krauledat M., Blankertz B., Rao R.P., Muller K.R. Towards adaptive classification for BCI. J. Neural. Eng. 2006;3:R13–R23. doi: 10.1088/1741-2560/3/1/R02.
56. Ng H.W., Guan C. Deep unsupervised representation learning for feature-informed EEG domain extraction. IEEE Trans. Neural Syst. Rehabil. Eng. 2023;31:4882–4894. doi: 10.1109/TNSRE.2023.3339179.
57. Yin K., Lim E.Y., Lee S.-W. GITGAN: generative inter-subject transfer for EEG motor imagery analysis. Pattern Recogn. 2024;146.
58. Pan L., Wang K., Xu L., Sun X., Yi W., Xu M., Ming D. Riemannian geometric and ensemble learning for decoding cross-session motor imagery electroencephalography signals. J. Neural. Eng. 2023;20. doi: 10.1088/1741-2552/ad0a01.
59. Krauledat M., Schröder M., Blankertz B., Müller K.-R. Reducing calibration time for brain-computer interfaces: a clustering approach. Adv. Neural Inf. Process. Syst. 2006;19.
60. Pfurtscheller G., Linortner P., Winkler R., Korisek G., Muller-Putz G. Discrimination of motor imagery-induced EEG patterns in patients with complete spinal cord injury. Comput. Intell. Neurosci. 2009;2009. doi: 10.1155/2009/104180.
61. Muller-Putz G.R., Daly I., Kaiser V. Motor imagery-induced EEG patterns in individuals with spinal cord injury and their impact on brain-computer interface accuracy. J. Neural. Eng. 2014;11. doi: 10.1088/1741-2560/11/3/035011.
62. Fairclough S.H., Lotte F. Grand challenges in neurotechnology and system neuroergonomics. Front Neuroergon. 2020;1. doi: 10.3389/fnrgo.2020.602504.
63. Gupta M.K., Chandra P. Effects of similarity/distance metrics on k-means algorithm with respect to its applications in IoT and multimedia: a review. Multimed. Tool. Appl. 2022;81:37007–37032.
64. Ontañón S. An overview of distance and similarity functions for structured data. Artif. Intell. Rev. 2020;53:5309–5351.
65. Yu J., Amores J., Sebe N., Radeva P., Tian Q. Distance learning for similarity estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2008;30:451–462. doi: 10.1109/TPAMI.2007.70714.
66. Xie Y., Wang K., Meng J., Yue J., Meng L., Yi W., Jung T.P., Xu M., Ming D. Cross-dataset transfer learning for motor imagery signal classification via multi-task learning and pre-training. J. Neural. Eng. 2023;20. doi: 10.1088/1741-2552/acfe9c.
