Journal of NeuroEngineering and Rehabilitation
. 2026 Mar 3;23:121. doi: 10.1186/s12984-026-01924-9

Motor imagery BCI enables more practical and user-friendly exoskeleton control than smartwatch for users with spinal cord injury: a preliminary study

Keun-Tae Kim 1,#, Ji-Hyeok Jeong 2,3,#, Dong-Jin Sung 2,4, Ji-Yoon Lee 2, Laehyun Kim 2, Dong-Joo Kim 3,5, Seung-Jong Kim 4, Hyungmin Kim 2,6, Song Joo Lee 2,6
PMCID: PMC13067528  PMID: 41776620

Abstract

Background

Non-invasive brain-computer interface (BCI) technology has been widely studied for enabling individuals with spinal cord injury (SCI) to control assistive robotic devices, such as lower-limb exoskeletons. Although BCI-based control of lower-limb exoskeletons has been investigated in individuals with SCI, comparisons with conventional interfaces, such as smartwatches, remain limited. In this study, we developed a BCI system that enables individuals with SCI to control a lower-limb exoskeleton through voluntary walking-related motor imagery.

Methods

This study involved five individuals with SCI who controlled a lower-limb exoskeleton using both the BCI and a smartwatch as a conventional controller. For the BCI, participants controlled the exoskeleton through gait-related motor imagery (MI) tasks, including imagining walking forward and sitting down. EEG signals were recorded and processed through a session-transfer approach based on a dual-domain convolutional neural network. To compare the BCI and the smartwatch, all participants performed the exoskeleton control experiments on the same course and completed a usability evaluation.

Results

Experimental results showed that participants perceived our BCI system as more practical and user-friendly than the smartwatch for controlling the exoskeleton during crutch-assisted walking. A usability evaluation conducted after the real-time control experiment likewise rated the BCI system as more satisfactory than the smartwatch.

Conclusions

This study suggests that the MI-BCI system may offer greater practicality and stability than smartwatch-based control of a lower-limb exoskeleton in individuals with SCI. Consequently, our results may not only enhance the quality of life for individuals with SCI but also broaden the potential for developing BCI applications in real-world environments.

Keywords: Brain-computer interface (BCI), Lower-limb exoskeleton, Spinal cord injury, Gait-related motor imagery (MI), Electroencephalography (EEG)

Introduction

Human locomotion is a fundamental aspect of physical independence, but neurological disorders or traumatic injuries can disrupt neuromuscular communication, often resulting in lower-limb paralysis. According to the World Health Organization (WHO), approximately 500,000 new cases of spinal cord injury (SCI) occur worldwide each year [1], and more than 15.4 million people are living with SCI, with 90% of cases caused by traumatic events such as falls and traffic accidents [2]. More than 60% of individuals with SCI are unemployed globally, primarily due to impaired mobility and reduced opportunities for social participation, contributing to a significant economic and societal burden [3].

Significant efforts have been made to enhance the mobility and walking capacity of individuals with SCI. The wheelchair is the most common mobility aid. However, as technologies continue to evolve, wearable robots are now being developed to assist walking for those with SCI. In the present clinical context, wearable robots are primarily employed in the rehabilitation of individuals with incomplete SCI or neurological disorders, such as stroke, who exhibit limited control of their body movements [4]. This is likely because walking is a complex task that necessitates the integration of sensory and motor functions to achieve desired movements. In particular, these individuals must use crutches on both sides to optimize stability and mobility. Therefore, despite several preliminary studies evaluating the acceptability and feasibility of lower-limb exoskeletons for individuals with SCI, reliance on crutches often prevents individuals with more severe motor impairments from using these robots in daily activities [4, 5]. Currently developed methods for controlling lower-limb exoskeletons commonly involve either a joystick or a smartwatch, as these provide an intuitive means for the user to direct the robot according to their intentions [4, 6]. However, such controls may not be optimal for individuals who cannot stand independently or maintain the necessary posture while operating a joystick or smartwatch. Therefore, several research groups have proposed alternative control methods that utilize sensors, including those that detect vision [7, 8], motion [9, 10], or biosignals such as electroencephalography (EEG) [11–14] and electromyography (EMG) [15–17].

In particular, EEG-based brain-computer interfaces (BCIs) for controlling external devices, including lower-limb exoskeletons, have also been explored. However, most successful experiments have involved visual stimulation-based BCI systems, such as steady-state visual evoked potentials (SSVEPs) [11–14], which may not be optimal in ambulatory settings. Recently, motor imagery (MI), defined as the mental simulation of specific movement tasks, has emerged as a prominent BCI paradigm following advances in signal processing algorithms [18–21]. In our previous study, MI was shown to allow potentially more intuitive control of lower-limb exoskeletons because it imposes no visual demands on the user and draws on the user's spontaneous intentions [22]. Yu et al. have also tested the feasibility of controlling lower-limb exoskeletons using MI in healthy individuals and reported successful results [23]. Our recent study likewise demonstrated the effectiveness of our algorithm in classifying MI with high performance in healthy individuals as well as in individuals with SCI [24]. However, it remains an open question whether an MI-controlled lower-limb exoskeleton is superior to the conventional smartwatch-controlled alternative. Therefore, we aimed to verify the effectiveness of an EEG-based BCI using spontaneous gait-related MI for controlling a lower-limb exoskeleton, comparing it with an existing controller, i.e., a smartwatch, in individuals with SCI.

Our study makes two main contributions as a preliminary study. First, we investigate whether the usability of MI-based lower-limb exoskeleton control is superior to that of smartwatch-based control, to enable real-world use beyond laboratory settings. We also conducted a usability evaluation using questionnaires administered to potential end users of BCI technology, aiming to validate the practicality and usability of our proposed system. Second, we evaluate the performance of a session-transfer approach, which demonstrated superior performance in our previous study [24], in a real-time exoskeleton control environment.

Materials and methods

Participants

A total of five individuals with lower limb spinal cord injury (SCI) participated in the study (4 males and 1 female), with an age range of 45–62 years (mean ± SD: 51.4 ± 6.5 years). The ASIA impairment grades of the participants included three individuals classified as Grade A and two as Grade B (Table 1). All participants were provided with a detailed explanation of the experimental procedures and signed informed consent forms approved by the Institutional Review Board of the Korea Institute of Science and Technology (KIST).

Table 1.

Physical information

                       S1     S2     S3       S4     S5
Age (years)            52     45     48       50     62
Gender                 M      M      F        M      M
Neurological level     L1     T10    L1       T12    T11–12
ASIA Scale             A      A      B        A      B
Time since injury      16 y   3 y    18 y     2 y    15 y
Rehabilitation period  2 y    1 y    2 y 6 m  7 m    1 y

Brain-computer interface for exoskeleton control

Gait-related MI experimental protocol

To enable the use of the BCI system, we employed an experimental protocol designed to elicit EEG during motor imagery tasks. This protocol consisted of multiple segments called “trials,” each involving the execution of a single motor imagery task. The order of motor imagery tasks was randomized, and the number of repetitions per class was balanced across classes.

In the experimental protocol, participants performed motor imagery based on visual cues presented on a monitor positioned at eye level. When participants were ready, they initiated a trial by pressing a designated mouse button, after which a fixation cross was displayed for 3 s. Then, a randomly selected visual cue—an upward arrow (Gait), a square (Rest), or a downward arrow (Sitting)—was shown for 2 s. As soon as the cue disappeared, participants engaged in the corresponding motor imagery task for 5 s. A beep marked the end of the task, allowing participants to prepare for the next trial (Fig. 1).

Fig. 1.

Fig. 1

Experimental protocol of motor imagery

The protocol included a total of 90 randomly mixed trials, with 30 repetitions each for the two motor imagery tasks and the resting state. Visual cues were presented using a custom-built program developed in PsychoPy, and an event-marking module (BBTK USB TTL, The Black Box ToolKit Ltd., Sheffield, UK) was used to synchronize EEG recording with stimulus presentation.
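The balanced, pseudorandomized ordering described above can be sketched in a few lines. This is a hypothetical helper for illustration, not the authors' PsychoPy code; the function name and parameters are assumptions:

```python
import random

def make_trial_sequence(tasks=("Gait", "Rest", "Sitting"), reps=30, seed=None):
    """Return a randomized, class-balanced trial order (reps trials per task)."""
    rng = random.Random(seed)
    sequence = [task for task in tasks for _ in range(reps)]  # 3 tasks x 30 = 90 trials
    rng.shuffle(sequence)                                     # pseudorandomized order
    return sequence

seq = make_trial_sequence(seed=0)
print(len(seq), seq.count("Gait"), seq.count("Rest"), seq.count("Sitting"))
# 90 trials in total, 30 per class
```

In the actual experiment, each entry of such a sequence would drive one trial of the cue/imagery timeline shown in Fig. 1.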

EEG measurement and preprocessing

EEG data were recorded using an EEG amplifier (ActiCamp, Brain Products GmbH, Germany) at a sampling rate of 500 Hz, following the MI EEG recording methodology. For the study’s objective of practicality, we used 31 EEG channels as an appropriate configuration for MI classification while reducing setup time and participant burden [25]. A total of 31 EEG electrodes were placed according to the international 10–20 system (Fp1, Fp2, F7, F3, F4, F8, FC5, FC1, FC2, FC6, T7, C3, Cz, C4, T8, TP9, CP5, CP1, CP2, CP6, TP10, P7, P3, Pz, P4, P8, PO9, O1, Oz, O2, and PO10). The reference electrode was FCz, and the ground electrode was AFz. EEG signals were downsampled to 250 Hz to optimize training efficiency and processed with a finite impulse response (FIR) filter in the 4–40 Hz range to remove artifacts [26]. To expedite convergence and normalize input data for the CNN, Z-score normalization was applied on a per-channel basis to the artifact-free EEG data [27]. Finally, to augment the dataset, a sliding window approach was applied by cropping 4-second segments with a 1-second shift, resulting in a twofold increase in the number of training samples. This augmentation technique increased the diversity of the training data and helped prevent model overfitting [28]. Except for augmentation, the data processing steps were identical for offline and online processing.
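A minimal numpy-only sketch of this per-epoch pipeline is given below, under stated assumptions: the 4–40 Hz FIR band-pass is omitted for brevity, and the 500-to-250 Hz downsampling is done by naive decimation rather than a filtered resampler; function and variable names are illustrative, not from the authors' code:

```python
import numpy as np

def preprocess_epoch(epoch_500hz, fs_in=500, fs_out=250, win_s=4.0, shift_s=1.0):
    """Downsample, z-score per channel, and crop sliding windows from one epoch.

    epoch_500hz: (n_channels, n_samples) raw EEG at 500 Hz.
    NOTE: the paper's 4-40 Hz FIR filter is omitted here; in practice it
    precedes downsampling and normalization.
    """
    x = epoch_500hz[:, :: fs_in // fs_out]        # naive 500 -> 250 Hz decimation
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    x = (x - mean) / (std + 1e-8)                 # per-channel z-score
    win, shift = int(win_s * fs_out), int(shift_s * fs_out)
    windows = [x[:, i:i + win] for i in range(0, x.shape[1] - win + 1, shift)]
    return np.stack(windows)                      # (n_windows, n_channels, win)

epoch = np.random.randn(31, 5 * 500)              # one 5-s MI epoch, 31 channels
crops = preprocess_epoch(epoch)
print(crops.shape)   # (2, 31, 1000): two 4-s crops with a 1-s shift -> twofold augmentation
```

With a 5-second imagery period, a 4-second window and a 1-second shift yield exactly two crops per trial, matching the stated twofold increase in training samples.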

Dual domain CNN for EEG processing

For EEG signal classification, we implemented a parallel convolutional neural network, termed Dual-Domain CNN (DD-CNN) (Fig. 2), which processes time-domain and phase-domain representations of EEG signals in parallel and fuses their features at a later stage [29]. We specifically selected this architecture [30] because the time and phase domains share the same temporal resolution, unlike spatial or spectral transformations that often rely on session-specific parameters or alter the input data structure. This shared temporal structure allows the model to process both feature streams in parallel using a unified convolutional architecture, and it may contribute to improved computational efficiency and stability for session-transfer learning. Each trial was represented as a 4D tensor of shape (1, 2, 31, 2,000), where 2 corresponds to the time and phase domains, 31 denotes the number of EEG channels, and 2,000 represents the number of temporal samples (4 s at 500 Hz). In each domain stream, the network first applies a temporal convolution with 40 filters and a kernel size of 25 (stride 1, valid padding), which preserves the channel dimension, followed by a spatial convolution across all 31 EEG channels using 40 filters with a kernel size of 31. Batch normalization and an element-wise squaring nonlinearity are then applied. To reduce the temporal resolution, the network employs average pooling with a pool size of 75 and a stride of 15 with valid padding. Subsequently, a logarithmic activation is applied, followed by dropout regularization with a rate of 0.5 to prevent overfitting [26]. Each stream concludes with a dense layer producing a domain-specific prediction via a softmax activation. These two outputs are then concatenated and passed through a fully connected layer with 100 units, followed by another dropout layer and a final dense softmax layer that outputs the class probabilities for binary classification tasks (e.g., gait vs. rest or gait vs. sitting) [27].

Fig. 2.

Fig. 2

Dual domain CNN structure for EEG processing

All convolutional and dense layers are regularized using max-norm constraints, set to 2.0 for convolutional layers and 0.5 for dense layers. The model was trained using the categorical cross-entropy loss function and optimized with the Adam optimizer. Default settings were used for the internal parameters of the batch normalization layers [31].
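As a sanity check on the layer hyperparameters above, the temporal dimension through one domain stream can be traced with standard valid-padding arithmetic. This is a back-of-the-envelope sketch, not the model code itself:

```python
def valid_out(n, kernel, stride=1):
    """Output length of a 1-D convolution or pooling layer with valid padding."""
    return (n - kernel) // stride + 1

n_samples = 2000                 # 4 s at 500 Hz, per the (1, 2, 31, 2000) input tensor
t = valid_out(n_samples, 25)     # temporal conv: 40 filters, kernel 25, stride 1
# the spatial conv (kernel 31) then collapses the 31-channel axis to 1
p = valid_out(t, 75, 15)         # average pooling: pool 75, stride 15

print(t, p)   # 1976 temporal samples after the temporal conv, 127 after pooling
```

So each stream hands 40 x 127 log-power features to its domain-specific dense layer before the two streams are fused.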

For comparison with other models, we additionally implemented commonly used EEG decoding approaches: Filter Bank Common Spatial Patterns with linear discriminant analysis (FBCSP-LDA) [32], EEGNet [27], ShallowConvNet [26], and DeepConvNet [26]. All models used identical preprocessing, the same channel montage and data splits, and were trained and evaluated under the same 10-fold participant-wise cross-validation protocol for offline classification accuracy comparison.

Session-transfer learning approach

In the online BCI test process, if the participant deemed the DD-CNN model trained with same-day session data unsuitable for real-time classification, an alternative model developed using the session-transfer learning approach was employed for additional testing and robot control. In this approach, fine-tuning-based session-transfer learning was applied using MI-BCI EEG data from previous sessions of the same participant, conducted on different days under the same experimental protocol and system configuration. Specifically, the initialized DD-CNN model was first trained on data from these previous sessions, and the trained model parameters were then fine-tuned using MI-BCI EEG data collected on the test session day.
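The pretrain-then-fine-tune idea can be illustrated on synthetic data, with a simple logistic regression standing in for the DD-CNN. Everything here is an assumption for illustration: the Gaussian "sessions", the covariate shift, and the training hyperparameters are invented, and the real system fine-tunes CNN weights rather than a linear model:

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Batch gradient-descent logistic regression; pass w to fine-tune a prior model."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
    if w is None:
        w = np.zeros(Xb.shape[1])                  # cold start (self-training analogue)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb @ w > 0).astype(float) == y)

rng = np.random.default_rng(0)

def session(n, shift):
    """Two-class Gaussian data; `shift` mimics day-to-day signal drift."""
    X0 = rng.normal([-1 + shift, 0], 0.5, size=(n, 2))
    X1 = rng.normal([+1 + shift, 0], 0.5, size=(n, 2))
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

X_prev, y_prev = session(200, shift=0.0)           # previous-day sessions
X_day, y_day = session(20, shift=0.3)              # small same-day calibration set
X_test, y_test = session(200, shift=0.3)           # same-day evaluation data

w = train_logreg(X_prev, y_prev)                   # step 1: pretrain on previous sessions
w = train_logreg(X_day, y_day, w=w, epochs=50)     # step 2: fine-tune on same-day data
print(round(accuracy(w, X_test, y_test), 2))
```

The design choice mirrors the paper's rationale: the pretrained parameters carry over session-general structure, and the short fine-tuning pass adapts them to the test-day distribution using far less data than training from scratch would need.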

Online BCI classifier selection procedure

In our experiment, we provided users with various classifiers derived from both self-training and the session-transfer approaches. In the self-training approach, the initialized DD-CNN was trained using only EEG data from the same-day session (Fig. 3). In the session-transfer approach, the initialized DD-CNN was first trained on EEG data from previous sessions (on another day) of the same participant, conducted under the same experimental protocol and system configuration, and the parameters were then fine-tuned using the EEG data collected on the day of the experiment. To allow participants to determine whether the trained models were suitable for the actual online control environment, ten classifiers were trained for each approach using 10-fold cross-validation, and their offline classification accuracies were calculated. Participants then performed the Self-Optimization Preference (SOP) procedure to select the classifier that best reflected their intention in the online control environment, choosing between the self-training approach and the session-transfer approach (Fig. 3).

Fig. 3.

Fig. 3

Flowchart of the online BCI classifier selection procedure combining subjective and objective criteria. SOP #1 and SOP #2 represent subjective satisfaction-based decision steps, in which participants evaluate the responsiveness and consistency of real-time feedback with their motor imagery intention. The Online BCI Test represents an objective performance-based selection step, where classifiers are selected only if participants successfully generate valid control commands and complete a predefined virtual navigation task. a The self-training approach. b The session-transfer approach. * subjective satisfaction, ** objective metric

In the course of the SOP procedure, participants subjectively evaluated each classifier through self-report to determine whether it responded in accordance with their intended motor imagery. First, participants monitored the real-time visual feedback during MI. If the feedback did not match their intention, they rejected the classifier via a triple eye-blink. When a classifier appeared to reflect their intention accurately, they reported it to the researchers and proceeded to the objective verification stage, termed the Online BCI Test. In this test, participants attempted to navigate a virtual course by generating actual control commands (Stand, Gait, Sit) via the command buffer. Only classifiers that enabled successful completion of this virtual course were finally selected. Otherwise, the classifier was rejected, and the participants returned to the classifier decision phase to perform the SOP procedure on the remaining classifiers. Participants typically performed the SOP procedure on a total of 20 classifiers, testing 10 classifiers from each approach (self-training and session-transfer) obtained from 10-fold cross-validation. If no suitable classifier was found among the 20, the motor imagery experimental protocol was repeated.

Experimental protocol for individuals with SCI

The experiment was conducted over multiple days in the following steps (Fig. 4):

Fig. 4.

Fig. 4

Experimental environment for a representative individual with SCI. A Measurement for exoskeleton setup, B Weight movement with the exoskeleton, C EEG recording during gait-related motor imagery, D Online BCI adaptation to select optimized classifiers, E Exoskeleton robot control to become accustomed to the movements, and F Usability evaluation of the BCI for exoskeleton control

  1. Measurement

The lower-limb exoskeleton robot was first fitted to the participants’ lower leg and thigh measurements, and the participants were transferred from a wheelchair to a chair equipped with a lower-limb exoskeleton robot. At the start of each experiment, participants underwent fitting adjustments and performed safety stretches.

  2. Weight movement

Participants were asked to train their weight-shifting ability to prepare for walking with the lower-limb exoskeleton robot. As individuals with SCI lack lower-limb sensation, they practiced maintaining balance while standing with the exoskeleton and shifting their weight from side to side. This training was repeated 5–6 times.

  3. MI EEG recording

To perform MI-BCI, participants sat on a comfortable chair and wore a 32-channel EEG cap. The data collection protocol for gait-related MI-BCI was as follows:

  (i) The protocol consisted of 30 trials for each of the three tasks (Gait MI, Sit MI, and rest), with the order of the tasks presented in a pseudorandomized sequence (Fig. 1).

  (ii) Visual instructions were presented using PsychoPy software [33] in conjunction with an event-marking module (BBTK USB TTL, The Black Box ToolKit Ltd., Sheffield, UK). Participants performed MI tasks in response to visual cues displayed on a monitor at eye level.

  (iii) When participants were ready to perform the MI task, they initiated the process by pressing a designated mouse button. A fixation cross appeared on the monitor for 3 s after the button press, followed by an auditory signal. Subsequently, a random cue (an upward arrow for walking forward, a box for rest, or a downward arrow for sitting) was displayed for 2 s. Participants were instructed to perform the corresponding MI task for 5 s after the cue disappeared. After completing the MI task, a second auditory signal prompted participants to prepare for the next trial.

  4. Online BCI adaptation

To implement the CNN-based classifier in a real-time MI-BCI system, participants performed the online BCI classifier selection procedure (Fig. 3). This phase was designed not only to identify the optimal model but also to help participants adapt to the real-time MI-BCI system and ensure their understanding of robot control and course navigation. During this process, participants interacted with a closed-loop state machine and learned to issue appropriate commands, such as standing, walking, and sitting, according to the robot’s current state. Specifically, participants navigated a virtual course by controlling a robot simulation, without physically wearing the exoskeleton, using the provisionally selected classifier; this allowed them to verify system performance and become proficient in the command-generation logic required for the subsequent physical control task.

  5. Comparison of BCI- and smartwatch-based robot control

Our exoskeleton control experiment consisted of standing up, sitting down, and walking tasks (Fig. 5). Figure 5 summarizes the task sequence for both control modes. Figure 5A illustrates the BCI control mode flow in which a triple eye blink activates State #1 (stand vs. rest) and State #2 (gait vs. sit), whereas Fig. 5B illustrates the smartwatch control mode flow in which the participant taps the smartwatch to deliver stand-up, gait start/stop, and sit-down commands. The participants performed two experiments: one controlling the exoskeleton with BCI and one using a smartwatch as a conventional controller. As seen in Fig. 6, the first experiment with BCI began with participants seated after putting on the exoskeleton (RoboWear 10, NT Robot, Seoul, Republic of Korea). The BCI was activated into the BCI #1 State, which enabled either a stand-up or rest command, through a triple eye blink (Fig. 7A), and the participants imagined a gait motion to initiate the standing-up action. Due to their long-term experience in a wheelchair, the participants had difficulty imagining the movements of standing and walking separately. Therefore, gait motion imagery was used instead of standing-up motion imagery. The BCI system analyzed the EEG signals in real-time (Fig. 7B) and classified them into one of two classes: gait or rest. Commands were executed using a buffer mechanism rather than simple consecutive counts. For each classification, the buffer for the target class increased by one, whereas a detection of the alternative class decreased the buffer by three (Fig. 7C). After standing up with the exoskeleton’s support, the participant reactivated the BCI via another triple eye blink to enter the BCI #2 State, which allowed the selection between gait and sitting down. During this, raw EEG data were classified using a CNN-based decoder with a 4-second window and a 0.5-second step. 
Following our previous study [22], we adopted command stack buffers to minimize potential safety risks based on a single false detection of the movement intention. When the buffer count reached ten, the corresponding command was transmitted from the control computer to the exoskeleton via Bluetooth (Fig. 6). Since only gait or sitting down movements were possible in the standing state, the BCI system was designed to classify between these two motor intentions based on the user’s EEG signals. With a 4-second window and a 0.5-second step, the minimum time required to reach the buffer threshold of ten was 8.5 s. Using this MI BCI-based exoskeleton control system, participants followed a protocol to evaluate its performance. They performed tasks sequentially, including standing up, walking, stopping, and sitting down, while the time taken and classification results were recorded for evaluation. To ensure safety, assistants were on standby to prevent accidents. The actual walking control was conducted in one or two sessions. Additionally, participants completed the course walking evaluation protocol using the conventional lower-limb exoskeleton robot control method with a smartwatch-based interface, starting in a sitting position while recording the time taken.
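The command-stack buffer described above can be simulated directly. The sketch below follows the stated rules (+1 for the target class, −3 for the alternative class, threshold 10, classifier output every 0.5 s once the first 4-s window is filled); flooring the buffer at zero is our assumption, as the text does not specify behavior below zero:

```python
def time_to_command(decisions, target, threshold=10, window_s=4.0, step_s=0.5):
    """Return the time (s) at which `target` reaches the buffer threshold, or None.

    decisions: classifier outputs produced every `step_s` seconds, the first one
    available once the initial `window_s` of EEG has been collected.
    A matching decision adds 1; a mismatch subtracts 3 (floored at 0 here,
    an assumption -- the floor is not specified in the text).
    """
    buffer = 0
    for k, d in enumerate(decisions):
        buffer = buffer + 1 if d == target else max(0, buffer - 3)
        if buffer >= threshold:
            return window_s + k * step_s
    return None

# Best case: ten consecutive correct classifications.
print(time_to_command(["gait"] * 10, "gait"))   # 8.5 -> the minimum latency in the text

# A single misclassification costs three counts and delays the command.
print(time_to_command(["gait"] * 5 + ["rest"] + ["gait"] * 20, "gait"))   # 10.5
```

The best case reproduces the stated 8.5-s minimum (4-s window plus nine 0.5-s steps), and the asymmetric −3 penalty shows why a single false detection meaningfully delays, rather than triggers, a command.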

Fig. 5.

Fig. 5

Exoskeleton control scenario. Participants controlled the exoskeleton via a smartwatch and a BCI in the same control scenarios involving standing, walking, and sitting. A BCI control mode. B Smartwatch control mode

Fig. 6.

Fig. 6

BCI system for control of the lower-limb exoskeleton. EEG signals were acquired wirelessly, and the computer performed real-time signal processing and translation into control signals that were sent to the exoskeleton via Bluetooth communication

Fig. 7.

Fig. 7

The design of a BCI system for controlling the exoskeleton based on triple eye blinks and EEG signal processing. The intention to control the exoskeleton robot was recognized from the EEG and triple eye blinks. A The BCI was started through a triple eye blink, B The gait-related motor imagery was performed, and the EEG was analyzed in real-time. The representative spectral topographies revealed condition-specific sensorimotor activation patterns, and C the exoskeleton was controlled when the command buffer count reached 10

  6. Usability evaluation

To compare the proposed system with the conventional lower-limb exoskeleton robot control method, participants completed a usability evaluation. Among various usability constructs, quality of use encompasses the user’s perceived efficiency, satisfaction, and effectiveness while performing tasks with the system or product across real-world contexts, including social, physical, and technical environments. In our study, efficiency was measured using task execution time and the Borg Rating of Perceived Exertion Scale (RPE) [34]. The RPE is one of the most widely used tools to measure perceived effort, fatigue, or exercise intensity. It reflects how hard the body is working, based on physical sensations such as increased heart rate, breathing, perspiration, and muscle fatigue. The Borg RPE scale ranges from 6 to 20, with well-established reliability and validity. Satisfaction was measured using the Difficulty Rating Scale (DRS), Acceptability Rating Scale (ARS), and System Usability Scale (SUS). The DRS measures how easy or difficult it is to perform a given task in a physical environment. It uses a 7-point bipolar scale ranging from −3 (very difficult) to +3 (very easy). The ARS assesses the level of acceptability of a particular environmental condition [35]. It uses the same 7-point scale as the DRS, ranging from −3 (completely unacceptable) to +3 (completely acceptable). While its psychometric properties have not been rigorously evaluated, there is preliminary evidence of construct validity compared to other functional measures. The System Usability Scale (SUS) is a widely used tool for evaluating the usability of various products and services [36]. SUS consists of 10 statements, each rated on a 5-point Likert scale. The individual item scores are converted using a specific scoring scheme, summed to a total between 0 and 40, and then multiplied by 2.5 to yield a score on a 0–100 scale. A SUS score above 68 is generally considered above average, while a score below 68 indicates subpar usability. Higher SUS scores reflect greater user satisfaction with the usability of the product.
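The standard Brooke scoring scheme for the SUS (odd-numbered items are positively worded and contribute rating − 1; even-numbered items are negatively worded and contribute 5 − rating) can be written as a small helper. This is a generic sketch of the published scoring rule, not the authors' analysis code:

```python
def sus_score(ratings):
    """Compute a System Usability Scale score from ten 1-5 Likert ratings.

    Standard Brooke scoring: odd (positively worded) items contribute rating-1,
    even (negatively worded) items contribute 5-rating; the 0-40 sum is then
    scaled by 2.5 to a 0-100 score.
    """
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected ten ratings in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based: even i = odd item
                for i, r in enumerate(ratings))
    return total * 2.5

print(sus_score([3] * 10))    # all-neutral responses -> 50.0
print(sus_score([5, 1] * 5))  # best possible responses -> 100.0
```

An all-neutral response pattern scores 50, below the conventional 68 benchmark mentioned above.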

Additionally, the Korean version of the Psychosocial Impact of Assistive Devices Scale (PIADS) was used to examine the psychosocial effects of the assistive technology device on its users. The PIADS is a self-reported questionnaire consisting of 26 items designed to assess the effects of assistive devices on functional independence, well-being, and quality of life. The three subscales of the PIADS—competence, adaptability, and self-esteem—are based on factor analyses of responses from multiple studies [37]. Each item is scored on a scale from −3 (most negative impact) through 0 (no perceived impact) to +3 (most positive impact).

Statistical analysis

Statistical analyses were conducted using RStudio [38]. Given the limited number of participants and the ordinal or non-normally distributed nature of several variables, two-condition comparisons used the exact Wilcoxon signed-rank test [39]. Repeated-measures comparisons across multiple classifiers were assessed with the Friedman test [40], and Kendall’s W [41] was reported as an effect size. When the omnibus test was significant, post hoc pairwise Wilcoxon signed-rank tests with Holm adjustment [42] were performed. Classifier performance for MI-BCI was evaluated using 10-fold cross-validation, and classification results were reported as mean ± 1 standard deviation (SD). Additionally, to reduce the potential for misinterpretation based on mean values alone, we also reported effect sizes using the matched-pairs rank-biserial correlation r with 95% confidence intervals [43].
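With only five participants, the exact Wilcoxon signed-rank test has a coarse p-value grid, which explains the recurring values such as 0.062 in the Results. A stdlib-only sketch of the exact test is given below (assuming no zero differences and no ties in the absolute differences; this is an illustrative implementation, not the R routine used in the paper):

```python
from itertools import product

def exact_wilcoxon(x, y):
    """Exact two-sided Wilcoxon signed-rank p-value for small paired samples.

    Enumerates all 2^n sign assignments of the absolute-difference ranks
    (assumes no zero differences and no ties in |x - y|).
    """
    diffs = [a - b for a, b in zip(x, y)]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)                       # rank of |diff| for each pair
    w_obs = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_all = [sum(r for s, r in zip(signs, ranks) if s)
             for signs in product([0, 1], repeat=len(diffs))]
    ge = sum(w >= w_obs for w in w_all) / len(w_all)
    le = sum(w <= w_obs for w in w_all) / len(w_all)
    return min(1.0, 2 * min(ge, le))

# With n = 5 pairs and every difference in the same direction, the smallest
# attainable two-sided p is 2 / 2**5 = 0.0625, i.e. the 0.062 seen in the
# unadjusted model-comparison p-values.
print(exact_wilcoxon([5, 6, 7, 8, 9], [4, 4, 4, 4, 4]))   # 0.0625
```

This floor of 0.0625 means that, at n = 5, even a perfectly consistent direction of effect cannot reach p < 0.05 on a two-sided exact test, which is relevant when interpreting the non-significant post hoc comparisons.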

Results

Exoskeleton control time

Figure 8 shows the total control time for the BCI and smartwatch methods in the aforementioned scenario. Although BCI control required a longer execution time (149.2 ± 75.5 s) compared to smartwatch control (92.1 ± 15.1 s), the difference was not statistically significant according to the Wilcoxon signed-rank test (p = 0.125, r = 0.87) [44]. When the total execution time was further decomposed by scenario stage, in the BCI mode, most of the time was spent during the MI-based command initiation stages. Initiating the stand-up command required 39.4 ± 40.7 s (21.6 ± 12.3%), followed by 15.0 ± 1.0 s (11.8 ± 4.5%) for the standing execution stage. Initiating the walking command required 30.9 ± 47.1 s (15.9 ± 15.4%), while the walking execution stage required 32.4 ± 12.9 s (26.8 ± 12.8%). The stop-walking command required 3.4 ± 2.9 s (2.9 ± 2.3%), and initiating the sit-down command required 20.8 ± 13.5 s (15.2 ± 9.6%), followed by 7.2 ± 0.4 s (5.8 ± 2.4%) for the sitting execution stage. Collectively, these command initiation stages accounted for 52.7% of the total BCI-mode execution time. In contrast, for the smartwatch mode, initiating the stand-up command required 7.0 ± 6.4 s (7.0 ± 4.9%), followed by 13.6 ± 2.9 s (15.4 ± 4.7%) for the standing execution stage. Initiating the walking command required 12.8 ± 6.1 s (13.8 ± 6.1%), while the walking execution stage required 36.1 ± 4.0 s (39.7 ± 5.9%). The stop-walking command required 3.8 ± 1.1 s (4.1 ± 0.9%), and initiating the sit-down command required 8.0 ± 3.6 s (8.9 ± 4.3%), followed by 7.8 ± 0.4 s (8.6 ± 1.1%) for the sitting execution stage. Collectively, the command initiation stages accounted for about 29.6% of the total smartwatch execution time. This result underscores the need to evaluate usability from a broader perspective, as execution time alone may not capture the practical challenges experienced by users. 
In particular, participants with SCI encountered significant difficulties when using the smartwatch-based control. Because they depended on crutches for stability, operating the smartwatch with one hand proved highly impractical. The smartwatch control experiment was conducted under conditions that eliminated fall risk, with therapists standing by on both sides. Even then, participants had to either barely manage to touch the smartwatch themselves or rely on therapist assistance. Participant S1, who was unable to touch the smartwatch due to difficulty in making movements, required the nearby therapist to touch it on their behalf; participant S2 was unable to maintain balance, so the therapist had to stabilize the exoskeleton while the participant touched the smartwatch. In the post-experiment usability questionnaire, all participants reported that controlling the exoskeleton via a smartwatch was nearly impossible while using crutches. Despite attempts at one-handed operation, they consistently required external support to maintain balance. These findings suggest that BCI-based exoskeleton control may be more practical and feasible for individuals with SCI who depend on assistive devices such as crutches. Further details on the usability evaluation results are provided in the supplementary materials.

Fig. 8

Total control time by BCI and smartwatch. The total time spent controlling the exoskeleton with BCI and smartwatch, respectively. * indicates smartwatch control times recorded when therapist assistance was required, either for manual button actuation on the smartwatch or for physical stabilization during touch input

Classification accuracies for BCI

Comparison with other models

To contextualize the performance of the proposed model, we compared DD-CNN against commonly used baselines (FBCSP-LDA, EEGNet, and DeepConvNet) using the MI EEG data collected on the day of the BCI-based exoskeleton control evaluation. All models were trained and evaluated using the self-training approach under identical preprocessing and the same 10-fold cross-validation protocol. Table 2 reports participant-wise accuracies (mean ± SD, %) for the two tasks (Gait & Rest and Gait & Sitting).

Table 2.

Offline classification accuracy across models (Mean±SD%)

Participants FBCSP-LDA EEGNet DeepConvNet DD-CNN (Time-domain) DD-CNN (Phase-domain) DD-CNN (Dual-domain)
Gait & rest
 S1 70.0 ± 14.8 75.0 ± 10.4 85.8 ± 9.7 84.2 ± 10.0 84.2 ± 9.0 86.7 ± 13.2
 S2 77.5 ± 14.2 70.8 ± 13.2 75.8 ± 12.7 78.3 ± 9.8 75.8 ± 8.8 84.2 ± 8.3
 S3 80.0 ± 15.3 81.7 ± 11.0 64.2 ± 13.1 89.2 ± 10.4 90.8 ± 15.6 93.3 ± 10.2
 S4 43.3 ± 9.5 62.5 ± 14.3 60.0 ± 9.5 65.0 ± 11.7 65.0 ± 11.2 67.5 ± 10.7
 S5 75.8 ± 16.2 80.0 ± 18.1 75.0 ± 17.1 82.5 ± 9.2 83.3 ± 16.7 84.2 ± 12.1
 Average 69.3 ± 15.0 74.0 ± 7.7 72.2 ± 10.3 79.8 ± 9.2 79.8 ± 14.1 83.2 ± 9.5
Gait & sitting
 S1 68.3 ± 24.2 75.8 ± 6.2 71.7 ± 8.1 86.7 ± 9.0 85.0 ± 6.6 87.5 ± 7.1
 S2 59.2 ± 24.0 57.5 ± 10.0 59.2 ± 9.2 60.8 ± 8.8 59.2 ± 12.7 60.8 ± 8.8
 S3 62.5 ± 12.6 71.7 ± 5.8 62.5 ± 10.6 73.3 ± 15.6 75.8 ± 13.9 74.2 ± 9.2
 S4 40.8 ± 9.2 59.2 ± 9.2 55.8 ± 6.9 55.8 ± 11.2 59.2 ± 7.3 59.2 ± 10.0
 S5 77.5 ± 16.2 77.5 ± 12.5 75.0 ± 19.3 85.8 ± 16.7 85.0 ± 13.5 85.8 ± 13.1
 Average 61.7 ± 13.6 68.3 ± 9.4 64.8 ± 8.2 72.5 ± 14.1 72.8 ± 13.0 73.5 ± 13.4

DD-CNN yielded the highest average accuracy for both tasks (Gait & Rest: 83.16 ± 9.52%; Gait & Sitting: 73.50 ± 13.36%). A nonparametric repeated-measures test across models was significant for both tasks (Gait & Rest: Friedman p = 0.0065, Kendall’s W = 0.71; Gait & Sitting: Friedman p = 0.0087, Kendall’s W = 0.68), indicating large between-model differences. Although post hoc pairwise Wilcoxon tests comparing DD-CNN with the other methods did not reach significance after Holm adjustment, most unadjusted p-values were 0.062, the smallest two-sided value attainable with five participants, suggesting a marginal trend favoring DD-CNN. Consistent with this trend, DD-CNN achieved the highest classification accuracy for every participant.

In addition, to validate the effectiveness of the proposed dual-domain fusion strategy, we compared the classification accuracy of the DD-CNN with those of the single-domain models (time-domain and phase-domain) using the experimental dataset (Table 2). The single-domain results exhibited participant-dependent performance variability: some participants achieved higher classification accuracy in the time domain, whereas others performed better in the phase domain. In contrast, the dual-domain DD-CNN consistently demonstrated more stable and better classification performance than either single-domain model. For the Gait & Sitting task, although the difference among the domain configurations was not statistically significant (Friedman p = 0.327, Kendall’s W = 0.22), the dual-domain model still achieved the highest average accuracy, further supporting the conclusion that fusing complementary features compensates for the limitations of individual domains.
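The Friedman statistic and Kendall's W used above are directly related: W = χ²_F / (N(k − 1)), where N is the number of participants and k the number of models. A minimal sketch with illustrative data (not the study's accuracies):

```python
from scipy.stats import friedmanchisquare

def friedman_kendalls_w(*conditions):
    """Friedman repeated-measures test across k conditions, plus
    Kendall's W derived from the Friedman chi-square statistic."""
    stat, p = friedmanchisquare(*conditions)
    n = len(conditions[0])        # number of participants (rows)
    k = len(conditions)           # number of conditions/models (columns)
    w = stat / (n * (k - 1))      # Kendall's W = chi2_F / (N * (k - 1))
    return p, w

# illustrative scores: a perfectly consistent ranking across 6 participants
model_a = [1, 1, 1, 1, 1, 1]
model_b = [2, 2, 2, 2, 2, 2]
model_c = [3, 3, 3, 3, 3, 3]
p, w = friedman_kendalls_w(model_a, model_b, model_c)
```

Perfect agreement among participants gives W = 1.0; values such as W = 0.71 in our data indicate strong but imperfect agreement on the model ranking.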

Comparison of training approaches and self-optimization process validation

In this section, we compared the self-training and session-transfer approaches used as candidate classifiers that users could choose from within the proposed SOP procedure. As described in the Methods section and Fig. 3, self-training used only EEG data collected in the same-day session, whereas session-transfer used a model pre-trained on the participant’s previous-session data and then fine-tuned on the EEG data collected on the day of the experiment. Through the SOP procedure, participants selected and used the classifier that they felt best reflected their intended commands, rather than the one with the highest objective performance. In offline classification, the self-training approach showed higher classification performance for MI-BCI data than the session-transfer approach for most participants and classifiers, except for the Gait & Sitting classifiers of participants S3 and S4 (Table 3). This result largely aligned with the classifiers selected through the SOP method (Table 4). Statistical analysis using the Wilcoxon signed-rank test revealed a marginal trend favoring the self-training approach over the session-transfer approach in the Gait & Rest condition (p = 0.058, r = 1.0), while no significant difference was observed for the Gait & Sitting condition (p = 0.812, r = 0.2). These findings indicate that the proposed SOP method is a valid approach for real-time applications and further suggest the potential for selecting classifiers for online use based on training results obtained in an offline environment.
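The distinction between the two approaches can be sketched schematically: session-transfer continues training from weights learned on a previous session, whereas self-training starts from scratch on same-day data. The toy example below uses a minimal logistic-regression trainer on synthetic features as a stand-in for the DD-CNN; all data, dimensions, and hyperparameters are illustrative assumptions, not the system's actual configuration:

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Minimal full-batch logistic-regression trainer; passing an
    existing weight vector `w` continues training (fine-tuning)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    if w is None:
        w = np.zeros(Xb.shape[1])               # cold start (self-training)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))     # sigmoid predictions
        w = w - lr * Xb.T @ (p - y) / len(Xb)   # gradient descent step
    return w

rng = np.random.default_rng(0)
# synthetic stand-ins for extracted EEG features (illustrative only)
X_prev = rng.normal(size=(120, 8))
y_prev = (X_prev[:, 0] > 0).astype(float)              # "previous session"
X_today = X_prev[:40] + 0.3 * rng.normal(size=(40, 8))
y_today = y_prev[:40]                                  # "same-day session"

# session-transfer: pre-train on the previous session, then fine-tune
w_transfer = train_logreg(X_prev, y_prev)
w_transfer = train_logreg(X_today, y_today, w=w_transfer, epochs=50)

# self-training: train on the same-day calibration data only
w_self = train_logreg(X_today, y_today)
```

In the real system both candidates were deep networks; the SOP procedure then let each participant pick whichever classifier felt most responsive to their intended commands.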

Table 3.

Offline classification accuracy across approach (Mean±SD%)

Participants Gait & rest Gait & sitting
Self-training Session-transfer Self-training Session-transfer
S1 86.67±13.15 85.83±9.66 87.5±7.08 85.83±9.66
S2 84.17±8.29 83.33±8.78 60.83±8.83 60.0±19.95
S3 93.3±10.24 92.5±8.29 74.17±9.17 75.83±11.42
S4 67.5±10.72 65.0±15.61 59.17±9.98 63.3±8.96
S5 84.17±12.08 80.0±12.55 85.83±13.06 77.5±17.15
Average 83.16±9.52 81.33±10.22 73.5±13.36 72.49±10.66
Table 4.

Selected method by SOP

Participants Gait & rest Gait & sitting
Self-training Session-transfer Self-training Session-transfer
S1 O O
S2 O O
S3 O O
S4 O O
S5 O O

False negative rate for controlling the exoskeleton

In terms of the False Negative Rate (FNR) for controlling the lower-limb exoskeleton [22], the following results were obtained (Table 5). For the task of issuing a command to stand up, classification as a positive class (i.e., standing command based on imagined walking) was considered correct, whereas classification as a negative class (i.e., resting state) was considered an incorrect response, resulting in an FNR of 0.4355. For the task of issuing a command to walk, classification as a positive class (i.e., walking command based on imagined walking) was considered correct, while classification as a negative class (i.e., sitting state) was considered incorrect, resulting in an FNR of 0.0836. Finally, for the task of issuing a command to sit, classification as a positive class (i.e., sitting command based on imagined sitting) was considered correct, whereas classification as a negative class (i.e., walking state) was considered incorrect, yielding an FNR of 0.3636.
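For each command, the FNR above is FN / (FN + TP), i.e., the fraction of intended commands that were misclassified as the negative class. A minimal sketch with illustrative labels (not the recorded classifier outputs):

```python
import numpy as np

def false_negative_rate(y_true, y_pred, positive):
    """FNR = FN / (FN + TP): the fraction of true positive-class
    trials (intended commands) that the classifier missed."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    pos = y_true == positive                       # trials where the command was intended
    fn = np.sum(pos & (y_pred != positive))        # intended but not recognized
    tp = np.sum(pos & (y_pred == positive))        # intended and recognized
    return fn / (fn + tp)

# illustrative trials: 8 intended "walk" commands, 2 of which were missed
y_true = ["walk"] * 8 + ["rest"] * 4
y_pred = ["walk"] * 6 + ["rest"] * 2 + ["rest"] * 4
fnr = false_negative_rate(y_true, y_pred, "walk")
```

A low FNR, as observed for the walking command (0.0836), means the corresponding motor imagery was rarely missed once attempted.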

Table 5.

False negative rate

Participants Average FNR Stand Up Gait Sitting down
S1 0.3902 0.6207 0.2500 0.3000
S2 0.1733 0.2143 0.0769 0.2286
S3 0.3421 0.6453 0.0000 0.3810
S4 0.2965 0.3896 0.0000 0.5000
S5 0.2689 0.3077 0.0909 0.4082
Average 0.2942 0.4355 0.0836 0.3636

Usability evaluation

In this study, effectiveness was evaluated in terms of user acceptance of the technology (Tables 6 and 7). Effectiveness was further analyzed based on task success and function completion. In the smartwatch mode, participants committed Error 4 (user task performance error; Table 6) during stages 3 and 6. The reason was that, while standing, both hands were needed to hold the crutches for balance, making it difficult to operate the smartwatch. Even when attempting to perform the task by using one hand to touch the smartwatch, external support was required to maintain balance. This situation was consistent across all participants.

Table 6.

Error code description

Error code number Description
1 Task performance
2 Mechanical failure
3 Software-related error
4 User task performance error
5 Inadequate user understanding
6 Lack of user training
7 Cognitive error by the user
8 Unclassified

Table 7.

Effectiveness of BCI and smartwatch mode

Stage Task Sub. BCI Mode Smartwatch mode
Success Error trial Error num Success Error trial Error num
1 Initiate the stand-up command from the chair S1 O O
S2 O O
S3 O O
S4 O O
S5 O O
2 Successfully stand up from the chair S1 O O
S2 O O
S3 O O
S4 O O
S5 O O
3 Initiate the robot’s walking command S1 O X 1 4
S2 O X 1 4
S3 O X 1 4
S4 O X 1 4
S5 O X 1 4
4 Walk more than three gait cycles while supported with crutches S1 O O
S2 O O
S3 O O
S4 O O
S5 O O
5 Initiate the stop command for the robot S1 O O
S2 O O
S3 O O
S4 O O
S5 O O
6 Initiate the sit-down command toward the chair S1 O X 1 4
S2 O X 1 4
S3 O X 1 4
S4 O X 1 4
S5 O X 1 4
7 Successfully sit down on the chair S1 O O
S2 O O
S3 O O
S4 O O
S5 O O

Furthermore, in this study, the average RPE was 9.4 ± 2.3 for the BCI mode and 12.0 ± 1.7 for the smartwatch mode; the BCI mode was thus rated as less strenuous, although the difference was not significant (p = 0.125, r = −0.87). The total average DRS and ARS were 1.0 ± 1.0 and 1.8 ± 0.9 for the BCI mode, versus 0.2 ± 0.6 and 0.1 ± 0.7 for the smartwatch mode; both scores were higher for the BCI mode (DRS: p = 0.125, r = 0.87; ARS: p = 0.0625, r = 1.0). On the SUS, the BCI mode scored 69.5 ± 19.8 versus 50.4 ± 13.2 for the smartwatch mode, indicating greater user satisfaction with the BCI (p = 0.0625, r = 1.0) (Table 8). Lastly, the average PIADS score for exoskeleton control was 1.92 ± 0.47 for the BCI mode, indicating a highly positive impact, whereas it was 0.34 ± 0.52 for the smartwatch mode, suggesting a minimal impact (p = 0.0625, r = 1.0).
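The SUS values in Table 8 follow the standard scoring rule [36]: each odd-numbered item contributes (response − 1), each even-numbered item contributes (5 − response), and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch:

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5; odd items contribute
    (response - 1), even items (5 - response); sum scaled by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5   # final score on a 0-100 scale

# illustrative responses (not a participant's actual questionnaire)
example = sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2])
```

Scores around 68 are conventionally treated as the usability benchmark, so the BCI-mode mean of 69.5 sits near that threshold while the smartwatch mode falls well below it.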

Table 8.

System usability scale of exoskeleton control via BCI mode and smartwatch mode

Mode S1 S2 S3 S4 S5 Average±1SD
BCI Score 35 72.5 76 84 80 69.5 ± 19.8
Smartwatch Score 32.5 57.5 42 54 66 50.4 ± 13.2

Discussion

In this study, we developed a gait-related MI-BCI system to control a lower-limb exoskeleton robot for individuals with SCI and evaluated its practicality and efficiency in comparison with a conventional smartwatch-based control method. In a simulated real-world usage scenario, participants performed a series of movements in the order of sitting–standing–walking–sitting, and the performance differences between the control methods were assessed. According to the results of the self-reported survey, although the MI-BCI system required more time to complete the tasks compared to the smartwatch-based method (Fig. 8), it received higher evaluations in terms of user safety and practical suitability in real-world environments for individuals with SCI. Conventional exoskeleton robots generally lack full autonomy and rely on passive external input devices such as smartwatches or joysticks, making it difficult for individuals with SCI who require crutches for support to operate the robots. In contrast, the MI-BCI system demonstrated that the exoskeleton could be intuitively controlled by the user’s voluntary intention. This finding is clinically meaningful in that it can enhance both the safety and efficiency of complex mobility tasks, such as gait assistance, for individuals with SCI. Furthermore, the study may contribute to improving social participation and quality of life for these individuals. Due to physical mobility limitations, individuals with SCI often experience social isolation, and autonomous mobility support devices can help promote their engagement in social activities. The MI-BCI-based exoskeleton system proposed in this study offers a foundational technological platform to achieve these goals.

Notably, this study validated the real-time control performance of the MI-BCI system for exoskeleton operation and compared the effectiveness of session-transfer and self-training strategies, offering insight into suitable training methods for online control environments (Table 3). While previous studies demonstrated that session-transfer approaches yield better performance in offline settings, participants in this study tended to prefer self-training classifiers during the SOP task in the actual online control environment (Table 4). This suggests that although session-transfer approaches may offer higher classification accuracy in offline scenarios, various factors in this experiment, such as the participants’ accumulated experience with exoskeleton usage, walking, and MI-based BCI across sessions, may have influenced classifier performance. These results indicate that self-training approaches may better adapt to such environmental variability. Therefore, the findings highlight the need to refine session-transfer approaches or adopt hybrid learning strategies to improve user adaptability and real-time responsiveness of BCI systems in practical applications.

The FNR measured in the experiment varied depending on the command type (Table 5). The FNR for the “stand” command was the highest at 0.4355, which can be interpreted as the result of a conservative classification threshold aimed at ensuring user stability during the standing motion. In contrast, the FNR for the “walk” command was the lowest at 0.0836, indicating that the walking command based on motor imagery was recognized relatively reliably. The FNR for the “sit” command was intermediate at 0.3636, which may be attributed to the potential instability associated with the sitting motion, prompting the system to adopt a more cautious classification criterion. These results suggest that gait-related MI while sitting was a more challenging task for the participants, whereas gait-related MI while standing was relatively easier. This finding implies that such task-dependent differences should be considered when developing an intuitive BCI-based control system.

According to the results of the usability evaluation (Tables 6, 7 and 8), the MI-BCI-based exoskeleton control system was usable even when participants relied on crutches, which are commonly required by individuals with SCI. Although the BCI-based control took on average 1.6 times longer to execute than the smartwatch-based control (Fig. 8), all participants preferred the BCI system in terms of practical usability. Notably, the smartwatch-based control time may have been artificially shortened for some participants (e.g., S1 and S2), as manual button actuation or physical stabilization during touch input was required when maintaining balance was difficult. In contrast, all participants completed the BCI control independently throughout the protocol, which may make the mean delay of the BCI condition appear more pronounced, especially in cases like S3. Wilcoxon signed-rank tests showed no statistically significant differences in usability-related measures, including the Borg Rating of Perceived Exertion (RPE) scale [34] (p = 0.125, r = −0.87) and the Difficulty Rating Scale (DRS) (p = 0.125, r = 0.87). However, several other indicators, such as the Acceptability Rating Scale (ARS) (p = 0.0625, r = 1.0), the System Usability Scale (SUS) (p = 0.0625, r = 1.0), and the Psychosocial Impact of Assistive Devices Scale (PIADS) (p = 0.0625, r = 1.0), showed marginal significance favoring the BCI-based approach. Importantly, smartwatch-based control required users to release one hand from the crutch to press the button, which may introduce instability and increase falling risk, a critical safety concern for individuals with SCI. In contrast, the MI-BCI system enabled hands-free operation without relying on visual attention, supporting safer and more cognitively accessible ambulation.

A limitation of this study is the small number of participants. Due to the practical difficulties of recruiting individuals with chronic lower-limb SCI who can participate in multiple motor-imagery sessions, the study was conducted with only five participants. This small sample size may reduce the generalizability of the findings and may not fully reflect the substantial inter-individual variability commonly observed in BCI performance. In addition, the participants in our study showed heterogeneous clinical characteristics. The time since injury ranged from 2 years to 18 years, and the participants’ ages were concentrated between 45 and 62 years (with a mean of 51.4 years). Rehabilitation duration also varied. While participant S4 had undergone rehabilitation for only 7 months, the remaining participants had rehabilitation periods ranging from 2 to 2.5 years. According to previous studies [45, 46], most neurological and motor recovery after SCI occurs within the first 6–9 months post-injury. Therefore, despite differences in rehabilitation duration among our participants, additional recovery beyond this window is expected to be minimal, and a major impact on the experimental results is unlikely. Nevertheless, this remains an important consideration that should be examined more thoroughly in future work. To address these limitations, future studies should include a larger and more diverse cohort, including broader age ranges, varied injury levels, and different rehabilitation stages, to more fully assess the adaptability and clinical applicability of the proposed system.

In our study, we utilized the RPE and the SUS. The RPE is one of the most widely used tools to measure perceived effort, fatigue, or exercise intensity [47]. The SUS is commonly used to assess overall usability and user satisfaction when evaluating BCI [48, 49] and robotic systems [50, 51]. Furthermore, because one of the main contributions of this study was to compare the usability of MI-based lower-limb exoskeleton control with smartwatch-based control, we included the DRS to evaluate the difficulty of performing specific tasks. These assessments are general-purpose measures and may not be specifically tailored to individuals with SCI. To our knowledge, standardized and widely accepted usability scales designed specifically for SCI users interacting with exoskeleton or BCI systems remain limited. Developing a usability evaluation method specific to SCI will require a more precise comparison between smartwatches and BCIs, which should be pursued in future research. At the same time, although this preliminary study was based on a small sample size, it reflects a real-world walking context that can be overlooked in laboratory-based evaluations and highlights the practical value of BCI technology in the over-ground walking-with-crutches scenario. The innovation of this application scenario lies in revealing limitations of conventional interfaces that may be masked in laboratory settings. Existing BCI-exoskeleton research has predominantly been conducted in highly controlled environments, such as treadmills or Body Weight Support Systems (BWSS) [52, 53]. In these stable settings, manual control methods such as smartwatches or joysticks are often feasible and safe, which can make BCI appear as an alternative interface rather than a solution driven by practical necessity.
In contrast, our study focuses on the over-ground walking-with-crutches scenario, which reflects common real-world safety practices for individuals with SCI. The key contribution of this scenario lies in identifying a boundary condition in which manual interfaces may become impractical or unsafe. In this context, the user’s hands are primarily engaged in maintaining balance with crutches, and manual command inputs may require releasing a crutch or compromising postural stability [54, 55]. By examining this scenario, our study helps to clarify a practical use case in which hands-free MI-BCI control is driven by safety and usability considerations, rather than by novelty alone. This helps bridge the gap between controlled laboratory evaluations and real-world mobility assistance, and clarifies the practical relevance of BCI in safety-critical over-ground walking scenarios.

Nevertheless, the MI-BCI system exhibits slower response times than smartwatch-based control, which may contribute to increased user fatigue during real-time operation. Consistent with the stage-timing analysis, the additional time in the BCI mode is largely concentrated in the MI command initiation stages (standing, walking, and sitting; 52.7% of the total), whereas the walking interval itself is comparable between modes. This indicates that the dominant delay is associated with MI intention confirmation under our decision-count mechanism, and responsiveness may therefore be improved by tuning the confirmation threshold within a safety-preserving range and shortening the decision window without meaningful performance degradation. To address this, improvements are needed in lightweight signal processing algorithms, classifier accuracy, and hardware components such as electrodes and amplifiers. In particular, the development of well-designed visual and auditory feedback systems could help reduce users’ cognitive load and enhance training efficiency. To achieve this, it is essential to implement adaptive learning algorithms capable of responding to user state fluctuations, such as fatigue and decreased concentration, and to incorporate hybrid interface systems that integrate multiple biosignals, including EEG, EMG, EOG, and IMU. Such systems can compensate for degraded signal quality and enable more stable and precise exoskeleton control.
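As an illustration of the decision-count idea, a confirmation buffer that issues a command only when one class wins a threshold number of recent classifier outputs could be sketched as follows; the window and threshold values are hypothetical, not the system's actual parameters:

```python
from collections import deque

class DecisionCountConfirmer:
    """Confirm an MI command only after the same label appears at
    least `threshold` times within the last `window` classifier
    outputs; a sketch of a decision-count mechanism."""

    def __init__(self, window=10, threshold=7):
        self.window = window
        self.threshold = threshold
        self.buffer = deque(maxlen=window)

    def update(self, label):
        """Add one classifier output; return the confirmed command
        label, or None while evidence is still accumulating."""
        self.buffer.append(label)
        if len(self.buffer) == self.window and \
                self.buffer.count(label) >= self.threshold:
            self.buffer.clear()   # reset after issuing a command
            return label          # confirmed command
        return None               # not enough consistent evidence yet
```

Raising the threshold trades responsiveness for robustness against transient misclassifications; lowering it (or shortening the window) shortens the initiation delay discussed above.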

In future research, efforts should focus on developing personalized BCI systems tailored to the varying conditions and needs of individuals with SCI, supported by large-scale clinical evaluations. Incorporating adaptive algorithms that account for predictable changes in user state will enhance system reliability. Additionally, long-term usability studies are necessary to assess issues such as electrode degradation, user fatigue, and device maintenance during extended use. These studies should incorporate continuous feedback from real users to refine system interfaces and functionality.

Furthermore, in a real environment, reducing the severity of secondary injuries caused by falls is crucial. In our experimental setup, the non-invasive BCI system used a lightweight elastic EEG cap without any rigid head-mounted structures or tethered cables that could increase injury risk in the event of a fall. Additionally, emergency stop functions and therapist supervision with a safety harness were implemented throughout all trials. For these reasons, we estimate that the BCI system does not pose a higher physical injury risk than the smartwatch interface during potential falls under the current configuration. That said, a quantitative comparison between BCI-related and smartwatch-related injury severity, particularly in real-world ambulatory environments, would indeed provide valuable clinical insights. Therefore, the possibility of falls and the severity of injuries caused by falls in smartwatch-based and BCI-based exoskeleton robot walking environments will be compared as further research.

Conclusion

This study demonstrated that a motor imagery-based brain-computer interface (MI-BCI) system can offer greater stability than smartwatch-based control for lower-limb exoskeleton operation in individuals with spinal cord injury (SCI). Although the MI-BCI system required more time to control the exoskeleton robot than the conventional smartwatch, it allowed participants to walk while keeping both hands free. In everyday situations, using a smartwatch may require users to briefly let go of a crutch or handrail, which can increase the feeling of instability and potentially increase the risk of falling. For this reason, even though the MI-BCI approach is slower, the ability to operate the exoskeleton without using the hands may offer meaningful safety advantages and help support greater independence and quality of life for individuals with SCI. In the real-time control environment, participants preferred the self-training classifier over the session-transfer approach during the SOP task, which was consistent with the better performance observed in offline classification results. This can be interpreted as a noteworthy outcome considering system stability and user convenience in practical applications. These advancements are expected to establish a foundation for the widespread use of MI-BCI-based exoskeleton technology in the daily lives of individuals with SCI.

Acknowledgements

We are thankful to Seung Wan Yang at mlp for his assistance in conducting the usability tests.

Author contributions

KKT, LK, DJK, SJK, HK, and SJL designed the study. KKT, JHJ, DJS, and JYL performed the data collection and the data analysis. KKT, JHJ, and HK, SJL wrote the manuscript for publication. All authors read and approved the final manuscript.

Funding

This work was supported in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean Government (Development of Non-Invasive Integrated BCI SW Platform to Control Home Appliances and External Devices by User’s Thought via AR/VR Interface), Project Number: 2017-0-0043, and in part by National Research Foundation funded by the Korean government (Ministry of Science and ICT), Project Number: RS-2024-00417959, and in part by the Korea Institute of Science and Technology (KIST) intramural grant (26E0172).

Data availability

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Declarations

Ethics approval and consent to participate

This study was conducted in accordance with the ethical principles of the Declaration of Helsinki. Ethical approval was obtained from the Institutional Review Board (IRB) of the Korea Institute of Science and Technology (KIST) under approval number KIST IRB 2021-046. All participants were provided with detailed information about the experimental procedures, objectives, and potential risks before their participation. Written informed consent was obtained from all participants prior to the commencement of the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Keun-Tae Kim and Ji-Hyeok Jeong have contributed equally to this work.

Contributor Information

Hyungmin Kim, Email: hk@kist.re.kr.

Song Joo Lee, Email: songjoolee@kist.re.kr.

References

  • 1.Quadri SA, Farooqui M, Ikram A, Zafar A, Khan MA, Suriya SS, et al. Recent update on basic mechanisms of spinal cord injury. Neurosurg Rev. 2020;43:425–41. [DOI] [PubMed] [Google Scholar]
  • 2.Alizadeh A, Dyck SM, Karimi-Abdolrezaee S. Traumatic spinal cord injury: an overview of pathophysiology, models and acute injury mechanisms. Front Neurol. 2019;10:282. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Singh A, Tetreault L, Kalsi-Ryan S, Nouri A, Fehlings MG. Global prevalence and incidence of traumatic spinal cord injury. Clin Epidemiol. 2014;309–31. [DOI] [PMC free article] [PubMed]
  • 4.Siviy C, Baker LM, Quinlivan BT, Porciuncula F, Swaminathan K, Awad LN, et al. Opportunities and challenges in the development of exoskeletons for locomotor assistance. Nat Biomed Eng. 2023;7(4):456–72. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Esquenazi A, Talaty M, Packel A, Saulino M. The ReWalk Powered Exoskeleton to Restore Ambulatory Function to Individuals with Thoracic-Level Motor-Complete Spinal Cord Injury. Am J Phys Med Rehab. 2012;91(11):911–21. [DOI] [PubMed] [Google Scholar]
  • 6.Chen YN, Wu YN, Yang BS. The neuromuscular control for lower limb exoskeleton- a 50-year perspective. J Biomech. 2023;158. [DOI] [PubMed]
  • 7.Bao W, Villarreal D, Chiao J-C, editors. Vision-based autonomous walking in a lower-limb powered exoskeleton. 2020 IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE); IEEE. 2020.
  • 8.Oguntosin VW, Mori Y, Kim H, Nasuto SJ, Kawamura S, Hayashi Y. Design and validation of exoskeleton actuated by soft modules toward neurorehabilitation—vision-based control for precise reaching motion of upper limb. Front NeuroSci. 2017;11:352. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Heo Y, Choi H-J, Lee J-W, Cho H-S, Kim G-S. Motion-Based Control Strategy of Knee Actuated Exoskeletal Gait Orthosis for Hemiplegic Patients: A Feasibility Study. Appl Sci. 2023;14(1):301. [Google Scholar]
  • 10.Villa-Parra A, Delisle-Rodríguez D, López-Delis A, Bastos-Filho T, Sagaró R, Frizera-Neto A. Towards a robotic knee exoskeleton control based on human motion intention through EEG and sEMGsignals. Procedia Manuf. 2015;3:1379–86. [Google Scholar]
  • 11.Ahmad N, Ghazilla RAR, Azizi MZHM. Steady state visual evoked potential based bci as control method for exoskeleton: A review. Malaysian J Public Health Med. 2016;16(Sppl 1):86–94. [Google Scholar]
  • 12.An Z, Wang F, Wen Y, Hu F, Han S. A real-time CNN–BiLSTM-based classifier for patient-centered AR-SSVEP active rehabilitation exoskeleton system. Expert Syst Appl. 2024;255:124706. [Google Scholar]
  • 13.Kwak N-S, Müller K-R, Lee S-W. A lower limb exoskeleton control system based on steady state visual evoked potentials. J Neural Eng. 2015;12(5):056009. [DOI] [PubMed] [Google Scholar]
  • 14.Wang F, Wen Y, Bi J, Li H, Sun J. A portable SSVEP-BCI system for rehabilitation exoskeleton in augmented reality environment. Biomed Signal Process Control. 2023;83:104664. [Google Scholar]
  • 15.Al-Quraishi MS, Elamvazuthi I, Daud SA, Parasuraman S, Borboni A. EEG-based control for upper and lower limb exoskeletons and prostheses: A systematic review. Sensors. 2018;18(10):3342. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Fleischer C, Wege A, Kondak K, Hommel G. Application of EMG signals for controlling exoskeleton robots. 2006. [DOI] [PubMed]
  • 17.Gui K, Liu H, Zhang D. A practical and adaptive method to achieve EMG-based torque estimation for a robotic exoskeleton. IEEE/ASME Trans Mechatron. 2019;24(2):483–94. [Google Scholar]
  • 18.Altaheri H, Muhammad G, Alsulaiman M, Amin SU, Altuwaijri GA, Abdul W, et al. Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: A review. Neural Comput Appl. 2023;35(20):14681–722. [Google Scholar]
  • 19.Arpaia P, Esposito A, Natalizio A, Parvis M. How to successfully classify EEG in motor imagery BCI: a metrological analysis of the state of the art. J Neural Eng. 2022;19(3):031002. [DOI] [PubMed] [Google Scholar]
  • 20.Padfield N, Zabalza J, Zhao H, Masero V, Ren J. EEG-based brain-computer interfaces using motor-imagery: Techniques and challenges. Sensors. 2019;19(6):1423. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Sharma N, Sharma M, Singhal A, Vyas R, Malik H, Afthanorhan A et al. Recent trends in EEG based Motor Imagery Signal Analysis and Recognition: A comprehensive review. IEEE Access. 2023.
  • 22.Choi J, Kim KT, Jeong JH, Kim L, Lee SJ, Kim H. Developing a motor imagery-based real-time asynchronous hybrid BCI controller for a lower-limb exoskeleton. Sensors. 2020;20(24):7309. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Yu G, Wang J, Chen W, Zhang J, editors. EEG-based brain-controlled lower extremity exoskeleton rehabilitation robot. 2017 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM); IEEE. 2017.
  • 24.Sung D-J, Kim K-T, Jeong J-H, Kim L, Lee SJ, Kim H et al. Improving inter-session performance via relevant session-transfer for multi-session motor imagery classification. Heliyon. 2024;10(17). [DOI] [PMC free article] [PubMed]
  • 25.Montoya-Martínez J, Vanthornhout J, Bertrand A, Francart T. Effect of number and placement of EEG electrodes on measurement of neural tracking of speech. PLoS ONE. 2021;16(2). [DOI] [PMC free article] [PubMed]
  • 26.Schirrmeister RT, Springenberg JT, Fiederer LDJ, Glasstetter M, Eggensperger K, Tangermann M, et al. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum Brain Mapp. 2017;38(11):5391–420.
  • 27.Lawhern VJ, Solon AJ, Waytowich NR, Gordon SM, Hung CP, Lance BJ. EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces. J Neural Eng. 2018;15(5).
  • 28.He C, Liu JL, Zhu YS, Du WC. Data augmentation for deep neural networks model in EEG classification task: A review. Front Hum Neurosci. 2021;15.
  • 29.Jeong JH, Kim KT, Lee SJ, Kim DJ, Kim H. CNN-based subject-transfer approach for training minimized lower-limb MI-BCIs. 10th International Winter Conference on Brain-Computer Interface (BCI); 2022.
  • 30.Jeong JH, Choi JH, Kim KT, Lee SJ, Kim DJ, Kim HM. Multi-domain convolutional neural networks for lower-limb motor imagery using dry vs. wet electrodes. Sensors. 2021;21(19).
  • 31.Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014.
  • 32.Lin CL, Chen LT. Improvement of brain-computer interface in motor imagery training through the designing of a dynamic experiment and FBCSP. Heliyon. 2023;9(3).
  • 33.Peirce J, Gray JR, Simpson S, MacAskill M, Höchenberger R, Sogo H, et al. PsychoPy2: Experiments in behavior made easy. Behav Res Methods. 2019;51:195–203.
  • 34.Borg G. Borg’s perceived exertion and pain scales. Human Kinetics; 1998.
  • 35.Danford G, Steinfeld E. In search of methods for measuring enabling environments. In: Measuring Enabling Environments. New York: Kluwer Academic/Plenum; 1999.
  • 36.Brooke J. SUS: A quick and dirty usability scale. In: Usability Evaluation in Industry. 1996. p. 189–94.
  • 37.Jutai J, Day H. Psychosocial impact of assistive devices scale. Technology and Disability; 1996.
  • 38.Lee MH. Data analysis with RStudio: An easygoing introduction. Biometrics. 2021;77(4):1502–3.
  • 39.Conover WJ. Practical nonparametric statistics. Wiley; 1999.
  • 40.Friedman M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J Am Stat Assoc. 1937;32(200):675–701.
  • 41.Kendall MG, Smith BB. The problem of m rankings. Ann Math Stat. 1939;10(3):275–87.
  • 42.Holm S. A simple sequentially rejective multiple test procedure. Scand J Stat. 1979;6(2):65–70.
  • 43.Kerby DS. The simple difference formula: An approach to teaching nonparametric correlation. Compr Psychol. 2014;3:11.IT.3.1.
  • 44.Rosner B, Glynn RJ, Lee ML. The Wilcoxon signed rank test for paired comparisons of clustered data. Biometrics. 2006;62(1):185–92.
  • 45.Kirshblum S, Snider B, Eren F, Guest J. Characterizing natural recovery after traumatic spinal cord injury. J Neurotrauma. 2021;38(9):1267–84.
  • 46.Steeves J, Kramer J, Fawcett J, Cragg J, Lammertse D, Blight A, et al. Extent of spontaneous motor recovery after traumatic cervical sensorimotor complete spinal cord injury. Spinal Cord. 2011;49(2):257–65.
  • 47.Crea S, Beckerle P, De Looze M, De Pauw K, Grazi L, Kermavnar T, et al. Occupational exoskeletons: A roadmap toward large-scale adoption. Methodology and challenges of bringing exoskeletons to workplaces. Wearable Technol. 2021;2:e11.
  • 48.Cano S, Soto J, Acosta L, Peñeñory VM, Moreira F. Using Brain-Computer Interface to evaluate the User eXperience in interactive systems. Comput Methods Biomech Biomed Eng Imaging Vis. 2023;11(3):378–86.
  • 49.Pasqualotto E, Federici S, Simonetta A, Olivetti Belardinelli M. Usability of brain computer interfaces. In: Everyday Technology for Independence and Care. IOS Press; 2011. p. 481–8.
  • 50.Hussain M, Kong Y-K, Park S-S, Shim H-H, Park J. Exoskeleton Usability Questionnaire: a preliminary evaluation questionnaire for the lower limb industrial exoskeletons. Ergonomics. 2024;67(9):1198–207.
  • 51.La Bara LMA, Meloni L, Giusino D, Pietrantoni L. Assessment methods of usability and cognitive workload of rehabilitative exoskeletons: a systematic review. Appl Sci. 2021;11(15):7146.
  • 52.Donati ARC, Shokur S, Morya E, Campos DSF, Moioli RC, Gitti CM, et al. Long-term training with a brain-machine interface-based gait protocol induces partial neurological recovery in paraplegic patients. Sci Rep. 2016;6.
  • 53.Do AH, Wang PT, King CE, Chun SN, Nenadic Z. Brain-computer interface controlled robotic gait orthosis. J Neuroeng Rehabil. 2013;10.
  • 54.Asselin PK, Avedissian M, Knezevic S, Kornfeld S, Spungen AM. Training persons with spinal cord injury to ambulate using a powered exoskeleton. J Vis Exp. 2016;(112).
  • 55.Chang SH, Afzal T, TIRR SCI Clinical Exoskeleton Group, Berliner J, Francisco GE. Exoskeleton-assisted gait training to improve gait in individuals with spinal cord injury: a pilot randomized study. Pilot Feasibility Stud. 2018;4:62.

Associated Data
Data Availability Statement

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.


Articles from Journal of NeuroEngineering and Rehabilitation are provided here courtesy of BMC