Sensors (Basel, Switzerland). 2023 Mar 3;23(5):2798. doi: 10.3390/s23052798

EEG-Based BCIs on Motor Imagery Paradigm Using Wearable Technologies: A Systematic Review

Aurora Saibene 1,2,*, Mirko Caglioni 1, Silvia Corchs 2,3, Francesca Gasparini 1,2
Editor: Chang-Hwan Im
PMCID: PMC10007053  PMID: 36905004

Abstract

In recent decades, the automatic recognition and interpretation of brain waves acquired by electroencephalographic (EEG) technologies have undergone remarkable growth, leading to a consequent rapid development of brain–computer interfaces (BCIs). EEG-based BCIs are non-invasive systems that allow communication between a human being and an external device by interpreting brain activity directly. Thanks to the advances in neurotechnologies, and especially in the field of wearable devices, BCIs are now also employed outside medical and clinical applications. Within this context, this paper proposes a systematic review of EEG-based BCIs, focusing on one of the most promising paradigms based on motor imagery (MI) and limiting the analysis to applications that adopt wearable devices. This review aims to evaluate the maturity levels of these systems, both from the technological and computational points of view. The selection of papers has been performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), resulting in 84 publications from the last ten years (2012 to 2022). Besides technological and computational aspects, this review also aims to systematically list experimental paradigms and available datasets in order to identify benchmarks and guidelines for the development of new applications and computational models.

Keywords: electroencephalogram (EEG), brain–computer interface (BCI), motor imagery (MI), wearable devices

1. Introduction

Since the pioneering work of Hans Berger, who recorded the first human electroencephalographic (EEG) signal in 1924 [1], research devoted to detecting and analyzing brain waves has grown exponentially over the years, especially in medical contexts for both diagnostic and health care applications.

The automatic recognition and interpretation of brain waves permit the development of systems, called brain–computer interfaces (BCIs), that allow subjects to interact with and control devices through brain signals, thus providing new forms of human–machine interaction.

Several applications have been developed, especially for assistive and rehabilitative purposes [2]. However, in recent decades, the rapid development of neurotechnologies, particularly wearable devices, has opened new perspectives and applications outside the medical field, including the education, entertainment, civil, industrial, and military fields [3]. Among the different BCI paradigms, motor imagery (MI) deserves particular attention, given that it can be used for a variety of applications and that the research community has achieved promising results in terms of performance [4].

Starting from these premises, this paper systematically reviews EEG-based MI-BCIs by following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) suggestions [5].

The main research question of this review paper is as follows:

RQ: Are wearable technologies mature for EEG-based MI-BCI applications in uncontrolled environments?

To properly answer this question, four different sub-questions are considered as follows:

  1. RQ1: Is there a significant number of EEG-based MI-BCI studies using wearable technologies in the literature, suggesting a promising future development of this research field, especially in uncontrolled environments and outside medical and clinical settings?

  2. RQ2: Are there common processing pipelines that can be adopted from signal acquisition to feedback generation?

  3. RQ3: Are there consolidated experimental paradigms for wearable EEG-based MI-BCI applications?

  4. RQ4: Are there datasets available for the research community to properly compare classification models and data analysis?

To address these questions, the work is structured as follows. Section 2 describes how the 84 papers for the proposed systematic review have been selected, following the PRISMA suggestions. Section 3 reports basic knowledge of electroencephalographic signals, brain–computer interfaces, motor imagery, and wearable technologies to provide a proper background for the comprehension of the following sections. To properly identify the contribution of this work, an overview of other survey articles present in the literature and concerning BCI systems is reported in Section 4. Section 5 is the core of this review paper, systematically reporting the motor imagery brain–computer interface wearable systems found in the state of the art, in terms of applications and employed technologies. A detailed description of signal processing, feature engineering, classification, and data analysis is also reported, together with a description of all the datasets and experimental paradigms adopted. In some sections, particular attention has been given to those papers that are reproducible, either because the analyzed dataset is available or because the computational models adopted are described with proper technical detail. All information gathered from the 84 publications considered will be made available as supplementary material, organized in a detailed table (Table S1).

Finally, the answers to the presented research questions are provided in Section 6, taking into consideration the detailed analyses of the different aspects discussed in the previous sections. Conclusions and future perspectives are presented in Section 7.

2. Systematic Review Search Method

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology [5] was employed to conduct this systematic review.

2.1. Eligibility Criteria

The papers included in the systematic review needed to present studies on EEG-based BCIs considering motor imagery paradigms and the usage of wearable technologies for signal acquisition. Moreover, the following inclusion criteria were applied:

  • Studies published in the last 10 years (from 1 January 2012 to 22 June 2022);

  • Studies published as journal articles, conference proceedings, and dataset reports.

Papers meeting the following criteria were excluded:

  • Non-English articles;

  • Studies published as meeting abstracts, book chapters, posters, reviews, and Master’s and PhD dissertations.

Following these criteria, studies were initially identified by the search engines defined in Section 2.2 by applying their filters, when present. Afterward, a manual scanning was conducted on titles, keywords, abstracts, and data description sections of the publications by dividing the studies extracted from the search engines equally among the review authors.

2.2. Information Sources

Seven search engines have been used to collect works on EEG-based BCIs concerning motor imagery paradigms and wearable technologies: IEEE Xplore, Mendeley, PubMed, ScienceOPEN, Semantic Scholar, Scopus, and Web of Science.

Google Scholar was also consulted to retrieve more information on some of the publications outputted from the other search engines and papers that were already stored in private repositories of the different research groups. For the final set of reviewed papers, the last consultation date will be reported for completeness. Table 1 reports information for each search engine specifying the authors (with initials) that used it and its last consultation date.

Table 1.

Search engine information summary. The authors' names are reported with their initials.

Search Engine Author Last Consultation Date
IEEE Xplore F.G. 22 June 2022
Mendeley F.G. 22 June 2022
PubMed S.C. 22 June 2022
ScienceOPEN A.S. 21 June 2022
Semantic Scholar A.S. 21 June 2022
Scopus A.S. 22 June 2022
Web of Science S.C. 22 June 2022
Google Scholar M.C. and A.S. 20 June 2022

2.3. Search Strategy

The works included for manual screening were the outcome of the search conducted by querying the information sources described in Section 2.2 with the following keywords: (EEG OR eeg OR electroencephalographic OR electroencephalography) AND (BCI OR brain–computer interface OR bci) AND (wearable OR wireless) AND (motor imagery OR motor-imagery OR MI).

However, before considering this final group of keywords and to understand the interest of the research community on the specific topic of EEG-based MI-BCIs with wearable technologies, we restricted the queries to a combination of the provided keywords, obtaining the results reported in Table 2. Notice that besides the first field identifying the search engines, the others report the main keywords of the considered subsets.

Table 2.

Summary of the searches considering keyword subsets. The fields present the search engines and the main keywords related to the considered subsets.

Search Engine | EEG and BCI | EEG, BCI, and Wearable | EEG, BCI, and MI | EEG, BCI, Wearable, and MI
IEEE Xplore 3584 121 546 13
Mendeley 10,723 286 3118 60
PubMed 2777 57 877 14
ScienceOPEN 2659 22 303 4
Semantic Scholar 12,200 1879 3630 259
Scopus 10,733 670 2909 74
Web of Science 6564 180 2491 40

Notice that Google Scholar is not reported, being used only to retrieve information on papers present in private repositories or additional details on the scraped ones.

Comparing the refined searches with the more general search on EEG- and BCI-related papers, it appears that

  • On average, 4.76% (std 4.98%) of the works retrieved by the EEG and BCI search also appear in the EEG, BCI, and wearable search;

  • On average, 26.01% (std 9.38%) of the works retrieved by the EEG and BCI search also appear in the EEG, BCI, and MI search;

  • On average, 0.71% (std 0.65%) of the works retrieved by the EEG and BCI search also appear in the EEG, BCI, wearable, and MI search, the target of the present survey.

Therefore, interest in EEG-based MI-BCIs with wearable technologies has yet to reach broad dissemination in the research community: wearable technologies are discussed to a limited extent, whereas the MI field on its own is extremely prolific.
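For transparency, the percentages above can be recomputed directly from the counts in Table 2. The following is a minimal Python sketch (the counts are transcribed by hand; using the sample standard deviation, ddof=1, reproduces the reported values):

```python
import numpy as np

# Counts transcribed from Table 2, one row per search engine:
# columns: EEG+BCI, EEG+BCI+wearable, EEG+BCI+MI, EEG+BCI+wearable+MI
counts = np.array([
    [3584,   121,  546,  13],   # IEEE Xplore
    [10723,  286, 3118,  60],   # Mendeley
    [2777,    57,  877,  14],   # PubMed
    [2659,    22,  303,   4],   # ScienceOPEN
    [12200, 1879, 3630, 259],   # Semantic Scholar
    [10733,  670, 2909,  74],   # Scopus
    [6564,   180, 2491,  40],   # Web of Science
])

base = counts[:, 0]
for label, col in [("wearable", 1), ("MI", 2), ("wearable and MI", 3)]:
    ratios = 100.0 * counts[:, col] / base  # per-engine percentage of the EEG and BCI search
    print(f"EEG, BCI, and {label}: mean {ratios.mean():.2f}% (std {ratios.std(ddof=1):.2f}%)")
```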

All the queries followed the eligibility criteria described in Section 2.1, but were neither checked for duplicates nor manually screened; these procedures were applied only to the final target query.

2.4. Search Outcome

Following the previously described search strategies and filtering the results according to the established inclusion and exclusion criteria, 84 papers were included in the present review.

Figure 1 depicts the flow diagram obtained by following the PRISMA guidelines. Notice that the diagram is divided into two main sections. The first concerns the works retrieved through the search engines listed in Section 2.2, while the second pertains to the studies in the research group’s personal repositories.

Figure 1. Flow diagram obtained by following the PRISMA guidelines.

Duplicates were removed from the papers outputted by the search engines before the screening. Afterward, abstracts were manually checked by the authors to ensure that they contained references to the keywords reported in Section 2.3. The number of abstracts to check was equally distributed among the authors. Reports were not retrieved if they were (i) already present in the personal repositories, (ii) not available online, or (iii) review papers, posters, or theses. Finally, papers were excluded after a thorough reading if they were unrelated to the BCI field and/or did not present analyses on wearable devices.

Notice that each paper was read by at least two authors.

For completeness, Figure 2 depicts the number of papers remaining after the screening process for each considered year. According to the plot, most of the 84 works were published between 2019 and 2020, highlighting the rising interest of the EEG community in MI-BCI systems employing wearable technologies and thus further justifying our interest in the reviewed topic.

Figure 2. Number of papers remaining after the screening process for each considered year (1 January 2012–22 June 2022).

Notice that the small number of papers for 2022 may not be representative of the hypothesized publication trend, since the search for 2022 publications stopped on 22 June.

Detailed notes on the reviewed papers are included in Table S1 as supplementary material to the present review. The notes are organized to provide a clear reference to the papers and to follow the core section of this review (Section 5), analyzing (i) field of applications, (ii) employed technologies, (iii) signal processing and analysis methodologies, (iv) BCI feedback, and (v) dataset information.

3. Background Information

This section is devoted to the explanation of some basic knowledge related to the target of this survey. Therefore, an overview of EEG signals, BCIs, MI, and wearable technologies is provided.

3.1. Electroencephalogram

In this review, the considered applications are based on the use of the electroencephalogram (EEG) as a non-invasive technology to acquire brain signals. In fact, the EEG is able to record the brain’s electric potentials deriving from the activation of the cerebral cortex in response to neural processes [6], which could be spontaneous or evoked by external stimuli [7]. The resulting signal is a time series characterized by time, frequency (Section 3.1.1), and spatial information, and acquired with non-invasive sensors (called electrodes or channels) placed on the scalp of a subject [8,9,10,11]. Moreover, the electrode positioning usually follows standard placement systems, which define the distance between adjacent electrodes taking into consideration specific anatomical landmarks, such as the distance from the nasion to the inion [12]. The most commonly used systems are the 10-20 and 10-10 International ones [12,13], which are depicted in Figure 3 and Figure 4. Notice that the electrodes are identified by letters, which refer to the brain area pertaining to a specific electrode, and a number (odd for the left hemisphere, even for the right one). The midline placement is identified by a z.

Figure 3. 10-20 International system (adapted from https://commons.wikimedia.org/w/index.php?curid=10489987 accessed on 31 January 2023). The letter correspondences are as follows: frontopolar (Fp), frontal (F), central (C), parietal (P), occipital (O), temporal (T). Auricular (A) electrodes are also included.

Figure 4. 10-10 International system (adapted from https://commons.wikimedia.org/w/index.php?curid=96859272 accessed on 31 January 2023). The letter correspondences are as follows: frontopolar (Fp), AF between Fp and F, frontal (F), FC between F and C, central (C), CP between C and P, parietal (P), PO between P and O, occipital (O), temporal (T), FT between F and T, TP between T and P. The system also presents the nasion (Nz), inion (Iz), and left and right pre-auricular points (LPA and RPA).
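As an illustration of how these standard placements are handled in software, the following is a minimal sketch, assuming the MNE-Python library and purely synthetic data, that attaches 10-20 positions to a small set of channels (channel names and sampling rate are arbitrary examples, not taken from the reviewed papers):

```python
import numpy as np
import mne

# Hypothetical 8-channel recording named after 10-20 locations
ch_names = ["Fp1", "Fp2", "C3", "Cz", "C4", "P3", "P4", "O1"]
sfreq = 250.0  # assumed sampling rate in Hz
data = 1e-5 * np.random.randn(len(ch_names), int(10 * sfreq))  # 10 s of fake EEG (volts)

info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types="eeg")
raw = mne.io.RawArray(data, info)

# Standard 10-20 positions: odd numbers = left hemisphere, even = right, z = midline
montage = mne.channels.make_standard_montage("standard_1020")
raw.set_montage(montage)
print(raw.info["dig"][:3])  # digitized electrode positions now attached to the recording
```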

Therefore, the sensors are usually placed following standard locations that respect the brain’s anatomical structures. However, it could happen that the neural activations are not recorded uniformly or have irregular samples [14], and that the volume conduction effect may provide indirect and imprecise recordings [15].

Moreover, EEG signals are easily affected by noise [16], and thus usually have a low signal-to-noise ratio (SNR) [7,17].

EEG data are also heterogeneous,

  • being non-stationary signals varying across time [18];

  • being subject-specific, due to the natural physiological differences between subjects [16];

  • varying in the same subject depending on their physiological and psychological conditions, and changing from trial to trial [19];

  • being influenced by experimental protocols and environmental conditions [7,16].

Therefore, coupling the EEG signals with specific brain activities is a difficult and ambiguous task [20].

In the following, details on the frequency information characterizing the EEG signals and the noise affecting them will be given to provide a better overview of these peculiar data.

3.1.1. EEG Rhythms

As previously introduced, the EEG signal is characterized by frequency information, which is provided by different frequency bands, called rhythms, associated with specific brain activities and functions [21]. Table 3 provides a brief overview of the EEG rhythms and their frequency ranges.

Table 3.

EEG rhythms overview. The frequency ranges and the occurrence of the EEG rhythms are reported.

Rhythm Frequency Range (Hz) Occurrence
δ ≤4 infants, deep sleep
θ 4–8 emotional stress, drowsiness
α 8–13 relaxed awake state
μ 8–13 motor cortex functionalities
β 13–30 alert state, active thinking/attention, anxiety
γ ≥31 intensive brain activity

Starting from the lowest frequency band, the delta (δ) rhythm presents slow waves typical of infants or predominant during deep sleep [12,22]. The theta (θ) rhythm is instead elicited by emotional stress and is present in sleepy adults [12,21,22]. During a relaxed but awake state, the alpha (α) rhythm is elicited [21]. Notice that α and mu (μ) rhythms share similar frequency components, but μ is related to the motor cortex functionalities [22] and is usually non-sinusoidal [23]. The influence of the μ band on motor imagery tasks will be discussed in detail in Section 3.2.

Concerning the beta (β) rhythm, it is present in alert states due to active thinking or attention and may also be an effect of anxiety [12,21]. Finally, the gamma (γ) rhythm is intertwined with more complex cognitive functions, such as sensory stimulus recognition and two senses combination [12,22].

Knowing the EEG intrinsic frequency characteristics, it is possible to exploit them to design specific experimental paradigms and make assumptions on EEG data analyses.
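To make the mapping between rhythms and frequency ranges concrete, here is a minimal sketch, assuming NumPy/SciPy and a synthetic single-channel signal, that estimates the power in each band of Table 3 through Welch's method (the exact band edges, in particular the δ lower bound and γ upper bound, are assumptions):

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

# Rhythm definitions from Table 3 (Hz); delta lower edge and gamma upper edge are assumed
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "mu": (8, 13), "beta": (13, 30), "gamma": (31, 45)}

def band_powers(signal, sfreq):
    """Absolute power per EEG rhythm, estimated from Welch's PSD."""
    freqs, psd = welch(signal, fs=sfreq, nperseg=int(2 * sfreq))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        powers[name] = trapezoid(psd[mask], freqs[mask])  # integrate the PSD over the band
    return powers

sfreq = 250.0
signal = np.random.randn(int(30 * sfreq))  # 30 s of synthetic, unstructured data
print(band_powers(signal, sfreq))
```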

3.2. Motor Imagery

Motor imagery (MI) is the imagined rehearsal of an actual movement [24,25]. During motor imagination, a person seems to consciously access motor representation, i.e., the intention of making a movement and a preparation for it, which is usually an operation performed unconsciously [26,27].

Moreover, the imagination can be performed using a first (internal) or third (external) person perspective [28]. In the first case, the person should feel like they are performing the imagined movement; in the external perspective, the person should feel like watching themselves while performing the movement [28].

Furthermore, numerous research studies have found that motor imagery and representation present the same functional relationships and mechanisms [23,26,29,30], which results in the activation of the same brain areas when performing the actual movement and imagining it [27,31,32].

In fact, by analyzing the EEG signals recorded from the primary sensorimotor cortex during the executed or imagined movement of specific body parts, variations in amplitude, mainly of the μ and β rhythms, can be observed [29,30]. These non-phase-locked-to-the-event variations are called Event-Related Desynchronization and Synchronization (ERD/ERS), corresponding to a decrease or increase in the rhythm's amplitude, respectively [33].

Before performing a movement, the μ and β rhythms undergo an ERD [34,35]. Instead, the deactivation of the motor cortex after movement termination elicits an ERS in the β frequency band [36].
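These ERD/ERS phenomena are conventionally quantified as a percentage power change with respect to a pre-movement reference interval. The following is a minimal sketch, assuming NumPy and a band-limited power time course (e.g., the squared μ-filtered signal averaged across trials):

```python
import numpy as np

def erd_ers_percent(band_power, baseline_mask):
    """
    Percentage power change relative to a baseline (reference) interval.
    Negative values indicate ERD (power decrease), positive values ERS (power increase).

    band_power    : 1D array, band-limited power over time
    baseline_mask : boolean array of the same length marking baseline samples
    """
    reference = band_power[baseline_mask].mean()
    return 100.0 * (band_power - reference) / reference

# Toy usage: 2 s of baseline followed by 2 s of motor imagery at 250 Hz
sfreq = 250
power = np.r_[np.full(2 * sfreq, 1.0), np.full(2 * sfreq, 0.6)]   # simulated mu-power drop
baseline = np.r_[np.ones(2 * sfreq, bool), np.zeros(2 * sfreq, bool)]
print(erd_ers_percent(power, baseline)[-1])  # about -40, i.e., an ERD of roughly 40%
```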

These pieces of evidence justify the supporting role of MI training in motor execution improvement [27] and generally in the enhancement of neuroplasticity [24,37], i.e., the brain’s ability to change in response to new conditions [38]. Moreover, the MI-related phenomena can be easily exploited to provide an MI-based control of a BCI system [34].

However, MI ability varies across individuals and thus needs to be assessed, or trained, before being exploited in experiments and applications [28,31]. Particularly in the field of MI-based BCIs, proper motor imagery task completion may require a long time, and Kaiser et al. [39] affirm that good BCI control is usually achieved when the subject is able to perform at least 70% of the required tasks accurately.

3.3. Brain–Computer Interfaces

In recent years, neurotechnologies have developed considerably and have brought important solutions for the collection and analysis of physiological data in several fields. In particular, in the medical sector, they have advanced the identification and treatment of neurological diseases [40].

Starting from these premises, Brain–Computer Interfaces (BCIs) were born and progressed to provide online brain–machine communication systems [41] that allow the control of devices or applications by recording and analyzing brain waves.

The life cycle of current BCI systems is based on a standard architecture, defined by three main modules (Figure 5): the signal acquisition, processing, and application modules [42].

Figure 5. BCI system standard life cycle. The three main modules are reported, i.e., the signal acquisition, data processing, and application modules.

The signal acquisition module deals with the input of the BCI system and thus is responsible for recording the physiological signals, which are amplified and digitized. Afterward, these data become the inputs of the data processing module, which processes the signal in order to convert it into commands for an external device or application. Moreover, this module mainly performs (i) signal preprocessing, with the aim of increasing the signal-to-noise ratio [43] by removing artifacts; (ii) feature engineering to extract characterizing and significant information from the signal; and (iii) classification to translate the signals and their features into machine-readable commands.

Finally, the application module provides feedback to the BCI user.
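The three modules can be pictured as a simple software loop. The skeleton below is only a schematic sketch with placeholder logic (synthetic acquisition, toy features, and a dummy classifier), not an implementation taken from the reviewed papers:

```python
import numpy as np

def acquire(n_channels=8, sfreq=250.0, seconds=2.0):
    """Signal acquisition module: stands in for an amplifier/headset driver."""
    return np.random.randn(n_channels, int(sfreq * seconds))

def process(eeg):
    """Data processing module: preprocessing, feature engineering, classification."""
    eeg = eeg - eeg.mean(axis=0, keepdims=True)        # toy preprocessing (common average reference)
    features = np.log(np.var(eeg, axis=1))             # toy features (log-variance per channel)
    return "left" if features[0] > features[-1] else "right"  # placeholder classifier

def apply_feedback(command):
    """Application module: deliver feedback / actuate the external device."""
    print(f"BCI command: {command}")

apply_feedback(process(acquire()))
```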

Considering all the EEG-based BCI applications, distinctions can be made among motor imagery (described in Section 3.2), external stimulation paradigm, Error-Related Potentials, inner speech recognition, and hybrid paradigms [44,45].

The external stimulation paradigm refers to the fact that brain waves can be modified by external auditory, visual, or somatosensory stimuli. Examples of this paradigm are the Event-Related Potential (ERP) and the steady-state visual evoked potential (SSVEP). ERPs are characterized by components that identify a certain shape of the signal wave. One of the most studied ERPs in the BCI field is the P300, a component corresponding to a positive deflection of the signal that peaks approximately 300 ms after an unexpected stimulus appears [46].

The SSVEP is instead a periodic response, at the same frequency as the stimulus, that occurs when a subject looks at a light source flickering at a constant frequency. This phenomenon occurs for frequencies in the range of 5–20 Hz and is mainly detectable in the occipital and temporal lobes [47]. The SSVEP paradigm has a great advantage: since its stimuli are exogenous, it is a no-training paradigm and thus should not require long training times. However, SSVEP may lead to fatigue or trigger epileptic seizures [44], and thus, enhancements to BCI systems based on this paradigm should be developed [48].

When an error or a mismatch arises between the user performing a task and the response provided by the BCI system, an Error-Related Potential (ErrP) occurs. ErrPs are potentials that represent the user's perception of an error occurring in the BCI system. ErrPs can be reliably detected on a single-trial basis and thus have great potential in real-time BCI applications [49].

Moreover, new paradigms are progressively appearing in the literature concerning inner speech recognition, which is devoted to the identification of an internalized process where a person thinks in pure meanings, generally associated with auditory imagery of their own inner voice [45].

Even though systems based on these paradigms seem to have a great number of advantages, BCI systems usually present some limitations, and thus, hybrid BCIs could be considered to overcome them [50]. Hybrid paradigms usually exploit two or more of the described paradigms. In fact, as demonstrated by several studies [51,52,53], the combination of two or more paradigms can lead to a significant improvement in the performance of the BCI system. For example, in [51], the authors combine P300 and SSVEP to create a high-speed speller BCI system with more than 100 command codes.

As previously stated, EEG-based BCI systems were born for medical purposes, from neurorehabilitation to prevention, up to the identification of pathologies and diagnoses. For example, EEG signals are used for the identification and prevention of epileptic seizures, and systems have been developed that allow high detection and prediction accuracy, as well as better localization of epileptic foci [40]. BCI systems are also used in the neurorehabilitation field, for example, for the treatment of patients who suffer from motor disabilities after a stroke [54,55,56] or who are affected by Parkinson’s disease [57,58]. In particular, motor imagery-based BCIs have proven to be an effective tool for post-stroke rehabilitation therapy through the use of different MI-BCI strategies, such as functional electric stimulation, robotics assistance, and hybrid virtual reality-based models [55]. BCIs are also used to detect health issues such as tumors, seizure disorders, sleep disorders, dyslexia, and brain swelling such as encephalitis [59].

In addition to these medical and neurological rehabilitation uses, there are many other fields of application of BCIs [59,60,61,62].

For example, the marketing sector uses EEG data to evaluate advertisements in terms of consumer attention and memory [59].

Moreover, the study of brain signals in the field of education has provided more insights into the degree to which studied information is actually learned and how the studying experience could be tailored to a single student. This could improve students’ skills in real-life scenarios, instead of focusing only on information memorization. Moreover, students could better develop competencies such as adaptive thinking, sense making, design mindset, transdisciplinary approaches, and computational skills [63].

Further development of BCI systems is also represented by their integration into the world of the Internet of Things. In fact, BCIs could be assistants able to analyze data such as mental fatigue, levels of stress, frustration, and attention [64], while being embedded in smart devices.

Therefore, the study areas that make use of BCI systems are diverse and increasingly numerous. Outside the medical sector, EEG-based BCI technologies have been receiving more attention due to the development of non-medical wearable EEG devices. In fact, these systems guarantee access to a wide range of users, given their high portability and low cost.

Generally, BCI technologies need to undergo the scrutiny of experts to provide appropriate user-centered systems and allow proper insight into their clinical usefulness and practicality [65]. These necessities can increase the costs of efficient BCI system management, even though the initial technologies may be sufficiently low-cost [65].

3.4. Wearable Technologies

As introduced in Section 3.3, considering the new demands from the general public and the need to move from laboratories and research centers to in-home and real-life environments, EEG-based BCI technologies are moving toward low-cost, portable solutions that are easy to use for non-experts.

Traditional laboratory EEG devices usually have helmet structures with holes in which the sensors (electrodes) are installed. The helmet is placed on the patient’s scalp, and each electrode is connected to the recording tool by cables. This may translate into a very long preparation time for a test: as described by Fiedler et al. [66], a 256-channel device usually requires a long time to connect each electrode to the recording system and to prepare the patient. This also makes it impossible for the patient to wear the device at any time of the day at home or anywhere else. Fiedler et al. [66] also propose a 256 dry-electrode cap that counteracts these issues.

Starting from this awareness and to overcome the problem of the discomfort of wired instruments, EEG wearable models were born. Wearable devices represent an evolution of the classic EEG tools. They are smaller devices that can be used potentially in any place and at any time. In fact, Casson et al. [67] defined wearable devices as follows:

the evolution of ambulatory EEG units from the bulky, limited lifetime devices available today to small devices present only on the head that can record EEG for days, weeks, or months at a time.

By removing the cumbersome recording units and connecting wires and replacing them with microchip electrodes with embedded amplifiers, quantizers, and wireless transmitters, it was possible to achieve the goal of creating devices that recorded and sent data wirelessly.

Although wired EEGs are more stable and can transmit more data in less time [68], wearable devices are more comfortable and, thanks to their portability, can be used anywhere. Moreover, by using wearable EEG devices, it is possible to avoid the artifacts caused by the movement of the cables and electrodes of wired EEGs.

Notice that the electrodes of a wearable device need an adequate electrical connection with the scalp of the subject. The currently available devices are equipped with three types of electrodes: dry, wet (gel-based or saline solution-based), and semi-dry. Dry electrodes require neither gel nor saline solution to be connected to the scalp. This implies greater simplicity of positioning and a reduced setup time, at the expense of signal conduction quality. Wet sensors can be divided into two categories: gel-based electrodes and saline solution-based electrodes. Gel-based electrodes make use of an electroconductive gel applied on each electrode. Gels generally reduce movement and skin surface artifacts by creating a more stable conductive connection between the electrode and the scalp compared to saline solutions [69]. However, the time for preparing the subject could increase, and the subject may be uncomfortable due to the gel residues on the hair and scalp. The cap and electrodes should also be carefully cleaned after usage.

On the other hand, some devices have electrodes imbued with a conductive saline solution in order to help make low-impedance electrical contact between the skin and the sensors [68].

It is also important to consider that wet-electrode-based devices have some limitations [69]: the skin must be prepared to reduce the impedance of the scalp, and the procedure can be annoying or sometimes painful. Moreover, if too much gel is used, it could affect signal transmission by causing a short circuit between two adjacent electrodes.

In contrast, semi-dry electrodes require only a small amount of electrolyte fluid, combining the advantages of both wet and dry electrodes while addressing their respective drawbacks. Their setup is as fast and convenient as that of their dry counterparts [70].

Even though the evolution of wearable technologies seems to be rapidly escalating, numerous issues remain to be addressed, especially if the final goal is for these EEG wearable devices to be usable in any context. Firstly, the non-expert user may find it difficult to wear the electrodes correctly. As a result, the electrodes may be damaged or may not correctly record the signal. A good device should guide the novice user to a fast and effective positioning on the scalp, indicating whether the contact of the electrodes is stable or not. On the other hand, it is essential to make devices smaller and less bulky, while making them more resistant to artifacts [71]. Another critical issue concerns the battery life, which is still too short for the instrument to be comfortably used outside research centers. Moreover, most wearable devices have been designed for general functions and applications and lack support for signal processing and feedback generation [72], resulting in low-quality signal processing.

In light of these limitations, new designs are emerging for new generations of wearable EEG devices. Thanks to several studies showing that the EEG signal can be recorded on the forehead [73] and behind the ears [74,75], tattoo-like devices are emerging. They offer better wearability and versatility [76]. In fact, they can be connected directly to the skin through adhesive materials without the use of gels or external supports. With the advantage of having features such as ultra-thin thickness, ultra-softness, and high adherence to the skin, the electrodes can comply with skin deformation, providing a stable signal quality [77]. The downside of using such devices is that they may not be able to cover all areas of interest, such as hair-covered areas, making them unsuitable for some uses of EEG monitoring.

To access more details on the evolution of hardware components, we refer the readers to [71,78].

4. Overview of Survey Articles on EEG-Based BCIs

In this section, an overview of survey articles present in the literature and concerning BCI systems is reported to provide a general assessment of the topics related but not superimposed to the target of the present paper.

In fact, most of the analyzed surveys address different application research areas and experimental paradigms. None of them focus exclusively on MI tasks, except the work by Palumbo et al. (2021) [79], who consider MI tasks in the sole field of wheelchair control, and the review by Al-Saegh et al. (2021) [4], which concerns the use of deep neural networks in the context of MI EEG-based BCIs.

In particular, Palumbo et al. [79] provide a systematic survey of EEG-based BCIs for wheelchair control through motor imagination by including 16 papers published since 2010. The authors focused on (i) the MI paradigms and the type of commands provided to move the wheelchair, (ii) the employed EEG system (presenting sensors for other biomedical signal recording and the number of positioned electrodes) and the wheelchair components, and (iii) the EEG signal management procedures. The authors want to especially provide a clear assessment of the limitations of current biomedical devices when the end user is affected by any kind of motor disability. Moreover, they point out the main challenges arising when having to face the development of an efficient and reliable BCI to control a wheelchair. Firstly, multiple commands are required to allow correct control of the wheelchair, and thus a multi-objective problem characterizes the system. However, adding more commands may affect the performance of the BCI both in terms of accuracy and time consumption. Secondly, the BCI performance is ultimately dependent on the user, who may fail to perform the MI tasks. Finally, wheelchair control requires constant concentration on the task and thus increases the users’ mental workload.

Considering instead the work of Al-Saegh et al. [4], the use of deep neural networks in the context of EEG-based MI-BCIs is surveyed. The authors retrieved 40 papers published between 1 January 2015 and 31 March 2020. An analysis of the employed datasets was performed, and the authors found information on 15 datasets, of which 7 are publicly available. They notice that the datasets vary significantly in terms of electrodes, subjects, number of MI tasks, and trials. However, most of the datasets seem to rely on experimental paradigms concerning the MI of the right/left hand, feet, and tongue. Moreover, the authors provide an assessment of the most used frequency ranges, extracted features, deep network architectures, and input formulations, highlighting the variety of deep learning (DL) model designs.

In what follows, the review articles are reported according to their topics (experimental paradigms and applications, technological aspects, signal processing, and analyses) and in chronological order of publication.

Considering experimental paradigms and applications, a comprehensive survey of different BCI experimental paradigms can be found in [44], published in 2019.

A general survey on EEG technologies and their application is instead presented by Soufineyestani et al. [68] (2020).

Moving to the technological aspects, a brief review of wearable technologies for smart environments is presented in [80], where the authors dedicate two sections to devices and applications for BCI systems. Considering these last topics, the technologies available at the time of the review (2016) are precisely listed, and advancements in EEG devices are easily detectable, especially considering the use of dry sensors and the presence of products for the general public. This information provides a good starting point to compare wearable technologies.

Instead, a detailed overview of the hardware components of wearable EEG devices was provided by [71] in 2019.

The survey by TajDini et al. [81] (2020) focuses on wireless sensors and, in particular, on the assessment of consumer-grade EEG devices for non-medical research. The authors compare 18 products in terms of sensor type (dry, wet, semi-dry), number of channels, sampling rate, accessibility to raw data, operation time, and price. The analysis of the literature is explored based on the different application domains: cognition (emotion recognition and classification, attention, mental workload, memory), BCI (ERP, SSVEP, MI, and other), educational research, and gaming.

The review by Portillo-Lara et al. [82] (2021) starts with an overview of the neurophysiological mechanisms that underlie the generation of EEG signals and then focuses on the state-of-the-art technologies and applications of EEG-based BCIs. Different electrode interfaces and EEG platforms are analyzed and compared in terms of electrode type and density, functionality, portability, and device performance. The advantages and disadvantages of different electrode designs are enumerated. The technical specifications of 18 commercially available EEG platforms are also compared in terms of electrodes, channel count, sampling rate, weight, battery life, resolution, and price. Both medical and non-medical uses are reviewed in the article.

Instead, the review by Jamil et al. [83] (2021) aims to identify the main application areas that use EEG-based BCI devices and the most common types of EEG-based equipment, considering both wired and wireless devices. They present a systematic review using four search engines (PubMed, IEEE, Scopus, and ScienceDirect). The search strings used were (BCI OR Brain–computer interface OR BMI OR brain–machine interface) AND (EEG OR electroencephalogram) AND (rehab* OR assist* OR adapt*). The inclusion criteria were limited to publication years 2016–2020. After the screening, 238 articles were selected and classified according to the following four research areas: education, engineering, entertainment, and medicine. They found that the medical area is the most frequently addressed (80%). Wired devices were used in 121 of the 238 articles, while the remaining 117 reviewed manuscripts employed wireless technologies.

Concerning signal processing and analyses, the feature extraction techniques widely used in the literature were reviewed in 2019 by Aggarwal et al. [43]. Moreover, a survey on EEG data collection and management (processing, feature extraction, and classification) is provided by Reaves et al. [40] (2021), considering 48 papers from high-impact journals. The authors also include lists of devices and publicly available datasets.

A comprehensive review was presented by Gu et al. [84] in 2021. The authors provide a broad overview of BCI systems and their application areas. Moreover, they give a general presentation of invasive, partially invasive, and non-invasive brain imaging techniques and subsequently focus on EEG-based BCIs. Their review is organized to provide a consistent survey on (i) advances in sensors/sensing technologies, (ii) signal enhancement and real-time processing, (iii) machine learning (especially transfer learning and fuzzy models) and deep learning algorithms for BCI applications, and (iv) the evolution of healthcare systems and applications in BCIs (e.g., concerning epilepsy, Parkinson’s/Alzheimer’s disease, and neurorehabilitation). Their analysis was performed on about 200 papers, considering publication years between 2015 and 2019.

5. EEG-Based MI-BCIs through Wearable Systems

In this section, we discuss and analyze the main issues and characteristics of the works considered for the present review. A detailed table that summarizes the collected information is made available as supplementary material (Table S1).

5.1. BCI Application and Feedback

As a first assessment of the reviewed papers, an overview of the applications and feedback types of these EEG-based MI-BCI systems is provided in this section.

The main aim is to better understand the wide application of BCIs and the shift of experimental paradigms due to the need for consumer-grade applications in real-life environments.

By analyzing the reviewed papers (Table S1), it is clear that the field of application and the feedback are extremely interconnected.

Moreover, motor imagery seems to be particularly used to provide control for external devices, especially for rehabilitation purposes. Therefore, the feedback usually consists of movements of robots, wheelchairs, prostheses, drones, and exoskeletons.

There are cases in which the feedback is visually provided on a monitor or exploiting virtual reality systems [85].

Moreover, feedback can help in modulating the MI ability of users. For instance, Jiang et al. [86] deal with the creation of a BCI system that uses discrete and continuous feedback in order to improve practicability and training efficiency. The results show that continuous feedback successfully improves imagery ability and decreases the control time.

Starting an in-depth analysis of the application fields, 30 of 84 research studies have medical purposes [87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115].

Considering the stroke rehabilitation field, Mattia et al. [87] built a BCI system to enhance post-stroke hand functional motor recovery by using ecological feedback, i.e., a visual representation of the patient’s hands, as a feedback tool for the user. A BCI system for the recovery of people who suffer from post-stroke hand disability has also been proposed by [116]. Firstly, an offline analysis was performed. Afterward, the online system was initially tested on healthy subjects and then on 10 stroke patients affected by a hand disability. The subjects received virtual feedback by controlling a virtual exoskeleton.

Another post-stroke rehabilitation BCI system is presented by Barria et al. [88], who propose a paradigm consisting of motor imagination with visual stimulation and motor imagination with visual-haptic inducement to control an ankle exoskeleton.

Moving to other motor disorders, [89] assumes that people with motor disorders need the help of a caregiver to start a BCI. Therefore, they aim to identify movement intention to initiate BCI systems. Instead, [90] examined neurofeedback manipulation to ensure self-regulation of brain activity as a potential treatment for post-traumatic stress disorder, considering both offline and online analyses. The feedback was shown through a video game, and the participants were asked to manipulate their brain activity to control the game.

Finally, Looned et al. [91] demonstrated the feasibility of a system that assists individuals with neurological disorders to, for example, drink a glass of water independently. The application of this system is implemented through the movement of a robotic arm that assists the movement of the human arm.

Li et al. [92] created a hybrid BCI system to control a lower extremity exoskeleton. By combining EEG data with electromyographic data, they developed an exoskeleton able to help subjects while climbing stairs.

BCI applications are also developed in the entertainment, game, and device (vehicle, robots) control areas [117,118,119,120,121,122,123,124,125,126,127,128].

In particular, [117] focused on the BCI performance in a competitive multi-user condition. In fact, users had to control a humanoid robot in a race against an AI. The authors believed that the game conditions could help the users maintain high motivation and thus increase the effectiveness of the BCI system. However, they found out that there is no significant difference between competitive multi-user conditions and single-user conditions.

Device control is also proposed in [118]. A robotic quadcopter was intended to be controlled in three-dimensional physical space by a BCI user. After a training phase with a virtual drone, subjects modulated their sensorimotor rhythms and controlled a physical drone. Visual feedback was provided via a forward-facing camera on the hull of the drone.

Instead, Alanis et al. [119] created an immersive BCI system. In fact, the users could control the movement of a humanoid robot in a first-person perspective, as if the movement of the robot was their own.

Another application is the one presented by Xu et al. [120], who proposed a motor imagery EEG-based continuous teleoperation robot control system with tactile feedback. The imagination of the user’s hand movements was translated into a continuous two-dimensional control signal and transmitted to the remote robotic arm (using the TCP/IP protocol), allowing it to move remotely through a wireless connection. The user received feedback through a vibrotactile stimulus based on the tactile information of the robotic arm. The authors demonstrated that vibrotactile stimulation can improve operator telepresence and task performance.

For completeness, Figure 6 reports the reviewed paper distribution according to different fields of application and aims.

Figure 6. Reviewed paper distribution according to different fields of applications and aims.

With respect to the previously discussed solutions, the reported papers mainly focus on rehabilitation systems (17.9%), assistive BCIs (17.9%), and entertainment and device control solutions (14.3%).

The remaining works provide

5.2. Employed Technologies

This section is devoted to the revision of the technologies employed by the reviewed papers. Particular attention is given to EEG-related devices, which are one of the main focuses of this review and of which some information is reported in Table 4.

Table 4.

Summary information on the wireless devices employed by the reviewed papers. For the Nautilus entries, it was unclear which device the authors used; thus, all the versions are reported. An asterisk (*) is present when the devices are amplifiers or data acquisition tools. Last information update: 3 October 2022.

Device Producer Electrodes Price Papers
B-Alert X-Series [164] Advanced Brain Monitoring up to 20/W (gel) ask producer [102]
BrainMaster Discovery 24E * [165] bio-medical 24/W, D (compatible) USD 5800.00 [114]
Cyton Biosensing Board * [166] OpenBCI 8/W, D (compatible) USD 999.00 [98,113,162]
eego rt [167] ANT Neuro 8-64/W, D ask producer [168]
Enobio 20 [169] Neuroelectrics 20/W, D ask producer [88,142]
Enobio 8 [170] Neuroelectrics 8/W, D ask producer [93,97,148,162,171,172]
EPOC+ [173] Emotiv 14/W (saline) discontinued [95,110,115,121,126,127,129,132,133,135,136,139,156,174,175]
EPOC Flex [176] Emotiv up to 32/W (gel, saline) USD 1699.00–2099.00 [122,177]
EPOC X [178] Emotiv 14/W (saline) USD 849.00 [107]
g.USBamp * with(out) g.MOBIlab [179] g.tec medical engineering 16/W, D starting from EUR 11,900.00 [87,104,120,125]
Helmate [180] abmedica NA/NA ask producer [111,157]
Insight [181] Emotiv 5/S USD 499.00 [106]
MindWave Mobile 2 [182] Neurosky 1/NA USD 109.99 [108,183]
Muse headband 2 [184] InteraXon 4/NA EUR 269.99 [109,112,131,141,144]
g.Nautilus Multi-Purpose * [185] g.tec medical engineering 8–64/W, D (compatible) changing according to configuration, starting from EUR 4990.00 [89,90,105,125,130,145,146,151,153,155]
g.Nautilus PRO [186] g.tec medical engineering 8–32/W, D changing according to configuration, starting from EUR 5500.00 [89,90,105,125,130,145,146,151,153,155]
g.Nautilus RESEARCH [187] g.tec medical engineering 8–64/W, D changing according to configuration, starting from EUR 4990.00 [89,90,105,125,130,145,146,151,153,155]
NuAmps * [188] NeuroScan 32/NA ask producer [92]
Quick-20 Dry EEG Headset [189] Cognionics 19/D ask producer [100,137]
Starstim [190] neuroelectrics 8–32/tES-EEG ask producer [103,140]
Synamps 2/RT * [191] Neuroscan 64/NA ask producer [118]

Notice that the first (Device) column of Table 4 provides the product names and presents an asterisk (*) when the devices are amplifiers (BrainMaster Discovery 24E, g.USBamp, g.Nautilus Multi-Purpose, NuAmps, Synamps 2/RT) or data acquisition (Cyton Biosensing Board) tools. The link to the product is also reported as a citation. The producers are instead presented in the second column, where a good number of devices from Emotiv can be noticed. Three g.Nautilus entries are reported due to the absence of a clear indication of the devices used by the related works [89,90,105,125,130,145,146,151,153,155].

The Electrodes field first presents the number of sensors or recorded channels and whether they are wet (W), dry (D), or semi-dry (S) electrodes. Notice that for the Starstim device, the electrode type is marked as tES-EEG, since its sensors allow both transcranial electrical stimulation and EEG monitoring. Observe that 9 of 21 devices can be used with both wet and dry electrodes, and six products do not provide clear information on the sensor types.

Additional information on device pricing (updated to 3 October 2022) is reported in column 4. The need to ask the producers for the price of the majority of the products is immediately detectable. Finally, the Papers field provides the list of reviewed works employing a specific device. A total of 13 of 84 papers [86,123,128,134,138,143,154,158,159,160,161,162,163] do not appear in this column, since their authors design and use custom EEG headsets. Moreover, Khan et al. [135] and Vourvopoulos et al. [162] compare their custom devices with the Emotiv EPOC+ and the Enobio 8, respectively.

After this brief overview of the EEG-related devices, it is possible to move to a deeper analysis of how these technologies have actually been used by the authors, according to the table in the Supplementary Materials (Table S1).

A total of 14 of 84 works prefer to reduce the number of electrodes provided by the device producers [88,92,104,106,110,126,142,162], especially selecting channels placed over the central cortical area. In particular, only the C{3,4} [139,147], Cz [92,143,148], and C{1,2} [102] electrodes are considered.

Moreover, some authors clearly specify the sampling rate used for signal acquisition, which ranges from 125 Hz [113], 128 Hz [91,110,122,133,135,139], 250 Hz [98,99,112,145,149], 256 Hz [135], and 500 Hz [88,92,93,97,103,117,137,142,147,148,152,171,172] to 512 Hz [124].

Finally, considering the other technologies presented in the supplementary Table S1, the massive use of Bluetooth technology for wireless communication between wearable devices and acquisition/control tools is immediately observable. Moreover, other sensors have been used together with EEG devices, such as electromyography sensors [87,92,113,175,177], functional electrical stimulation devices [102], near-infrared spectroscopy tools [104], and magnetic resonance imaging devices [150].

Notice that feedback was also provided by considering different tools, such as exoskeletons [88,92,100,103], wheelchairs and vehicles [98,99,106,108,110,122], rehabilitation and assistive robotic tools [91,97,105,107], eye tracking devices [171], movement tracking devices [171], virtual reality devices [93,101,119], robots [114,117,125,126], and simulators [127].

5.3. Signal Processing and Analysis

This section is devoted to the second step of the BCI life-cycle and, in particular, to signal preprocessing, feature engineering and channel selection, and data classification and analyses.

5.3.1. EEG Data Preprocessing

EEG signals contain artifacts from internal sources due to physiological activities, e.g., ocular and muscle movements, cardiac activity, respiration, and perspiration. Moreover, artifacts can be generated by external sources related to environmental and experimental noise, e.g., power line and mobile phone interference, electrode movements, and electromagnetic components [7]. Each of these types of noise has its own frequency band and can affect brain rhythms differently.

Due to the non-stationarity and non-linearity of EEG signals, it is difficult to remove these artifacts without the loss of neural information [192].

In particular, EOG artifacts (related to eye movements) and EMG artifacts (related to muscle movements) are considered among the most common sources of physiological artifacts in BCI systems [193].

In the case of wearable devices, the possibility of introducing other artifacts (e.g., data transmission fault, electrical interference), with respect to traditional wired devices, increases [194]. Therefore, the preprocessing module of a typical BCI life-cycle becomes fundamental to provide better data as inputs to the subsequent modules.

From the analysis of the literature summarized in Table S1, we observe that 60 out of 84 articles indicate some details on the preprocessing step of the BCI life cycle. From this subset, we list the main techniques adopted, enumerating only those papers that explicitly reported them:

  • Nine works assume that source signals are statistically independent of each other and instantaneously mixed, and apply Independent Component Analysis (ICA) to remove noise, mainly due to eye movements and eye blinks [93,95,103,141,147,150,152,158,171]. The EEGLAB toolbox [195] is frequently employed by the authors to implement ICA;

  • A total of 21 papers explicitly indicate that notch filtering is applied to eliminate the power line interference at 50/60 Hz [89,92,97,98,107,111,113,119,131,132,136,140,142,143,144,151,153,155,157,158,174];

  • A total of 11 works applied temporal filtering approaches, such as Butterworth filters of different orders and cutoff frequencies: third order in 0.5–30 Hz [148] or in 4–33 Hz [123], fourth order in 16–24 Hz [88], fifth order in 8–30 Hz [96,114,132,142,143] or in 1–400 Hz [113], biquad tweaked Butterworth in 8–13 Hz [138], and sixth order in 8–30 Hz [153];

  • A total of 13 papers applied spatial filtering approaches such as the Common Average Reference (CAR) filter [99,120,132,138,139,148] and Laplacian ones [88,97,101,103,140,172,177].
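As an illustration of how several of these steps are typically chained, here is a minimal sketch assuming SciPy and a channels × samples array; the cutoff frequencies and filter order are generic examples, not the exact settings of the reviewed papers:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess(eeg, sfreq, line_freq=50.0, band=(8.0, 30.0), order=5):
    """eeg: array of shape (n_channels, n_samples)."""
    # 1) Notch filter to suppress power line interference (50 or 60 Hz)
    b_notch, a_notch = iirnotch(w0=line_freq, Q=30.0, fs=sfreq)
    eeg = filtfilt(b_notch, a_notch, eeg, axis=1)

    # 2) Band-pass Butterworth filter (e.g., 8-30 Hz, covering the mu and beta rhythms)
    b_bp, a_bp = butter(order, band, btype="bandpass", fs=sfreq)
    eeg = filtfilt(b_bp, a_bp, eeg, axis=1)

    # 3) Common Average Reference: subtract the across-channel mean at each sample
    return eeg - eeg.mean(axis=0, keepdims=True)

clean = preprocess(np.random.randn(8, 5000), sfreq=250.0)
```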

5.3.2. Feature Engineering

The next step after the signal preprocessing is the feature engineering stage, where the relevant information about the EEG signals is extracted and analyzed.

As detailed in Section 3.2, during motor movement or imagination, the μ and β rhythms undergo a desynchronization (ERD), whereas the deactivation of the motor cortex is accompanied by a synchronization (ERS) of the β rhythm.

Therefore, the power changes due to ERD/ERS encode relevant information on MI. In general, ERD (ERS) is observed in the contralateral (ipsilateral) sensorimotor area. Taking into account these phenomena, Common Spatial Pattern (CSP) and band power features (which represent the power of EEG signals in a given frequency band averaged over a time window) are natural choices as feature extraction methods and are applied by most of the works within the field of EEG analysis. The lateralization index between hemispheres is also used to describe the asymmetry of neural activation intensity [93].

Following the analysis of Table S1 provided as supplementary material and taking into account the articles that explicitly indicate the feature extraction steps performed, we have that

  • In the frequency domain, the authors of 25 papers compute the Power Spectral Density (PSD) of the signal, usually through Fast Fourier Transform (FFT) or Welch’s method [86,88,89,92,93,94,97,104,109,111,114,123,124,136,138,139,140,144,145,154,160,161,162,171,177];

  • In the time–frequency domain, wavelet transform-based methods are employed in 8 studies [89,122,129,141,147,154,158,163];

  • In the spatial domain, CSP-based approaches are applied in 17 articles [90,92,93,96,99,100,110,111,134,135,137,146,150,151,153,168,175]. Variations of the CSP are found in [132], which considers local mean decomposition CSP, and [146], which exploits filter bank CSP, among others;

  • In the temporal domain, statistical features such as the standard deviation, skewness, kurtosis, entropy, and energy are considered [89,141,183] (see the sketch after this list).
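
The following minimal sketch illustrates such temporal-domain descriptors; the exact feature set and windowing are illustrative assumptions and do not reproduce the configurations of [89,141,183].

```python
import numpy as np
from scipy.stats import kurtosis, skew

def temporal_features(epoch, n_bins=32):
    """Per-channel standard deviation, skewness, kurtosis, histogram entropy, and energy.

    epoch : ndarray of shape (n_channels, n_samples) for a single MI trial
    """
    feats = [epoch.std(axis=1), skew(epoch, axis=1), kurtosis(epoch, axis=1)]

    # Shannon entropy of the amplitude histogram, computed channel by channel
    entropies = []
    for channel in epoch:
        counts, _ = np.histogram(channel, bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]
        entropies.append(-(p * np.log2(p)).sum())
    feats.append(np.asarray(entropies))

    feats.append((epoch ** 2).sum(axis=1))  # signal energy
    return np.concatenate(feats)
```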

Notice that most of the authors combine features from multiple domains in order to obtain a final, more robust feature vector able to improve the classification accuracy.

Nearly all the reviewed articles work with different combinations of hand-crafted features, but recently deep features extracted by convolutional neural networks (CNNs) have also been considered [110,115,126,131,143].

With respect to feature selection, different methods are applied, such as ICA [146], joint mutual information [90,133], generalized sparse discriminant analysis (employed to perform feature reduction and classification simultaneously) [113,168], and sequential backward floating selection techniques [135].
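
As a simplified illustration of mutual-information-based feature selection (a univariate ranking, not the joint mutual information criterion of [90,133]), a scikit-learn sketch could look as follows; the feature matrix and the number of retained features are hypothetical.

```python
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def select_features(X, y, k=10):
    """Keep the k features sharing the most mutual information with the MI labels.

    X : ndarray of shape (n_trials, n_features); y : ndarray of shape (n_trials,)
    """
    selector = SelectKBest(score_func=mutual_info_classif, k=k)
    X_reduced = selector.fit_transform(X, y)
    return X_reduced, selector.get_support(indices=True)  # reduced matrix and kept feature indices
```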

To reduce the data dimensionality, the traditional channel set C{3,4,z}, located over the central cortical area, is often considered [92,147,148,158,160,175]. However, in several cases, the choice of the electrodes mainly depends on the used device [86,109,112,141,144]. Electrodes positioned over the fronto-central and central-parietal areas are also frequently taken into account. Electrode subsets are also added to the C{3,4,z} group: C{1,2} [102,142], FC{3,4} [154], and Fpz and Pz [162]. Daeglau et al. [117] selected the electrode set constituted by Cz and CP{1,z,2}.

Moreover, different groups of eight electrodes are selected by some of the reported studies: FC{1,2} + C{3,z,4} + CP{1,2} + Pz [98], F{3,4} + C{3,z,4} + T{7,8} + Pz [97], C{1,3,z} + CP{1,5} + FC{1,5} + P3 [171], FC{5,6} + C{1,2,3,4} + CP{5,6} [93], and Fp{1,2} + Fz + C{z,3,4} + O{1,2} [157]. Instead, groups of 9 electrodes have been considered by [103,140] (C{z,1,2,3,4} + CP{1,2} + FC{1,2}), while 10 electrodes have been selected by [101] (C{1,2,3,4,5,6} + FC{3,4} + CP{3,4}).
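
In practice, this kind of channel reduction amounts to indexing the data matrix with the desired electrode labels, as in the short sketch below; the montage list is a generic example and not the layout of a specific device.

```python
import numpy as np

# Hypothetical montage, ordered as the rows of the EEG data matrix
MONTAGE = ["Fp1", "Fp2", "F3", "F4", "C3", "Cz", "C4", "P3", "P4", "O1", "O2"]

def pick_channels(eeg, names=("C3", "Cz", "C4")):
    """Return the rows of eeg (n_channels, n_samples) corresponding to the requested electrodes."""
    idx = [MONTAGE.index(name) for name in names]
    return eeg[idx, :]
```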

5.3.3. Classification and Data Analysis

As a final processing step, classification and data analysis are performed to provide a specific assessment of the EEG data or to allow the correct feedback execution.

Concerning the reviewed papers, notice that the main strategies employed can be divided into (i) traditional Machine Learning (ML) models, (ii) DL architectures, (iii) other supervised learning techniques considering ensemble and transfer learning approaches with the possible additional application of evolutionary algorithms, and (iv) statistical analysis, quality assessment, and functional connectivity.

The strategies are distributed as depicted in Figure 7. Notice that details on the proposed ML and DL models have been reported, while there are no details for the other two categories, due to the great variety of approaches.

Figure 7. Radial graphic depicting the distribution of the classification and data analysis techniques employed by the reviewed papers.

Besides some papers that employ different strategies at the same time [89,101,115,119,121,122,134] or multiple ML techniques [89,111,133,135,144,146,152,157,183], the authors usually prefer to concentrate on a specific technique.

The traditional ML models seem to be the most used. Among them, the Linear Discriminant Analysis (LDA) classifier appears to be the most frequently adopted, being employed in its basic version [89,90,93,101,104,111,117,119,134,150,151,152,153,158], in combination with Common Spatial Pattern (CSP) [96], through multiple LDA models combined via fuzzy integration, through optimal feature selection and classification with generalized sparse LDA [168], or with confidence levels assigned via particle swarm optimization [128]. Likewise, the Support Vector Machine (SVM) classifier is widely employed [86,89,99,111,121,122,123,129,135,140,146,152,154,155,157,183].
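
As an example of the CSP + LDA combination, the following sketch builds a cross-validated pipeline with MNE-Python and scikit-learn; the epoch shape, number of CSP components, and fold count are illustrative assumptions, and the code is not taken from the cited works.

```python
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

def evaluate_csp_lda(epochs, labels, n_components=4, folds=5):
    """Cross-validated accuracy of a CSP + LDA classifier.

    epochs : ndarray of shape (n_trials, n_channels, n_samples), band-pass filtered beforehand
    labels : ndarray of shape (n_trials,), e.g., left- vs. right-hand MI
    """
    clf = Pipeline([
        ("csp", CSP(n_components=n_components, log=True)),  # spatial filters + log-variance features
        ("lda", LinearDiscriminantAnalysis()),
    ])
    scores = cross_val_score(clf, epochs, labels, cv=folds)
    return scores.mean(), scores.std()
```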

The K-Nearest Neighbor (KNN) model is used by a good number of works [89,102,111,122,133,135,146,183], while the other techniques, i.e., Naive Bayes (NB) [89,111], Parzen Window [135], Multi-Layer Perceptron (MLP) [109,145,183], decision tree [89,133,183], Random Forest (RF) [157,183], neural network (NN) [111,161], Logistic Regression (LR) [100,144], and Quadratic Discriminant Analysis (QDA) [144], have been employed by a restricted number of works, usually together with other techniques.

Considering the DL approaches, the most used architectures are based on convolutional neural networks (CNNs) [112,121,123,126,131,142,147,163], sometimes combined with Long Short Term Memory (LSTM) layers [112,126,131]. Ref. [110] provided analyses by employing 1DCNN, 2DCNN, and a one-dimensional multi-scale convolutional neural network (1DMSCNN). Instead, [143] specified the use of pre-trained CNNs, i.e., AlexNet, ResNet50, and InceptionV3. Finally, two works used a back-propagation NN [92] and an autoregressive model [120].
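
A compact convolutional baseline in the spirit of the CNN approaches cited above is sketched next in PyTorch; the layer sizes, kernel lengths, and input shape are assumptions for illustration and do not reproduce any specific published architecture.

```python
import torch
import torch.nn as nn

class MiniMICNN(nn.Module):
    """Small 1D CNN mapping an MI epoch (n_channels, n_samples) to class logits."""

    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12),  # temporal convolution
            nn.BatchNorm1d(16),
            nn.ELU(),
            nn.AvgPool1d(4),
            nn.Conv1d(16, 32, kernel_size=11, padding=5),
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, n_channels, n_samples)
        return self.classifier(self.features(x).squeeze(-1))

# Example: logits = MiniMICNN()(torch.randn(4, 8, 500))
```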

The Other AI (artificial intelligence) techniques depicted in Figure 7 refer to miscellaneous approaches that use ensemble techniques [121], transfer learning models [143], or other supervised learning approaches such as an adaptive Riemannian classifier [130], eXtreme Gradient Boosting (XGBoost) [115], an echo state network with a genetic algorithm for parameter optimization and Gaussian readouts to create direction preferences [136], a fuzzy integral with particle swarm optimization [134], a multi-objective grey wolf optimization twin support vector machine [132], and a Markov switching model [137].

Finally, a great number of statistical analysis, quality assessment, and functional connectivity studies, as well as other MI detection techniques, fall under the Other analysis label.

The applied strategies may be summarized as follows:

  • A t-test analysis has been applied to provide alpha wave testing for comparison between systems [163], to compare average ERDs derived from different devices [162], and to compare different types of experiment [97] and experimental conditions [171];

  • Questionnaire analyses have also been performed for quality assessment, by considering the opinions given by the subjects concerning a specific device [97], by employing the Quebec User Evaluation of Satisfaction with Assistive Technology test to evaluate patients’ satisfaction [88], and for subject MI ability assessment [113];

  • Correlation analyses have been used to compare different electrode types [159] and to quantify subjects’ intent [115];

  • MI detection through ERD only [138] or multivariate distribution [160];

  • Analysis of the change in functional connectivity [103,119].

Other analyses performed by the reviewed works are the BCI-use success rate assessment based on the beta power rebound threshold [88], learning vector quantization to predict character control [124], a two-command certainty evaluation algorithm proposed by [122], the use of the SPSS tool for statistical analysis [89], and the application of a transfer rate metric to evaluate an asynchronous real-world BCI [118].

Table 5 summarizes the information related to the classification and other analyses presented by works reporting comparisons with benchmark datasets (Section 5.4) and/or having their proprietary datasets available upon request or present in public repositories.

Table 5.

Summary information on works presenting classification and other analyses that compare their results on benchmark datasets or on their own datasets, available upon request or published. An asterisk (*) has been applied to all the datasets that will be detailed in Section 5.4, at their first appearance.

Reference | Dataset and Experimental Paradigm | Classification and/or Other Analyses | Performance of the Best Method | Online and/or Offline
Tang et al. [110] Benchmark dataset: BCI Competition IV dataset 2b *.
Own dataset: not available. Five subjects performed an experiment consisting of 90 repetitions of each of the MI tasks (left/right hand).
Classification task: binary (left vs. right hand).
DBN, DWT-LSTM, 1DCNN, 2DCNN and one-dimensional multi-scale convolutional neural network (1DMSCNN).
Measures: average accuracy with 1DMSCNN.
Validation strategy: dataset division in training and test sets according to the 4:1 ratio.
BCI Competition IV dataset 2b (offline analysis): 82.61%.
Own dataset (online analysis): accuracy for each subject 76.78%, 91.78%, 70.00%.
both
Guan, Zhao, and Yang [133] Benchmark dataset: BCI Competition IV dataset 2a *, BCI Competition III dataset IIIa *.
Own dataset: available upon request. Seven subjects performed imagination of shoulder flexion, extension, and abduction. The acquisitions were repeated for 20 trials, each lasting 5 s of activity plus 5–7 s of rest.
Classification tasks: one-vs.-one, one-vs.-rest.
1. Subject-specific decision tree (SSDT) framework with filter geodesic minimum distance to Riemannian mean (FGMDRM).
2. Feature extraction algorithm combining semisupervised joint mutual information with general discriminate analysis (SJGDA) to reduce the dimension of vectors in the Riemannian tangent plane and classification with KNN.
Measures: average accuracy and mean kappa value.
Validation strategy: k-fold cross validation.
BCI Competition IV dataset 2a:
- SSDT-FGMDRM 10-fold cross-validation average accuracy left vs. rest 82.00%, right vs. rest 81.28%, foot vs. rest 81.51%, tongue vs. rest 83.95%. SSDT-FGMDRM mean kappa value 0.589.
- SJGDA 10-fold cross-validation mean accuracy left vs. rest 84.3%, right vs. rest 83.54%, foot vs. rest 82.11%, tongue vs. rest 85.23%. SJGDA mean kappa value 0.607.
- SJGDA 10-fold cross-validation mean accuracy on left vs. right 79.41%, left vs. feet 87.14%, left vs. tongue 86.51%, right vs. feet 86.75%, right vs. tongue 87.00%, feet vs. tongue 82.04%.
BCI Competition III dataset IIIa: 5-fold cross-validation mean accuracy of 82.78%.
Own dataset: 5-fold cross-validation mean accuracy (rounded values taken from the provided bar graphics) flexion vs. rest 90.00%, extension vs. rest 80.00%, abduction vs. rest a bit more than 90.00% and flexion vs. extension 90.00%, flexion vs. abduction 95.00%, extension vs. abduction 90.00%.
offline
Peterson et al. [168] Benchmark dataset: BCI competition III dataset IVa * and BCI competition IV dataset 2b.
Own dataset: 11 subjects performed imagination of dominant hand grasping and a resting condition in four runs constituted by 20 trials per class.
Classification task: binary (rest vs. dominant hand grasping).
Optimal feature selection and classification contemporaneously performed through generalized sparse LDA.
Measures: average accuracy (reported best).
Validation strategy: 10 × 10 fold cross validation.
BCI competition III: 90.94 (±1.06)%.
BCI competition IV: 81.23 (±2.46)%.
Own dataset: 82.26 (±2.98)%.
offline
Yang, Nguyen, and Chung [147] Benchmark dataset: None
Own dataset: available upon request. Six subjects, 10 trials of right hand grasping imagination for 5 s. The experiment was repeated for 10 runs. Notice that SSVEP tasks were also included.
Classification task: multi-class both MI and SSVEP.
CNN.
Measures: best average accuracy (MI task).
Own dataset: 91.73 (±1.55)%.
offline
Freer, Deligianni, and Yang [130] Benchmark dataset: BCI competition IV dataset 2a.
Own dataset: three subjects performed a paradigm without and with feedback. A total of 20 trials for each of the four conditions (left/right hand, both hands/feet) are performed per run with MI of 2–3 s.
Classification task: multi-class (4 classes).
Adaptive Riemannian classifier.
Measures: accuracy.
BCI Competition IV dataset 2a: lower than 50%.
Own dataset: lower than 50%.
both
Barria et al. [88] Benchmark dataset: None
Own dataset: available (https://www.clinicaltrials.gov/ct2/show/NCT04995367 accessed on 31 January 2023). Five subjects, five phases: calibration, real movement, stationary therapy, MI detection with visual stimulation, and MI detection with visual and haptic stimulation. Apart from the first phases, the protocol consisted of 10 s alternations of a knee flexion task and rest.
Classification task: None.
Other analyses:
- Control of an ankle exoskeleton through knee flexion.
- Analysis of the success rate in using the BCI, based on the beta power rebound threshold.
- Quebec User Evaluation of Satisfaction with Assistive Technology test to evaluate patients’ satisfaction.
None. offline
Peterson et al. [113] Benchmark dataset: None
Own dataset *: https://github.com/vpeterson/MI-OpenBCI (accessed on 31 January 2023).
Classification task: binary (rest vs. dominant hand grasping).
Generalized sparse discriminant analysis is used for both feature selection and classification.
Other analyses:
- The motor imagery ability of a single subject has been assessed through the KVIQ-10 questionnaire.
- Analyses of temporal and frequency information.
Measures: average accuracy (extracted from bar plot).
Own dataset: around 85% with Penalized Time–Frequency Band Common Spatial Pattern (PTFBCSP).
both
Shajil, Sasikala, and Arunnagiri [143] Benchmark dataset: BCI competition IV dataset 2a.
Own dataset: nine subjects performed 80 trials per MI conditions: left and right hand.
Classification task: binary. AlexNet, ResNet50, and InceptionV3 (pre-trained CNN models) plus transfer learning. Measures: best average accuracy.
BCI competition IV dataset 2a: InceptionV3 82.78 (±4.87)%.
Own dataset: InceptionV3 83.79 (±3.49)%.
offline
Zhang et al. [115] Benchmark dataset: used 10 subjects of Physionet EEG Motor Movement/Imagery Dataset *.
Own dataset: seven subjects. Five conditions: eyes closed, left/right hand, both hands/feet paradigm (as for the benchmark dataset).
Classification task: multi-class on five conditions. RNN, CNN, RNN + CNN. Measures: average accuracy. (Precision, Recall, F1, AUC and confusion matrix for both Physionet and own dataset are also provided).
Validation:
- Benchmark dataset divided into training (21,000 samples) and test sets (7000 samples).
- Own dataset divided into training (25,920 samples) and test sets (8640 samples) for each subject.
Benchmark dataset: best model RNN+CNN 95.53% average accuracy.
Own dataset: best model RNN+CNN 94.27% average accuracy.
offline
Mwata et al. [126] Benchmark dataset: EEG BCI dataset *.
Own dataset: four subjects. Experimental conditions: right and left hand, and the neutral action.
Classification task: multi-class on three conditions (neutral, left/right with corresponding robot command forward, backward, and neutral). Hybrid CNN-LSTM model.
Other analyses: Report different subject-combinations based-analysis.
Measures: average accuracy.
Validation strategy: 10-fold cross validation.
Benchmark dataset: 79.2%.
Own dataset: 84.69%.
online
Apicella et al. [111] Benchmark dataset: None.
Own dataset: 17 subjects. Motor task consists of maintaining attention focused only on (i) the squeeze movement (attentive-subject trial), or (ii) a concurrent distractor task (distracted-subject trial); in both trials, the participant must perform the squeeze-ball movement (three sessions, 30 trials per session). Total epochs: 4590. Half of the epochs were collected during the attentive-subject trials and were labeled as belonging to the first class. The remaining part was acquired during the distracted-subject trials and was labeled as belonging to the second class.
Classification task: binary (MI during attention vs. MI during distraction).
KNN, SVM, ANN, LDA, NB.
Measures: average accuracy (also provide precision, recall and F1 measure).
Validation strategy: 10-fold cross validation.
Own dataset: k-NN 92.8 (±1.6)%.
offline
Alanis and Gutiérrez [119] Benchmark dataset: None.
Own dataset: available upon request, two subjects, four conditions: left or right hand, both hands, move up and down both feet. Five runs of forty trials.
Classification tasks: binary one-vs.-rest. LDA classifier using features extracted by BCI2000.
Other analyses: graph theory metrics to understand the differences in functional brain connectivity.
Best binary classification for both subjects: right hand open/close vs. rest. No classification results reported. online
Mahmood et al. [123] Benchmark dataset: None.
Own dataset: available upon request, four subjects. Experimental conditions: eyes closed, left/right hand, pedal pressing.
Classification tasks: multiclass (4 classes). Population-based approach. SVM and CNN classifiers. Measures: average accuracy.
Validation strategy: 5-fold cross validation. CNN real-time accuracy: 89.65% and 93.22% for Ag/AgCl and FMNEs electrodes, respectively.
both

For each table entry, the (i) reference work, (ii) the benchmark datasets (which are detailed in Section 5.4) and their own dataset with a minimal description of the considered experimental paradigm, (iii) the classification tasks and models (if any) and other analyses (if any), (iv) the performance of the best reported model comprising the evaluation measures, the validation strategies, and the results on each of the employed datasets, and (v) if the system was tested online and/or offline, are reported.

Notice that the analyses are performed by considering the subjects one at a time, if not otherwise stated. Moreover, an asterisk (*) has been applied to all the datasets that will be detailed in the following Section 5.4, at their first appearance.

The fields of application of the reported works are very diverse; however, the classification tasks usually involve the binary classification related to the MI of hands grasping, opening/closing, or moving.

Moreover, most of the works report accuracy (usually higher than 70% for both binary and multi-class tasks) as the only performance measure to evaluate the proposed classification models, and the validation strategy is not always specified.

Finally, most of the entries present offline analyses, and just a few try to work in an online modality.
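
To mitigate the reliance on accuracy as the sole measure, a validation sketch such as the following, which assumes a generic scikit-learn classifier and a pre-computed feature matrix, reports several complementary measures together with an explicit cross-validation strategy.

```python
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import StratifiedKFold, cross_validate

def evaluate(clf, X, y, folds=10, seed=0):
    """Stratified k-fold cross-validation reporting accuracy, macro-F1, and Cohen's kappa."""
    cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
    scoring = {"accuracy": "accuracy",
               "f1_macro": "f1_macro",
               "kappa": make_scorer(cohen_kappa_score)}
    results = cross_validate(clf, X, y, cv=cv, scoring=scoring)
    return {name: results[f"test_{name}"].mean() for name in scoring}
```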

5.4. Dataset and Experimental Paradigms

Of the 84 cited papers, 7 report that their proprietary datasets are available upon request [93,111,119,123,133,146,174], and only [113], which provides the MI-OpenBCI dataset, offers a publicly available resource.

Some articles present links to their data, which, however, do not seem to be accessible [126,134].

Instead, the publicly available third-party datasets, i.e., the BCI Competition III dataset IIIa [133], the BCI Competition III dataset IVa [168], the BCI Competition IV datasets 2a [130,133,143] and 2b [110], the EEG Motor Movement/Imagery Dataset [115], and the EEG BCI dataset [126], have been employed by the reviewed papers as benchmarks before proprietary data testing or to directly test the proposed approaches.

Table 6 presents a summary of the publicly available datasets, which will be described in detail in the following subsections. Notice that the reported citations in the Dataset field refer only to the publications presenting the dataset description or required by the dataset authors. Links to the repositories, brief information on the used technologies, and the experimental paradigms are summarized in the remaining fields.

Table 6.

Summary of the publicly available datasets employed by some of the reviewed papers. In the Dataset field are reported only the citations directly related to the dataset publication.

Dataset | Link | Device | Experimental Paradigm
BCI Competition III dataset IIIa [196] | https://www.bbci.de/competition/iii/ | Neuroscan, 64 channel EEG amplifier (wired) | cue-based left/right hand, foot, tongue MI
BCI Competition III dataset IVa [196] | https://www.bbci.de/competition/iii/ | BrainAmps and 128 channel ECI cap (wired) | cue-based left/right hand, right foot MI
BCI Competition IV dataset 2a [197] | https://www.bbci.de/competition/iv/ | 22 electrodes (wired) | cue-based MI-BCI left/right hand, both feet, tongue MI
BCI Competition IV dataset 2b [197] | https://www.bbci.de/competition/iv/ | 3 electrodes (wired) | cue-based MI-BCI left/right hand MI
EEG Motor Movement/Imagery Dataset [42,198] | https://physionet.org/content/eegmmidb/1.0.0/ | 64 electrodes (wired) | cue-based motor execution/imagination left/right fist and both feet/fists opening/closing
MI-OpenBCI [113] | https://github.com/vpeterson/MI-OpenBCI | OpenBCI Cyton and Daisy Module, Electrocap System II, 15 electrodes (wearable) | cue-based dominant hand grasping MI
EEG BCI dataset [199] | https://figshare.com/collections/A_large_electroencephalographic_motor_imagery_dataset_for_electroencephalographic_brain_computer_interfaces/3917698 | EEG-1200 JE-921A EEG system, 19 electrodes (wired) | left/right hand, left/right leg, tongue, and finger MI

All links accessed on 31 January 2023.

Considering the other papers, notice that the experimental paradigms usually concern the motor imagination of left/right hand/fist [86,90,93,94,95,96,99,101,110,112,115,119,120,123,128,129,130,131,134,135,137,138,141,142,143,146,147,148,149,150,151,153,154,158,161,163,172,183], both hands [99,119,120,130,142,172], dominant or single hand movements [87,102,113,160,168], finger tapping [175], shoulder flexion, extension, and abduction [132,133], the motion of upper/lower limbs [91,97,103,104,145,177], foot/feet movement [88,100,114,119,123,129,130,142,148,172], pedaling [98,140,152], tongue movement [129], game character/robot/machinery movement control [105,106,107,108,109,117,118,119,121,122,124,125,126,127,156,174], and generic motor intention [89,155].

Peculiar experimental conditions are presented by [144,157]. Tiwari et al. [144] propose the imagination of eight cognitive tasks, i.e., forward, backward, left, right, hungry, food, water, and sleep, with the perspective of developing an efficient assistive tool for disabled people. Angrisani et al. [157] design a complex experimental paradigm of performed and imagined soft ball squeeze, dorsiflexion of the ankle, flex-extension of the forearm, finger mobilization by clenching a clothespin, and flex-extension of the leg, to validate their BCI instrumentation.

Besides the experimental paradigms considered by the reviewed works, it is interesting to have an overview of the subjects involved in the experiments.

Excluding [115,134], which employ third parties’ datasets, framework proposals [87,106], and simulated environments [105], the information on the subjects involved in the remaining 79 works can be summarized as follows:

  • A total of 7/79 papers do not provide any information regarding the involved subjects;

  • A total of 36/79 papers specify the biological gender of the subjects and, in most cases, report the number of male and female participants;

  • A total of 50/79 papers recruited healthy subjects, while only 5 considered patients affected by specific pathologies;

  • A total of 21/79 papers present information regarding the previous experience of the subjects with EEG, BCI, or MI-based experiments;

  • A total of 35/79 papers report no information on the participants’ age, while the other works consider subjects aged around 20–30 years. Only [88,93,112] recruited adults older than 30 years, up to a maximum of 60 years of age;

  • Almost 50% of the works reporting information on the subjects perform their experiment on a maximum of 5 participants; 27% recruit a maximum of 10 subjects, and very few works consider more than 20 participants. A detailed infographic is depicted in Figure 8.

Figure 8. Number of subjects recruited by the remaining 79 works presenting information on the matter. The first number (bold black) represents the subject number range, while the second number (bold gray) is the percentage considering the total number of works. For example, 48% of the works presenting subject information recruit a maximum of five participants.

Finally, notice that of the 84 papers, 39 present an ethical statement regarding the approval of the proposed experiment, and 33 confirm that written or oral informed consent was given by the subjects.

5.4.1. BCI Competition III Dataset IIIa

The BCI Competition III dataset IIIa [196] collects the data recorded from three subjects while performing a cue-based experiment of MI tasks, i.e., left/right hand, foot, or tongue movement randomly presented in six runs of 40 trials each.

The EEG signals have been acquired on 60 electrodes (the montage is depicted in the official dataset description available at https://www.bbci.de/competition/iii/desc_IIIa.pdf accessed on 31 January 2023) through a wired 64-channel Neuroscan device (250 Hz sampling rate). The reference and ground electrodes were placed on the left and right mastoids, respectively.

The output signal has been bandpass filtered (1–50 Hz), and the notch filter was enabled.

5.4.2. BCI Competition III Dataset IVa

The BCI Competition III dataset IVa [196] presents the recording of five subjects, who were asked to perform left/right hand and right foot MI according to two types of visual stimulations. Each subject had to respond to 280 cues.

BrainAmp amplifiers and 128 channel Ag/AgCl electrode cap from ECI were employed for the EEG signal collection. Notice that of the 128 channels, 118 were measured at positions compliant with the extended international 10-20 system (more details on the official dataset description available at https://www.bbci.de/competition/iii/desc_IVa.html accessed on 31 January 2023).

The acquired signals were bandpass (0.05–200 Hz) filtered and digitized at 1000 Hz with 16-bit (0.1 μV) accuracy. A data version with signals downsampled to 100 Hz was also provided.

5.4.3. BCI Competition IV Dataset 2a

The widely known BCI Competition IV dataset 2a [197] contains continuous multi-class motor imagery data acquired from nine subjects. The participants were asked to perform a cue-based MI-BCI considering the imagination of the left/right hand, both feet, and tongue movements. The subjects had to participate in two experimental sessions (six runs of 48 trials each) on different days.

Notice that the signals were recorded from 22 Ag/AgCl wired electrodes (please consult the original publication for the montage details). The reference and ground electrodes were placed on the mastoids, while two EOG channels were positioned to allow artifact removal. The signal presented a sampling rate of 250 Hz and was bandpass (0.5–100 Hz) and notch (50 Hz) filtered. Moreover, experts performed a manual screening of the signals and marked the trials containing artifacts.

5.4.4. BCI Competition IV Dataset 2b

The Session-to-Session Transfer of a Motor Imagery BCI under Presence of Eye Artifacts dataset, widely known as the BCI Competition IV dataset 2b [197], was intended to support the classification of EEG signals in the presence of ocular artifacts. Therefore, it collects EEG (on the C{3,4,z} electrodes) and electrooculogram signals previously acquired by [200]. Nine right-handed healthy subjects performed an experiment during which they were guided to produce specific ocular artifacts; the experiment also presented a cue-based MI-BCI paradigm consisting of the motor imagination of left- and right-hand movements. Two sessions (each of six runs with 10 trials per run) without feedback were recorded separately for each subject. Afterward, three sessions with online feedback were performed, with each session consisting of four runs of 40 trials. The feedback was in the form of a smiley changing expression and color depending on the outcome of the MI task.

5.4.5. EEG Motor Movement/Imagery Dataset

The EEG Motor Movement/Imagery Dataset available on the PhysioNet repository (https://physionet.org/content/eegmmidb/1.0.0/ accessed on 31 January 2023) [42,198] presents data acquired using a BCI2000 system and considering 64 EEG channels positioned according to the 10-10 International System (excluding electrodes Nz, F9, F10, FT9, FT10, A1, A2, TP9, TP10, P9, and P10). The recording was performed with a sampling rate of 160 Hz.

Each of the 109 subjects undertook a cue-based experiment of 14 runs consisting of two baseline recordings and three recordings per experimental task, i.e., MI and executed opening/closing of left/right fist, MI and executed opening/closing of both fists/feet.
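
The recordings of this dataset are distributed as EDF files and can be loaded, for instance, with MNE-Python as in the sketch below; the local file name is hypothetical, and the filtering and epoching values merely mirror common choices from Section 5.3.1 rather than an official processing recipe.

```python
import mne

# Hypothetical local path to one run of the EEG Motor Movement/Imagery Dataset
raw = mne.io.read_raw_edf("S001R04.edf", preload=True)

# Band-pass the mu/beta range and cut cue-locked epochs from the annotations
raw.filter(l_freq=8.0, h_freq=30.0)
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0.0, tmax=4.0,
                    baseline=None, preload=True)
X = epochs.get_data()  # (n_trials, n_channels, n_samples)
```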

5.4.6. MI-OpenBCI

The MI-OpenBCI dataset [113] presents the recordings acquired through a consumer-grade MI-BCI system based on OpenBCI Cyton and Daisy Module. The EEG signal was recorded by using the Electrocap System II. Moreover, an electromyographic signal was acquired through the OpenBCI Ganglion board connected to the Myoware sensors. OpenViBE and OpenBCI GUI were used for EEG and electromyographic data recording, respectively.

The experiment was approved by the Comité Asesor de Ética y Seguridad en el Trabajo Experimental and performed by 12 (four female) healthy right-handed subjects (mean age ± SD = 25.9 ± 3.7 years) who did not have any previous experiences with BCIs. The subjects gave their informed consent. Regarding the sole EEG wireless data recording (125 Hz sampling rate), the F{z,3,4,7,8}, C{z,3,4}, T{3,4,5,6}, P{z,3,4} electrodes were employed. The reference and ground electrodes were placed at the left/right ear lobes.

The experimental protocol consisted of a cue-based motor imagination of the dominant hand grasping and a resting condition. The tasks were presented randomly 20 times (4 s) each for four runs. A 20 s baseline was acquired before the protocol started.

5.4.7. EEG BCI Dataset

With the aim of providing a large and uniform dataset to design and evaluate processing strategies, [199] provides the EEG BCI dataset. The data were acquired after the approval of the Ethics Committees of Toros University and Mersin University in the city of Mersin (Turkey) and after having received the informed consent form signed by the subjects.

The data were acquired through a standard medical EEG station (EEG-1200 JE-921A EEG system, Nihon Kohden, Japan) considering 19 electrodes of the Electrocap System II, with varying sampling rates and an in-built filtering application.

The 13 healthy participants (five females and eight males, aged 20–35) were asked to perform different MI paradigms consisting of left/right hand, left/right leg, tongue, and finger movements.

6. Discussion

In this systematic review, 84 papers published in the last ten years have been deeply analyzed with the aim of answering the following main research question:

Are wearable technologies mature for EEG-based MI-BCI applications in uncontrolled environments?

However, several aspects should be considered to properly address this point, and thus, four sub-questions have been defined, as introduced in Section 1.

Important conclusions can be drawn to answer the first research sub-question,

RQ1: Is there a significant amount of EEG-based MI-BCI studies using wearable technologies in the literature that implies a promising future development of this research field, especially in uncontrolled environments and outside the medical and clinical settings?

by analyzing the results obtained through the extensive search initially performed considering different EEG, MI, and BCI related keyword combinations (Section 2.3) detailed in Table 2 and the final paper pool identified through the PRISMA flow (Figure 1).

In fact, according to the results reported in Table 2, the MI paradigm is particularly used in the EEG domain. About 26% of the works retrieved by considering only the EEG-based BCI keywords present MI paradigms, while only 0.71% present the use of wearable technologies for MI experiments.

Considering the timeline of the final filtered publications (Figure 2), most of the reviewed works have been published between 2019 and 2020, denoting the relatively new interest in wearable devices and a recent increase in the availability of these technologies to the EEG community.

Notice that around 20 different devices (Section 5.2) have been adopted in the applications reported by the 84 papers here analyzed, with different spatial resolutions (from 1 to 64 electrodes) and characteristics. This huge number of tools and variety of technical properties denote the increasing interest in this technology but make it difficult to qualitatively compare them.

Research directions have been clearly paved to provide new EEG-based MI-BCI wearable solutions with the aim of being employed for applications in heterogeneous and real-life environments.

One-third of the applications found in the reviewed literature are related to rehabilitation and assistive purposes, where the feedback of the systems plays a significant role in controlling external devices. Nearly 25% of the reviewed papers focus on methodological testing, presenting either new frameworks or particular signal processing and analysis techniques. Several works (15%) describe BCI applications developed in the entertainment field, while nearly 17% of contributions address the evaluation of new technical solutions and paradigm proposals.

Therefore, uncontrolled environments have been scrutinized by researchers to propose new EEG-based MI-BCI wearable solutions.

Another interesting finding on the research production of the last ten years regards the development and study of BCI life-cycle pipelines, which concerns the second sub-question.

RQ2: Are there common pipelines of processing that can be adopted from signal acquisition to feedback generation?

Data acquisition, signal preprocessing, feature engineering and channel selection, data classification and analyses, as well as the feedback modalities of the 84 papers considered here were extensively analyzed in this review and synthesized in Section 5, and in particular in Table 5 and Figure 7. To summarize this analysis and answer RQ2, we observe that the first crucial point, especially when using wireless technologies and wearable devices, is related to noise removal. To address this point, considering both internal and external noise sources, preprocessing algorithms can benefit from the knowledge of the frequencies of both the artifacts to be removed and the rhythms that should be preserved. However, noise and signal frequencies often overlap.

Three main approaches can be identified, namely the use of blind source separation techniques, filters in the frequency domain, and spatial filters. The first approach usually presents the application of ICA, which separates a mixed signal into different components, assuming the presence of different signal sources. The second type of preprocessing relies on filters in the frequency domain, especially Butterworth filters, to select the brain rhythms of interest, and at the same time, remove noise. The last type of approach applies spatial filtering, like CAR filtering, taking into account the spatial correlation of the brain waves.

Even if a unique strategy is not adopted by all the applications, the noise removal procedures are quite similar among the considered publications.

Concerning the feature engineering step (which usually follows the preprocessing one), the variability among different papers is relatively low. In general, the ERD/ERS phenomenon is widely studied considering μ and β rhythms, exploiting time, frequency, and time–frequency handcrafted features. Only in recent years have deep learning methods begun to be used to automatically extract features from the raw signals.

Moreover, when working with wearable devices, potentially in uncontrolled environments with low computational power, the reduction in data dimensionality is particularly important, especially considering the need to keep the number of inputs to the classification task low. To this end, besides traditional feature reduction and feature selection strategies, a good number of works focus only on specific channels, usually chosen over the central cortical area, which is coherent with the neuroscientific literature on MI.

The last processing step, represented by data analysis and classification, appears to be more heterogeneous with respect to the other ones, as depicted in Figure 7. In particular, regarding models used to perform different classification tasks, most of the works (about 54%) rely on traditional machine learning techniques, especially LDA and SVM; about 10% adopt ensemble techniques, transfer learning models, or other supervised learning approaches, while only 15% of them adopt deep learning strategies. It is also worth noting that 21% of the works do not face classification problems, but present statistical analysis, quality assessment, and functional connectivity studies.

From these considerations, we can conclude that there is low variability in the initial steps of the whole BCI life cycle, while higher variability can be identified for data analysis and classification. In particular, the adoption of deep learning models is at an early stage, and it does not yet clearly outperform traditional machine learning strategies.

A clear comparison and assessment of the efficacy of different classification models would benefit from the application of these strategies on data acquired using a similar experimental paradigm or on benchmark datasets.

This observation is strictly related to RQ3 and RQ4 sub-question answering. Starting from the third sub-question,

RQ3: Are there consolidated experimental paradigms for wearable EEG-based MI-BCI applications?

notice that the experimental paradigm adopted by most of the considered works (39 out of 84) concerns MI of left/right hand/fist movement. However, a large number of other MI paradigms is also considered: single hand/both hands, foot/feet or tongue movement, shoulder flexion, extension, and abduction, the motion of upper/lower limbs, pedaling, game character/robot/machinery movement control, generic motor intention, and even the imagination of cognitive tasks. Moreover, single task duration, task order, administration modality, and experimental settings are also very heterogeneous.

From this variety of MI paradigms, several datasets have been collected or employed by the authors of the reviewed papers, allowing them to answer the last research sub-question:

RQ4: Are there datasets available for the research community to properly compare classification models and data analysis?

Considering data acquisition, 79 out of 84 works collect their own dataset, involving, in most cases (76%), fewer than 10 subjects. In particular, about 49% of these 79 works consider fewer than five participants. Notice that only seven proprietary datasets are available upon request. Moreover, among all the publicly available datasets reported in Table 6, which can be considered benchmarks, only one is acquired using wearable devices [113].

Among the 84 papers considered, only 9 adopted these benchmark datasets to evaluate the proposed models, of which 8 employed the third-party datasets acquired using wired systems.

Furthermore, notice that even if the same benchmark dataset is adopted, the classification tasks may vary, ranging from different types of binary classification, i.e., one-vs.-one (for instance, left versus right hand) or one-vs.-rest (for example, right-hand imagined movement versus resting state), to multiclass classification with between three and five classes. Classification models and their performance, as well as other types of analysis, are reported in Table 5 only for those works (13 out of 84) that present results either on benchmark datasets or on available proprietary ones, making the proposed analysis reproducible. As a final important note, among these 13 publications, only 5 declare having performed online analysis.

Considering the answers to the provided sub-questions, the main research question concerning the maturity of EEG-based MI-BCI applications in uncontrolled environments can be addressed.

Having a closer look at the applications reported by the reviewed papers, it seems that most of them pertain to the medical and rehabilitation fields and are mostly employed in controlled environments. However, the EEG-based MI-BCI systems using wearable technologies in real-life scenarios seem to provide reliable assistance to their users and to be well received in the case of assistive employment. They also seem promising in the case of entertainment, gaming, and other applications. The scenario of wearable devices available in the market is wide, also offering a huge variability in terms of electrodes, features, and costs. Even if several different computational models have been presented in the analyzed literature with promising results, the lack of reference experimental paradigms and of publicly available and validated benchmark datasets acquired using wearable devices makes the assessment of model performance and the feasibility of real-time applications difficult to establish. It remains unclear whether the proposed strategies, often tested offline on wired benchmark datasets, can be effectively translated into online real-life wearable contexts.

Many concerns remain regarding the ethical aspects that permeate the use of these systems in environments managed by experts and in consumer-grade platforms. Concerning this point, note that among the 84 considered works, only 39 provide an ethical statement on the approval of the performed experiments.

7. Conclusions and Future Perspectives

The interest in EEG-based MI-BCI systems using wearable technologies has been rising in the last few years. Moreover, very different devices have been used in the analyzed studies for very diverse applications.

The experimental paradigms concerning MI tasks usually involve the motor imagination of left- and right-hand movements, even though new paradigms have been presented to address the specific needs of patients and researchers. Therefore, numerous datasets have been collected to meet these demands. However, most of them are not publicly available, and testing is usually performed on recordings acquired through wired devices.

Overall, the typical steps of the BCI life-cycle appear to be maintained by most of the analyzed works. However, it is not entirely clear whether strategies applied to offline wired benchmark datasets can be translated into an online wireless environment.

An example that may clarify this point regards the pervasive use of ICA for signal preprocessing, which is usually performed in offline analyses. In fact, due to its methodological aspects, ICA requires an attentive evaluation of the outputted components and the identification of the artifactual ones. New ICA-based strategies have been recently proposed [201,202] to provide real-time usage of such methodologies. Therefore, future works should focus on the assessment of specific techniques developed for online analyses and concerning all the data processing steps.
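
For reference, an offline ICA-based cleaning step of the kind discussed here can be written with MNE-Python as sketched below; the number of components and the presence of a dedicated EOG channel for automatic component selection are assumptions, and real-time variants such as those in [201,202] would require a different implementation.

```python
from mne.preprocessing import ICA

def remove_ocular_artifacts(raw, n_components=15, eog_channel="EOG001"):
    """Fit ICA on a filtered copy of the recording and project out EOG-related components.

    raw : an mne.io.Raw object that includes the assumed EOG channel.
    """
    # ICA is sensitive to slow drifts, so fit it on a 1 Hz high-pass filtered copy
    raw_for_ica = raw.copy().filter(l_freq=1.0, h_freq=None)
    ica = ICA(n_components=n_components, random_state=42)
    ica.fit(raw_for_ica)

    # Flag components correlated with the EOG channel and remove them from the data
    eog_indices, _ = ica.find_bads_eog(raw_for_ica, ch_name=eog_channel)
    ica.exclude = eog_indices
    return ica.apply(raw.copy())
```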

Other concerns pertain to the beginning and end of the BCI life-cycle, i.e., how the non-stationarity of the EEG signal is handled and the responsiveness of the system.

In fact, the performance of EEG-based BCIs is heavily influenced by the variations due to signal non-stationarity, especially during trial-to-trial and session-to-session transfers [203] and when transitioning from the training to the feedback phase [116,204]. However, most of the available studies provide insufficient information regarding the time between the system training phase and its real-time application. The reliability of the BCI in a real-world scenario should increase if the test phase shows good performance even when carried out long after the training phase. Therefore, future works should consider these aspects to guarantee the applicability of BCIs in real-life contexts.

The reliability of these systems is also dependent on their responsiveness, which becomes particularly important in self-paced BCIs [205]. Feedback should be provided almost instantly to the users, who are usually trained to perform specific mental tasks [206].

The responsiveness concerning the classification and feedback time, as well as the users’ proficiency, should be documented in works concerning real-time BCIs.

Regarding other future research directions, two main fields can be identified. On the one hand, edge computing is fast evolving to improve data processing speed in real-time applications. For example, [207] overviews adaptive edge computing in wearable biomedical devices (in general, not only EEG ones), highlighting the pathway from wearable sensors to their application through intelligent learning. The authors state the following:

The ultimate goal toward smart wearable sensing with edge computing capabilities relies on a bespoke platform embedding sensors, front-end circuit interface, neuromorphic processor and memristive devices.

Furthermore, [208] investigates the possibility of addressing the drawbacks of wearable devices with edge computing.

The other frontier research field regards the application of quantum computing to BCI. Although efforts are only at the initial stage, some hybrid applications of quantum computing and BCI have been found, as reviewed by [209]. Recently, the authors in [210,211] discuss Quantum Brain Networks, a new interdisciplinary field integrating knowledge and methods from neurotechnology, artificial intelligence, and quantum computing. In [211], brain signals are detected utilizing electrodes placed on the scalp of a person who learns how to produce the required mental activity to issue instructions to rotate and measure a qubit, proposing an approach to interface the brain with quantum computers.

Abbreviations

The following abbreviations are used in this manuscript:

1DMSCNN One-Dimensional Multi-Scale Convolutional Neural Network
AI Artificial intelligence
ANN Artificial neural network
AUC Area Under the Curve
BCI Brain–computer interface
CAR Common Average Reference
CNN Convolutional neural network
CSP Common Spatial Pattern
DBN Deep Belief Network
DL Deep learning
ECG Electrocardiogram
ECoG Electrocorticography
EEG Electroencephalography
EMG Electromyography
EOG Electrooculography
ERD Event-Related Desynchronization
ERP Event-Related Potentials
ErrP Error-Related Potential
ERS Event-Related Synchronization
FFT Fast Fourier Transform
FGMDRM Filter geodesic minimum distance to Riemannian mean
FMRI Functional Magnetic Resonance Imaging
ICA Independent Component Analysis
KNN K-Nearest Neighbor
LDA Linear Discriminant Analysis
LPA Left pre-auricular point
LR Logistic Regression
LSTM Long-Short Term Memory
MI Motor imagery
ML Machine Learning
MLP Multi-Layer Perceptron
MSCNN Multi-Scale Convolutional Neural Network
NB Naive Bayes
NN Neural network
PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PSD Power Spectral Density
PTFBCSP Penalized Time–Frequency Band Common Spatial Pattern
QDA Quadratic Discriminant Analysis
RF Random forest
RNN Recurrent neural network
RPA Right pre-auricular point
RQ Research question
SJGDA Semisupervised Joint mutual information with General Discriminate Analysis
SNR Signal to Noise Ratio
SSDT Subject specific decision tree
SSVEP Steady-state visual evoked potential
SVM Support vector machine
TES Transcranial electrical stimulation
VR Virtual reality
XGBoost Extreme Gradient Boosting

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/s23052798/s1. Table S1: table reporting detailed notes on the 84 reviewed papers. The notes are organized to provide a clear reference to the papers and to follow the core section of the review (Section 5), analyzing (i) field of applications, (ii) employed technologies, (iii) signal processing and analysis methodologies, (iv) BCI feedback, and (v) dataset information.

Author Contributions

Conceptualization, A.S., M.C., S.C. and F.G.; methodology, A.S., M.C., S.C. and F.G.; validation, A.S., M.C., S.C. and F.G.; formal analysis, A.S., M.C., S.C. and F.G.; investigation, A.S., M.C., S.C. and F.G.; resources, A.S., M.C., S.C. and F.G.; data curation, A.S., M.C., S.C. and F.G.; writing—original draft preparation, A.S., M.C., S.C. and F.G.; writing—review and editing, A.S., M.C., S.C. and F.G.; visualization, A.S., M.C., S.C. and F.G.; supervision, A.S., M.C., S.C. and F.G. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This research received no external funding.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1.Millett D. Hans Berger: From psychic energy to the EEG. Perspect. Biol. Med. 2001;44:522–542. doi: 10.1353/pbm.2001.0070. [DOI] [PubMed] [Google Scholar]
  • 2.Shih J.J., Krusienski D.J., Wolpaw J.R. Brain-computer interfaces in medicine. Mayo Clin. Proc. 2012;87:268–279. doi: 10.1016/j.mayocp.2011.12.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Kögel J., Schmid J.R., Jox R.J., Friedrich O. Using brain–computer interfaces: A scoping review of studies employing social research methods. BMC Med. Ethics. 2019;20:1–17. doi: 10.1186/s12910-019-0354-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Al-Saegh A., Dawwd S.A., Abdul-Jabbar J.M. Deep learning for motor imagery EEG-based classification: A review. Biomed. Signal Process. Control. 2021;63:102172. doi: 10.1016/j.bspc.2020.102172. [DOI] [Google Scholar]
  • 5.Page M.J., McKenzie J.E., Bossuyt P.M., Boutron I., Hoffmann T.C., Mulrow C.D., Shamseer L., Tetzlaff J.M., Akl E.A., Brennan S.E., et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021;10:1–11. doi: 10.1186/s13643-021-01626-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Srinivasan R., Nunez P. Electroencephalography. In: Ramachandran V., editor. Encyclopedia of Human Behavior. 2nd ed. Academic Press; San Diego, CA, USA: 2012. pp. 15–23. [DOI] [Google Scholar]
  • 7.Zhang J., Yin Z., Chen P., Nichele S. Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review. Inf. Fusion. 2020;59:103–126. doi: 10.1016/j.inffus.2020.01.011. [DOI] [Google Scholar]
  • 8.Rojas G.M., Alvarez C., Montoya C.E., de la Iglesia-Vayá M., Cisternas J.E., Gálvez M. Study of resting-state functional connectivity networks using EEG electrodes position as seed. Front. Neurosci. 2018;12:235. doi: 10.3389/fnins.2018.00235. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Craik A., He Y., Contreras-Vidal J.L. Deep learning for electroencephalogram (EEG) classification tasks: A review. J. Neural Eng. 2019;16:031001. doi: 10.1088/1741-2552/ab0ab5. [DOI] [PubMed] [Google Scholar]
  • 10.Hosseini M.P., Hosseini A., Ahi K. A Review on Machine Learning for EEG Signal Processing in Bioengineering. IEEE Rev. Biomed. Eng. 2020;14:204–218. doi: 10.1109/RBME.2020.2969915. [DOI] [PubMed] [Google Scholar]
  • 11.LaRocco J., Le M.D., Paeng D.G. A systemic review of available low-cost EEG headsets used for drowsiness detection. Front. Neuroinform. 2020;14:553352. doi: 10.3389/fninf.2020.553352. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Wan X., Zhang K., Ramkumar S., Deny J., Emayavaramban G., Ramkumar M.S., Hussein A.F. A review on electroencephalogram based brain computer interface for elderly disabled. IEEE Access. 2019;7:36380–36387. doi: 10.1109/ACCESS.2019.2903235. [DOI] [Google Scholar]
  • 13.Oostenveld R., Praamstra P. The five percent electrode system for high-resolution EEG and ERP measurements. Clin. Neurophysiol. 2001;112:713–719. doi: 10.1016/S1388-2457(00)00527-7. [DOI] [PubMed] [Google Scholar]
  • 14.Paranjape R., Mahovsky J., Benedicenti L., Koles Z. The electroencephalogram as a biometric; Proceedings of the Canadian Conference on Electrical and Computer Engineering 2001. Conference Proceedings (Cat. No. 01TH8555); Toronto, ON, Canada. 13–16 May 2001; Piscataway, NJ, USA: IEEE; 2001. pp. 1363–1366. [Google Scholar]
  • 15.Xygonakis I., Athanasiou A., Pandria N., Kugiumtzis D., Bamidis P.D. Decoding motor imagery through common spatial pattern filters at the EEG source space. Comput. Intell. Neurosci. 2018;2018:7957408. doi: 10.1155/2018/7957408. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Roy Y., Banville H., Albuquerque I., Gramfort A., Falk T.H., Faubert J. Deep learning-based electroencephalography analysis: A systematic review. J. Neural Eng. 2019;16:051001. doi: 10.1088/1741-2552/ab260c. [DOI] [PubMed] [Google Scholar]
  • 17.Bigdely-Shamlo N., Mullen T., Kothe C., Su K.M., Robbins K.A. The PREP pipeline: Standardized preprocessing for large-scale EEG analysis. Front. Neuroinform. 2015;9:16. doi: 10.3389/fninf.2015.00016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Gramfort A., Strohmeier D., Haueisen J., Hämäläinen M.S., Kowalski M. Time-frequency mixed-norm estimates: Sparse M/EEG imaging with non-stationary source activations. NeuroImage. 2013;70:410–422. doi: 10.1016/j.neuroimage.2012.12.051. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Lee H., Choi S. Group nonnegative matrix factorization for EEG classification; Proceedings of the Artificial Intelligence and Statistics; Virtual. 16–18 April 2009; pp. 320–327. [Google Scholar]
  • 20.Zhang D., Yao L., Chen K., Wang S., Chang X., Liu Y. Making sense of spatio-temporal preserving representations for EEG-based human intention recognition. IEEE Trans. Cybern. 2019;50:3033–3044. doi: 10.1109/TCYB.2019.2905157. [DOI] [PubMed] [Google Scholar]
  • 21.Vaid S., Singh P., Kaur C. EEG signal analysis for BCI interface: A review; Proceedings of the 2015 fifth International Conference on Advanced Computing & Communication Technologies; Washington, DC, USA. 21–22 February 2015; Piscataway, NJ, USA: IEEE; 2015. pp. 143–147. [Google Scholar]
  • 22.Blinowska K., Durka P. Wiley Encyclopedia of Biomedical Engineering. Wiley; Hoboken, NJ, USA: 2006. Electroencephalography (eeg) [Google Scholar]
  • 23.McFarland D.J., Miner L.A., Vaughan T.M., Wolpaw J.R. Mu and beta rhythm topographies during motor imagery and actual movements. Brain Topogr. 2000;12:177–186. doi: 10.1023/A:1023437823106. [DOI] [PubMed] [Google Scholar]
  • 24.Decety J. The neurophysiological basis of motor imagery. Behav. Brain Res. 1996;77:45–52. doi: 10.1016/0166-4328(95)00225-1. [DOI] [PubMed] [Google Scholar]
  • 25.Beisteiner R., Höllinger P., Lindinger G., Lang W., Berthoz A. Mental representations of movements. Brain potentials associated with imagination of hand movements. Electroencephalogr. Clin. Neurophysiol. Potentials Sect. 1995;96:183–193. doi: 10.1016/0168-5597(94)00226-5. [DOI] [PubMed] [Google Scholar]
  • 26.Jeannerod M. Mental imagery in the motor context. Neuropsychologia. 1995;33:1419–1432. doi: 10.1016/0028-3932(95)00073-C. [DOI] [PubMed] [Google Scholar]
  • 27.Lotze M., Halsband U. Motor imagery. J.-Physiol.-Paris. 2006;99:386–395. doi: 10.1016/j.jphysparis.2006.03.012. [DOI] [PubMed] [Google Scholar]
  • 28.McAvinue L.P., Robertson I.H. Measuring motor imagery ability: A review. Eur. J. Cogn. Psychol. 2008;20:232–251. doi: 10.1080/09541440701394624. [DOI] [Google Scholar]
  • 29.Pfurtscheller G., Neuper C. Motor imagery activates primary sensorimotor area in humans. Neurosci. Lett. 1997;239:65–68. doi: 10.1016/S0304-3940(97)00889-6. [DOI] [PubMed] [Google Scholar]
  • 30.Jeon Y., Nam C.S., Kim Y.J., Whang M.C. Event-related (De) synchronization (ERD/ERS) during motor imagery tasks: Implications for brain–computer interfaces. Int. J. Ind. Ergon. 2011;41:428–436. doi: 10.1016/j.ergon.2011.03.005. [DOI] [Google Scholar]
  • 31.Munzert J., Lorey B., Zentgraf K. Cognitive motor processes: The role of motor imagery in the study of motor representations. Brain Res. Rev. 2009;60:306–326. doi: 10.1016/j.brainresrev.2008.12.024. [DOI] [PubMed] [Google Scholar]
  • 32.Dose H., Møller J.S., Iversen H.K., Puthusserypady S. An end-to-end deep learning approach to MI-EEG signal classification for BCIs. Expert Syst. Appl. 2018;114:532–542. doi: 10.1016/j.eswa.2018.08.031. [DOI] [Google Scholar]
  • 33.Pfurtscheller G., Da Silva F.L. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999;110:1842–1857. doi: 10.1016/S1388-2457(99)00141-8. [DOI] [PubMed] [Google Scholar]
  • 34.Pfurtscheller G., Brunner C., Schlögl A., Da Silva F.L. Mu rhythm (de) synchronization and EEG single-trial classification of different motor imagery tasks. NeuroImage. 2006;31:153–159. doi: 10.1016/j.neuroimage.2005.12.003. [DOI] [PubMed] [Google Scholar]
  • 35.Nam C.S., Jeon Y., Kim Y.J., Lee I., Park K. Movement imagery-related lateralization of event-related (de)synchronization (ERD/ERS): Motor-imagery duration effects. Clin. Neurophysiol. 2011;122:567–577. doi: 10.1016/j.clinph.2010.08.002. [DOI] [PubMed] [Google Scholar]
  • 36.Dai M., Zheng D., Na R., Wang S., Zhang S. EEG classification of motor imagery using a novel deep learning framework. Sensors. 2019;19:551. doi: 10.3390/s19030551. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Wriessnegger S.C., Brunner C., Müller-Putz G.R. Frequency specific cortical dynamics during motor imagery are influenced by prior physical activity. Front. Psychol. 2018;9:1976. doi: 10.3389/fpsyg.2018.01976. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Demarin V., Morović S. Neuroplasticity. Period. Biol. 2014;116:209–211. [Google Scholar]
  • 39.Kaiser V., Bauernfeind G., Kreilinger A., Kaufmann T., Kübler A., Neuper C., Müller-Putz G.R. Cortical effects of user training in a motor imagery based brain–computer interface measured by fNIRS and EEG. Neuroimage. 2014;85:432–444. doi: 10.1016/j.neuroimage.2013.04.097. [DOI] [PubMed] [Google Scholar]
  • 40.Reaves J., Flavin T., Mitra B., Mahantesh K., Nagaraju V. Assessment and Application of EEG: A Literature Review. Appl. Bioinf. Comput. Biol. 2021;7:2. [Google Scholar]
  • 41.Wolpaw J.R., Birbaumer N., McFarland D.J., Pfurtscheller G., Vaughan T.M. Brain–computer interfaces for communication and control. Clin. Neurophysiol. 2002;113:767–791. doi: 10.1016/S1388-2457(02)00057-3. [DOI] [PubMed] [Google Scholar]
  • 42.Schalk G., McFarland D.J., Hinterberger T., Birbaumer N., Wolpaw J.R. BCI2000: A general-purpose brain–computer interface (BCI) system. IEEE Trans. Biomed. Eng. 2004;51:1034–1043. doi: 10.1109/TBME.2004.827072. [DOI] [PubMed] [Google Scholar]
  • 43.Aggarwal S., Chugh N. Signal processing techniques for motor imagery brain computer interface: A review. Array. 2019;1:100003. doi: 10.1016/j.array.2019.100003. [DOI] [Google Scholar]
  • 44.Abiri R., Borhani S., Sellers E.W., Jiang Y., Zhao X. A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng. 2019;16:011001. doi: 10.1088/1741-2552/aaf12e. [DOI] [PubMed] [Google Scholar]
  • 45.Gasparini F., Cazzaniga E., Saibene A. Inner speech recognition through electroencephalographic signals. arXiv. 2022. arXiv:2210.06472 [Google Scholar]
  • 46.Ramele R., Villar A.J., Santos J.M. EEG Waveform Analysis of P300 ERP with Applications to Brain Computer Interfaces. Brain Sci. 2018;8:199. doi: 10.3390/brainsci8110199. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Friman O., Volosyak I., Graser A. Multiple Channel Detection of Steady-State Visual Evoked Potentials for Brain-Computer Interfaces. IEEE Trans. Biomed. Eng. 2007;54:742–750. doi: 10.1109/TBME.2006.889160. [DOI] [PubMed] [Google Scholar]
  • 48.Baek H.J., Chang M.H., Heo J., Park K.S. Enhancing the Usability of Brain-Computer Interface Systems. Comput. Intell. Neurosci. 2019;2019:12. doi: 10.1155/2019/5427154. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Bhattacharyya S., Konar A., Tibarewala D.N. Motor imagery and error related potential induced position control of a robotic arm. IEEE/CAA J. Autom. Sin. 2017;4:639–650. doi: 10.1109/JAS.2017.7510616. [DOI] [Google Scholar]
  • 50.Kawala-Sterniuk A., Browarska N., Al-Bakri A., Pelc M., Zygarlicki J., Sidikova M., Martinek R., Gorzelanczyk E.J. Summary of over fifty years with brain–computer interfaces—A review. Brain Sci. 2021;11:43. doi: 10.3390/brainsci11010043. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Xu M., Han J., Wang Y., Jung T.P., Ming D. Implementing over 100 command codes for a high-speed hybrid brain–computer interface using concurrent P300 and SSVEP features. IEEE Trans. Biomed. Eng. 2020;67:3073–3082. doi: 10.1109/TBME.2020.2975614. [DOI] [PubMed] [Google Scholar]
  • 52.Ma T., Li H., Deng L., Yang H., Lv X., Li P., Li F., Zhang R., Liu T., Yao D., et al. The hybrid BCI system for movement control by combining motor imagery and moving onset visual evoked potential. J. Neural Eng. 2017;14:026015. doi: 10.1088/1741-2552/aa5d5f. [DOI] [PubMed] [Google Scholar]
  • 53.Duan F., Lin D., Li W., Zhang Z. Design of a multimodal EEG-based hybrid BCI system with visual servo module. IEEE Trans. Auton. Ment. Dev. 2015;7:332–341. doi: 10.1109/TAMD.2015.2434951. [DOI] [Google Scholar]
  • 54.Mane R., Chouhan T., Guan C. BCI for stroke rehabilitation: Motor and beyond. J. Neural Eng. 2020;17:041001. doi: 10.1088/1741-2552/aba162. [DOI] [PubMed] [Google Scholar]
  • 55.Khan M.A., Das R., Iversen H.K., Puthusserypady S. Review on motor imagery based BCI systems for upper limb post-stroke neurorehabilitation: From designing to application. Comput. Biol. Med. 2020;123:103843. doi: 10.1016/j.compbiomed.2020.103843. [DOI] [PubMed] [Google Scholar]
  • 56.Yang S., Li R., Li H., Xu K., Shi Y., Wang Q., Yang T., Sun X. Exploring the Use of Brain-Computer Interfaces in Stroke Neurorehabilitation. BioMed Res. Int. 2021;2021:12. doi: 10.1155/2021/9967348. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Möller J.C., Zutter D., Riener R. Technology-Based Neurorehabilitation in Parkinson’s Disease – A Narrative Review. Clin. Transl. Neurosci. 2021;5:23. doi: 10.3390/ctn5030023. [DOI] [Google Scholar]
  • 58.Miladinović A., Ajčević M., Busan P., Jarmolowska J., Silveri G., Deodato M., Mezzarobba S., Battaglini P.P., Accardo A. Evaluation of Motor Imagery-Based BCI methods in neurorehabilitation of Parkinson’s Disease patients; Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Montreal, QC, Canada. 20–24 July 2020; pp. 3058–3061. [DOI] [PubMed] [Google Scholar]
  • 59.Abdulkader S.N., Atia A., Mostafa M.S.M. Brain computer interfacing: Applications and challenges. Egypt. Inform. J. 2015;16:213–230. doi: 10.1016/j.eij.2015.06.002. [DOI] [Google Scholar]
  • 60.Kerous B., Skola F., Liarokapis F. EEG-based BCI and video games: A progress report. Virtual Real. 2018;22:119–135. doi: 10.1007/s10055-017-0328-x. [DOI] [Google Scholar]
  • 61.Sanchez-Fraire U., Parra-Vega V., Martinez-Peon D., Sepúlveda-Cervantes G., Sánchez-Orta A., Muñoz-Vázquez A.J. On the brain computer robot interface (bcri) to control robots. IFAC-PapersOnLine. 2015;48:154–159. doi: 10.1016/j.ifacol.2015.12.026. [DOI] [Google Scholar]
  • 62.Perrin X., Chavarriaga R., Colas F., Siegwart R., Millán J.d.R. Brain-coupled interaction for semi-autonomous navigation of an assistive robot. Robot. Auton. Syst. 2010;58:1246–1255. doi: 10.1016/j.robot.2010.05.010. [DOI] [Google Scholar]
  • 63.Balderas D., Ponce P., Lopez-Bernal D., Molina A. Education 4.0: Teaching the Basis of Motor Imagery Classification Algorithms for Brain-Computer Interfaces. Future Internet. 2021;13:202. doi: 10.3390/fi13080202. [DOI] [Google Scholar]
  • 64.Myrden A., Chau T. A passive EEG-BCI for single-trial detection of changes in mental state. IEEE Trans. Neural Syst. Rehabil. Eng. 2017;25:345–356. doi: 10.1109/TNSRE.2016.2641956. [DOI] [PubMed] [Google Scholar]
  • 65.Wolpaw J.R. Handbook of Clinical Neurology. Volume 110. Elsevier; Amsterdam, The Netherlands: 2013. Brain–computer interfaces; pp. 67–74. [DOI] [PubMed] [Google Scholar]
  • 66.Fiedler P., Fonseca C., Supriyanto E., Zanow F., Haueisen J. A high-density 256-channel cap for dry electroencephalography. Human Brain Mapping. 2022;43:1295–1308. doi: 10.1002/hbm.25721. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Casson A.J., Yates D.C., Smith S.J., Duncan J.S., Rodriguez-Villegas E. Wearable electroencephalography. IEEE Eng. Med. Biol. Mag. 2010;29:44–56. doi: 10.1109/MEMB.2010.936545. [DOI] [PubMed] [Google Scholar]
  • 68.Soufineyestani M., Dowling D., Khan A. Electroencephalography (EEG) technology applications and available devices. Appl. Sci. 2020;10:7453. doi: 10.3390/app10217453. [DOI] [Google Scholar]
  • 69.Hu L., Zhang Z. EEG Signal Processing and Feature Extraction. Springer; Berlin/Heidelberg, Germany: 2019. [Google Scholar]
  • 70.Li G.L., Wu J.T., Xia Y.H., He Q.G., Jin H.G. Review of semi-dry electrodes for EEG recording. J. Neural Eng. 2020;17:051004. doi: 10.1088/1741-2552/abbd50. [DOI] [PubMed] [Google Scholar]
  • 71.Casson A.J. Wearable EEG and beyond. Biomed. Eng. Lett. 2019;9:53–71. doi: 10.1007/s13534-018-00093-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Mihajlović V., Grundlehner B., Vullers R., Penders J. Wearable, wireless EEG solutions in daily life applications: What are we missing? IEEE J. Biomed. Health Inform. 2014;19:6–21. doi: 10.1109/JBHI.2014.2328317. [DOI] [PubMed] [Google Scholar]
  • 73.Blum S., Emkes R., Minow F., Anlauff J., Finke A., Debener S. Flex-printed forehead EEG sensors (fEEGrid) for long-term EEG acquisition. J. Neural Eng. 2020;17:034003. doi: 10.1088/1741-2552/ab914c. [DOI] [PubMed] [Google Scholar]
  • 74.You S., Cho B.H., Yook S., Kim J.Y., Shon Y.M., Seo D.W., Kim I.Y. Unsupervised automatic seizure detection for focal-onset seizures recorded with behind-the-ear EEG using an anomaly-detecting generative adversarial network. Comput. Methods Programs Biomed. 2020;193:105472. doi: 10.1016/j.cmpb.2020.105472. [DOI] [PubMed] [Google Scholar]
  • 75.Do Valle B.G., Cash S.S., Sodini C.G. Wireless behind-the-ear EEG recording device with wireless interface to a mobile device (iPhone/iPod touch); Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; Chicago, IL, USA. 26–30 August 2014; Piscataway, NJ, USA: IEEE; 2014. pp. 5952–5955. [DOI] [PubMed] [Google Scholar]
  • 76.Wang Y., Yin L., Bai Y., Liu S., Wang L., Zhou Y., Hou C., Yang Z., Wu H., Ma J., et al. Electrically compensated, tattoo-like electrodes for epidermal electrophysiology at scale. Sci. Adv. 2020;6:eabd0996. doi: 10.1126/sciadv.abd0996. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Wang H., Wang J., Chen D., Ge S., Liu Y., Wang Z., Zhang X., Guo Q., Yang J. Robust tattoo electrode prepared by paper-assisted water transfer printing for wearable health monitoring. IEEE Sens. J. 2022;22:3817–3827. doi: 10.1109/JSEN.2022.3141457. [DOI] [Google Scholar]
  • 78.Casson A.J., Abdulaal M., Dulabh M., Kohli S., Krachunov S., Trimble E. Seamless Healthcare Monitoring. Springer; Berlin/Heidelberg, Germany: 2018. Electroencephalogram; pp. 45–81. [Google Scholar]
  • 79.Palumbo A., Gramigna V., Calabrese B., Ielpo N. Motor-imagery EEG-based BCIs in wheelchair movement and control: A systematic literature review. Sensors. 2021;21:6285. doi: 10.3390/s21186285. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Udovičić G., Topić A., Russo M. Wearable technologies for smart environments: A review with emphasis on BCI; Proceedings of the 2016 24th International Conference on Software, Telecommunications and Computer Networks (SoftCOM); Split, Croatia. 22–24 September 2016; Piscataway, NJ, USA: IEEE; 2016. pp. 1–9. [Google Scholar]
  • 81.TajDini M., Sokolov V., Kuzminykh I., Shiaeles S., Ghita B. Wireless sensors for brain activity—A survey. Electronics. 2020;9:2092. doi: 10.3390/electronics9122092. [DOI] [Google Scholar]
  • 82.Portillo-Lara R., Tahirbegi B., Chapman C.A., Goding J.A., Green R.A. Mind the gap: State-of-the-art technologies and applications for EEG-based brain–computer interfaces. APL Bioeng. 2021;5:031507. doi: 10.1063/5.0047237. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Jamil N., Belkacem A.N., Ouhbi S., Lakas A. Noninvasive Electroencephalography Equipment for Assistive, Adaptive, and Rehabilitative Brain–Computer Interfaces: A Systematic Literature Review. Sensors. 2021;21:4754. doi: 10.3390/s21144754. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Gu X., Cao Z., Jolfaei A., Xu P., Wu D., Jung T.P., Lin C.T. EEG-based brain–computer interfaces (BCIs): A survey of recent studies on signal sensing technologies and computational intelligence approaches and their applications. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021;18:1645–1666. doi: 10.1109/TCBB.2021.3052811. [DOI] [PubMed] [Google Scholar]
  • 85.Neuper C., Pfurtscheller G. Brain-Computer Interfaces. Springer; Berlin/Heidelberg, Germany: 2009. Neurofeedback training for BCI control; pp. 65–78. [Google Scholar]
  • 86.Jiang Y., Hau N.T., Chung W.Y. Semiasynchronous BCI using wearable two-channel EEG. IEEE Trans. Cogn. Dev. Syst. 2017;10:681–686. doi: 10.1109/TCDS.2017.2716973. [DOI] [Google Scholar]
  • 87.Mattia D., Pichiorri F., Colamarino E., Masciullo M., Morone G., Toppi J., Pisotta I., Tamburella F., Lorusso M., Paolucci S., et al. The Promotoer, a brain–computer interface-assisted intervention to promote upper limb functional motor recovery after stroke: A study protocol for a randomized controlled trial to test early and long-term efficacy and to identify determinants of response. BMC Neurol. 2020;20:254. doi: 10.1186/s12883-020-01826-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 88.Barria P., Pino A., Tovar N., Gomez-Vargas D., Baleta K., Díaz C.A., Múnera M., Cifuentes C.A. BCI-Based Control for Ankle Exoskeleton T-FLEX: Comparison of Visual and Haptic Stimuli with Stroke Survivors. Sensors. 2021;21:6431. doi: 10.3390/s21196431. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Karakullukcu N., Yilmaz B. Detection of Movement Intention in EEG-Based Brain-Computer Interfaces Using Fourier-Based Synchrosqueezing Transform. Int. J. Neural Syst. 2022;32:2150059. doi: 10.1142/S0129065721500593. [DOI] [PubMed] [Google Scholar]
  • 90.Du Bois N., Bigirimana A.D., Korik A., Kéthina L.G., Rutembesa E., Mutabaruka J., Mutesa L., Prasad G., Jansen S., Coyle D. Neurofeedback with low-cost, wearable electroencephalography (EEG) reduces symptoms in chronic Post-Traumatic Stress Disorder. J. Affect. Disord. 2021;295:1319–1334. doi: 10.1016/j.jad.2021.08.071. [DOI] [PubMed] [Google Scholar]
  • 91.Looned R., Webb J., Xiao Z.G., Menon C. Assisting drinking with an affordable BCI-controlled wearable robot and electrical stimulation: A preliminary investigation. J. Neuroeng. Rehabil. 2014;11:51. doi: 10.1186/1743-0003-11-51. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Li Z., Yuan Y., Luo L., Su W., Zhao K., Xu C., Huang J., Pi M. Hybrid brain/muscle signals powered wearable walking exoskeleton enhancing motor ability in climbing stairs activity. IEEE Trans. Med. Robot. Bionics. 2019;1:218–227. doi: 10.1109/TMRB.2019.2949865. [DOI] [Google Scholar]
  • 93.Vourvopoulos A., Jorge C., Abreu R., Figueiredo P., Fernandes J.C., Bermudez i Badia S. Efficacy and brain imaging correlates of an immersive motor imagery BCI-driven VR system for upper limb motor rehabilitation: A clinical case report. Front. Hum. Neurosci. 2019;13:244. doi: 10.3389/fnhum.2019.00244. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 94.Kong W., Fu S., Deng B., Zeng H., Zhang J., Guo S. Embedded BCI Rehabilitation System for Stroke. J. Beijing Inst. Technol. 2019;28:35–41. [Google Scholar]
  • 95.Athanasiou A., Arfaras G., Xygonakis I., Kartsidis P., Pandria N., Kavazidi K.R., Astaras A., Foroglou N., Polyzoidis K., Bamidis P.D. Commercial BCI Control and functional brain networks in spinal cord injury: A proof-of-concept; Proceedings of the 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS); Thessaloniki, Greece. 22–24 June 2017; Piscataway, NJ, USA: IEEE; 2017. pp. 262–267. [Google Scholar]
  • 96.Simon C., Ruddy K.L. A wireless, wearable Brain-Computer Interface for neurorehabilitation at home; A feasibility study; Proceedings of the 2022 10th International Winter Conference on Brain-Computer Interface (BCI); Gangwon, Republic of Korea. 21–23 February 2022; Piscataway, NJ, USA: IEEE; 2022. pp. 1–6. [Google Scholar]
  • 97.Quiles E., Suay F., Candela G., Chio N., Jiménez M., Álvarez-Kurogi L. Low-cost robotic guide based on a motor imagery brain–computer interface for arm assisted rehabilitation. Int. J. Environ. Res. Public Health. 2020;17:699. doi: 10.3390/ijerph17030699. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 98.Cardoso V.F., Delisle-Rodriguez D., Romero-Laiseca M.A., Loterio F.A., Gurve D., Floriano A., Valadão C., Silva L., Krishnan S., Frizera-Neto A., et al. Effect of a Brain–Computer Interface Based on Pedaling Motor Imagery on Cortical Excitability and Connectivity. Sensors. 2021;21:2020. doi: 10.3390/s21062020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 99.Wang H., Bezerianos A. Brain-controlled wheelchair controlled by sustained and brief motor imagery BCIs. Electron. Lett. 2017;53:1178–1180. doi: 10.1049/el.2017.1637. [DOI] [Google Scholar]
  • 100.Lisi G., Hamaya M., Noda T., Morimoto J. Dry-wireless EEG and asynchronous adaptive feature extraction towards a plug-and-play co-adaptive brain robot interface; Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA); Stockholm, Sweden. 16–21 May 2016; Piscataway, NJ, USA: IEEE; 2016. pp. 959–966. [Google Scholar]
  • 101.Carrino F., Dumoulin J., Mugellini E., Abou Khaled O., Ingold R. A self-paced BCI system to control an electric wheelchair: Evaluation of a commercial, low-cost EEG device; Proceedings of the 2012 ISSNIP Biosignals and Biorobotics Conference: Biosignals and Robotics for Better and Safer Living (BRC); Manaus, Brazil. 9–11 January 2012; Piscataway, NJ, USA: IEEE; 2012. pp. 1–6. [Google Scholar]
  • 102.Gant K., Guerra S., Zimmerman L., Parks B.A., Prins N.W., Prasad A. EEG-controlled functional electrical stimulation for hand opening and closing in chronic complete cervical spinal cord injury. Biomed. Phys. Eng. Express. 2018;4:065005. doi: 10.1088/2057-1976/aabb13. [DOI] [Google Scholar]
  • 103.Gaxiola-Tirado J.A., Iáñez E., Ortíz M., Gutiérrez D., Azorín J.M. Effects of an exoskeleton-assisted gait motor imagery training in functional brain connectivity; Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Berlin, Germany. 23–27 July 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 429–432. [DOI] [PubMed] [Google Scholar]
  • 104.Khan M.J., Hong K.S., Naseer N., Bhutta M.R. Motor imagery performance evaluation using hybrid EEG-NIRS for BCI; Proceedings of the 2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE); Hangzhou, China. 28–30 July 2015; Piscataway, NJ, USA: IEEE; 2015. pp. 1150–1155. [Google Scholar]
  • 105.Freer D., Yang G.Z. MIndGrasp: A New Training and Testing Framework for Motor Imagery Based 3-Dimensional Assistive Robotic Control. arXiv. 2020. arXiv:2003.00369 [Google Scholar]
  • 106.Jameel H.F., Mohammed S.L., Gharghan S.K. Electroencephalograph-based wheelchair controlling system for the people with motor disability using advanced brainwear; Proceedings of the 2019 12th International Conference on Developments in eSystems Engineering (DeSE); Kazan, Russia. 7–10 October 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 843–848. [Google Scholar]
  • 107.Ketola E., Lloyd C., Shuhart D., Schmidt J., Morenz R., Khondker A., Imtiaz M. Lessons Learned from the Initial Development of a Brain Controlled Assistive Device; Proceedings of the 2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC); Virtual. 26–29 January 2022; Piscataway, NJ, USA: IEEE; 2022. pp. 0580–0585. [Google Scholar]
  • 108.Permana K., Wijaya S., Prajitno P. Controlled wheelchair based on brain computer interface using Neurosky Mindwave Mobile 2. Aip Conf. Proc. 2019;2168:020022. [Google Scholar]
  • 109.Priyatno S.B., Prakoso T., Riyadi M.A. Classification of motor imagery brain wave for bionic hand movement using multilayer perceptron. Sinergi. 2022;26:57–64. doi: 10.22441/sinergi.2022.1.008. [DOI] [Google Scholar]
  • 110.Tang X., Li W., Li X., Ma W., Dang X. Motor imagery EEG recognition based on conditional optimization empirical mode decomposition and multi-scale convolutional neural network. Expert Syst. Appl. 2020;149:113285. doi: 10.1016/j.eswa.2020.113285. [DOI] [Google Scholar]
  • 111.Apicella A., Arpaia P., Frosolone M., Moccaldi N. High-wearable EEG-based distraction detection in motor rehabilitation. Sci. Rep. 2021;11:5297. doi: 10.1038/s41598-021-84447-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 112.Garcia-Moreno F.M., Bermudez-Edo M., Rodríguez-Fórtiz M.J., Garrido J.L. A CNN-LSTM deep Learning classifier for motor imagery EEG detection using a low-invasive and low-Cost BCI headband; Proceedings of the 2020 16th International Conference on Intelligent Environments (IE); Madrid, Spain. 20–23 July 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 84–91. [Google Scholar]
  • 113.Peterson V., Galván C., Hernández H., Spies R. A feasibility study of a complete low-cost consumer-grade brain–computer interface system. Heliyon. 2020;6:e03425. doi: 10.1016/j.heliyon.2020.e03425. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 114.Tariq M., Trivailo P.M., Simic M. Motor imagery based EEG features visualization for BCI applications. Procedia Comput. Sci. 2018;126:1936–1944. doi: 10.1016/j.procs.2018.08.057. [DOI] [Google Scholar]
  • 115.Zhang X., Yao L., Sheng Q.Z., Kanhere S.S., Gu T., Zhang D. Converting your thoughts to texts: Enabling brain typing via deep feature learning of eeg signals; Proceedings of the 2018 IEEE international conference on pervasive computing and communications (PerCom); Athens, Greece. 19–23 March 2018; Piscataway, NJ, USA: IEEE; 2018. pp. 1–10. [Google Scholar]
  • 116.Chowdhury A., Raza H., Meena Y.K., Dutta A., Prasad G. Online covariate shift detection-based adaptive brain–computer interface to trigger hand exoskeleton feedback for neuro-rehabilitation. IEEE Trans. Cogn. Dev. Syst. 2017;10:1070–1080. doi: 10.1109/TCDS.2017.2787040. [DOI] [Google Scholar]
  • 117.Daeglau M., Wallhoff F., Debener S., Condro I.S., Kranczioch C., Zich C. Challenge accepted? Individual performance gains for motor imagery practice with humanoid robotic EEG neurofeedback. Sensors. 2020;20:1620. doi: 10.3390/s20061620. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 118.LaFleur K., Cassady K., Doud A., Shades K., Rogin E., He B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. J. Neural Eng. 2013;10:046003. doi: 10.1088/1741-2560/10/4/046003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 119.Alanis-Espinosa M., Gutiérrez D. On the assessment of functional connectivity in an immersive brain–computer interface during motor imagery. Front. Psychol. 2020;11:1301. doi: 10.3389/fpsyg.2020.01301. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 120.Xu B., Li W., He X., Wei Z., Zhang D., Wu C., Song A. Motor imagery based continuous teleoperation robot control with tactile feedback. Electronics. 2020;9:174. doi: 10.3390/electronics9010174. [DOI] [Google Scholar]
  • 121.Zhuang J., Geng K., Yin G. Ensemble learning based brain–computer interface system for ground vehicle control. IEEE Trans. Syst. Man, Cybern. Syst. 2019;51:5392–5404. doi: 10.1109/TSMC.2019.2955478. [DOI] [Google Scholar]
  • 122.Liu Y., Habibnezhad M., Jebelli H. Brain-computer interface for hands-free teleoperation of construction robots. Autom. Constr. 2021;123:103523. doi: 10.1016/j.autcon.2020.103523. [DOI] [Google Scholar]
  • 123.Mahmood M., Kwon S., Kim H., Kim Y.S., Siriaraya P., Choi J., Otkhmezuri B., Kang K., Yu K.J., Jang Y.C., et al. Wireless Soft Scalp Electronics and Virtual Reality System for Motor Imagery-Based Brain–Machine Interfaces. Adv. Sci. 2021;8:2101129. doi: 10.1002/advs.202101129. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124.Djamal E.C., Abdullah M.Y., Renaldi F. Brain computer interface game controlling using fast fourier transform and learning vector quantization. J. Telecommun. Electron. Comput. Eng. JTEC. 2017;9:71–74. [Google Scholar]
  • 125.Mitocaru A., Poboroniuc M.S., Irimia D., Baciu A. Comparison Between Two Brain Computer Interface Systems Aiming to Control a Mobile Robot; Proceedings of the 2021 International Conference on Electromechanical and Energy Systems (SIELMEN); Chisinau, Moldova. 7–8 October 2021; Piscataway, NJ, USA: IEEE; 2021. pp. 1–5. [Google Scholar]
  • 126.Mwata-Velu T., Ruiz-Pinales J., Rostro-Gonzalez H., Ibarra-Manzano M.A., Cruz-Duarte J.M., Avina-Cervantes J.G. Motor imagery classification based on a recurrent-convolutional architecture to control a hexapod robot. Mathematics. 2021;9:606. doi: 10.3390/math9060606. [DOI] [Google Scholar]
  • 127.Parikh D., George K. Quadcopter Control in Three-Dimensional Space Using SSVEP and Motor Imagery-Based Brain-Computer Interface; Proceedings of the 2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON); Vancouver, BC, Canada. 4–7 November 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 0782–0785. [Google Scholar]
  • 128.Wu S.L., Liu Y.T., Chou K.P., Lin Y.Y., Lu J., Zhang G., Chuang C.H., Lin W.C., Lin C.T. A motor imagery based brain–computer interface system via swarm-optimized fuzzy integral and its application; Proceedings of the 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE); Vancouver, BC, Canada. 24–29 July 2016; Piscataway, NJ, USA: IEEE; 2016. pp. 2495–2500. [Google Scholar]
  • 129.Abdulwahab S.S., Khleaf H.K., Jassim M.H. EEG Motor-Imagery BCI System Based on Maximum Overlap Discrete Wavelet Transform (MODWT) and cubic SVM. J. Phys. Conf. Ser. 2021;1973:012056. doi: 10.1088/1742-6596/1973/1/012056. [DOI] [Google Scholar]
  • 130.Freer D., Deligianni F., Yang G.Z. Adaptive Riemannian BCI for enhanced motor imagery training protocols; Proceedings of the 2019 IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN); Chicago, IL, USA. 19–22 May 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 1–4. [Google Scholar]
  • 131.Garcia-Moreno F.M., Bermudez-Edo M., Garrido J.L., Rodríguez-Fórtiz M.J. Reducing response time in motor imagery using a headband and deep learning. Sensors. 2020;20:6730. doi: 10.3390/s20236730. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 132.Guan S., Li J., Wang F., Yuan Z., Kang X., Lu B. Discriminating three motor imagery states of the same joint for brain–computer interface. PeerJ. 2021;9:e12027. doi: 10.7717/peerj.12027. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 133.Guan S., Zhao K., Yang S. Motor imagery EEG classification based on decision tree framework and Riemannian geometry. Comput. Intell. Neurosci. 2019;2019:5627156. doi: 10.1155/2019/5627156. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 134.Jawanjalkar A.R., Padole D.V. Development of soft computing technique for classification of EEG signal; Proceedings of the 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS); Coimbatore, India. 17–18 March 2017; Piscataway, NJ, USA: IEEE; 2017. pp. 1–6. [Google Scholar]
  • 135.Khan J., Bhatti M.H., Khan U.G., Iqbal R. Multiclass EEG motor-imagery classification with sub-band common spatial patterns. Eurasip J. Wirel. Commun. Netw. 2019;2019:174. doi: 10.1186/s13638-019-1497-y. [DOI] [Google Scholar]
  • 136.Kim H.H., Jeong J. Decoding electroencephalographic signals for direction in brain–computer interface using echo state network and Gaussian readouts. Comput. Biol. Med. 2019;110:254–264. doi: 10.1016/j.compbiomed.2019.05.024. [DOI] [PubMed] [Google Scholar]
  • 137.Lisi G., Rivela D., Takai A., Morimoto J. Markov switching model for quick detection of event related desynchronization in EEG. Front. Neurosci. 2018;12:24. doi: 10.3389/fnins.2018.00024. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 138.Lo C.C., Chien T.Y., Chen Y.C., Tsai S.H., Fang W.C., Lin B.S. A wearable channel selection-based brain–computer interface for motor imagery detection. Sensors. 2016;16:213. doi: 10.3390/s16020213. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 139.Mladenov T., Kim K., Nooshabadi S. Accurate motor imagery based dry electrode brain–computer interface system for consumer applications; Proceedings of the 2012 IEEE 16th International Symposium on Consumer Electronics; Harrisburg, PA, USA. 4–6 June 2012; Piscataway, NJ, USA: IEEE; 2012. pp. 1–4. [Google Scholar]
  • 140.Rodriguez-Ugarte M.D.l.S., Iáñez E., Ortiz-Garcia M., Azorín J.M. Effects of tDCS on real-time BCI detection of pedaling motor imagery. Sensors. 2018;18:1136. doi: 10.3390/s18041136. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 141.Riyadi M.A., Setiawan I., Amir A. EEG Multiclass Signal Classification Based on Subtractive Clustering-ANFIS and Wavelet Packet Decomposition; Proceedings of the 2021 International Conference on Electrical and Information Technology (IEIT); Malang, Indonesia. 14–15 September 2021; Piscataway, NJ, USA: IEEE; 2021. pp. 81–86. [Google Scholar]
  • 142.Shajil N., Mohan S., Srinivasan P., Arivudaiyanambi J., Arasappan Murrugesan A. Multiclass classification of spatially filtered motor imagery EEG signals using convolutional neural network for BCI based applications. J. Med. Biol. Eng. 2020;40:663–672. doi: 10.1007/s40846-020-00538-3. [DOI] [Google Scholar]
  • 143.Shajil N., Sasikala M., Arunnagiri A. Deep learning classification of two-class motor imagery EEG signals using transfer learning; Proceedings of the 2020 International Conference on e-Health and Bioengineering (EHB); Iasi, Romania. 29–30 October 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 1–4. [Google Scholar]
  • 144.Tiwari S., Goel S., Bhardwaj A. Machine learning approach for the classification of EEG signals of multiple imagery tasks; Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT); Kharagpur, India. 1–3 July 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 1–7. [Google Scholar]
  • 145.Triana Guzmán N., Orjuela-Cañón Á.D., Jutinico Alarcon A.L. Incremental training of neural network for motor tasks recognition based on brain–computer interface; Proceedings of the Iberoamerican Congress on Pattern Recognition; Havana, Cuba. 28–31 October 2019; Berlin/Heidelberg, Germany: Springer; 2019. pp. 610–619. [Google Scholar]
  • 146.Yang B., Tang J., Guan C., Li B. Motor imagery EEG recognition based on FBCSP and PCA; Proceedings of the International Conference on Brain Inspired Cognitive Systems; Xi’an, China. 7–8 July 2018; Berlin/Heidelberg, Germany: Springer; 2018. pp. 195–205. [Google Scholar]
  • 147.Yang D., Nguyen T.H., Chung W.Y. A Synchronized Hybrid Brain-Computer Interface System for Simultaneous Detection and Classification of Fusion EEG Signals. Complexity. 2020;2020:4137283. doi: 10.1155/2020/4137283. [DOI] [Google Scholar]
  • 148.Yusoff M.Z., Mahmoud D., Malik A.S., Bahloul M.R. Discrimination of four class simple limb motor imagery movements for brain–computer interface. Biomed. Signal Process. Control. 2018;44:181–190. [Google Scholar]
  • 149.Zhou B., Wu X., Lv Z., Zhang L., Zhang C. Independent component analysis combined with compressed sensing for EEG compression in BCI; Proceedings of the 2015 10th International Conference on Information, Communications and Signal Processing (ICICS); Singapore. 2–4 December 2015; Piscataway, NJ, USA: IEEE; 2015. pp. 1–4. [Google Scholar]
  • 150.Zich C., Schweinitz C., Debener S., Kranczioch C. Multimodal evaluation of motor imagery training supported by mobile EEG at home: A case report; Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics; Hong Kong, China. 9–12 October 2015; Piscataway, NJ, USA: IEEE; 2015. pp. 3181–3186. [Google Scholar]
  • 151.Verma P., Heilinger A., Reitner P., Grünwald J., Guger C., Franklin D. Performance investigation of brain–computer interfaces that combine EEG and fNIRS for motor imagery tasks; Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC); Bari, Italy. 6–9 October 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 259–263. [Google Scholar]
  • 152.Rodríguez-Ugarte M., Angulo-Sherman I., Iáñez E., Ortiz M., Azorín J. Preliminary study of pedaling motor imagery classification based on EEG signals; Proceedings of the 2017 International Symposium on Wearable Robotics and Rehabilitation (WeRob); Houston, TX, USA. 5–8 November 2017; Piscataway, NJ, USA: IEEE; 2017. pp. 1–2. [Google Scholar]
  • 153.Hirsch G., Dirodi M., Xu R., Reitner P., Guger C. Online classification of motor imagery using EEG and fNIRS: A hybrid approach with real time human–computer interaction; Proceedings of the International Conference on Human–Computer Interaction; Sibiu, Romania. 22–23 October 2020; Berlin/Heidelberg, Germany: Springer; 2020. pp. 231–238. [Google Scholar]
  • 154.Dehzangi O., Zou Y., Jafari R. Simultaneous classification of motor imagery and SSVEP EEG signals; Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER); San Diego, CA, USA. 6–8 November 2013; Piscataway, NJ, USA: IEEE; 2013. pp. 1303–1306. [Google Scholar]
  • 155.Cha K., Lee J., Kim H., Kim C., Lee S. Steady-State Somatosensory Evoked Potential based Brain-Computer Interface for Sit-to-Stand Movement Intention; Proceedings of the 2019 7th International Winter Conference on Brain-Computer Interface (BCI); Gangwon, Republic of Korea. 18–20 February 2019; Piscataway, NJ, USA: IEEE; 2019. pp. 1–3. [Google Scholar]
  • 156.Arfaras G., Athanasiou A., Pandria N., Kavazidi K.R., Kartsidis P., Astaras A., Bamidis P.D. Visual versus kinesthetic motor imagery for BCI control of robotic arms (Mercury 2.0); Proceedings of the 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS); Thessaloniki, Greece. 22–24 June 2017; Piscataway, NJ, USA: IEEE; 2017. pp. 440–445. [Google Scholar]
  • 157.Angrisani L., Arpaia P., Donnarumma F., Esposito A., Frosolone M., Improta G., Moccaldi N., Natalizio A., Parvis M. Instrumentation for Motor Imagery-based Brain Computer Interfaces relying on dry electrodes: A functional analysis; Proceedings of the 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC); Dubrovnik, Croatia. 25–28 May 2020; Piscataway, NJ, USA: IEEE; 2020. pp. 1–6. [Google Scholar]
  • 158.Bista S., Bikram Adhikari N. Performance Analysis of Tri-channel Active Electrode EEG Device Designed for Classification of Motor Imagery Brainwaves for Brain Computer Interface; Proceedings of the 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN); Greater Noida, India. 12–13 October 2018; Piscataway, NJ, USA: IEEE; 2018. pp. 662–667. [Google Scholar]
  • 159.Liao L.D., Wu S.L., Liou C.H., Lu S.W., Chen S.A., Chen S.F., Ko L.W., Lin C.T. A novel 16-channel wireless system for electroencephalography measurements with dry spring-loaded sensors. IEEE Trans. Instrum. Meas. 2014;63:1545–1555. doi: 10.1109/TIM.2013.2293222. [DOI] [Google Scholar]
  • 160.Lin C.L., Chu T.Y., Wu P.J., Wang C.A., Lin B.S. Design of wearable brain computer interface based on motor imagery; Proceedings of the 2014 Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing; Kitakyushu, Japan. 27–29 August 2014; Piscataway, NJ, USA: IEEE; 2014. pp. 33–36. [Google Scholar]
  • 161.Lin B.S., Pan J.S., Chu T.Y., Lin B.S. Development of a wearable motor-imagery-based brain–computer interface. J. Med. Syst. 2016;40:1–8. doi: 10.1007/s10916-015-0429-6. [DOI] [PubMed] [Google Scholar]
  • 162.Vourvopoulos A., Niforatos E., Giannakos M. EEGlass: An EEG-eyeware prototype for ubiquitous brain–computer interaction; Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers; London, UK. 9–13 September 2019; pp. 647–652. [Google Scholar]
  • 163.Zhang Y., Zhang X., Sun H., Fan Z., Zhong X. Portable brain–computer interface based on novel convolutional neural network. Comput. Biol. Med. 2019;107:248–256. doi: 10.1016/j.compbiomed.2019.02.023. [DOI] [PubMed] [Google Scholar]
  • 164.Advanced Brain Monitoring, Inc. B-Alert X-Series Wireless & Mobile EEG System. [(accessed on 3 October 2022)]. Available online: https://www.advancedbrainmonitoring.com/products/b-alert-x-series.
  • 165.bio-medical.com. BrainMaster Discovery 24E-24 Channel qEEG. [(accessed on 3 October 2022)]. Available online: https://bio-medical.com/brainmaster-discovery-24e-24-channel-qeeg.html.
  • 166.OpenBCI Online Store. Cyton Biosensing Board (8-Channels). [(accessed on 3 October 2022)]. Available online: https://shop.openbci.com/products/cyton-biosensing-board-8-channel?variant=38958638542.
  • 167.ANT Neuro. eego™ rt. [(accessed on 3 October 2022)]. Available online: https://www.ant-neuro.com/products/eego_rt.
  • 168.Peterson V., Wyser D., Lambercy O., Spies R., Gassert R. A penalized time-frequency band feature selection and classification procedure for improved motor intention decoding in multichannel EEG. J. Neural Eng. 2019;16:016019. doi: 10.1088/1741-2552/aaf046. [DOI] [PubMed] [Google Scholar]
  • 169.Neuroelectrics Enobio 20|Solutions|Neuroelectrics. [(accessed on 3 October 2022)]. Available online: https://www.neuroelectrics.com/solutions/enobio/20.
  • 170.Neuroelectrics Enobio 8|Solutions|Neuroelectrics. [(accessed on 3 October 2022)]. Available online: https://www.neuroelectrics.com/solutions/enobio/8.
  • 171.Venot T., Corsi M.C., Saint-Bauzel L., de Vico Fallani F. Towards multimodal BCIs: The impact of peripheral control on motor cortex activity and sense of agency; Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Virtual. 1–5 November 2021; Piscataway, NJ, USA: IEEE; 2021. pp. 5876–5879. [DOI] [PubMed] [Google Scholar]
  • 172.Abdalsalam E., Yusoff M.Z., Malik A., Kamel N.S., Mahmoud D. Modulation of sensorimotor rhythms for brain–computer interface using motor imagery with online feedback. Signal Image Video Process. 2018;12:557–564. doi: 10.1007/s11760-017-1193-5. [DOI] [Google Scholar]
  • 173.EMOTIV EMOTIV EPOC+ 14-Channel Wireless EEG Headset-EMOTIV. [(accessed on 3 October 2022)]. Available online: https://www.emotiv.com/epoc/
  • 174.Zhang S., Yuan S., Huang L., Zheng X., Wu Z., Xu K., Pan G. Human mind control of rat cyborg’s continuous locomotion with wireless brain-to-brain interface. Sci. Rep. 2019;9:1321. doi: 10.1038/s41598-018-36885-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 175.Zich C., De Vos M., Kranczioch C., Debener S. Wireless EEG with individualized channel layout enables efficient motor imagery training. Clin. Neurophysiol. 2015;126:698–710. doi: 10.1016/j.clinph.2014.07.007. [DOI] [PubMed] [Google Scholar]
  • 176.EMOTIV EPOC Flex-32-Channel Wireless EEG Device-EMOTIV. [(accessed on 3 October 2022)]. Available online: https://www.emotiv.com/epoc-flex/
  • 177.Paszkiel S., Dobrakowski P. Brain–computer technology-based training system in the field of motor imagery. IET Sci. Meas. Technol. 2020;14:1014–1018. doi: 10.1049/iet-smt.2019.0522. [DOI] [Google Scholar]
  • 178.EMOTIV EMOTIV EPOC X-14 Channel Wireless EEG Headset—EMOTIV. [(accessed on 3 October 2022)]. Available online: https://www.emotiv.com/epoc-x/
  • 179.g.tec medical engineering GmbH Austria g.USBAMP RESEARCH|EEG/Biosignal Amplifier | g.tec Medical Engineering GmbH Medical Engineering. [(accessed on 3 October 2022)]. Available online: https://www.gtec.at/product/gusbamp-research/
  • 180.ab medica s.p.a. Innovative Medical Devices-Robotics-Telemedicine. [(accessed on 3 October 2022)]. Available online: https://www.abmedica.it/
  • 181.EMOTIV Insight Brainwear® 5 Channel Wireless EEG Headset-EMOTIV. [(accessed on 3 October 2022)]. Available online: https://www.emotiv.com/insight/
  • 182.NeuroSky MindWave. [(accessed on 3 October 2022)]. Available online: https://store.neurosky.com/pages/mindwave.
  • 183.Kevric J., Subasi A. The impact of Mspca signal de-noising in real-time wireless brain computer interface system. Southeast Eur. J. Soft Comput. 2015;4:43–47. doi: 10.21533/scjournal.v4i1.90. [DOI] [Google Scholar]
  • 184.Muse Muse 2: Brain Sensing Headband-Technology Enhanced Meditation. [(accessed on 3 October 2022)]. Available online: https://choosemuse.com/muse-2/
  • 185.g.tec medical engineering GmbH Austria g.Nautilus Multiple Biosignal Recording|g.tec Medical Engineering GmbH. [(accessed on 3 October 2022)]. Available online: https://www.gtec.at/product/gnautilus-multipurpose/
  • 186.g.tec medical engineering GmbH Austria g.Nautilus PRO Wearable EEG|g.tec Medical Engineering GmbH. [(accessed on 3 October 2022)]. Available online: https://www.gtec.at/product/gnautilus-pro/
  • 187.g.tec medical engineering GmbH Austria g.NAUTILUS RESEARCH|Wearable EEG Headset|g.tec Medical Engineering GmbH. [(accessed on 3 October 2022)]. Available online: https://www.gtec.at/product/gnautilus-research/
  • 188.Compumedics Neuroscan. NuAmps. [(accessed on 3 October 2022)]. Available online: https://compumedicsneuroscan.com/nuamps-4/
  • 189.Cognionics, Inc. Quick-20 Dry EEG Headset (2). [(accessed on 3 October 2022)]. Available online: http://www.cognionics.com/index.php/32-uncategorised/94-quick-20-dry-headset-2.
  • 190.Neuroelectrics Starstim® tES-EEG Systems|Neuroelectrics. [(accessed on 3 October 2022)]. Available online: https://www.neuroelectrics.com/solutions/starstim.
  • 191.NEUROSPEC AG. SynAmps 2/RT. [(accessed on 3 October 2022)]. Available online: https://www.neurospec.com/Products/Details/1017/synamps-2rt.
  • 192.Rashmi C., Shantala C. EEG artifacts detection and removal techniques for brain computer interface applications: A systematic review. Int. J. Adv. Technol. Eng. Explor. 2022;9:354. [Google Scholar]
  • 193.Fatourechi M., Bashashati A., Ward R.K., Birch G.E. EMG and EOG artifacts in brain computer interface systems: A survey. Clin. Neurophysiol. 2007;118:480–494. doi: 10.1016/j.clinph.2006.10.019. [DOI] [PubMed] [Google Scholar]
  • 194.Seok D., Lee S., Kim M., Cho J., Kim C. Motion artifact removal techniques for wearable EEG and PPG sensor systems. Front. Electron. 2021;2:685513. doi: 10.3389/felec.2021.685513. [DOI] [Google Scholar]
  • 195.Delorme A., Makeig S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods. 2004;134:9–21. doi: 10.1016/j.jneumeth.2003.10.009. [DOI] [PubMed] [Google Scholar]
  • 196.Blankertz B., Muller K.R., Krusienski D.J., Schalk G., Wolpaw J.R., Schlogl A., Pfurtscheller G., Millan J.R., Schroder M., Birbaumer N. The BCI competition III: Validating alternative approaches to actual BCI problems. IEEE Trans. Neural Syst. Rehabil. Eng. 2006;14:153–159. doi: 10.1109/TNSRE.2006.875642. [DOI] [PubMed] [Google Scholar]
  • 197.Tangermann M., Müller K.R., Aertsen A., Birbaumer N., Braun C., Brunner C., Leeb R., Mehring C., Miller K.J., Mueller-Putz G., et al. Review of the BCI competition IV. Front. Neurosci. 2012;6:55. doi: 10.3389/fnins.2012.00055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 198.Goldberger A.L., Amaral L.A., Glass L., Hausdorff J.M., Ivanov P.C., Mark R.G., Mietus J.E., Moody G.B., Peng C.K., Stanley H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation. 2000;101:e215–e220. doi: 10.1161/01.CIR.101.23.e215. [DOI] [PubMed] [Google Scholar]
  • 199.Kaya M., Binli M.K., Ozbay E., Yanar H., Mishchenko Y. A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces. Sci. Data. 2018;5:180211. doi: 10.1038/sdata.2018.211. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 200.Leeb R., Lee F., Keinrath C., Scherer R., Bischof H., Pfurtscheller G. Brain–computer communication: Motivation, aim, and impact of exploring a virtual apartment. IEEE Trans. Neural Syst. Rehabil. Eng. 2007;15:473–482. doi: 10.1109/TNSRE.2007.906956. [DOI] [PubMed] [Google Scholar]
  • 201.Lin X., Wang L., Ohtsuki T. Online Recursive ICA Algorithm Used for Motor Imagery EEG Signal; Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Montreal, QC, Canada. 20–24 July 2020; pp. 502–505. [DOI] [PubMed] [Google Scholar]
  • 202.Tuţă L., Roşu G., Popovici C., Nicolaescu I. Real-Time EEG Data Processing Using Independent Component Analysis (ICA); Proceedings of the 2022 14th International Conference on Communications (COMM); Bucharest, Romania. 16–18 June 2022; pp. 1–4. [DOI] [Google Scholar]
  • 203.Raza H., Rathee D., Zhou S.M., Cecotti H., Prasad G. Covariate shift estimation based adaptive ensemble learning for handling non-stationarity in motor imagery related EEG-based brain–computer interface. Neurocomputing. 2019;343:154–166. doi: 10.1016/j.neucom.2018.04.087. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 204.Miladinović A., Ajčević M., Jarmolowska J., Marusic U., Colussi M., Silveri G., Battaglini P.P., Accardo A. Effect of power feature covariance shift on BCI spatial-filtering techniques: A comparative study. Comput. Methods Programs Biomed. 2021;198:105808. doi: 10.1016/j.cmpb.2020.105808. [DOI] [PubMed] [Google Scholar]
  • 205.Millan J., Mourino J. Asynchronous BCI and local neural classifiers: An overview of the adaptive brain interface project. IEEE Trans. Neural Syst. Rehabil. Eng. 2003;11:159–161. doi: 10.1109/TNSRE.2003.814435. [DOI] [PubMed] [Google Scholar]
  • 206.Roc A., Pillette L., Mladenović J., Benaroch C., N’Kaoua B., Jeunet C., Lotte F. A review of user training methods in brain computer interfaces based on mental tasks. J. Neural Eng. 2020;18:011002. doi: 10.1088/1741-2552/abca17. [DOI] [PubMed] [Google Scholar]
  • 207.Covi E., Donati E., Liang X., Kappel D., Heidari H., Payvand M., Wang W. Adaptive extreme edge computing for wearable devices. Front. Neurosci. 2021;15:611300. doi: 10.3389/fnins.2021.611300. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 208.Jin X., Li L., Dang F., Chen X., Liu Y. A survey on edge computing for wearable technology. Digit. Signal Process. 2022;125:103146. doi: 10.1016/j.dsp.2021.103146. [DOI] [Google Scholar]
  • 209.Huang D., Wang M., Wang J., Yan J. A Survey of Quantum Computing Hybrid Applications with Brain-Computer Interface. Cogn. Robot. 2022;2:164–176. doi: 10.1016/j.cogr.2022.07.002. [DOI] [Google Scholar]
  • 210.Miranda E.R., Martín-Guerrero J.D., Venkatesh S., Hernani-Morales C., Lamata L., Solano E. Quantum Brain Networks: A Perspective. Electronics. 2022;11:1528. doi: 10.3390/electronics11101528. [DOI] [Google Scholar]
  • 211.Miranda E.R., Venkatesh S., Martín-Guerrero J.D., Hernani-Morales C., Lamata L., Solano E. An approach to interfacing the brain with quantum computers: Practical steps and caveats. arXiv. 2022. arXiv:2201.00817 [Google Scholar]

Data Availability Statement

Not applicable.

