Author manuscript; available in PMC: 2020 Oct 7.
Published in final edited form as: Brain Comput Interfaces (Abingdon). 2019 Dec 10;6(3):71–101. doi: 10.1080/2326263X.2019.1697163

Workshops of the Seventh International Brain-Computer Interface Meeting: Not Getting Lost in Translation

Jane E Huggins 1, Christoph Guger 2, Erik Aarnoutse 3, Brendan Allison 4, Charles W Anderson 5, Steven Bedrick 6, Walter Besio 7, Ricardo Chavarriaga 8, Jennifer L Collinger 9, An H Do 10, Christian Herff 11, Matthias Hohmann 12, Michelle Kinsella 13, Kyuhwa Lee 14, Fabien Lotte 15, Gernot Müller-Putz 16, Anton Nijholt 17, Elmar Pels 18, Betts Peters 19, Felix Putze 20, Rüdiger Rupp 21, Gerwin Schalk 22, Stephanie Scott 23, Michael Tangermann 24, Paul Tubig 25, Thorsten Zander 26
PMCID: PMC7539697  NIHMSID: NIHMS1544882  PMID: 33033729

Abstract

The Seventh International Brain-Computer Interface (BCI) Meeting was held May 21-25th, 2018 at the Asilomar Conference Grounds, Pacific Grove, California, United States. The interactive nature of this conference was embodied by 25 workshops covering topics in BCI (also called brain-machine interface) research. Workshops covered foundational topics such as hardware development and signal analysis algorithms, new and imaginative topics such as BCI for virtual reality and multi-brain BCIs, and translational topics such as clinical applications and ethical assumptions of BCI development. BCI research is expanding in the diversity of applications and populations for whom those applications are being developed. BCI applications are moving toward clinical readiness as researchers grapple with the practical considerations needed to make BCI translational efforts successful. This paper summarizes each workshop, providing an overview of the topic of discussion, references for additional information, and future issues for research and development that emerged from the interactions and discussion at the workshop.

Keywords: brain-computer interface, brain-machine interface, neuroprosthetics, conference

Introduction

The terms brain-computer interface (BCI) and brain-machine interface (BMI) are both used to describe an interface that interprets information from the human brain to provide control signals for the operation of technology. The brain activity may be recorded either with invasive or non-invasive methods. However, the signals must come directly from the brain, without traveling through sensorimotor output channels. BCIs may be used by people with or without physical impairments. This paper is intended as an overview of the breadth and depth of BCI research and development as represented by the workshops of the 7th International Brain-Computer Interface Meeting.

The BCI Meeting Series

With the 2018 BCI Meeting, the International Brain-Computer Interface Meeting Series transitioned from a schedule of meeting approximately every 3 years to meeting in alternate years. The goal of these meetings (1999 [1], 2002 [2], 2005 [3], 2010 [4], 2013 [5, 6], and 2016 [7-9]) remains to gather those interested in BCI research from many disciplines and all parts of the world. The BCI Meeting Series has grown from 50 delegates in 1999 [1] to 423 delegates in 2018. The importance of these meetings is demonstrated by the fact that 74 of the 100 most highly cited BCI-related articles (as of 2016) have authors who attended the BCI Meeting Series [10]. The Seventh International Brain-Computer Interface (BCI) Meeting was held May 21-25th, 2018 at the Asilomar Conference Grounds, Pacific Grove, California, United States. The BCI Society, itself an outcome of the BCI Meeting series, assumed full logistical and financial responsibility for this BCI Meeting. This Meeting was attended by 423 delegates from 221 laboratories and organizations in 28 countries. Respondents to the 2018 BCI Meeting evaluation survey identified themselves as 37% students, 13% postdocs, 15% early career, 25% established researchers, and 10% other. The Asilomar Conference Grounds provides common meals to promote interaction between attendees, most of whom stay on-site, and a beautiful environment for casual conversation and networking.

The BCI Meeting series is designed to promote interaction between attendees of all backgrounds and career stages, with a specific focus on providing educational and networking opportunities for students and early-career investigators. The 2018 BCI Meeting theme of “Not Getting Lost in Translation” was meant to express the desire to bring together all the perspectives and expertise necessary for successful BCI research, development, and translation into commercial products. The BCI Meeting is attended by BCI users, caregivers, clinical rehabilitation specialists, computer scientists, engineers, entrepreneurs, federal funding representatives, neuroscientists, physicians, psychologists, speech-language pathologists, and others. The process of translating BCIs into commercial products, thus providing benefit to potential users, requires the combined efforts of people from many of these areas. This diversity also promotes the identification of new areas for BCI research and development. While a few BCI sessions may occur at conferences for any of these disciplines, only the BCI Meetings bring together so many researchers from such diverse backgrounds in large numbers. The workshops of the BCI Meeting Series are always reported as one of the most valuable meeting components. This paper provides summaries of the workshops as an overview of the current status of BCI research and development and presents the challenges and conclusions that emerged from the workshop interactions.

Organization of Workshop Summaries

The workshops were proposed by members of the BCI community and evaluated by the Program Committee to ensure a broad representation of BCI-related topics. These workshop summaries are organized by themes, with each summary listing the organizers and all additional presenters. Each summary provides an overview of the workshop topic, key points of the material presented, and conclusions reached in the resulting discussion. Of course, the interactions between workshop attendees that form a key benefit of the workshops cannot be reproduced in an article. However, each summary ends with conclusions or consensus opinions reached through these interactions as well as calls for action or opportunities for participation in joint research or discussion endeavors.

The workshops expressed unique viewpoints, yet their topics also overlapped and complemented each other. Some were closely related, such as ECoG-Based BCIs, which elaborated on technical issues of ECoG recording and data handling, and ECoG for Control and Mapping, which focused on the diverse applications for which ECoG has been successfully used. Other workshops built on each other, such as Progress in Decoding Speech Processes Using Intracranial Signals, and Real-time BCI Communication for Non-verbal Individuals with Cerebral Palsy, which considered decoding of speech for a specific user population.

This report is organized around three themes, though other structures could also support these workshops. These themes are independent of the timing of the workshops at the BCI Meeting, where workshop slots were assigned to provide diversity of options to attendees and to avoid timing conflicts for presenters. The workshops in the first group focused on foundational topics, such as sensor quality, specific recording modalities, signal analysis methods, or hybridization of different data sources. The second group of workshops has a gradual shift in topic from BCIs for specific user groups to new BCI applications. These topics could not easily be separated since specific user groups often have unique application requirements. The final theme encompasses the varied aspects of translational efforts for BCI, including the ethical assumptions underlying BCI research and development, various clinical considerations and user-centered design issues, and a discussion of standards.

Thus, the presentation of the workshop summaries creates a progression from foundational topics to translational efforts for standardized clinical applications. Together these workshops demonstrate the diversity of BCI applications and intended users and the complexity of the issues that must be solved to make BCIs into useful tools for the many intended user groups.

Foundational Topics

Recent Developments in Non-Invasive EEG Sensor Technology

Organizers: Dr. Charles Anderson (Colorado State University) and Dr. Walter Besio (University of Rhode Island and CREmedical)

Additional Presenters: Dr. Walid Soussou (Quasar), Dr. Fenghua Tian (University of Texas at Arlington)

Conventional non-invasive EEG electrodes limit the quality of recorded EEG and thus of BCI performance. Recently developed sensing technologies may provide needed breakthroughs for practical BCI use. This workshop described new sensor technology including tri-polar electrodes, no-prep dry electrodes, and fNIRS.

Walter Besio reviewed the history of EEG electrodes, starting with Dr. Hans Berger in 1924 [11, 12], pointing out that the electrodes are basically unchanged. He then described limiting aspects of EEG: low spatial resolution, muscle and movement artifact contamination, limited frequency bandwidth, high mutual information, and need for impedance matching mediums [13]. He gave examples of new electrode technologies that reduce wire movement artifacts (active electrodes) and impedance matching mediums (dry electrodes) [14]. Then he described the theory of his tripolar concentric ring electrode technology for tripolar EEG (tEEG) [15]. He showed results where tEEG significantly improved the spatial resolution, muscle contamination, bandwidth, and mutual information compared to conventional EEG [15]. Examples were provided for: imagined movement BCI, visual evoked potentials (VEP), and high-frequency oscillations (HFOs) prior to seizures. He provided videos [16] showing electromyogram attenuation, fast VEP, real-time center-out imagined movement cursor control, and HFOs spreading over the brain. Further, he showed results for bi-directional BCI: providing multiple examples where transcranial focal stimulation (TFS) through concentric electrodes stopped or prevented seizures from different convulsants, blocked kindling, increased GABA, decreased glutamate, and blocked pain. TFS has been shown to be safe in rats [17] and now humans [18].
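The spatial selectivity of the tripolar concentric ring electrode comes from combining the potentials at its disc and two rings into a local Laplacian estimate. The sketch below uses a 16:1 weighting of the two ring-minus-disc differences, following one weighting reported in the tEEG literature; the exact coefficients are an illustrative assumption here, not the device's documented algorithm.

```python
def tripolar_laplacian(disc, middle, outer, weight=16.0):
    """Estimate the surface Laplacian from one tripolar concentric ring
    electrode (TCRE) sample.

    disc, middle, outer: potentials at the central disc, middle ring,
    and outer ring. The 16:1 weighting of the two ring-minus-disc
    differences is an illustrative assumption from the tEEG literature.
    """
    return weight * (middle - disc) - (outer - disc)

# A spatially uniform (common-mode) signal cancels out entirely, which
# is how this montage attenuates distant sources such as muscle artifact:
print(tripolar_laplacian(1.0, 1.0, 1.0))  # 0.0
```

Because any signal that is identical at disc and rings cancels, far-field activity (including much EMG) is suppressed while local gradients survive, which is consistent with the improvements in muscle contamination and spatial resolution described above.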

Charles Anderson summarized his BCI experiments with Dr. Besio’s tEEG electrodes. P300 ERPs were detected more reliably using tEEG versus conventional disc electrodes, and motor-related potentials recorded with tEEG could be significantly differentiated by specific finger movements, while there was no significant difference in signals recorded with conventional disc electrodes [19].

Walid Soussou surveyed current brain monitoring approaches, discussing the trade-offs between the sensing modalities’ capabilities and mobile usability. He discussed how EEG is being synchronized with a wide range of wearable physiological monitoring sensors, such as EMG, ECG, galvanic skin response, respiration, eye and body tracking, fNIRS, video, and tDCS. He also described how these multi-modal systems are being integrated into VR and AR environments for exploration of a wide range of BCI applications, including biomarkers, neurofeedback, neuromarketing, neuroeducation, neurogaming, and sleep health. Dr. Soussou demonstrated the use of Quasar’s Dry Sensor Interface (DSI) 10/20 wireless EEG recording cap and software.

Fenghua Tian described functional near-infrared spectroscopy (fNIRS) as an emerging technology that has received increasing attention in recent years because it is safe, portable and relatively low-cost. It uses near-infrared light between 650-900 nm to penetrate deeply into the tissues. The light is primarily absorbed by oxygenated and deoxygenated hemoglobin. Therefore, it is sensitive to the changes in cerebral hemodynamics and metabolism. This technology has now reached a tipping point where users are exploring its potential for a wide range of health science and clinical applications. Dr. Tian discussed the image quality of fNIRS (spatial resolution, temporal resolution, and penetration depth) and its application in guiding transcranial magnetic stimulation (TMS) therapy.
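The hemoglobin sensitivity described above is usually quantified with the modified Beer-Lambert law: the optical-density change at each wavelength is a weighted sum of the oxygenated and deoxygenated hemoglobin concentration changes, so measurements at two wavelengths yield a 2x2 linear system. A minimal sketch, with placeholder (not tabulated) extinction coefficients:

```python
def hemoglobin_changes(d_od, eps, pathlength_cm, dpf):
    """Recover (dHbO, dHbR) concentration changes from optical-density
    changes at two wavelengths via the modified Beer-Lambert law:

        dOD(w) = (eps_HbO(w) * dHbO + eps_HbR(w) * dHbR) * d * DPF

    d_od : (dOD at wavelength 1, dOD at wavelength 2)
    eps  : [[eps_HbO(w1), eps_HbR(w1)], [eps_HbO(w2), eps_HbR(w2)]]
    """
    L = pathlength_cm * dpf
    (a, b), (c, e) = eps
    det = (a * e - b * c) * L
    d_hbo = (e * d_od[0] - b * d_od[1]) / det  # Cramer's rule
    d_hbr = (a * d_od[1] - c * d_od[0]) / det
    return d_hbo, d_hbr

# Round trip: simulate dOD from known concentration changes, then recover them.
eps = [[2.0, 0.5], [0.8, 1.5]]  # placeholder extinction coefficients
L = 3.0 * 6.0                   # source-detector distance (cm) x differential pathlength factor
true_hbo, true_hbr = 1e-3, -5e-4
d_od = (L * (eps[0][0] * true_hbo + eps[0][1] * true_hbr),
        L * (eps[1][0] * true_hbo + eps[1][1] * true_hbr))
recovered = hemoglobin_changes(d_od, eps, 3.0, 6.0)
```

Real fNIRS pipelines use tabulated extinction coefficients for the chosen wavelengths and often more than two wavelengths or channels; the sketch only shows why two wavelengths in the 650-900 nm window suffice to separate the two chromophores.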

These advances in EEG sensing technology create exciting opportunities for wearable systems with high spatial selectivity and muscle artifact cancellation, enabling improved BCI functionality.

Making use of the future of BCI implant technology

Organizer: Erik Aarnoutse (UMC Utrecht Brain Center)

Additional Presenters: Tim Denison (Medtronic Neuromodulation); Robert Franklin (Blackrock Microsystems); Luca Maiolo (CNR-IMM); Samantha Santacruz, (University of California, Berkeley)

This workshop showcased exciting new developments in implant neurotechnology for neuroscientific research and neurotherapy. These fast-paced developments in the areas of microelectronics and material science are largely being driven by efforts in neuromodulation therapy for conditions such as Parkinson’s disease and epilepsy. However, these developments will clearly also bring opportunities for the next generation of BCI implants.

Tim Denison presented the novel implantable Summit RC+S system [20], which incorporates the lessons learned from clinical neuromodulation studies (e.g., in Tourette’s Syndrome [21] and a clinical BCI study for communication [22]) with a predecessor (Activa PC+S). Changes include longer range telemetry, wireless-inductive recharging, an increase in the number of channels and enhanced digital signal processing. In addition, it incorporates active recharge to limit the amplifier blanking interval during stimulation.

BCIs are developing beyond their original design goals. Initially, BCIs were devices that sensed a user’s intent and translated that intent into control signals for an actuator to replace one lost function. BCIs are now emerging as a technology to restore all lost functions, including lost input to the brain. In such a bidirectional BCI, electrical stimulation restores sensory input [23], requiring simultaneous sensing and stimulation at the neural interface. Rob Franklin presented on removing stimulation artifacts by signal blanking [24] and emphasized that electrode design, whether microelectrode arrays or ECoG, is important for an implanted BCI.
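The idea behind signal blanking can be sketched in software as sample-and-hold: during each known stimulation window the recording is replaced by the last clean sample, so the artifact never enters downstream processing. This is a simplified toy, not the implanted hardware's implementation (which blanks at the amplifier):

```python
def blank_and_hold(signal, stim_onsets, blank_len):
    """Suppress stimulation artifacts by sample-and-hold blanking.

    signal      : list of recorded samples
    stim_onsets : sample indices where a stimulation pulse begins
    blank_len   : number of samples to blank per pulse
    During each blanking window the last clean sample is held.
    """
    out = list(signal)
    for onset in stim_onsets:
        hold = out[onset - 1] if onset > 0 else 0.0
        for i in range(onset, min(onset + blank_len, len(out))):
            out[i] = hold
    return out

# Samples 2-3 are corrupted by a stimulation pulse:
clean = [0.1, 0.2, 5.0, 4.0, 0.2, 0.1]
print(blank_and_hold(clean, stim_onsets=[2], blank_len=2))
# [0.1, 0.2, 0.2, 0.2, 0.2, 0.1]
```

The cost of blanking is the lost neural data inside each window, which is why the Summit RC+S's active recharge (shortening the blanking interval) matters for bidirectional BCIs.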

The rapid developments in neurotechnology include advances in the various types of electrodes available to record ECoG, local field potentials (LFPs), and action potentials. Luca Maiolo presented on developments in ECoG electrodes. Ultra-flexible thin-film polyimide foils allow for decreasing the size of electrodes and increasing the density, which gives the opportunity to record more fine-grained signals [25]. The impedance of the electrodes can be kept low by maximizing the surface of the electrode. In addition, nano-patterning, with methods such as the focused ion beam technique, lowers impedance significantly.

Samantha Santacruz presented the novel Wireless Artefact-free Neuromodulation Device (WAND), a neuromodulation device that combines fast online artefact removal with wireless streaming of high channel counts for recording and stimulation. This is accomplished with a newly developed neuromodulation integrated circuit (NMIC) combined with a field-programmable gate array (FPGA) for artefact cancellation and a radio system on chip (SoC) for a bidirectional wireless link. Results demonstrating all the capabilities of the WAND were presented from a primate study in which the WAND recorded preparatory activity during a delayed-reach task, which could be disrupted by stimulation [26] in a closed loop.

These hardware improvements are complementary to the ongoing efforts in the development of signal processing techniques and translation algorithms for BCI purposes. The discussed hardware devices are in several stages of readiness for clinical use (from animal studies only to ready-to-go into clinical trials), with hurdles of compliance to regulatory requirements still to come. The goal of this workshop was to inspire BCI researchers to make use of all new developments to envision BCIs that can have an even larger impact on the quality of life of BCI users.

Tools for Establishing Neuroadaptive Technology Through Passive BCIs

Organizer: Thorsten O. Zander (Technische Universität Berlin & Zander Laboratories B.V)

Additional Presenters: David E. Medine (Brain Products GmbH); Laurens R. Krol (Technische Universität Berlin); Lena Andreessen (Zander Laboratories)

This workshop concerned the use of BCIs for Human-Machine systems and drew an interdisciplinary audience from the areas of Psychology, Neuroscience, Medicine, Human-Computer Interaction, Human Factors, Engineering, Machine Learning and Computer Science. The BCI applications discussed included ideas for users with or without disabilities and involved different approaches to BCI-based input, ranging from direct control to implicit input scenarios [27]. The main focus was on passive BCIs [28] and how these can be used to automatically adapt machines to the current state of their operator [29].

Thorsten Zander opened the workshop with an overview of the concept of Neuroadaptive Technology [30], which utilizes real-time measurements of neurophysiological activity within a closed control loop to enable intelligent software adaptation. Measures of electrocortical and neurovascular brain activity are quantified to provide a dynamic representation of the user state, with respect to implicit psychological activity related to cognition, emotion, and motivation.

Laurens Krol presented a state-of-the-art theoretical framework describing different items that can be controlled with a BCI, with a clear focus on Passive BCIs [31, 32]. This framework includes the concept of direct, intentional control, that of passively monitoring the cognitive or affective state of a user during human-computer interaction, and finally that of controlling a device implicitly, with no need for intentionally formulating commands to the machine [27]. After implicit control, cognitive probing [33] was discussed. This concept allows a machine to build and update a user model by selecting and presenting specific stimuli to its user. Cognitive and affective responses to these stimuli can be assessed by a passive BCI and collated with the ongoing context of the interaction, refining the user model. The general potential and ethical implications of these concepts were discussed in more detail.

David Medine gave an interactive lecture on a major tool for modern applications using BCI technology, the Lab Streaming Layer (LSL) framework [34]. This framework has the potential to synchronize data streams from different sources. The resulting data stream can then be analyzed and utilized in other toolboxes or applications, allowing for simultaneous analysis of all data in real-time. The toolbox is currently developed by an open-source community, constantly adding new hardware drivers and extending the mechanisms of LSL.
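What LSL's shared clock ultimately enables is pairing samples from independent devices by timestamp. The toy below illustrates that downstream alignment step in pure Python; it is not the LSL API itself (in practice, LSL delivers the synchronized timestamps and tools such as its recorder do the pairing):

```python
def align_streams(a, b, tol):
    """Pair samples from two timestamped streams by nearest timestamp.

    a, b : lists of (timestamp, value), each sorted by timestamp
    tol  : maximum allowed timestamp difference for a pair
    Returns (value_a, value_b) pairs. A toy sketch of post-hoc
    alignment on a shared clock, not LSL's implementation.
    """
    pairs, j = [], 0
    for ta, va in a:
        # Advance j to the b-sample whose timestamp is closest to ta.
        while j + 1 < len(b) and abs(b[j + 1][0] - ta) <= abs(b[j][0] - ta):
            j += 1
        if b and abs(b[j][0] - ta) <= tol:
            pairs.append((va, b[j][1]))
    return pairs

# Hypothetical EEG and eye-tracker streams on a common clock:
eeg = [(0.00, "s0"), (1.00, "s1"), (2.00, "s2")]
eye = [(0.05, "g0"), (1.40, "g1"), (2.02, "g2")]
print(align_streams(eeg, eye, tol=0.1))  # [('s0', 'g0'), ('s2', 'g2')]
```

The middle EEG sample finds no gaze sample within tolerance and is dropped, which is the behavior one usually wants when one device briefly stalls.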

Following this theoretical foundation, David Medine presented on the practical issue of hardware components to record brain activity for use in Neuroadaptive Technology. He presented solutions, including stationary and mobile amplifiers, gel-based [35] and dry electrodes [35, 36] and simultaneous recordings of EEG and eye-tracking [37].

Lena Andreessen concluded the workshop with a demonstration of a neuroadaptive game called Meyendtris [38], which simultaneously uses brain and eye activity for direct and implicit control. The demonstration included the setup of 32 dry electrodes, calibration of the classifier, interlinking EEG activity, gaze control and game mechanisms, and actually playing the game. Workshop attendees could observe and participate in full neuroadaptive experiments, gaining an understanding of the effort needed. That experiment included the recording of calibration data, the setup of a classifier and the application of that classifier during the game in real-time.

This workshop showed and discussed the potential of BCI-technology for users with or without disabilities and the implications of Neuroadaptive Technology in our daily lives.

ECoG-Based BCIs

Organizer: Gerwin Schalk (National Center for Adaptive Neurotechnologies)

Additional Presenters: Dora Hermes (Stanford University & UMC Utrecht Brain Center); Aysegul Gunduz (University of Florida); Kai Miller (Stanford University)

Electrocorticography (ECoG) is the technique of recording from or stimulating the surface of the brain. Over the past several years, the unique qualities of ECoG for research and clinical applications have become increasingly recognized. Recent work has addressed a number of important scientific questions and has established the potential value of ECoG for BCI operation.

The first speaker, Gerwin Schalk, reviewed the history of ECoG beginning with the first demonstration of electrical stimulation of the brain using subdural electrodes by Ezio Sciamanna in 1882. He then reviewed the physiological basis of broadband gamma and low-frequency oscillatory activity, the two ECoG features that are most promising for potential BCI use [39]. He presented examples that highlight the unique qualities of ECoG recordings and stimulation for research use, including detailed characterization of the temporal and spatial progression of task-related ECoG activity across the cortex. He concluded with examples of studies using ECoG signals for tasks including communication with a matrix speller, control of a computer game, decoding of continuous speech, auditory perception of a song, as well as for functional mapping in neurosurgical patients.

Dora Hermes discussed the necessity of localizing the area to implant an ECoG electrode to record brain signals that are optimal to control a BCI. It is often assumed that functional magnetic resonance imaging (fMRI) can help to localize these signals. However, the fMRI signal is an indirect measure of brain function and reflects changes in blood flow, blood volume, and blood oxygenation. fMRI has been correlated with different features of the ECoG signal, including broadband changes in the power spectrum and increases and decreases of power in narrowband oscillations at various frequencies. To develop an understanding of how to interpret these signals and the relationship between them, Dora Hermes presented a model of neuronal population responses and transformations from neuronal responses into the fMRI signal and ECoG field potential [40]. This model was able to explain varying relations between fMRI and ECoG in visual cortex.

Aysegul Gunduz presented on ECoG BCIs for neuromodulation therapies such as deep-brain stimulation (DBS). Traditional DBS systems are an effective treatment for movement disorders, but deliver constant stimulation without customization for the patient’s individual symptom(s), medication status, or side effects. A biomarker approach to monitor a patient’s disease state and adapt the delivery and parameters of stimulation could improve patient outcomes and increase battery life. Next-generation neurostimulators such as the NeuroPace RNS [41] and the Medtronic Activa PC+S [42] devices include chronic recording of brain signals directly from the implanted leads. Recording and stimulating from the same electrode array, however, creates stimulation artifacts. Separate ECoG strips for stimulation and recording can be placed through the same surgical burr hole to extract biomarkers of movement disorders with greater signal amplitudes. Moreover, these electrodes would enable the network study of basal ganglia- or thalamo-cortical interactions chronically in humans, elucidating how DBS works at the network level.
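At its simplest, the biomarker approach amounts to a closed-loop controller that gates stimulation on a symptom-related signal. The sketch below uses a hysteresis band (separate on/off thresholds) to avoid rapid toggling; the controller and its thresholds are illustrative assumptions, not any device's algorithm:

```python
def adaptive_stim(biomarker, on_thresh, off_thresh):
    """Toy closed-loop controller for adaptive DBS.

    biomarker  : sequence of symptom-biomarker values
                 (e.g., beta-band power estimates)
    Stimulation turns on above on_thresh and off below off_thresh;
    the gap between the two thresholds provides hysteresis.
    Returns the on/off schedule, one entry per biomarker sample.
    """
    stim_on, schedule = False, []
    for power in biomarker:
        if not stim_on and power > on_thresh:
            stim_on = True
        elif stim_on and power < off_thresh:
            stim_on = False
        schedule.append(stim_on)
    return schedule

print(adaptive_stim([0.2, 0.9, 0.7, 0.4, 0.2], on_thresh=0.8, off_thresh=0.3))
# [False, True, True, True, False]
```

Stimulation stays on while the biomarker sits between the thresholds, which is the point of the hysteresis: battery is saved relative to constant stimulation without chattering around a single threshold.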

Kai Miller discussed the components of the electrocorticographic signal and how they can be harnessed to create an electrical dialogue with the brain. Broadband spectral changes, event-locked voltage deflections, and low-frequency oscillations reflect complementary aspects of processing that can be used for decoding and virtual device control [43, 44]. Within motor cortex, the phase of oscillations entrains broadband activity, and selectively releases this entrainment when initiating movement [45]. In Parkinson’s disease, the ability to release entraining influence breaks down [46], but can be recovered with DBS [47]. Dr. Miller proposed that devices with closed-loop recording to trigger paired stimulation between DBS and surface ECoG could be used to retrain the Parkinsonian brain and improve long-term brain function in the disease setting.

Overall, this workshop documented increased enthusiasm for the ECoG platform, increased maturity of associated methods, and a broadening of the application of BCI methods to different clinical and neuroscientific problems.

ECoG for Control and Mapping

Organizer: Christoph Guger (g.tec medical engineering GmbH)

Additional Presenters: Kyousuke Kamada (Asahikawa Medical University-AMU), Kai Miller (Stanford University), Gerwin Schalk (NCAN)

The electrocorticogram (ECoG) can detect not only low-frequency oscillations in the alpha and beta range, but also broadband gamma activity. Broadband gamma activity is directly related to underlying neuronal firing, and has both high spatial as well as high temporal resolution. Thus, ECoG can be useful for controlling prosthetic limbs, avatars, or cursors, but can also be used to identify areas in the brain that are responsible for specific functions.
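At its core, extracting broadband gamma (often taken as roughly 70-170 Hz) reduces to estimating signal power in a frequency band and comparing it with baseline. The direct-DFT sketch below is an un-windowed O(N^2) toy for illustration; real mapping pipelines use proper spectral estimators:

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Total spectral power of signal x (sampling rate fs, in Hz)
    in the band [f_lo, f_hi], via a direct DFT."""
    n = len(x)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += re * re + im * im
    return power

# A 100 Hz test tone carries its power in the gamma band, not the beta band:
sig = [math.sin(2 * math.pi * 100 * t / 500) for t in range(500)]
print(band_power(sig, 500, 70, 170) > band_power(sig, 500, 13, 30))  # True
```

Functional mapping then asks, channel by channel, whether task-period band power differs significantly from a baseline recording, as described in the workshop summary below.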

This workshop showcased state-of-the-art ECoG experiments for brain-based control and functional mapping and described how data acquisition, signal processing and experimental setups are done in operating room and intensive care environments.

Kai Miller reviewed the basic ECoG features, showing that broadband spectral changes (or high-frequency approximations) correlate with fMRI (Hermes, 2012) and are robust features of local brain activity, with illustrative examples from real-time motor and language mapping [48, 49]. Christoph Guger presented on ECoG use for gamma-based functional mapping for epilepsy and tumor patients in the intensive care unit and operating room. Patients solve a Rubik’s cube, move their tongue, listen to a story, or perform other tasks while broadband gamma activity is extracted and compared with a baseline recording [50]. Statistical analysis shows the most important brain regions for performing these activities. This information is essential for neurosurgeons like Dr. Kamada and Dr. Miller to spare these brain regions during surgical resection. Gerwin Schalk presented group studies with epilepsy patients that demonstrated the high sensitivity and specificity of this technology for motor and language mapping [51, 52]. Kyousuke Kamada presented cortico-cortical evoked potentials (CCEPs) to map the whole language network. With this technique for mapping cortical networks, electrical stimulation of Broca’s area produces evoked potentials over the motor mouth region, Wernicke’s area, and the auditory cortex. These CCEPs are very useful as a monitoring tool during neurosurgery to interrupt pathological pathways [53, 54]. Gerwin Schalk developed a new signal processing pipeline to analyze broadband gamma activity instead of the evoked potentials to map brain networks resulting from electrical stimulation. In addition to mapping language and sensorimotor cortex, ECoG can also be used to analyze broadband gamma signals from the temporal base. This can be useful to determine if the patient is seeing faces, Kanji characters, Arabic characters, or symbols. Using this information, a BCI system can be trained on images shown on a computer screen, and the BCI is then able to recognize real faces and symbols that were not part of the training set [54]. Gerwin Schalk showed that face illusions can be produced when the fusiform face region is electrically stimulated, independent of what other object a person is looking at [55]. This study showed that there is a specific cortical region responsible for faces, and that stimulation of this region does not alter perception of other objects presented simultaneously.

Turning negative into positives! Exploiting negative results in Brain-Machine Interface research

Organizers: Fabien Lotte (Inria Bordeaux Sud-Ouest / LaBRI, France), Camille Jeunet (Inria Rennes Bretagne Atlantique, France / EPFL, Switzerland), Ricardo Chavarriaga (EPFL, Switzerland) and Laurent Bougrain (Univ. Lorraine / Inria Nancy Grand-Est, France)

Additional Presenters: Moritz Grosse-Wentrup (Ludwig-Maximilians-Universität Munchen & Max Planck Institute for Intelligent Systems, Germany), David Thompson (Kansas State University, USA)

Negative results are “results that do not confirm expectations” [56, 57]. While essential for research, they are rarely published [56] because authors and reviewers may prefer success stories. Yet, reporting negative results prevents wasteful repetition of studies and enables testing prior conclusions. In BMI research, small sample sizes and variability across and within users [58-60] could produce inconclusive or hard-to-replicate results. Reporting negative results may thus confirm or overturn published results. Models and theories to explain performance variations [61, 62] are lacking in BMI research, but their creation requires both positive and negative results.

Negative BMI results were then presented. Moritz Grosse-Wentrup explained that completely locked-in Amyotrophic Lateral Sclerosis (ALS) patients modulate a lower alpha frequency band during cognitive processing [63] and lack EEG correlates of self-referential thinking [64]. Laurent Bougrain highlighted that intrinsically multiclass classifiers may not outperform combinations of binary classifiers even for multi-class BMIs [65, 66]. David Thompson presented that EEG-based affect classification initially seemed promising, but accounting for class imbalance produced chance level performances [67]. Other attendees spontaneously reported on the inconsistent effectiveness of both BMI performance predictors and standard EEG workload markers.
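The class-imbalance pitfall behind Thompson's result is easy to make concrete: with skewed classes, plain accuracy rewards always predicting the majority class, while balanced accuracy (the mean of per-class recalls) keeps chance level at 1/n_classes. A minimal sketch:

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; chance level stays at 1/n_classes
    regardless of class imbalance."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# 90/10 imbalance: always predicting the majority class looks accurate...
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100
print(accuracy(y_true, y_pred))           # 0.9
print(balanced_accuracy(y_true, y_pred))  # 0.5  (i.e., chance)
```

A classifier that appears 90% accurate can thus be performing exactly at chance once imbalance is accounted for, which is the pattern reported for the affect-classification results [67].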

Next, Ricardo Chavarriaga argued that negative results are simply results. Hypothesis-driven research design should ensure useful results, irrespective of whether they fulfill expectations. Indeed, even “positive results” lack impact without insights on why the expected outcome occurred. He recommended long-term BMI studies to characterize BMI users and EEG variations [68-70] and understand the causes of poor BMI performances. Camille Jeunet presented guidelines to avoid flaws in BMI studies [71], since useful negative results come from rigorous studies with clear research questions.

Ideas for dissemination of (relevant) negative results included: a) defining publication and review guidelines to ensure rigorous studies; b) giving incentives (awards or special issues) to publish replication studies and negative results; c) identifying open research questions to promote hypothesis-driven research; or d) pre-registering/reviewing study protocols to ensure that any result - even negative - would be relevant and disseminated.

Altogether, the workshop concluded that negative BMI results are necessary but too rarely reported. BMI research should thus be conducted to ensure relevant results, either positive or negative, and to encourage their dissemination.

Natural Language Processing & BCI

Organizer: Steven Bedrick (Oregon Health & Science University)

Additional Presenters: David Smith (Northeastern University’s College of Computer and Information Science); Brian Roark (Google Research); Shiran Dudy (Oregon Health & Science University)

Natural Language Processing (NLP) is the computer science sub-field that analyzes language produced by humans for applications including machine translation, automatic speech recognition, document classification, and automated summarization. The application area of improving computerized text entry intersects with BCI-based AAC systems [72, 73]. This workshop covered core NLP concepts, relevant current research, and privacy issues.

David Smith reviewed formal linguistics, the history of NLP, key mathematical ideas behind computational analysis of language, and application of these concepts to practical linguistic problems. He demonstrated how the “Shannon Game” (in which players guess subsequent letters in a message, informed by prior linguistic knowledge [74]) illustrates information theoretic concepts useful in NLP applications such as predictive typing and spelling correction. Dr. Smith also led a stimulating discussion of Alan Turing’s pioneering statistical models of word and letter occurrence for code-breaking during World War 2 that made it possible to determine when automated code-breaking machines had successfully guessed a message’s encryption key (out of hundreds of thousands of possible keys).
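The information-theoretic idea behind the Shannon Game can be made concrete with a short sketch. The toy corpus, bigram model, and test message below are illustrative assumptions, not part of the workshop material: an informed player who guesses letters in descending bigram-frequency order needs far fewer guesses per letter than blind search, and the average guess count bounds the entropy of the text, which is exactly what predictive typing exploits.

```python
from collections import Counter, defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def train_bigram(text):
    """Count letter bigrams so the player can rank next-letter guesses."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def guesses_needed(counts, prev, actual):
    """Guesses an informed player needs, trying letters in descending
    bigram-frequency order (unseen letters in a fixed fallback order)."""
    ranked = [c for c, _ in counts[prev].most_common()]
    ranked += [ch for ch in ALPHABET if ch not in ranked]
    return ranked.index(actual) + 1

model = train_bigram("the quick brown fox jumps over the lazy dog " * 50)

message = "the lazy fox"
total = sum(guesses_needed(model, a, b) for a, b in zip(message, message[1:]))
print(total)
```

On this toy corpus, highly predictable transitions (such as 't' followed by 'h') are guessed on the first try, which is the same redundancy a predictive speller uses to cut selection effort.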

Brian Roark presented on technical and social factors affecting text entry in people with severe communication-related disabilities, including the diverse communication needs of AAC users. All too often, AAC system designers forget (or fail to realize) that their users, in addition to communicating about immediate needs, also must be able to tell jokes, discuss the ball game, etc. He discussed strategies for reducing “keystrokes” (selection actions, etc.): Huffman-coded scanning patterns (instead of row-column patterns) [75], predictive text entry, etc., and presented experimental results of faster and easier text entry by AAC users. He concluded by discussing the technical challenges of text entry with non-Latin writing systems.
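The keystroke-saving idea behind Huffman-coded scanning can be sketched briefly. The symbol set and frequencies below are hypothetical, not drawn from any cited AAC corpus: a Huffman code assigns short binary selection sequences to frequent symbols, so the expected number of selections per symbol drops below the fixed cost of a balanced binary scan over the same symbols.

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code: frequent symbols get short selection sequences."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)  # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Toy symbol frequencies (hypothetical, for illustration only).
freqs = {"e": 0.40, "t": 0.25, "a": 0.15, "o": 0.10, "n": 0.06, "q": 0.04}
code = huffman_code(freqs)

expected_huffman = sum(f * len(code[s]) for s, f in freqs.items())
expected_fixed = 3  # 6 symbols need 3 binary selections each (ceil(log2 6))
print(expected_huffman, "<", expected_fixed)
```

Here the expected cost falls from 3 selections per symbol to 2.25, and the gap widens with realistic letter distributions, which is the efficiency gain over fixed row-column scanning.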

Shiran Dudy gave a tutorial on icon-based communication systems, their challenges for traditional NLP techniques, and a new approach to training icon-based language models. Icon-based AAC systems can be more efficient than character-level spelling, and are popular with adult users as well as children. Predictive text entry at the word level is more complex than at the character level, however, as the number of possible selections is much larger. Icons compound this issue, as individual icons can have multiple meanings, including entire phrases (“close the door”, etc.) rather than single words. Possible tools to address these challenges are recurrent neural network architectures and continuous vector-space icon embeddings [76].
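A minimal sketch of the continuous vector-space idea, with entirely hypothetical 4-dimensional embeddings (real systems learn high-dimensional embeddings jointly with a recurrent language model over far larger vocabularies): candidate icons, including multi-word phrase icons, are ranked by cosine similarity to a context representation, so prediction works even though icons do not map one-to-one onto words.

```python
import numpy as np

# Hypothetical 4-d embeddings for a few icons, chosen only for illustration.
icons = {
    "close the door": np.array([0.9, 0.1, 0.0, 0.2]),
    "open the door":  np.array([0.8, 0.2, 0.1, 0.3]),
    "I am hungry":    np.array([0.0, 0.9, 0.8, 0.1]),
    "good morning":   np.array([0.1, 0.3, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_icons(context_vec):
    """Rank candidate icons by similarity to the current context vector."""
    return sorted(icons, key=lambda s: cosine(icons[s], context_vec),
                  reverse=True)

# A context vector near the 'door' region of the toy embedding space.
context = np.array([0.85, 0.15, 0.05, 0.25])
print(rank_icons(context))
```

The ranking places both door-related phrase icons ahead of unrelated ones, illustrating how an embedding space sidesteps the huge discrete selection problem that word-level prediction faces.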

Steven Bedrick then presented on language model adaptation and personalization. Different AAC users may have different interests, and thus different word prediction needs. Different technical approaches can adapt a word prediction model’s behavior to a user or topic, both for traditional as well as neural language models [77-79].
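One standard adaptation strategy, shown here purely as an illustration and not necessarily the presenter's exact method, is linear interpolation of a background language model with a small user-specific model; the toy texts and the interpolation weight below are assumptions.

```python
from collections import Counter

def unigram(text):
    """Maximum-likelihood unigram model from whitespace-tokenized text."""
    counts = Counter(text.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(background, user, lam=0.7):
    """P(w) = lam * P_user(w) + (1 - lam) * P_background(w)."""
    vocab = set(background) | set(user)
    return {w: lam * user.get(w, 0.0) + (1 - lam) * background.get(w, 0.0)
            for w in vocab}

background = unigram("the cat sat on the mat the dog ran")
user = unigram("baseball game tonight the game was great")

adapted = interpolate(background, user, lam=0.7)
# 'game' now outranks generic words it had zero probability for before.
print(adapted["game"] > background.get("game", 0.0))
```

Because the two component models each sum to one, the interpolated distribution is still a valid probability model, and topic words from the user's own history rise in the prediction ranking.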

Personalized AAC devices increase privacy concerns. The workshop concluded by discussing a case study on a hypothetical AAC system that crowdsourced new vocabulary items from its users. Following the framework of Baase & Henry [80], we identified stakeholders and decision-makers, and listed risks, consequences, and benefits. We then mapped out possible technical actions through the lens of the consequences (positive and negative) to stakeholders, and concluded with a spirited discussion.

Unsupervised Learning for BCI

Organizers: Michael Tangermann (University of Freiburg); David Hübner (University of Freiburg)

Additional Presenters: Pieter-Jan Kindermans; Serafeim Perdikis

Unsupervised machine learning approaches do not require labeled data to train a machine learning model. This gives them the potential of completely removing the need for recording calibration data from the BCI data pipeline. When continuously applied from the beginning of a BCI session, unsupervised methods avoid the potential breakdown of model generalization between calibration and online use observed in standard approaches using supervised calibration. Furthermore, they can stabilize the BCI’s decoding performance over prolonged online sessions despite changing characteristics of the recorded signals.

The workshop began with an introduction by Michael Tangermann to the principles of purely unsupervised machine learning strategies. These strategies can all use adaptation to learn the brain signal decoding from scratch, but have the drawback of initially unreliable decoding performance before they have observed a sufficient amount of online data to ramp up their performance. This rigorous approach was contrasted with transfer learning methods on the one hand and unsupervised adaptation after supervised calibration on the other hand. Transfer learning methods avoid in-session calibration but may include supervised training on data from earlier sessions, a topic covered in a workshop at a previous meeting [8].

The workshop continued with detailed presentations of three concrete examples of successful unsupervised learning in BCI. First, Pieter-Jan Kindermans introduced the idea that unsupervised learning from scratch can be realized within acceptable time limits if the inherent structure of a BCI paradigm can be exploited. He explained that the temporal structure of standard auditory and visual event-related potential (ERP) paradigms is key for unsupervised learning, as a limited number of stimuli (e.g. tones) are presented not only multiple times, but also, within a single trial, reliably in either the target or the non-target role. This structure is known a priori by design of the paradigm. As it drastically reduces the hypothesis space for classification, it can successfully be exploited using an unsupervised maximum likelihood approach [81].
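A simplified, self-contained sketch of how paradigm structure enables unsupervised decoding; this illustrates the principle only and is not the exact maximum likelihood algorithm of [81]. Each hypothesis "symbol s is the target" fully determines the target/non-target label of every flash, so each of the few hypotheses can be scored by the likelihood of the observed features under class-conditional Gaussians, with no labeled calibration data. The simulated 1-d feature and its effect size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_symbols = 6      # hypothetical 6-symbol ERP paradigm
n_trials = 60      # number of stimulus flashes in one selection
flashed = rng.integers(0, n_symbols, size=n_trials)  # which symbol flashed
true_target = 2    # the symbol the simulated user attends to

# Simulated 1-d ERP feature: target flashes elicit a larger response
# (the effect size of 3.0 is an assumption for illustration).
x = rng.normal(0.0, 1.0, n_trials)
x = x + 3.0 * (flashed == true_target)

def log_likelihood(x, labels):
    """Unit-variance Gaussian log-likelihood (up to a constant), with class
    means implied by a candidate labeling of the flashes."""
    ll = 0.0
    for cls in (0, 1):
        xs = x[labels == cls]
        if xs.size:
            ll -= 0.5 * np.sum((xs - xs.mean()) ** 2)
    return ll

# Each hypothesis "symbol s is the target" determines every flash's label,
# so the decoder scores all n_symbols hypotheses and picks the best.
scores = [log_likelihood(x, (flashed == s).astype(int))
          for s in range(n_symbols)]
decoded = int(np.argmax(scores))
print(decoded)
```

Because only n_symbols hypotheses exist rather than all 2^n_trials labelings, the correct labeling can be found by exhaustive scoring, which is the hypothesis-space reduction the paradigm structure provides.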

Second, David Hübner explained how a simple but purposeful modification of a standard visual ERP spelling paradigm can create additional structural information, and how this information enables an analytically solvable unsupervised approach called learning from label proportions (LLP) [82]. After examining the individual strengths and weaknesses of the maximum likelihood approach (rapid learning, but prone to falling into local optima depending on initialization) and of LLP (slower learning, but guaranteed to find the optimum under i.i.d. assumptions), Hübner showed how the benefits of both approaches can be combined into a very robust unsupervised adaptive classifier that does not require even a single labeled data point and nevertheless rapidly ramps up to the performance of classifiers trained with supervision [83].
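The core of LLP can be illustrated in one dimension; the toy class means, unit variances, sample sizes, and the two sequence proportions below are assumptions for the sketch. Two stimulus sequences constructed with different, known target proportions yield observed means that are mixtures of the unknown class means, which a small linear system recovers without any labels.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_target, mu_nontarget = 2.0, 0.0   # ground truth, unknown to the decoder

def simulate_sequence(n, p_target):
    """Draw n ERP features from a mixture with target proportion p_target."""
    is_target = rng.random(n) < p_target
    return np.where(is_target,
                    rng.normal(mu_target, 1.0, n),
                    rng.normal(mu_nontarget, 1.0, n))

# Two sequence types whose target proportions are fixed by paradigm design.
p1, p2 = 0.5, 0.1
m1 = simulate_sequence(2000, p1).mean()
m2 = simulate_sequence(2000, p2).mean()

# m1 = p1*mu_t + (1-p1)*mu_n  and  m2 = p2*mu_t + (1-p2)*mu_n,
# a 2x2 linear system in the unknown class means.
A = np.array([[p1, 1 - p1],
              [p2, 1 - p2]])
mu_t_hat, mu_n_hat = np.linalg.solve(A, np.array([m1, m2]))
print(mu_t_hat, mu_n_hat)
```

The estimates converge to the true class means as more data arrive, with no dependence on initialization, which is why LLP is guaranteed to find the optimum, if slowly.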

Third, Serafeim Perdikis gave an example of how error potentials obtained during the online use of a visual-/robot BCI application can be exploited in combination with task constraints from the application such that calibration-free online learning can be realized [84]. He showed that this combination is especially valuable for applications where the overall hypothesis space is small.

After these presentations, the workshop participants discussed which of their BCI paradigms and application scenarios could potentially adopt similar unsupervised learning methods. While classic motor imagery or attempted motor execution paradigms would probably be hard to use, it became clear that most ERP paradigms could be modified to function with unsupervised classification approaches. It was further discussed that an overly large hypothesis space, i.e. hundreds of options to choose from in each step, would probably pose a substantial challenge to existing unsupervised adaptive methods. Possible obstacles, such as the initial ramp-up time of unsupervised classifiers, were also identified; their severity may depend on the application.

BCIs for Specific Populations or Applications

BCIs for assessment of locked-in patients and patients with disorders of consciousness (DOC)

Organizer: Christoph Guger (g.tec Guger Technologies OG)

Additional Presenters: Damien Coyle (Ulster University), Donatella Mattia (Fondazione Santa Lucia), Leigh Hochberg (MGH/Brown Univ./Providence VAMC), Kyousuke Kamada (Asahikawa Medical University)

Some patients diagnosed as being in a vegetative state are reclassified as (at least) minimally conscious when assessed by expert teams. A further subset of potentially communicative non-responsive patients might be undetectable through standard clinical testing. Other patients might have transient periods of relative wakefulness, but remain unaware of their surroundings. The workshop provided an overview of research groups that aim to use BCI technology to identify patients who are non-responsive or locked-in, but who might be able to communicate with BCI or benefit from BCI technology as an assessment tool.

The workshop included presentation and discussion of recent experiments, analysis methods, and results with EEG, fNIRS, and fMRI. The goal of the workshop was to identify the most important trends in recent years. Christoph Guger presented a system for assessment using auditory and somatosensory evoked potentials and motor imagery. The system was used to assess command following with vibro-tactile P300 paradigms and to answer YES/NO questions by patients with locked-in syndrome (LIS) and DOC [85, 86]. Out of 12 LIS/CLIS patients, 9 could use the system to answer YES/NO questions, and 3 of 12 patients with unresponsive wakefulness syndrome answered YES/NO questions successfully. Spataro showed that recovery of consciousness could be predicted from vibro-tactile P300 classification accuracy [87]; patients improved on the Coma Recovery Scale if the accuracy was >= 50% (chance level 1/8). Damien Coyle presented a BCI that uses auditory sensorimotor rhythm feedback to aid motor imagery training in patients with prolonged disorders of consciousness (PDoC), where visual impairments limit the utility of visual feedback [88]. Results of a pilot study with two PDoC patients, involving motor imagery-based responses to closed questions presented verbally in a repetitive Q&A, showed accuracies significantly above chance level in distinguishing yes and no responses [89]. Donatella Mattia presented auditory evoked potentials for the assessment of consciousness and a custom wavelet-based analysis to compensate for the variability in the latency (time delay) of the P300 response in DOC patients [90, 91], a phenomenon known as latency jitter. Kyousuke Kamada presented the clinical needs of DOC assessment in the intensive care unit at Asahikawa Medical University. Leigh Hochberg presented a feasibility study of a BCI system in the intensive care unit using auditory, vibro-tactile, and motor imagery paradigms [92].
The presentations showed the need for BCI technology for the assessment, communication, prediction and treatment of DOC and LIS patients.

Progress in Decoding Speech Processes Using Intracranial Signals

Organizer: Christian Herff (Maastricht University);

Additional Presenters: Tanja Schultz (University of Bremen); Dean Krusienski (Old Dominion University); Jon Brumberg (University of Kansas); Phil Kennedy (Neural Signals); Tonio Ball (University of Freiburg); Efraim Salari (UMC Utrecht); Josh Chartier (UC San Francisco); James O’Sullivan (Columbia University); Stephanie Ries-Cornou (San Diego State University); Blaise Yvert (University Grenoble Alpes)

In this workshop, recent advances in the decoding of speech processes from intracranial recordings were presented. Intracranial measurements allow the recording of neural signals with high spatial and temporal resolution and are not contaminated by muscle artifacts. Additionally, the recording position below the skull gives access to a much broader frequency spectrum, including the high-gamma band (>70 Hz) that is known to correlate with ensemble spiking [93] and to contain specific information for speech processes [94]. Intracranial recordings of neural activity, namely electrocorticography and stereotactic EEG (sEEG), are therefore particularly well-suited for the decoding of speech processes for BCI [95-97].

Presenting groups looked at different aspects of speech production and perception that could be successfully decoded while simultaneously increasing our knowledge of the underlying processes. Jonathan Brumberg’s approach was to decode articulator movements from the neural recordings. This was achieved by using Kalman filters to decode articulatory features [98] from continuous speech [99]. Josh Chartier presented an acoustic-to-articulatory inversion (AAI) based on video and ultrasound recordings during articulation by healthy volunteers [100] and used this AAI to show the encoding of articulatory movements in intracranial recordings [101]. Similarly, Blaise Yvert’s group also used ultrasound imaging [102] in combination with ECoG recordings to decode articulator movements [103].

Another obvious target for speech decoding from neural signals are phonemes [104], but Salari et al. highlighted the influence of co-articulation and showed that the preceding phonemes greatly impacted the neural representations of articulation [105, 106]. These findings are of particular importance for continuous speech BCIs based on phoneme models [107].

Instead of the classification of phonemes or articulatory features, Christian Herff showed how speech could also be synthesized [108] from ECoG recordings in inferior frontal, motor and pre-motor cortices using either dedicated approaches from the speech synthesis community, namely unit selection, or specifically tailored deep neural networks [109]. Also using pre-motor cortex activity, Tonio Ball presented their finding that perceived and articulated speech share a premotor-cortical substrate [110] and how this might be used in combination with deep neural networks [111]. When looking at representations of perceived speech in more detail, the cocktail-party phenomenon [112] quickly becomes a problem. James O’Sullivan demonstrated that the attended audio stream can be decoded from intracranial recordings, even when clean audio is not available [113].

Instead of decoding representations of speech production, Stephanie Ries-Cornou investigated word-retrieval in high-gamma activity measured with ECoG [114] and showed effects linked to lexical-semantic activation and word selection in widespread regions of the cortical mantle.

Phil Kennedy did not use ECoG or sEEG; he talked about his group’s experience using the neurotrophic electrode [115] and how the long-term recordings obtained with this technique can be used to decode speech processes [116, 117]. Additionally, his group showed results using 12-20 Hz beta peaks, a frequency range not yet investigated by the other groups.

In summary, the workshop highlighted the feasibility to decode a number of different aspects of speech including articulator kinematics, phonemes and speech acoustics from intracranial and intracortical recordings. These continued efforts highlight the potential of Brain-Computer Interfaces based on speech processes.

Real-time BCI communication for non-verbal individuals with cerebral palsy: Challenges and Strategies for Progress

Organizer: Jane E. Huggins (University of Michigan)

Additional Presenters: James A. Blackman (Cerebral Palsy Alliance Research Foundation); Katya Hill (University of Pittsburgh); Adam Kirton (University of Calgary); Christian Herff (University of Bremen)

Cerebral palsy (CP) is a neurodevelopmental disability resulting in disordered movement [118]. Up to 80% of children with CP have communication impairments, and as many as 25% of children with CP are non-verbal or have complex communication needs [119]. Although much progress has been made through the development of computerized augmentative and alternative communication (AAC) devices, their use remains exceedingly cumbersome and time-consuming for someone with severe motor impairments. A thought-to-speech device would be life-changing, not only for persons with CP, but also for those with other neurological conditions and diseases (e.g. stroke, traumatic brain injury). However, most current efforts to decode intended speech target the motor cortex instead of language areas. Thus, they assume the existence of typically developed motor areas of the brain devoted to producing speech. Unless specific attention is paid to the needs of children born without the capacity for verbalization, the reliance on these assumptions could make a breakthrough in BCI for direct speech decoding useless for people with CP. This workshop was intended to ensure that the special considerations necessary to make thought-to-speech work for people with CP are addressed early in the development process, by creating a research strategy and roadmap for developing a real-time BCI for communication for individuals with CP.

James Blackman presented on the causes of CP and its central and related impairments. CP is a developmental disability that results in disorders of movement or other nerve functions. It is caused by brain malformation or injury to the brain before or during birth, or in the early years of life. CP occurs in about two births per thousand [118].

Adam Kirton presented on neuroimaging of the brain in people with CP and the massive brain abnormalities such imaging methods can reveal [120]. These changes in brain organization are of particular concern for BCI implementations, which are typically designed to record from pre-determined locations.

Katya Hill presented on the neuroscience of speech and language development. In particular, she described how children with CP often learn to communicate using AAC devices, and the lack of knowledge about whether this communication is always supported by the stereotypical language areas of the brain. More research is needed on the brain areas involved in language for people who use AAC.

Christian Herff gave a review of current efforts at decoding speech and language signals [96], including material that had been covered in greater detail in the workshop “Progress in Decoding Speech Processes Using Intracranial Signals.” He pointed out that intracranial recordings of neural activity, namely ECoG and stereotactic EEG (sEEG), are particularly well-suited for the decoding of speech processes for BCI [95-97].

Jane Huggins presented results of a recent discussion on Thought to Speech in a special track at the Summit on Technology for People with CP hosted by Cerebral Palsy Alliance Research Foundation at the beginning of May 2019. Discussion at this Summit emphasized the importance of communication and the strong desire among people with CP for efficient communication, but attendees also expressed concern about invasive BCI approaches.

During discussion at this workshop, attendees agreed that although the term “thought to speech” had been used repeatedly, it is actually intended communication, not thoughts, that the BCI should interpret. Additionally, concerns about invasive electrodes were considered to be of minor importance if a truly functional decoding of intended communication were available. Workshop attendees planned continued discussion to map possible approaches for pilot testing of invasive decoding of intended speech in people with CP.

BCIs for stroke rehabilitation

Organizer: Christoph Guger (g.tec medical engineering GmbH)

Additional Presenters: José del R. Millán (École polytechnique fédérale de Lausanne-EPFL), Vivek Prabhakaran (University of Wisconsin-Madison-UWM), Kyousuke Kamada (Asahikawa Medical University-AMU), Tetsuo Ota (Asahikawa Medical University-AMU), David Lin (MGH Harvard)

BCI systems are increasingly being used in the context of stroke rehabilitation. Many of these BCI systems are based on motor imagery activity recorded from the sensorimotor cortex, which is translated into continuous control signals for rehabilitation devices. Some devices use Virtual Reality (VR) to allow users to observe an avatar’s limb movement. Other successful rehabilitation applications with patients use different brain stimulation techniques and/or robotic devices (such as exoskeletons or functional electrical stimulators - FES) attached to patients’ paralyzed limbs.

The workshop reviewed current stroke rehabilitation programs from different research labs and provided insight into technology (EEG, fMRI), experimental setups (VR, FES, BCI), results, and outcomes of patient studies in the acute, sub-acute, or chronic state.

Christoph Guger presented a system that instructs a patient to imagine left or right hand movements. The BCI system recognizes the movement imagination and triggers a functional electrical stimulator with FES electrodes attached to the corresponding hand, so that the imagined movement is actually performed in real time. At the same time, a VR avatar shows the same movement on a computer screen in front of the patient. Preliminary data from a clinical study show a Fugl-Meyer score improvement of 8 points in sub-acute and chronic patients (N=25, p<0.0001), and patients achieve high BCI classification accuracy [121, 122]. José Millán presented a clinical study showing that BCI coupled to FES elicits significant, clinically relevant, and lasting motor recovery of arm and hand function in chronic stroke survivors, while this was not the case for sham FES, where stimulation was random and not contingent on cortical patterns of motor activity indicating attempted movement [123]. BCI patients exhibited a significant functional recovery after the intervention, which persisted 6-12 months after the end of therapy. This study also puts forward a mechanistic interpretation of the BCI-FES intervention. Vivek Prabhakaran presented a clinical study conducted in Wisconsin that combined BCI training with fMRI to show brain plasticity in a group study [124]. Kyousuke Kamada and Tetsuo Ota are treating acute and sub-acute patients at Asahikawa Medical University with a BCI system with FES and avatar feedback. The group is also using fMRI to show brain plasticity effects in these patients, and they highlighted the importance of a neurosurgery unit having access to such technology to treat patients after surgery. David Lin gave an overview of the clinical aspects of stroke rehabilitation and how they may inform rehabilitation technologies. As part of this, he introduced a research study on the natural history of upper extremity motor recovery recently launched at Massachusetts General Hospital in Boston. Insights from this study are informing approaches to neurorehabilitation technologies such as BCIs.

Non-invasive BCI-control of FES for grasp restoration in high spinal cord injured humans

Organizers: Gernot Müller-Putz (Graz University of Technology); Rüdiger Rupp (Heidelberg University Hospital)

Additional Presenters: Joana Pereira (Graz University of Technology); Andreea I. Sburlea (Graz University of Technology); Aleksandra Vuckovic (University of Glasgow)

This workshop reviewed the current state of non-invasive BCI-controlled grasp neuroprostheses in end users with spinal cord injury (SCI). Despite medical advances, severe SCI remains a devastating condition with little potential for functional recovery. Improvement of impaired hand function is a high priority for patients and health professionals [125]. When surgical solutions are not applicable due to a lack of sufficiently strong muscles under voluntary control, a non-invasive grasp neuroprosthesis with functional electrical stimulation (FES) is an easy-to-apply option to restore grasp patterns for everyday activities. While the basic feasibility of non-invasive neuroprostheses using surface electrodes has been known for decades, persistent challenges limit their regular, independent use. Needed improvements include easy and quick electrode placement, robust generation of multiple grasp patterns in different hand positions, and an intuitive user interface that is distinct from the preserved residual functions and provides natural neuroprosthesis control.

The first talk reviewed the state-of-the-science of non-invasive motor neuroprosthesis technology with a special focus on the new multi-pad stimulation electrodes [126]. These arrays contain a matrix of single electrodes that can be electronically merged to form larger electrodes, allowing dynamic spatial relocation of stimulation spots. This enables implementation of closed-loop stimulation control to dynamically compensate for variations such as those introduced into grasp patterns during wrist rotation. Such dynamic reconfiguration is particularly relevant for precise stimulation of small muscles such as the thumb extensors and abductors for fine motor control and grasping forces.

The second talk reported the state-of-the-science of motor imagery-based BCIs to operate non-invasive grasp neuroprostheses. Single-case studies introduced people with C4/C5 level SCI using BCIs based on sensorimotor rhythm modulation [127-129]. Although these case studies have demonstrated feasibility, they have also highlighted limitations in intuitiveness. The next talk presented results from the MoreGrasp Project (www.moregrasp.eu), a European Horizon 2020 project to develop BCI-controlled motor neuroprostheses for individuals with high SCI [126]. The talk introduced decoding of movement-related cortical potentials during attempted movement for control of various grasp patterns. Preliminary results of a clinical trial including 9 individuals with SCI were presented. The MoreGrasp neuroprosthesis was demonstrated, showing the automatic correction of stimulation electrodes to generate robust key and palmar grasp patterns independent of wrist rotation.

The next talk presented BCI-controlled FES not as a neuroprosthesis, but as an adjunct therapy in a task-oriented, neurorestorative rehabilitation program. A pilot randomized controlled trial with a small group of people with cervical SCI in the subacute phase showed that a portable BCI-controlled FES device enables self-managed therapy and better recovery of motor functions [130].

The last talk, from the ERC project ‘Feel Your Reach,’ presented future concepts for non-invasive EEG-based motor control. These included the detection of goal-directed movements, the decoding of movement covariates from low-frequency EEG signals and their relation to neural activity, the detection of error potentials, the influence of feedback during continuous motor control, and strategies for kinesthetic feedback for people with SCI [131-135].

In summary, the workshop presented and demonstrated the recent developments and future potential of non-invasive BCIs to improve motor rehabilitation and restoration of grasp to increase the independence of end users with high SCI.

Lower-limb brain-machine interfaces and their applications

Organizers: Kyuhwa Lee (Swiss Federal Institute of Technology in Lausanne-EPFL); An Hong Do (University of California, Irvine); José Luis Contreras-Vidal (University of Houston); José Pons (Spanish National Research Council-CSIC)

Recent advances in wearable exoskeletons, rehabilitative devices, and machine learning algorithms have enabled researchers to investigate brain-machine interfaces (BMI) as both assistive and rehabilitative tools [136-138]. While upper-limb BMIs have historically received more research attention, lower-limb BMIs have recently been gaining attention for assistive and clinical applications. With the potential to offer more health benefits than a wheelchair, various types of lower-limb exoskeletons have been actively developed to provide assistive functions for patients who have limited mobility [139-141].

Three major types of brain signals have been exploited to provide assistive functions using lower-limb exoskeletons: event-related desynchronization (ERD), movement-related cortical potentials (MRCP), and steady-state visual evoked potentials (SSVEP). For example, ERD features were used in [139, 142, 143] to deliver walk, stand, and left and right turn commands, while MRCP features were used in [144-146] to detect kinematic parameters of the lower limbs and deliver walk and stand commands. SSVEP features were used in [147] by having a user visually attend to LEDs blinking at 5 different frequencies between 9 and 17 Hz to deliver walk, sit, stand, and left and right turn commands. While the most common areas of application are the basic mobility tasks of walking, sitting, and standing, more fine-grained control of the system will open up new ideas and possibilities in such applications.
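A minimal sketch of SSVEP frequency decoding on synthetic data. The specific frequencies, sampling rate, window length, and signal-to-noise settings below are assumptions (the cited study specifies only five frequencies between 9 and 17 Hz): the attended LED is identified by comparing narrow-band spectral power at each candidate frequency.

```python
import numpy as np

fs = 256                        # sampling rate in Hz (assumed)
freqs = [9, 11, 13, 15, 17]     # candidate LED frequencies (example values)
t = np.arange(0, 4, 1 / fs)     # 4-second EEG window

rng = np.random.default_rng(0)
attended = 13                   # simulate attention to the 13 Hz LED
# Synthetic EEG: weak sinusoid at the attended frequency plus noise.
eeg = 0.5 * np.sin(2 * np.pi * attended * t) + rng.normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
fft_freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_power(f, halfwidth=0.5):
    """Sum power in a narrow band around f (fundamental only, for brevity)."""
    mask = np.abs(fft_freqs - f) <= halfwidth
    return spectrum[mask].sum()

decoded = max(freqs, key=band_power)
print(decoded)
```

Each decoded frequency then maps to one command (walk, sit, stand, turn left, turn right); production systems typically improve on this with harmonics and canonical correlation analysis rather than raw FFT power.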

In parallel to the progress made in assistive technologies, the use of BMI for rehabilitation has been increasingly investigated with the goal of helping patients with impaired mobility achieve neuroplastic changes that persist even after the BMI intervention is discontinued [148, 149]. For example, a functional electrical stimulation (FES) system was used to activate dorsiflexion of the ankle based on EEG decoding in five able-bodied subjects [150]. They achieved latencies between 1.4 and 3.1 seconds and 100% BMI-triggered FES response, with a false alarm in only one trial. In [151], a motorized ankle-foot orthosis was used to trigger cortical neuroplasticity with a short intervention procedure in ten able-bodied subjects. Nine subjects exhibited a significant increase in motor-evoked potential (MEP) immediately following and 30 minutes after the BMI intervention.

There are still challenges left in bringing lower-limb BMI research into practice to meet assistive technology and rehabilitative demands. Intuitive and natural feedback is essential to achieve real-world usability so that users can depend on the device. The intimate contact between the user and lower-limb technology requires close cognitive and physical interaction between the system and human user. Safety is another important issue, which requires rigorous testing of the hardware reliability and compatibility with the human body. As reported in [152], with recent advances in decoding algorithms as well as more sophisticated actuators, lower-limb BMI is gaining attention as an effective rehabilitative tool as well as an assistive tool. With the high degree of muscle selectivity of FES and the current research in improving the kinematic compatibility between the exoskeleton and human limb anatomy, it is expected that mechanical actuators and FES may be fused to further improve the efficiency and practicality in real-world settings [149].

Perception of Sensation Restored through Neural Interfaces

Organizer: Jennifer Collinger (University of Pittsburgh)

Additional Presenters: Dustin Tyler (Case Western Reserve University); David Caldwell (University of Washington); Luke Bashford (California Institute of Technology); Robert Gaunt (University of Pittsburgh); Tucker Tomlinson (Northwestern University); Sliman Bensmaia (University of Chicago)

Significant advances have been made in using BCIs to restore upper limb movement, allowing users with paralysis to grasp and interact with objects [153-158]. To date, most clinical BCI studies have provided only visual feedback, although it is well established that the sense of touch and proprioception are critical for skilled movements. In this workshop, we highlighted ongoing work related to the restoration of somatosensation through neural interfaces [159, 160]. We discussed progress towards the goal of restoring natural and useful sensation along with the limitations of currently available technology. Finally, we discussed how fundamental neuroscience and emerging technology can inform the development of biomimetic sensory neuroprosthetics.

Peripheral nerve stimulation in upper limb amputees has generated sensations perceived to originate from the missing limb [161-165]. The sensations are appropriate in perceived magnitude and are described as having tactile and proprioceptive qualities [163, 166]. Restored sensory feedback can also improve performance on structured functional assessments performed with a myoelectric prosthesis [161, 164, 165]. Recently, it was demonstrated that home use of a sensory-enabled myoelectric prosthesis significantly improved functional performance, prosthesis usage, and user experience, including embodiment of the prosthetic limb [162].

For individuals with spinal cord injury, peripheral nerves cannot be targeted because the connection to the brain has been damaged. Instead, neural interfaces can stimulate the brain directly using cortical surface electrodes [167] and penetrating microelectrode arrays [23, 168]. In able-bodied subjects undergoing electrocorticographic monitoring for epilepsy, electrical stimulation of somatosensory cortex that was synchronous with touches to a rubber hand led to increased ownership of the artificial hand [167]. Ownership was not induced when the electrical stimulation was asynchronous with the visual feedback of touching the artificial hand, suggesting that integration of the two sensory modalities creates the perception of ownership over the artificial hand. Intracortical microstimulation of the hand area of somatosensory cortex can evoke tactile sensations perceived as originating from the paralyzed hand in a person with chronic spinal cord injury [23]. Stimulation of different electrodes on the microelectrode array generates focal sensations on different fingers and parts of the palm that can be evoked at low stimulation amplitudes and can be graded in intensity. Intracortical microstimulation has also been used to generate tactile sensations and feelings of movement in the arm with perceptual qualities that were modulated by stimulation amplitude and frequency [168]. Preliminary evidence indicates that restored somatosensation may improve performance on a force matching task [169].

Many of the speakers discussed the challenge of asking participants to describe the perceptual qualities and naturalness of the sensations, as a well-calibrated and complete lexicon is lacking. To generate more natural sensations, we can turn to the basic science literature, where much is known about how somatosensory cortex responds to natural tactile stimulation of the intact hand [166, 170, 171]. Future work will evaluate whether reproducing the naturally occurring spatiotemporal activation patterns through intracortical microstimulation of somatosensory cortex can improve the naturalness and usefulness of restored somatosensation. Finally, in order to restore proprioception, new technology may be needed to access cortical areas within the central sulcus that typically receive proprioceptive inputs [172].

BCI and Augmented/Virtual Reality

Organizer: Felix Putze (University of Bremen)

Additional Presenters: Christian Herff (Maastricht University); Josef Faller (Columbia University); Nicolas Waytowich (Columbia University); Hakim Si-Mohammed (Inria Hybrid); Jelena Mladenovic (Inria Potioc); Dean Krusienski (Virginia Commonwealth University); Athanasios Vourvopoulos (University of Southern California); Tim Mullen (Intheon)

Virtual/Augmented Reality (VR/AR) technology opens an exciting new field of research for the BCI community. BCI can be considered a novel input device for VR/AR applications, complementary to other modalities such as speech and gesture and capable of conveying additional information. Active BCIs notably allow a user to issue commands to the device or to enter text without physical involvement of any kind; passive BCIs monitor a user's state (e.g., workload level, attentional state) and can be used to proactively adapt the VR/AR interface. Additionally, AR/VR offers the possibility of immersive scenarios for basic BCI research. To live up to these expectations, methodological advances are required in BCI interface and stimulus design, synchronization, and the handling of artifacts and distractions specific to VR/AR. In this workshop, presentations outlined applications of BCI and AR/VR technology from entertainment to rehabilitation:

In the initial presentation, Felix Putze defined the spectrum of virtuality, which ranges from the real world, through mixed/augmented reality, to full virtual reality (see [173]), along with other central concepts of AR/VR and their technical challenges. While VR creates a completely new virtual environment, AR inserts individual virtual objects into the real environment. Putze and Christian Herff then discussed how AR and VR can be used to enhance human-computer interaction with passive BCI (e.g., attention modeling from EEG [174] and fNIRS-based workload recognition [175]) and active BCI technology, such as smart home control based on SSVEP. Jelena Mladenovic discussed the concept of flow as an optimal user state for immersion and performance in BCI [176], illustrated with an adaptive BCI game. This is important because immersion is one of the key advantages of VR applications and thus needs to be maintained appropriately. Josef Faller and Nicolas Waytowich showed how a complex piloting task can be successfully simulated in VR and that neurofeedback leads to improved flight performance [177]. Hakim Si-Mohammed presented an overview of the state of the art in BCIs for AR [178], including a definition of different types of AR and how they can be combined with different BCI paradigms and applications. Furthermore, Dean Krusienski presented an experiment estimating cognitive workload in an interactive virtual reality environment using EEG [179]. In this study, the established n-back paradigm was transferred to an immersive VR environment, and initial results showed that classification of mental workload remains feasible despite the increased presence of movement and visual distraction. Athanasios Vourvopoulos showed how BCI and multimodal VR can be used to augment stroke neurorehabilitation [180], promising better performance and higher training efficacy due to immersiveness. Finally, Tim Mullen presented the real-time BCI platform NeuroPype [181] from Intheon, which provides a visual, Python-backed editor to compose BCI pipelines, and gave a live demonstration of a mobile AR application based on that technology.

As a wrap-up of the workshop, several participants presented spotlights of their own work in the field, showing a large and diverse interest from both industry and academia. The presented works outlined both the potential of such applications as well as the challenges that arise during the combination of two complex technologies.

Neurofeedback during Artistic Expression as Therapy

Organizers: Stephanie Scott (Colorado State University); Charles Anderson (Colorado State University)

Additional Presenters: Juliet King (The George Washington University & Indiana University School of Medicine); Grace Leslie (Georgia Tech)

Art therapy employs visual arts and creativity within a therapeutic relationship to improve physical and psychological health and increase quality-of-life. It involves the purposeful and meaningful exploration of feelings, reconciliation of emotional conflicts, self-management of behavior, and fostering of self-esteem. Art therapists typically draw on neuroscience theory to interpret verbal and nonverbal communication during the art-making process, to understand the healing potential of creativity, and to select materials and methods that encourage both self-expression and self-regulation [182].

By expanding interface frameworks, BCIs can extend self-expression beyond basic modes of communication to provide activities more supportive of therapy and creative expression [183]. This workshop encouraged collaborative dialogue and conceptual consideration of how to enhance meaning-making and wellness outcomes through the integration of art-based interfaces within device designs. Presenters defined key components of art-therapy practice, introduced inventive concepts and applications for BCIs and artistic expression, and demonstrated how more accurate, noninvasive EEG sensing technologies can serve as therapy tools when artistic processes are incorporated into neurofeedback and biofeedback systems. Most notably, presentations demonstrated how engaging both artistic and scientific concerns within research methodologies can offer new therapeutic benefits for varied user groups. The workshop encouraged participants to explore how individual-level research can be integral to developing new design strategies that advance inclusivity and accessibility for users.

Acknowledging these reciprocal relationships between art, science and technology provides an opportunity to examine existing BCI interfaces from alternative perspectives. Despite offering rehabilitative benefits, art and creative expression are often overlooked in assistive technology development. However, BCIs have attracted a wide range of user groups and enabled individual expression through various artistic processes including music, painting and other art forms (see: [183-188]). These efforts to extend avenues for creative expression are gaining attention [183]. Exploring BCI and neurofeedback development through introspective approaches illustrates how new integrative applications may provide opportunities to enhance well-being initiatives for users [7, 189].

BCIs offer new methods to investigate connections between the brain and artistic expression. When incorporated into existing BCI art-based programs, designs focused on user-centered experience, such as open-ended BCIs [190] and hybrid BCIs [191], could provide users with more options for creative engagement. Breakthroughs in EEG sensing technologies such as tripolar concentric ring electrodes [192, 193] can provide more precise scalp recordings, enabling the discovery of new patterns of brain activity underlying artistic creation. Demonstrations of proof-of-concept neurofeedback [194] and biofeedback interventions [195] further highlighted how applying insights from multiple disciplines can help identify gaps in interface design that reduce BCI usability [191, 196].

Through examining ways current and emerging BCI applications can mediate communication processes, enable embodied interactions, and act as a medium for sharing both individual and collective experiences through creative processes, insight can be gained regarding the potential impact that augmentative and alternative communication (AAC) technologies may have on user-system interactions [195]. These discoveries can help to outline future roadmaps for using BCI and neurofeedback in therapy and research, and open new frontiers of self-expression and self-discovery for users.

Collaborative and Competing Multi-Brain BCIs

Organizers: Anton Nijholt (Human Media Interaction, University of Twente); Guillaume Chanel (Swiss Center for Affective Sciences & Computer Vision and Multimedia Laboratory); Jan van Erp (University of Twente); Mannes Poel (University of Twente); Fabien Lotte (Inria Bordeaux Sud-Ouest / LaBRI)

Additional Presenters: Chris Berka (Advanced Brain Monitoring); Davide Valeriani (Harvard Medical School); Tim Mullen (Intheon)

An emerging line of BCI research is the development of BCIs based on brain activity recorded from multiple users simultaneously [197, 198]. Applications include joint decision making in environments requiring high accuracy and/or rapid reactions or feedback; joint/shared control and movement planning of vehicles or robots; assessing team performance, stress-aware task allocation, and rearrangement of tasks; characterization of group emotions, preferences, social interaction research (two or more people); arts [199], entertainment, and games.

Multi-brain research can be passive or active. Passive multi-brain research includes EEG hyperscanning [200] to study social interaction, but also monitoring and integrating of brain activity to realize on-line improvement of group performance or online adaptation of a task or media display. In active multi-brain research, we can distinguish between collaborative (movement planning, target detection) and competing multi-brain BCIs (e.g., in games). The workshop considered various ways of merging and comparing brain activity from multiple users, including computation of features such as synchrony, training models based on multi-users signals, and BCI paradigms optimized for several users. Workshop members discussed the concurrence of the until-now separate developments in research on neuro-ergonomics, neuroscience for social interaction, and active and passive (multi-brain) BCI.

The exploration of patterns of neural activity for understanding social networking and collaboration in dyads and small teams [201-203] was discussed by Chris Berka. Neuroscience methods can supplement traditional psychometric team performance approaches by unobtrusively measuring team dynamics and quantifying a team’s cognitive and emotional states in real time. Davide Valeriani discussed collaborative brain-computer interfaces for enhancing group decision making. He presented a collaborative BCI that decoded the decision confidence of multiple individuals undertaking a decision-making task from a set of neural (EEG), physiological, and behavioral measures [204, 205]. These confidence estimates can be used to weight individual responses and obtain group decisions in which the impact of over- and under-confident behaviors is reduced. Fabien Lotte presented experimental results showing that studies with multiple users can improve BCI effectiveness: training motor-imagery BCI users in pairs rather than alone can help them reach better BCI control [206], and for passive BCIs, relying on multiple users can produce more robust neuro-ergonomic evaluations [207]. Jan van Erp reported on experiments with 20 coupled brains in artistic expression, with topics ranging from measuring collective emotional experience in reading [208, 209] to real-time alteration of electronic dance music. He also explained the potential of hyperscanning techniques for understanding group dynamics and bullying behavior in the classroom. Finally, Tim Mullen discussed the development of scalable technologies enabling real-time, simultaneous acquisition, storage, and analysis/decoding of synchronized multi-modal neurophysiological sensor data from arbitrarily sized and physically distributed groups of individuals.
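The confidence-weighting idea described above can be sketched in a few lines: each member's vote is scaled by a decoded confidence before the group decision is taken. This is a minimal illustration only; the systems in [204, 205] derive confidence from richer neural, physiological, and behavioral features with learned weights, and the function name here is hypothetical.

```python
def group_decision(decisions, confidences):
    """Weighted-majority group decision: each member's vote (+1 or -1)
    is scaled by an estimated confidence in [0, 1] before summing."""
    score = sum(d * c for d, c in zip(decisions, confidences))
    return 1 if score >= 0 else -1

# Two low-confidence votes for -1 are outweighed by one high-confidence vote for +1.
print(group_decision([-1, -1, 1], [0.2, 0.3, 0.9]))  # -> 1
```

With equal confidences this reduces to an ordinary majority vote; the benefit appears precisely when members' reliabilities differ.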

In a discussion session, organizers and audience members reflected on the future of multi-user BCIs, in particular the use of BCI platforms that allow real-time, simultaneous processing and analysis of neural and other data from very large numbers of individuals. This requires the integration and synchronization of multiple sources of not only BCI but also physiological and behavioral information. The main conclusion of the workshop was that further progress in multi-user BCI research requires collaboration among BCI researchers, neuroscientists, behavioral scientists, and human-computer interaction researchers.

Translational Efforts for BCI

Examining the Ethical Assumptions About Neural Engineering and BCI Development

Organizer: Paul Tubig (University of Washington)

Additional Presenters: Judy Illes (University of British Columbia), Jonathan Wolpaw (National Center for Adaptive Neurotechnologies), Jane Huggins (University of Michigan), and Laura Specker Sullivan (College of Charleston)

This workshop was intended to help neural engineers uncover, articulate, and analyze assumptions and values underlying neurotechnology development. Starting from the level of the individual researcher, this workshop aimed to foster sensitivity and responsiveness among researchers to the ethical implications of their work. The workshop used a dialogue tool, composed of two activities, developed by the Neuroethics Thrust at the University of Washington’s Center for Neurotechnology (CNT). The activities are: (1) a brief survey to identify researchers’ ethical values related to BCI research, and (2) a facilitated dialogue with researchers about their survey answers. The tool is designed to help engineers and researchers identify the ethical assumptions that underlie their work and examine these assumptions together as a research community. This discussion encourages consideration of alternative perspectives and analysis of whether the assumptions are shared and warranted. The goal is not to didactically teach participants (e.g., “X is right; Y is ethically problematic”), but rather to use philosophical discussion to get participants to think collectively and purposefully about their beliefs and values.

The workshop and dialogue tool were designed in response to the report from the President’s Commission for the Study of Bioethical Issues on the need to integrate ethical considerations into the neural engineering context [210]. There are serious moral concerns about effects of BCIs on agency, identity, privacy and on the distribution of its benefits and burdens [211-213]. The possibility that neural modifiers can be developed in ways that introduce serious harms or vulnerabilities makes it important for neural engineering research to be properly sensitive and responsive to ethical considerations. One way to ensure ethically acceptable disciplinary assumptions and norms in BCI research is to promote ethical reflexivity and engagement among the researchers themselves. However, this requires accessible “tools” to facilitate ethics engagement and the integration of ethics and neural engineering research.

The workshop revealed shared ethical perspectives, such as the desire for ethical considerations and the interests of the potential end users to influence neural engineering research. It also revealed notable disagreements, especially regarding which values, such as autonomy and individual wellbeing, should shape the direction of neural engineering research and whether employing BCIs for cognitive enhancement purposes is morally permissible.

The workshop reinforced the benefit of intentional interactions between researchers and ethicists to uncover and discuss the underlying values that might be shaping the research trajectory. The moral perspectives of researchers should contribute to the moral ends of BCI research. But the workshop also highlighted the limitations of such activities and a possible underappreciation of the ethical dimensions of BCI development. One question raised was how to translate workshop results into changes in research practices and norms that reflect the values and ethical commitments that researchers explicitly endorse. Another was the extent of the impact of ethical engagement activities beyond the relatively small number of participants engaged in them. To truly reduce the likelihood of ethical pitfalls, ethical considerations must play a more prominent and integrated role in neuroscience research.

From the Lab into the Wild: Shaping Methods and Technologies for Large-Scale BCI Research

Organizers: Matthias R. Hohmann (Max Planck Institute for Intelligent Systems), Vinay Jayaram (Max Planck Institute for Intelligent Systems), Moritz Grosse-Wentrup (Research Group Neuroinformatics, University of Vienna, Austria)

Additional Presenters: Tim Mullen (Intheon), Conor Russomanno (OpenBCI), Andrew J. Keller (Neurosity), Alexandre Barachant (CTRL-Labs)

BCIs are intended to help people with disabilities communicate with their family, friends, and caregivers. However, developing BCIs for use in the homes of patients and consumers requires large-scale research studies “in the wild,” where the BCIs will ultimately have to function. Such studies are challenging due to the lack of concrete tools to create user-friendly systems that allow for reproducible science and hypothesis testing. Both hardware and software are rarely designed to be used by caregivers or patients directly [214]. Further, the medical-grade EEG devices used in BCI development cost between $30,000 and $60,000. They are rarely reimbursed by health insurance, and most patients cannot afford to purchase them privately. Thus, cost and usability cause most research studies to be conducted in a controlled lab environment, limiting their exposure to the environmental influences they must ultimately be able to withstand.

In this workshop we highlighted novel hardware, software, and benchmarking tools to enable robust communication and control for patients and to break the limitations of current research in terms of sample size and longitudinal data.

In terms of hardware, OpenBCI offers a variety of tools to create a low-cost EEG system, including Bluetooth-enabled amplifiers, electrodes, and customizable headwear. The technology has been successfully utilized in previous studies [215-217] and could provide a basis for open and easy-to-use technology for at-home studies and BCI development.

In terms of software, an iOS prototype application for unsupervised, longitudinal EEG studies was presented [218]. The application guides users through the fitting procedure of the EEG headset and automatically encrypts and uploads recorded data to a remote server. First results indicated that the application can be used outside of a laboratory, without the need for external guidance. Further, variants of automated fitting assistance for subjects, based on impedance and signal variance, were discussed.

Tim Mullen presented progress on Neuroscale, a cloud-based solution by Intheon (https://www.intheon.io) for real-time signal processing. It is a middleware layer that allows custom signal processing pipelines to be designed and deployed on a server. Mobile applications can stream time-series data to the server-based pipeline and receive the processed time series as output. The platform is agnostic to both the hardware and the type of data, which allows it to be used in a variety of scenarios.

Lastly, in terms of benchmarking, the “mother of all BCI benchmarks” (MOABB) was presented [58]. MOABB is an open-source platform for comparing new signal processing algorithms across different datasets. Its goal is to tackle the problems of small datasets and poor reproducibility by providing a unified framework for validating novel algorithms.

The audience came from a variety of backgrounds, including neurosurgery, computer science, and psychology, bringing together diverse opinions on the presented technology and future developments. Technical and privacy concerns were named as barriers to BCI development. We concluded that work is needed on ergonomic headset design and on non-technical aspects such as user experience and ethics. Where possible, software should be open-source and compatible with other major analysis tools, while maintaining high customizability.

Eye Tracking, Vision, and BCI

Organizer: Melanie Fried-Oken

Additional Presenters: Brandon Eddy (Oregon Health & Science University); Deniz Erdogmus (Northeastern University); Melanie Fried-Oken (Oregon Health & Science University); Michelle Kinsella (Oregon Health & Science University); Boyla Mainsah (Duke University); Betts Peters (Oregon Health & Science University); and Bruce Wojciechowski (Northwest Eye Care Professionals).

Both BCI and eye tracking have been investigated as assistive technology access methods for people with severe speech and physical impairment (SSPI), separately and within hybrid systems. All eye tracking systems, and many BCI systems, involve visual interfaces requiring certain visual and oculomotor skills, which are often impaired in people with SSPI. We reviewed common visual and oculomotor impairments reported for this population, demonstrated how such impairments might affect use of a visual interface, discussed relevant clinical interventions, and explored possible design adaptations [219-226].

Recent research from Duke University demonstrated the fusion of EEG and eye tracking to improve P300 Speller performance. Three hybrid system configurations with different gaze variances (estimated, medium, and high) were compared to an EEG-only condition during a typing task with 16 healthy participants. Bit rate and spelling speed significantly increased for all hybrid configurations compared to EEG only, with higher variance configurations providing more robustness during training-test data mismatch. Inter-participant variability indicated that hybrid system parameters can potentially be tuned to maximize communication efficiency [227].
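One common way to combine such evidence (a minimal sketch only, assuming Gaussian gaze noise; the function and its parameters are hypothetical and not the actual Duke implementation) is to treat the gaze estimate as a spatial prior over on-screen targets and add its log-likelihood to the EEG classifier's per-target scores:

```python
def fuse_evidence(eeg_scores, gaze_point, target_locations, gaze_sigma=1.0):
    """Pick the most likely target by adding a Gaussian gaze log-likelihood
    to per-target EEG classifier scores (assumed to be log-likelihoods).
    A larger gaze_sigma down-weights gaze, modeling higher gaze variance."""
    fused = []
    for score, (tx, ty) in zip(eeg_scores, target_locations):
        d2 = (gaze_point[0] - tx) ** 2 + (gaze_point[1] - ty) ** 2
        fused.append(score - d2 / (2 * gaze_sigma ** 2))  # log of Gaussian, up to a constant
    return max(range(len(fused)), key=fused.__getitem__)

# EEG weakly favors the second target, but gaze rests on the first.
targets = [(0.0, 0.0), (1.0, 0.0)]
print(fuse_evidence([0.0, 0.2], (0.0, 0.0), targets, gaze_sigma=0.5))  # -> 0
```

Tuning `gaze_sigma` per user corresponds to the gaze-variance configurations compared in the study: a high assumed variance makes the system lean on EEG evidence, which is what lends robustness under training-test mismatch.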

OHSU researchers presented a study on the effects of simulated visual and oculomotor impairments on SSVEP BCI typing performance. Thirty-eight healthy participants typed with the SSVEP Shuffle Speller [228] under three conditions: unimpaired vision, simulated acuity impairment (20/200 cataract simulation goggles), and simulated oculomotor impairment (trials rejected if user’s gaze left a specified area). All participants could type under the unimpaired and reduced acuity conditions, with no significant difference in accuracy or speed. Only six participants could type under the oculomotor condition, with a significant reduction in speed but not in accuracy. Results indicate that acuity impairment up to 20/200 is not an obstacle to use of the SSVEP Shuffle Speller, and that a small percentage of people may be able to use the system even with oculomotor impairment [229].

A study at Northeastern University compared a code-VEP-based BCI to an eye tracking interface for navigating a cursor through a maze, finding that task completion was significantly faster with code-VEP for nine of ten participants, with no significant difference in accuracy. Eight of ten participants preferred code-VEP over eye tracking [230]. Another study compared code-VEP stimuli with different checkerboard color combinations, m-sequence lengths, and presentation rates for six participants. Classification accuracy was higher with red/green checkerboards than blue/yellow or black/white, and with mid-range sequence lengths (63 bits) and presentation rates (64 bps). However, participants found the longer sequences (127 bits) and faster presentation rate (110 bps) to be more comfortable.
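The m-sequences behind code-VEP stimuli are maximal-length binary sequences generated by a linear-feedback shift register (LFSR): a 6-stage register yields the 63-bit codes mentioned above and a 7-stage register the 127-bit ones. A minimal sketch (the function name and tap convention are illustrative, not from the cited studies):

```python
def m_sequence(taps, nbits, seed=1):
    """One period of a maximal-length (m-)sequence from a Fibonacci LFSR.
    taps: 1-indexed tap numbers (e.g. [6, 5]); nbits: register length;
    seed: any nonzero initial state."""
    state = seed
    out = []
    for _ in range((1 << nbits) - 1):           # an m-sequence has period 2^n - 1
        out.append(state & 1)                   # output the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1    # XOR the tapped bits
        state = (state >> 1) | (fb << (nbits - 1))
    return out

seq = m_sequence([6, 5], 6)   # 63-bit code; taps (7, 6) on 7 bits give 127-bit codes
print(len(seq), sum(seq))     # 63 bits, of which 2^(n-1) = 32 are ones
```

The taps must correspond to a primitive feedback polynomial for the sequence to be maximal; m-sequences are attractive for code-VEP because their near-ideal autocorrelation lets shifted copies of one code tag different targets.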

Visual BCI, eye tracking, and hybrid systems are important alternative access options for people with SSPI. It is vital to design customizable systems that can adapt to users’ visual/perceptual and oculomotor abilities, as these are often impaired among target BCI users. Both signal processing and interface design are promising areas for improvement, and hybrid systems may offer a robust and adaptable source of control data.

Clinical Applications of Brain-Computer Interfaces in Neurorehabilitation

Organizers: An H. Do (University of California, Irvine), Marc Slutzky (Northwestern University), Zoran Nenadic (University of California, Irvine)

Additional Presenters: Evgeniy Kreydin (University of Southern California), Charles Liu (University of Southern California), Karunesh Ganguly (University of California, San Francisco), Spencer Kellis (California Institute of Technology)

Clinical neurorehabilitation for neurological injuries including stroke, spinal cord injury (SCI), and traumatic brain injury (TBI) is severely limited by a lack of satisfactory means to restore lost neurological functions. BCIs have increasingly been studied as one such means and may either act as neuroprostheses to replace the lost motor function in those with complete paralysis or as tools that facilitate neural repair mechanisms to improve residual motor functions in patients with partial paralysis. There are many preclinical studies in humans and early phase clinical trials of BCI systems [142, 153, 155, 231-233], but there are no Phase III/pivotal trials to demonstrate safety, efficacy at reducing disability, and reliability for these systems. Consequently, BCI systems are not yet used in mainstream rehabilitation.

This workshop proposed that the most appropriate strategy for moving BCI into the clinical realm first involves identifying the most critical unmet needs in neurorehabilitation through discussion with stakeholders, e.g. patients, physicians, public health officials, and insurance companies. This would be followed by careful scientific/engineering consideration of which unmet needs could potentially best be addressed by BCI technology. The role of other treatment strategies should be critically evaluated, e.g. non-BCI devices, stem cell treatments, or neuromodulation approaches, as existing or emerging treatments may be complementary to or even supersede the need for BCIs for certain unmet needs.

Such an approach would lead to more efficient development of BCI technologies. Specifically, situations where BCIs are implemented in a manner that fails to garner patient or physician interest will be avoided. Furthermore, situations where BCIs are developed for conditions where there are more practical or effective solutions can also be avoided. In addition, careful review of unmet needs may reveal potential areas that are currently ignored by the BCI research community. For example, SCI patients identified restoration of bladder function as a top priority [234]. Motivated by this sentiment, it may be possible to develop a bi-directional BCI system where artificial bladder fullness sensation is generated via electrical stimulation of appropriate brain areas, and the intention to urinate is decoded and subsequently actuated by a bladder prosthesis [235][8]. Such a system would likely find significant support amongst the patient population and may also more easily leverage funding.

The workshop identified that the most successful clinical BCI projects all involved close collaboration between teams of physician-scientists, engineers, and neuroscientists. This collaboration model ensures that BCI designs are motivated to solve significant unmet clinical needs and that the requisite basic science studies are ethically conducted in human models to the extent possible. The medical device and pharmaceutical industries are already familiar with this model. However, academia will need to facilitate similar success in research institutions by acknowledging the importance of team science and supporting faculty who are not strictly single, independent researchers. Ultimately, the indicators of success will likely include investor interest from medical device companies, FDA approvals, and reimbursement plans from healthcare insurance companies.

User-Centered Design in BCI Development: A Broad Perspective

Organizer: Elmar Pels (University Utrecht)

Additional Presenters: Katya Hill (University of Pittsburgh); Andrea Kübler (University of Würzburg); Erik Aarnoutse (University Medical Center Utrecht); Ray Grott (San Francisco State University and former president of the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA))

In recent years, BCIs have left the laboratory and entered the homes of end-users [22, 85, 236] who rely on assistive technology (AT) for communication, mobility, or entertainment on a daily basis [237]. This daily use gives researchers many opportunities to test their systems “in the wild” and gather valuable data. However, it also obliges researchers to think about practical use of the system, comfort, and ease of setup [238]. These considerations are of particular importance when the BCI becomes a key factor in quality-of-life, as it is for paralyzed patients who rely on BCI for communication. To enhance both usability and effectiveness and to reduce the risk of technology abandonment, a user-centered design (UCD) approach is pivotal. During this workshop, speakers from various disciplines who study different kinds of BCIs presented their insights into user-centered design and end-user involvement.

Qualitative research provides a first step toward defining the needs of end-users [239-241]. Katya Hill presented preliminary data emphasizing the value of focus groups for identifying these needs. In contrast with internet-based surveys, focus groups allow a wider range of opinions to be explored. Further, the discussion inherent in focus groups enables participants to reach consensus, yielding higher ratings of agreement on questions and making the data more valuable for decision-making.

Andrea Kübler presented Brain Painting, one of the first BCI applications designed to improve quality of life through artistic expression. Based on a P300 BCI, Brain Painting was developed in an iterative process between users and her research team, and allows users to paint on a computer screen by selecting icons in a matrix [242]. In 2011, the software was brought into patients’ homes, where both patients and controls were shown to enjoy Brain Painting [243].

Erik Aarnoutse presented another example of an iterative process between an end-user and investigators leading to a user-centered BCI. In the Utrecht Neuroprosthesis study [22], special software was developed to be controlled by a click-based, implanted BCI. This software included not only spelling but also various novel, user-tailored features. Specific user needs included self-initiated calibration [244] and day-and-night caregiver-call functionality available throughout the entire software [245].

Ray Grott, the former president of the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA), presented on the challenges that UCD poses for AT developers. Do we actually involve and engage end-users in research as participants helping to solve the puzzle, rather than using them as subjects in our experiments? Do we facilitate their contributions? How do we address the concerns, struggles, and initiatives defined by the end-users? As BCI researchers, we must always remember who the end-user is and involve the end-user throughout the entire research process, including involvement in funding decisions and in the design, evaluation, and execution of research [246].

Towards the Elusive Killer App for BCIs

Organizer: Brendan Allison (UC San Diego)

Additional Presenters: Jing Jin (East China University of Science and Technology); Angela Vujic, (Georgia Tech BrainLab); Martin Walchshofer (g.tec)

Companies, universities, and other groups have been working toward a BCI “Killer App” for decades. A “Killer App” is that intuitive, engaging application that entices everyone to use a new technology, whether for business or entertainment. Facebook, Elon Musk, and other high-profile companies and entities have “Big BCI” projects aimed at mainstream users, which should greatly increase the money and effort spent on mainstream BCIs. Yet, despite major improvements such as wireless amplifiers, dry electrodes, friendlier software, and improved computing power, nobody has developed a BCI that appeals to most mainstream users.

Brendan Allison opened the workshop with review and discussion of ongoing challenges to translating BCI to mainstream (not clinical) populations [247-250]. These include the need for more transparent, wearable sensors and new ways to incorporate the capabilities of modern sensing systems into mainstream apps.

Jing Jin and his colleagues are working to make BCIs more appealing and usable through new displays and sounds for spellers and other applications that help engage users. They presented a new approach with simple, friendly cartoon faces within the well-known “face speller” approach, as well as an auditory BCI based on natural sounds and a new display to engage users. Their group’s work also included a spelling interface based on the Chinese T9 system and interfaces that allow people to control robots and external devices [251-254].

Angela Vujic reviewed work to improve usability by incorporating implicit aspects of user state. Many groups have tried to measure arousal, valence, and related factors with neuroimaging tools to facilitate user interaction. Vujic presented her MoodLens system, an outward-facing display that allows users with ALS to show emoticons corresponding to their emotions, selected through a P300 interface, adding emotional expression to BCI use. More recent work with the “gut-BCI”, which non-invasively records electrical activity of the enteric nervous system (ENS) or “gut-brain”, further explores ways to incorporate visceral activity within user interaction [255, 256].

Martin Walchshofer used examples from g.tec to describe the commercial trend toward new hardware and software that look professional while reducing the need for assistance from experts. He showed videos and demos of new devices involving wireless amplifiers, dry electrodes, new cap designs, friendly avatars, and “hacking” tools designed for makers, students, and other users who want to easily design and modify their own BCI applications [250, 257, 258].

The workshop, like its predecessors, showed considerable progress in addressing the challenges that limit broader BCI adoption. The “Killer App” may not require new technological breakthroughs, but rather a clever way to use existing BCI capabilities within systems that many people already use.

Standards for Neurotechnologies and Brain-Machine Interfacing

Organizer: Ricardo Chavarriaga (EPFL, Switzerland)

Additional Presenters: Walter Besio (University of Rhode Island), Carole Carey (IEEE EMBC), Luigi Bianchi (University of Rome), José Contreras-Vidal (University of Houston), Christian Kothe (Intheon), Ander Ramos-Murguialday (Tecnalia; University of Tübingen)

Standardization should be a priority area, with participation from the entire BMI community, to advance the state of the art of BMI studies. The IEEE-supported group on Neurotechnologies for BMI [259] organized this workshop with short presentations from invited speakers, followed by a group discussion on relevant topics for standardization chosen by the participants.

Invited talks highlighted the breadth of BMI topics that would benefit from standardization. Walter Besio presented on standards and regulation of non-invasive EEG, noting that even the most basic components of EEG, the sensors, are not standardized. Further, laboratory practices are not the same as the practices needed for product development, raising the question of how to make a seamless transition between these two stages. Christian Kothe talked about efforts to develop tools for interoperability across BMI systems. He presented the case of Hierarchical Event Descriptors, a set of descriptors for tagging EEG experimental events that provides a means to facilitate more efficient data sharing [260]. Ander Ramos-Murguialday discussed clinical translation, focusing on the assessment of clinical BMIs. He called for more large, randomized controlled trials and pointed out that other domains, such as therapeutic rehabilitation and assistive technologies, face similar challenges. The lack of benchmarking techniques is another area in great need of standardization. José Contreras-Vidal presented the state of the art in exoskeletons [261, 262], highlighting how benchmarking and risk analysis become more difficult when such systems are combined with BMI systems; the combination raises the question of whether observed behavior should be attributed to the exoskeleton or to the user. Luigi Bianchi addressed the lack of standards for describing the specifications of BMI systems as well as the performance metrics to be used, and showed some formal attempts to tackle this issue [263-265]. Carole Carey explained the procedures for proposing and developing new standards through the IEEE Standards Association [266].

Finally, participants formed two discussion groups, one on ‘Data Annotation’ and one on ‘Benchmarking’. The Data Annotation group defined three levels of annotation: event description, study description (structure, tasks), and study-level annotations. The discussion highlighted the need to evaluate related approaches or tools that may already exist, as well as to engage all relevant stakeholders.
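
As a sketch only, the three annotation levels discussed by the Data Annotation group might be organized as nested records; the field names and tag strings below are hypothetical illustrations in the spirit of Hierarchical Event Descriptors [260], not drawn from any existing standard.

```python
# Hypothetical sketch of the three annotation levels discussed by the
# Data Annotation group. All field names and tag strings are illustrative.
annotation = {
    # Level 1: event description, e.g. HED-style tags for one stimulus type
    "event_description": {
        "event_code": 17,
        "tags": ["Sensory-event", "Visual-presentation", "Target-stimulus"],
    },
    # Level 2: study description (structure, tasks)
    "study_description": {
        "structure": ["calibration", "copy-spelling", "free-spelling"],
        "tasks": ["P300 speller"],
    },
    # Level 3: study-level annotations
    "study_annotations": {
        "population": "participants with ALS",
        "recording": "8-channel EEG",
    },
}

def tags_for(event_code, ann):
    """Return the event-level tags if the code matches, else an empty list."""
    ev = ann["event_description"]
    return ev["tags"] if ev["event_code"] == event_code else []
```

A shared structure of this kind would let tools from different groups look up event meanings uniformly, which is the interoperability goal the group identified.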

The ‘Benchmarking’ group started from the need to compare different studies and drafted a common framework for identifying the components relevant to a BCI. The framework included the type of user, the application (e.g., assistive/restorative/rehabilitation), and the system description, which covers aspects like feedback modality, control strategy, and component tasks. This discussion launched an ongoing project supported by the IEEE Standards Association for developing “Reporting Standards for In Vivo Neural Interface Research” [267], in which readers are invited to participate.
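
To make the draft framework concrete, its components could be captured in a simple record; the class and field names below are our hypothetical illustration, not part of the IEEE project [267].

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the draft benchmarking framework. Names mirror the
# components discussed in the workshop (user type, application category,
# system description) but are otherwise hypothetical.
@dataclass
class SystemDescription:
    feedback_modality: str              # e.g. visual, auditory, tactile
    control_strategy: str               # e.g. motor imagery, P300
    component_tasks: List[str] = field(default_factory=list)

@dataclass
class BCIBenchmarkEntry:
    user_type: str                      # e.g. "person with ALS"
    application: str                    # assistive / restorative / rehabilitation
    system: SystemDescription

# A study described in these terms becomes directly comparable to others.
entry = BCIBenchmarkEntry(
    user_type="person with ALS",
    application="assistive",
    system=SystemDescription(
        feedback_modality="visual",
        control_strategy="P300",
        component_tasks=["calibration", "copy spelling"],
    ),
)
```

Reporting each study in a structured form like this is what would make cross-study benchmarking tractable.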

Discussion extended well beyond the allotted time, reflecting the importance and complexity of the topics discussed. Considering the diversity and heterogeneity of BCI systems, it may be important to have standards and benchmarks that are consistent across multiple technologies. Workshop discussions confirmed the interest in, and potential benefit of, standardization for academic and industrial researchers, to advance the state of the art in the field and to help effective translation of these developments for the benefit of their intended users.

Conclusion

The growing maturity and robustness of the BCI field is shown in the diversity and depth of these workshops. BCI research and development is active at all levels from basic research to clinical translation with factors such as standardization and ethical development as an established part of these efforts. This research encompasses advances in the recording hardware, the algorithms, and the design for specific user populations, all the components necessary to create a successful BCI product.

Acknowledgements:

Overall Acknowledgements

The authors thank the National Institute on Deafness and Other Communication Disorders (NIDCD) and the National Institute of Neurological Disorders and Stroke (NINDS) of the National Institutes of Health (NIH) of the United States, and the National Science Foundation (NSF), for travel support assisting student attendance at the BCI Meeting. The opinions expressed are those of the authors and do not reflect the views of NIDCD, NINDS, NIH, NSF, or any other funding agency that may have supported work presented at the BCI Meeting or in the individual workshops.

The workshop organizers thank the many presenters for their excellent work as well as the workshop attendees for the stimulating discussion. They also thank the various funding sources that supported the research of presenters and attendees. The workshop organizers also thank the members of the Program Committee for the Seventh International Brain-Computer Interface Meeting: Nick Ramsey, Brendan Z. Allison, Chuck Anderson, Jennifer Collinger, Shangkai Gao, Christoph Guger, Jane Huggins, Andrea Kübler, Donatella Mattia, José del R. Millán, Marc Slutzky, and Jonathan Wolpaw.

Individual Workshop Acknowledgements

The workshop BCIs for Assessment of Locked-in and Patients with Disorders of Consciousness (DOC) was supported by the H2020 project ComaWare.

The workshop BCIs for Stroke Rehabilitation was supported by the H2020 project recoveriX.

The workshop ECoG for Control and Mapping was supported by the Van Wagenen Foundation and The EC project RapidMaps.

The organizers of the workshop Examining the Ethical Assumptions About Neural Engineering and BCI Development thank all of the presenters for their invaluable contributions. They also thank the CNT Neuroethics Thrust, including Sara Goering, Eran Klein, Tim Brown, Michelle Pham, Marion Boulicault, Erika Versalovic, Sierra Simmerman, and Hannah Martens, for their continual support and constructive feedback on the dialogue tool. Thanks to Laura Specker Sullivan, Marion Boulicault, and Joseph Stramondo for their collaboration on prior iterations of the dialogue tool and for facilitating past ethics roundtables.

The workshop Eye Tracking, Vision, and BCI was supported by the National Institutes of Health under grant #2R01DC009834-06A1 and the National Institute on Disability, Independent Living, and Rehabilitation Research under grant #90RE5017.

The workshop Natural Language Processing & BCI would like to acknowledge support from NIDCD grant #R01DC00983410.

The workshop Standards for Neurotechnologies and Brain-Machine Interfacing would like to thank all the participants of the workshop. This work was supported by the IEEE Brain Initiative and the IEEE Standards Association Industry Connections Program.

The workshop Tools for Establishing Neuroadaptive Technology Through Passive BCIs was supported by the Society for Neuroadaptive Technology. (www.neuroadaptive.org)

The workshop Turning negative into positives! Exploiting negative results in Brain-Machine Interface research was supported by the French National Research Agency with the REBEL project (grant ANR-15-CE23-0013-01), the European Research Council with the Brain-Conquest project (grant ERC-2016-STG-714567), the Inria Project Lab BCI-LIFT as well as by the EPFL/Inria International Lab. The organizers would also like to thank all the workshop participants for the stimulating and inspiring discussions.

The workshop Unsupervised Learning for BCI gratefully acknowledges support from the BrainLinks-BrainTools Cluster of Excellence (grant EXC 1086) and the bwHPC initiative (grant INST 39/963-1 FUGG).

The workshop Clinical Applications of Brain-Computer Interfaces in Neurorehabilitation was supported by NIH grant R01NS094748.

Contributor Information

Jane E. Huggins, Department of Physical Medicine and Rehabilitation, Department of Biomedical Engineering, Neuroscience Graduate Program, University of Michigan, Ann Arbor, Michigan, United States, 325 East Eisenhower, Room 3017; Ann Arbor, Michigan 48108-5744

Christoph Guger, g.tec medical engineering GmbH/Guger Technologies OG, Austria, Sierningstrasse 14, 4521 Schiedlberg, Austria.

Erik Aarnoutse, UMC Utrecht Brain Center, Department of Neurology & Neurosurgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands.

Brendan Allison, Dept. of Cognitive Science, Mail Code 0515, University of California at San Diego, La Jolla, United States.

Charles W. Anderson, Department of Computer Science, Molecular, Cellular and Integrative Neuroscience Program, Colorado State University, Fort Collins, CO 80523.

Steven Bedrick, Center for Spoken Language Understanding, Oregon Health & Science University, Portland, OR 97239.

Walter Besio, Department of Electrical, Computer, & Biomedical Engineering and Interdisciplinary Neuroscience Program, University of Rhode Island, Kingston, Rhode Island, USA, CREmedical Corp. Kingston, Rhode Island, USA.

Ricardo Chavarriaga, Defitech Chair in Brain-Machine Interface (CNBI), Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne - EPFL, Switzerland.

Jennifer L. Collinger, University of Pittsburgh, Department of Physical Medicine and Rehabilitation, VA Pittsburgh Healthcare System, Department of Veterans Affairs, 3520 5th Ave, Pittsburgh, PA, 15213.

An H. Do, UC Irvine Brain Computer Interface Lab, Department of Neurology, University of California, Irvine

Christian Herff, School of Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands.

Matthias Hohmann, Max Planck Institute for Intelligent Systems, Department for Empirical Inference, Max-Planck-Ring 4, 72074 Tübingen, Germany.

Michelle Kinsella, Oregon Health & Science University, Institute on Development & Disability, 707 SW Gaines St, #1290, Portland, OR 97239.

Kyuhwa Lee, Swiss Federal Institute of Technology in Lausanne-EPFL.

Fabien Lotte, Inria Bordeaux Sud-Ouest, LaBRI (Univ. Bordeaux/CNRS/Bordeaux INP), 200 avenue de la vieille tour, 33405, Talence Cedex, France.

Gernot Müller-Putz, Graz University of Technology.

Anton Nijholt, Faculty EEMCS, University of Twente, Enschede, The Netherlands.

Elmar Pels, UMC Utrecht Brain Center, Department of Neurology & Neurosurgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands.

Betts Peters, Oregon Health & Science University, Institute on Development & Disability, 707 SW Gaines St, #1290, Portland, OR 97239.

Felix Putze, University of Bremen, Germany, Cognitive Systems Lab, University of Bremen, Enrique-Schmidt-Straße 5 (Cartesium), 28359 Bremen.

Rüdiger Rupp, Spinal Cord Injury Center, Heidelberg University Hospital.

Gerwin Schalk, National Center for Adaptive Neurotechnologies, Wadsworth Center, NYS Dept. of Health, Dept. of Neurology, Albany Medical College, Dept. of Biomed. Sci., State Univ. of New York at Albany, Center for Medical Sciences 2003, 150 New Scotland Avenue, Albany, New York 12208.

Stephanie Scott, Department of Media Communications, Colorado State University, Fort Collins, CO 80523.

Michael Tangermann, Brain State Decoding Lab, Cluster of Excellence BrainLinks-BrainTools, Computer Science Dept., University of Freiburg, Germany, Autonomous Intelligent Systems Lab, Computer Science Dept., University of Freiburg, Germany.

Paul Tubig, Department of Philosophy, Center for Neurotechnology, University of Washington, Savery Hall, Room 361, Seattle, WA 98195.

Thorsten Zander, Team PhyPA, Biological Psychology and Neuroergonomics, Technische Universität Berlin, Berlin, Germany, 7 Zander Laboratories B.V., Amsterdam, The Netherlands.

Bibliography

  • [1].Wolpaw JR et al. , “Brain-computer interface technology: a review of the first international meeting,” IEEE transactions on rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society, vol. 8, no. 2, pp. 164–173, 2000. [DOI] [PubMed] [Google Scholar]
  • [2].Vaughan TM et al. , “Brain-computer interface technology: a review of the Second International Meeting,” IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society, vol. 11, no. 2, pp. 94–109, 2003. [DOI] [PubMed] [Google Scholar]
  • [3].Vaughan TM and Wolpaw JR, “The Third International Meeting on Brain-Computer Interface Technology: making a difference,” IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society, vol. 14, no. 2, pp. 126–127, 2006. [PubMed] [Google Scholar]
  • [4].Vaughan TM and Wolpaw JR, “Special issue containing contributions from the Fourth International Brain-Computer Interface Meeting,” Journal of neural engineering, vol. 8, no. 2, p. 020201, 2011. [DOI] [PubMed] [Google Scholar]
  • [5].Huggins JE et al. , “Workshops of the Fifth International Brain-Computer Interface Meeting: Defining the Future,” Brain-Computer Interface Journal, vol. 1, no. 1, pp. 27–49, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [6].Huggins JE and Wolpaw JR, “Papers from the fifth international brain-computer interface meeting. Preface,” Journal of neural engineering, vol. 11, no. 3, p. 030301, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [7].Daly JJ and Huggins JE, “Brain-computer interface: current and emerging rehabilitation applications,” Archives of Physical Medicine and Rehabilitation, vol. 96, no. 3 Suppl, pp. S1–7, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [8].Huggins JE et al. , “Workshops of the Sixth International Brain-Computer Interface Meeting: brain-computer interfaces past, present, and future,” Brain Comput Interfaces (Abingdon), vol. 4, no. 1-2, pp. 3–36, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [9].Huggins JE, Müller-Putz G, and Wolpaw JR, “The Sixth International Brain-Computer Interface Meeting: Advances in Basic and Clinical Research,” (in eng), Brain Comput Interfaces (Abingdon), vol. 4, no. 1-2, pp. 1–2, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [10].Hu K, Chen C, Meng Q, Williams Z, and Xu W, “Scientific profile of brain-computer interfaces: Bibliometric analysis in a 10-year period,” (in eng), Neurosci Lett, vol. 635, pp. 61–66, December 2016. [DOI] [PubMed] [Google Scholar]
  • [11].Karakaş S and Barry RJ, “A brief historical perspective on the advent of brain oscillations in the biological and psychological disciplines,” (in eng), Neurosci Biobehav Rev, vol. 75, pp. 335–347, April 2017. [DOI] [PubMed] [Google Scholar]
  • [12].Louis EKS et al. , Electroencephalography (EEG): An Introductory Text and Atlas of Normal and Abnormal Findings in Adults, Children, and Infants, American Epilepsy Society. 2016. [PubMed]
  • [13].Nasrollaholhosseini SH, Mercier J, Fischer G, and Besio W, “Electrode-Electrolyte Interface Modeling and Impedance Characterizing of Tripolar Concentric Ring Electrode,” (in eng), IEEE Trans Biomed Eng, February 2019. [DOI] [PubMed] [Google Scholar]
  • [14].Mathewson KE, Harrison TJL, and Kizuk SAD, “High and dry? Comparing active dry EEG electrodes to active and passive wet electrodes,” Psychophysiology, vol. 54, no. 1, pp. 74–82, 2017. [DOI] [PubMed] [Google Scholar]
  • [15].Koka K and Besio WG, “Improvement of spatial selectivity and decrease of mutual information of tri-polar concentric ring electrodes,” (in eng), J Neurosci Methods, vol. 165, no. 2, pp. 216–22, September 2007. [DOI] [PubMed] [Google Scholar]
  • [16].“Videos - CREmedical,” 2019.
  • [17].Rogel-Salazar G, Luna-Munguía H, Stevens KE, and Besio WG, “Transcranial focal electrical stimulation via tripolar concentric ring electrodes does not modify the short- and long-term memory formation in rats evaluated in the novel object recognition test,” (in eng), Epilepsy Behav, vol. 27, no. 1, pp. 154–8, April 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [18].McCane LM, Steele P, Mercier J, McFarland DJ, and Besio WG, “Safety of transcranial focal stimulation (TFS) via tripolar concentric ring electrodes (TCREs) in people: initial results,” in Proceedings of the SfN 2018 Annual Meeting, San Diego, 2018. [Google Scholar]
  • [19].Alzahrani S, “A Comparison of Tri-polar Concentric Ring Electrodes to Disc Electrodes For Decoding Real and Imaginary Finger Movements,” Department of Biomedical Engineering, Colorado State University, Fort Collins, CO, 2019. [Google Scholar]
  • [20].Stanslaski S et al. , “A Chronically Implantable Neural Coprocessor for Investigating the Treatment of Neurological Disorders,” (in eng), IEEE Trans Biomed Circuits Syst, vol. 12, no. 6, pp. 1230–1245, December 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [21].Shute JB et al. , “Thalamocortical network activity enables chronic tic detection in humans with Tourette syndrome,” (in eng), Neuroimage Clin, vol. 12, pp. 165–72, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [22].Vansteensel MJ et al. , “Fully Implanted Brain-Computer Interface in a Locked-In Patient with ALS,” (in eng), N Engl J Med, vol. 375, no. 21, pp. 2060–2066, November 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [23].Flesher SN et al. , “Intracortical microstimulation of human somatosensory cortex,” (in eng), Sci Transl Med, vol. 8, no. 361, p. 361ra141, October 2016. [DOI] [PubMed] [Google Scholar]
  • [24].Weiss JM, Flesher SN, Franklin R, Collinger JL, and Gaunt RA, “Artifact-free recordings in human bidirectional brain-computer interfaces,” (in eng), J Neural Eng, vol. 16, no. 1, p. 016002, February 2019. [DOI] [PubMed] [Google Scholar]
  • [25].Maiolo L, Notargiacomo A, Marrani M, Minotti A, Maita F, and Pecora A, “Ultra-Flexible Microelectrode Array Nanostructured by FIB: a Possible Route to Lower the Device Impedance,” Microelectronic Engineering, vol. 121, pp. 10–14, 2014. [Google Scholar]
  • [26].Zhou A et al. , “A wireless and artefact-free 128-channel neuromodulation device for closed-loop stimulation and recording in non-human primates,” (in eng), Nat Biomed Eng, vol. 3, no. 1, pp. 15–26, January 2019. [DOI] [PubMed] [Google Scholar]
  • [27].Zander TO, Brönstrup J, Lorenz R, and Krol LR, “Towards BCI-based implicit control in human–computer interaction,” in Advances in Physiological Computing, Fairclough SH and Gilleade K, Eds. Berlin, Germany: Springer, 2014, pp. 67–90. [Google Scholar]
  • [28].Zander TO and Kothe C, “Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general,” Journal of neural engineering, vol. 8, no. 2, p. 025005, 2011. [DOI] [PubMed] [Google Scholar]
  • [29].Zander TO, Kothe C, Jatzev S, and Gaertner M, “Enhancing human-computer interaction with input from active and passive brain-computer interfaces,” in Brain-computer Interfaces. London: Springer, 2010, pp. 181–199. [Google Scholar]
  • [30].Zander TO, Krol LR, Birbaumer NP, and Gramann K, “Neuroadaptive technology enables implicit cursor control based on medial prefrontal cortex activity,” Proceedings of the National Academy of Sciences, vol. 113, no. 52, pp. 14898–14903, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [31].Zander TO and Krol LR, “Team PhyPA: Brain-computer interfacing for everyday human-computer interaction,” Periodica Polytechnica Electrical Engineering and Computer Science, vol. 61, no. 2, pp. 209–216, 2017. [Google Scholar]
  • [32].Arico P, Borghini G, Di Flumeri G, Sciaraffa N, Colosimo A, and Babiloni F, “Passive BCI in Operational Environments: Insights, Recent Advances, and Future Trends,” (in eng), IEEE Trans Biomed Eng, vol. 64, no. 7, pp. 1431–1436, July 2017. [DOI] [PubMed] [Google Scholar]
  • [33].Krol LR and Zander TO, “Cognitive probing for automated neuroadaptation,” in The First Biannual Neuroadaptive Technology Conference, 2017, p. 22. [Google Scholar]
  • [34].Kothe C. (2014, October, 26, 2015). Lab streaming layer (LSL) [Online]. Available: https://github.com/sccn/labstreaminglayer.
  • [35].Zander TO et al. , “Evaluation of a Dry EEG System for Application of Passive Brain-Computer Interfaces in Autonomous Driving,” (in eng), Front Hum Neurosci, vol. 11, p. 78, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [36].Zander TO et al. , “A Dry EEG-System for Scientific Research and Brain-Computer Interfaces,” (in eng), Front Neurosci, vol. 5, p. 53, 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [37].Zander TO, Gaertner M, Kothe C, and Vilimek R, “Combining eye gaze input with a brain–computer interface for touchless human–computer interaction,” Intl. Journal of Human–Computer Interaction, vol. 27, no. 1, pp. 38–51, 2010. [Google Scholar]
  • [38].Krol LR, Freytag SC, and Zander TO, “Meyendtris: A hands-free, multimodal Tetris clone using eye tracking and passive BCI for intuitive neuroadaptive gaming,” in Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017: ACM, pp. 433–437. [Google Scholar]
  • [39].Schalk G, “A general framework for dynamic cortical function: the function-through-biased-oscillations (FBO) hypothesis,” (in eng), Front Hum Neurosci, vol. 9, p. 352, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Hermes D, Nguyen M, and Winawer J, “Neuronal synchrony and the relation between the blood-oxygen-level dependent response and the local field potential,” (in eng), PLoS Biol, vol. 15, no. 7, p. e2001461, July 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [41].Morrell MJ and the RNS System in Epilepsy Study Group, “Responsive cortical stimulation for the treatment of medically intractable partial epilepsy,” Neurology, vol. 77, no. 13, pp. 1295–1304, 2011. [DOI] [PubMed] [Google Scholar]
  • [42].Stanslaski S et al. , “Design and validation of a fully implantable, chronic, closed-loop neuromodulation device with concurrent sensing and stimulation,” IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society, vol. 20, no. 4, pp. 410–421, 2012. [DOI] [PubMed] [Google Scholar]
  • [43].Miller KJ, Schalk G, Hermes D, Ojemann JG, and Rao RP, “Spontaneous Decoding of the Timing and Content of Human Object Perception from Cortical Surface Recordings Reveals Complementary Information in the Event-Related Potential and Broadband Spectral Change,” (in eng), PLoS Comput Biol, vol. 12, no. 1, p. e1004660, January 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [44].Miller KJ, Schalk G, Fetz EE, den Nijs M, Ojemann JG, and Rao RP, “Cortical activity during motor execution, motor imagery, and imagery-based online feedback,” (in eng), Proc Natl Acad Sci U S A, vol. 107, no. 9, pp. 4430–5, March 2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [45].Miller KJ et al. , “Human motor cortical activity is selectively phase-entrained on underlying rhythms,” (in eng), PLoS Comput Biol, vol. 8, no. 9, p. e1002655, 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [46].de Hemptinne C et al. , “Exaggerated phase-amplitude coupling in the primary motor cortex in Parkinson disease,” (in eng), Proc Natl Acad Sci U S A, vol. 110, no. 12, pp. 4780–5, March 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [47].de Hemptinne C et al. , “Therapeutic deep brain stimulation reduces cortical phase-amplitude coupling in Parkinson’s disease,” (in eng), Nat Neurosci, vol. 18, no. 5, pp. 779–86, May 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [48].Miller KJ, denNijs M, Shenoy P, Miller JW, Rao RP, and Ojemann JG, “Real-time functional brain mapping using electrocorticography,” (in eng), Neuroimage, vol. 37, no. 2, pp. 504–7, August 2007. [DOI] [PubMed] [Google Scholar]
  • [49].Miller KJ, Abel TJ, Hebb AO, and Ojemann JG, “Rapid online language mapping with electrocorticography,” (in eng), J Neurosurg Pediatr, vol. 7, no. 5, pp. 482–90, May 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [50].Ogawa H et al. , “Clinical Impact and Implication of Real-Time Oscillation Analysis for Language Mapping,” (in eng), World Neurosurg, vol. 97, pp. 123–131, January 2017. [DOI] [PubMed] [Google Scholar]
  • [51].Brunner P et al. , “A practical procedure for real-time functional mapping of eloquent cortex using electrocorticographic signals in humans,” (in eng), Epilepsy Behav, vol. 15, no. 3, pp. 278–86, July 2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [52].Swift JR et al. , “Passive functional mapping of receptive language areas using electrocorticographic signals,” (in eng), Clin Neurophysiol, vol. 129, no. 12, pp. 2517–2524, December 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [53].Tamura Y et al. , “Passive language mapping combining real-time oscillation analysis with cortico-cortical evoked potentials for awake craniotomy,” (in eng), J Neurosurg, vol. 125, no. 6, pp. 1580–1588, December 2016. [DOI] [PubMed] [Google Scholar]
  • [54].Kamada K et al. , “Disconnection of the pathological connectome for multifocal epilepsy surgery,” (in eng), J Neurosurg, vol. 129, no. 5, pp. 1182–1194, November 2018. [DOI] [PubMed] [Google Scholar]
  • [55].Schalk G et al. , “Facephenes and rainbows: Causal evidence for functional and anatomical specificity of face and color processing in the human brain,” (in eng), Proc Natl Acad Sci U S A, vol. 114, no. 46, pp. 12285–12290, November 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [56].Fanelli D, “Negative results are disappearing from most disciplines and countries,” Scientometrics, vol. 90, no. 3, pp. 891–904, 2012. [Google Scholar]
  • [57].Knight J, “Negative results: Null and void,” (in eng), Nature, vol. 422, no. 6932, pp. 554–5, April 2003. [DOI] [PubMed] [Google Scholar]
  • [58].Jayaram V and Barachant A, “MOABB: trustworthy algorithm benchmarking for BCIs,” (in eng), J Neural Eng, vol. 15, no. 6, p. 066011, December 2018. [DOI] [PubMed] [Google Scholar]
  • [59].Lotte F et al. , “A review of classification algorithms for EEG-based brain-computer interfaces: a 10 year update,” (in eng), J Neural Eng, vol. 15, no. 3, p. 031005, June 2018. [DOI] [PubMed] [Google Scholar]
  • [60].Rimbert S, Lindig-León C, and Bougrain L, “Profiling BCI users based on contralateral activity to improve kinesthetic motor imagery detection,” presented at the 8th International IEEE/EMBS Conference on Neural Engineering (NER), Shanghai, China, 25-28 May 2017, 2017. [Google Scholar]
  • [61].Hammer EM et al. , “Psychological predictors of SMR-BCI performance,” (in eng), Biol Psychol, vol. 89, no. 1, pp. 80–6, January 2012. [DOI] [PubMed] [Google Scholar]
  • [62].Jeunet C, N’Kaoua B, and Lotte F, “Advances in user-training for mental-imagery-based BCI control: Psychological and cognitive factors and their neural correlates,” (in eng), Prog Brain Res, vol. 228, pp. 3–35, 2016. [DOI] [PubMed] [Google Scholar]
  • [63].Hohmann MR et al. , “Case series: Slowing alpha rhythm in late-stage ALS patients,” (in eng), Clin Neurophysiol, vol. 129, no. 2, pp. 406–408, February 2018. [DOI] [PubMed] [Google Scholar]
  • [64].Fomina T et al. , “Absence of EEG correlates of self-referential processing depth in ALS,” (in eng), PLoS One, vol. 12, no. 6, p. e0180136, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [65].Lindig-León C, “Multilabel classification of EEG-based combined motor imageries implemented for the 3D control of a robotic arm,” Doctoral dissertation, Université de Lorraine, 2017. [Google Scholar]
  • [66].Saavedra C, “Wavelet-based semblance methods to enhance single-trial ERP detection,” Doctoral Dissertation, Université de Lorraine, 2013. [Google Scholar]
  • [67].Thompson D, Cano R, Dhuyvetter K, and Mowla MR, “Mixed results with affective classification of frontal alpha asymmetry and Hjorth parameters,” presented at the International Brain-Computer Interface Meeting, Pacific Grove, California, USA, May 21–25, 2018. [Google Scholar]
  • [68].Perdikis S, Tonin L, Saeedi S, Schneider C, and Millan JDR, “The Cybathlon BCI race: Successful longitudinal mutual learning with two tetraplegic users,” PLoS Biol, vol. 16, no. 5, p. e2003787, May 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [69].Saeedi S, Chavarriaga R, Leeb R, and Millán JDR, “Adaptive assistance for brain-computer interfaces by online prediction of command reliability,” IEEE Computational Intelligence Magazine, vol. 11, no. 1, pp. 32–39, 2016. [Google Scholar]
  • [70].Saeedi S, Chavarriaga R, and Millan JDR, “Long-Term Stable Control of Motor-Imagery BCI by a Locked-In User Through Adaptive Assistance,” IEEE Trans Neural Syst Rehabil Eng, vol. 25, no. 4, pp. 380–391, April 2017. [DOI] [PubMed] [Google Scholar]
  • [71].Jeunet C, Debener S, Lotte F, Mattout J, Scherer R, and Zich C, “Mind the Traps! Design Guidelines for Rigorous BCI Experiments,” in Handbook of Brain–Computer Interfaces: Technological and Theoretical advances., Nam CS, Nijholt A, and Lotte F, Eds.: CRC Press, 2018, pp. 639–660. [Google Scholar]
  • [72].Speier W, Arnold C, and Pouratian N, “Integrating language models into classifiers for BCI communication: a review,” (in eng), J Neural Eng, vol. 13, no. 3, p. 031002, June 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [73].Higginbotham J, Lesher GW, Moulton BJ, and Roark B, “The Application of Natural Language Processing to Augmentative and Alternative Communication,” Assistive Technology, vol. 24, pp. 14–24, 2012. [DOI] [PubMed] [Google Scholar]
  • [74].Shannon C, “Prediction and entropy of printed English,” Bell System Technical Journal, vol. 30, pp. 51–64, 1951. [Google Scholar]
  • [75].Roark B, Fried-Oken M, and Gibbons C, “Huffman and linear scanning methods with statistical language models,” (in eng), Augment Altern Commun, vol. 31, no. 1, pp. 37–50, March 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [76].Dudy S and Bedrick S, “Compositional Language Modeling for Icon-Based Augmentative and Alternative Communication,” presented at the Workshop on Deep Learning Approaches for Low-Resource NLP, Melbourne, Australia, 19 July 2018, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [77].Fowler A, Partridge K, Chelba C, Bi X, Ouyang T, and Zhai S, “Effects of Language Modeling and Its Personalization on Touchscreen Typing Performance,” presented at the ACM Conference on Human Factors in Computing Systems (CHI), pp. 649–658, 2015. [Google Scholar]
  • [78].Levit M et al. , “Personalization of Word-Phrase-Entity Language Models,” presented at the Interspeech 2015, 2015. [Google Scholar]
  • [79].Bellegarda JR, “Statistical language model adaptation: review and perspectives,” Speech Communication, vol. 42, no. 1, pp. 93–108, 2004. [Google Scholar]
  • [80].Baase S and Henry T, A Gift of Fire: Social, Legal, and Ethical Issues for Computing Technology, 5th ed., 2018. [Google Scholar]
  • [81].Kindermans PJ, Schreuder M, Schrauwen B, Muller KR, and Tangermann M, “True zero-training brain-computer interfacing--an online study,” PloS one, vol. 9, no. 7, p. e102504, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [82].Hubner D, Verhoeven T, Schmid K, Muller KR, Tangermann M, and Kindermans PJ, “Learning from label proportions in brain-computer interfaces: Online unsupervised learning with guarantees,” PLoS One, vol. 12, no. 4, p. e0175856, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [83].Hübner D, Verhoeven T, Müller K-R, Kindermans P-J, and Tangermann M, “Unsupervised learning for brain-computer interfaces based on event-related potentials: Review and online comparison [research frontier],” IEEE Computational Intelligence Magazine, vol. 13, no. 2, pp. 66–77, 2018. [Google Scholar]
  • [84].Iturrate I, Grizou J, Omedes J, Oudeyer PY, Lopes M, and Montesano L, “Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials,” PLoS One, vol. 10, no. 7, p. e0131491, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [85].Guger C et al. , “Complete Locked-in and Locked-in Patients: Command Following Assessment and Communication with Vibro-Tactile P300 and Motor Imagery Brain-Computer Interface Tools,” (in eng), Front Neurosci, vol. 11, p. 251, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [86].Guger C et al. , “Assessing Command-Following and Communication With Vibro-Tactile P300 Brain-Computer Interface Tools in Patients With Unresponsive Wakefulness Syndrome,” (in eng), Front Neurosci, vol. 12, p. 423, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [87].Spataro R et al. , “Preserved somatosensory discrimination predicts consciousness recovery in unresponsive wakefulness syndrome,” (in eng), Clin Neurophysiol, vol. 129, no. 6, pp. 1130–1136, June 2018. [DOI] [PubMed] [Google Scholar]
  • [88].Coyle D, Stow J, McCreadie K, Sciacca N, McElligott J, and Carroll Á, “Motor Imagery BCI with Auditory Feedback as a Mechanism for Assessment and Communication in Disorders of Consciousness,” in Brain-Computer Interface Research, 2017, pp. 51–69. [Google Scholar]
  • [89].C. D, D. N, S. J, M. J, and C. A, “Answering questions in prolonged disorders of consciousness with a brain-computer interface,” presented at the 7th International Brain-Computer Interface Meeting, Pacific Grove, California, USA, 2018. [Google Scholar]
  • [90].Schettini F et al. , “P300 latency Jitter occurrence in patients with disorders of consciousness: Toward a better design for Brain Computer Interface applications,” Annu Int Conf IEEE Eng Med Biol Soc, vol. 2015, pp. 6178–6181, 2015. [DOI] [PubMed] [Google Scholar]
  • [91].Riccio A et al. , “On the Relationship Between Attention Processing and P300-Based Brain Computer Interface Control in Amyotrophic Lateral Sclerosis,” (in eng), Front Hum Neurosci, vol. 12, p. 165, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [92].Chatelle C, Spencer CA, Cash SS, Hochberg LR, and Edlow BL, “Feasibility of an EEG-based brain-computer interface in the intensive care unit,” (in eng), Clin Neurophysiol, vol. 129, no. 8, pp. 1519–1525, August 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [93].Ray S, Crone NE, Niebur E, Franaszczuk PJ, and Hsiao SS, “Neural correlates of high-gamma oscillations (60-200 Hz) in macaque local field potentials and their potential implications in electrocorticography,” (in eng), J Neurosci, vol. 28, no. 45, pp. 11526–36, November 2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [94].Crone NE et al. , “Electrocorticographic gamma activity during word production in spoken and sign language,” (in eng), Neurology, vol. 57, no. 11, pp. 2045–53, December 2001. [DOI] [PubMed] [Google Scholar]
  • [95].Chakrabarti S, Sandberg HM, Brumberg JS, and Krusienski DJ, “Progress in speech decoding from the electrocorticogram,” Biomedical Engineering Letters, vol. 5, no. 1, pp. 10–21, 2015. [Google Scholar]
  • [96].Herff C and Schultz T, “Automatic Speech Recognition from Neural Signals: A Focused Review,” Frontiers in neuroscience, vol. 10, p. 429, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [97].Schultz T, Wand M, Hueber T, Krusienski DJ, Herff C, and Brumberg JS, “Biosignal-based spoken communication: A survey,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 12, pp. 2257–2271, 2017. [Google Scholar]
  • [98].Lotte F et al. , “Electrocorticographic representations of segmental features in continuous speech,” Frontiers in human neuroscience, vol. 9, p. 97, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [99].Brumberg JS et al. , “Spatio-Temporal Progression of Cortical Activity Related to Continuous Overt and Covert Speech Production in a Reading Task,” (in eng), PLoS One, vol. 11, no. 11, p. e0166872, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [100].Bouchard KE et al. , “High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings,” PloS one, vol. 11, no. 3, p. e0151327, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [101].Chartier J, Anumanchipalli GK, Johnson K, and Chang EF, “Encoding of Articulatory Kinematic Trajectories in Human Speech Sensorimotor Cortex,” (in eng), Neuron, vol. 98, no. 5, pp. 1042–1054.e4, June 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [102].Fabre D, Hueber T, Girin L, Alameda-Pineda X, and Badin P, “Automatic animation of an articulatory tongue model from ultrasound images of the vocal tract,” Speech Communication, vol. 93, pp. 63–75, 2017. [Google Scholar]
  • [103].Bocquelet F, Hueber T, Girin L, Chabardès S, and Yvert B, “Key considerations in designing a speech brain-computer interface,” (in eng), J Physiol Paris, vol. 110, no. 4 Pt A, pp. 392–401, November 2016. [DOI] [PubMed] [Google Scholar]
  • [104].Mugler EM et al. , “Direct classification of all American English phonemes using signals from functional speech motor cortex,” Journal of neural engineering, vol. 11, no. 3, p. 035015, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [105].Salari E, Freudenburg ZV, Vansteensel MJ, and Ramsey NF, “Repeated Vowel Production Affects Features of Neural Activity in Sensorimotor Cortex,” (in eng), Brain Topogr, vol. 32, no. 1, pp. 97–110, January 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [106].Salari E, Freudenburg ZV, Vansteensel MJ, and Ramsey NF, “The influence of prior pronunciations on sensorimotor cortex activity patterns during vowel production,” (in eng), J Neural Eng, vol. 15, no. 6, p. 066025, December 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [107].Herff C et al. , “Brain-to-text: decoding spoken phrases from phone representations in the brain,” Frontiers in neuroscience, vol. 9, p. 217, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [108].Herff C, Johnson G, Diener L, Shih J, Krusienski D, and Schultz T, “Towards direct speech synthesis from ECoG: A pilot study,” (in eng), Conf Proc IEEE Eng Med Biol Soc, vol. 2016, pp. 1540–1543, August 2016. [DOI] [PubMed] [Google Scholar]
  • [109].Angrick M et al. , “Speech synthesis from ECoG using densely connected 3D convolutional neural networks,” (in eng), J Neural Eng, vol. 16, no. 3, p. 036019, March 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [110].Glanz Iljina O et al. , “Real-life speech production and perception have a shared premotor-cortical substrate,” (in eng), Sci Rep, vol. 8, no. 1, p. 8898, June 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [111].Schirrmeister RT et al. , “Deep learning with convolutional neural networks for EEG decoding and visualization,” (in eng), Hum Brain Mapp, vol. 38, no. 11, pp. 5391–5420, November 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [112].O’Sullivan JA et al. , “Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG,” Cerebral cortex (New York, N.Y.: 1991), vol. 25, no. 7, pp. 1697–1706, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [113].O’Sullivan J et al. , “Neural decoding of attentional selection in multi-speaker environments without access to clean sources,” (in eng), J Neural Eng, vol. 14, no. 5, p. 056001, October 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [114].Riès SK et al. , “Spatiotemporal dynamics of word retrieval in speech production revealed by cortical high-frequency band activity,” (in eng), Proc Natl Acad Sci U S A, vol. 114, no. 23, pp. E4530–E4538, June 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [115].Bartels J et al. , “Neurotrophic electrode: method of assembly and implantation into human motor speech cortex,” (in eng), J Neurosci Methods, vol. 174, no. 2, pp. 168–76, September 2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [116].Guenther FH et al. , “A wireless brain-machine interface for real-time speech synthesis,” (in eng), PLoS One, vol. 4, no. 12, p. e8218, December 2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [117].Brumberg JS, Wright EJ, Andreasen DS, Guenther FH, and Kennedy PR, “Classification of intended phoneme production from chronic intracortical microelectrode recordings in speech-motor cortex,” (in eng), Front Neurosci, vol. 5, p. 65, 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [118].Rosenbaum P et al. , “A report: the definition and classification of cerebral palsy April 2006,” Developmental medicine and child neurology.Supplement, vol. 109, pp. 8–14, 2007. [PubMed] [Google Scholar]
  • [119].Parkes J, Hill N, Platt MJ, and Donnelly C, “Oromotor dysfunction and communication impairments in children with cerebral palsy: a register study,” Developmental medicine and child neurology, vol. 52, no. 12, pp. 1113–1119, 2010. [DOI] [PubMed] [Google Scholar]
  • [120].Dunbar M and Kirton A, “Perinatal stroke: mechanisms, management, and outcomes of early cerebrovascular brain injury,” (in eng), Lancet Child Adolesc Health, vol. 2, no. 9, pp. 666–676, September 2018. [DOI] [PubMed] [Google Scholar]
  • [121].Cho W et al. , “Hemiparetic Stroke Rehabilitation Using Avatar and Electrical Stimulation Based on Non-invasive Brain Computer Interface,” International Journal of Physical Medicine and Rehabilitation, vol. 5, 2017. [Google Scholar]
  • [122].Irimia DC, Ortner R, Poboroniuc M, Ignat B, and Guger C, “High classification accuracy of a motor imagery based brain-computer interface for stroke rehabilitation training,” Frontiers in Robotics and AI, vol. 5, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [123].Biasiucci A et al. , “Brain-actuated functional electrical stimulation elicits lasting arm motor recovery after stroke,” (in eng), Nat Commun, vol. 9, no. 1, p. 2421, June 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [124].Young BM et al. , “Brain-Computer Interface Training after Stroke Affects Patterns of Brain-Behavior Relationships in Corticospinal Motor Fibers,” (in eng), Front Hum Neurosci, vol. 10, p. 457, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [125].Zickler C et al. , “BCI applications for people with disabilities: defining user needs and user requirements,” Assistive technology from adapted equipment to inclusive environments, AAATE, vol. 25, pp. 185–189, 2009. [Google Scholar]
  • [126].Müller-Putz GR et al. , “MoreGrasp: Restoration of upper limb function in individuals with high spinal cord injury by multimodal neuroprostheses for interaction in daily activities,” presented at the 7th Graz Brain-Computer Interface Conference, Graz, Austria, 2017. [Google Scholar]
  • [127].Müller-Putz GR, Scherer R, Pfurtscheller G, and Rupp R, “EEG-based neuroprosthesis control: a step towards clinical practice,” (in eng), Neurosci Lett, vol. 382, no. 1-2, pp. 169–74, July 1-8 2005. [DOI] [PubMed] [Google Scholar]
  • [128].Pfurtscheller G, Graimann B, Huggins J, Levine S, and Schuh L, “Spatiotemporal patterns of beta desynchronization and gamma synchronization in corticographic data during self-paced movement,” Clinical Neurophysiology, vol. 114, no. 7, pp. 1226–1236, July 2003. [DOI] [PubMed] [Google Scholar]
  • [129].Rohm M et al. , “Hybrid brain-computer interfaces and hybrid neuroprostheses for restoration of upper limb functions in individuals with high-level spinal cord injury,” Artificial intelligence in medicine, vol. 59, no. 2, pp. 133–42, October 2013. [DOI] [PubMed] [Google Scholar]
  • [130].Osuagwu BC, Wallace L, Fraser M, and Vuckovic A, “Rehabilitation of hand in subacute tetraplegic patients based on brain computer interface and functional electrical stimulation: a randomised pilot study,” J Neural Eng, vol. 13, no. 6, p. 065002, December 2016. [DOI] [PubMed] [Google Scholar]
  • [131].Pereira J, Ofner P, Schwarz A, Sburlea AI, and Müller-Putz GR, “EEG neural correlates of goal-directed movement intention,” (in eng), Neuroimage, vol. 149, pp. 129–140, April 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [132].Pereira J, Sburlea AI, and Müller-Putz GR, “EEG patterns of self-paced movement imaginations towards externally-cued and internally-selected targets,” (in eng), Sci Rep, vol. 8, no. 1, p. 13394, September 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [133].Kobler RJ, Sburlea AI, and Müller-Putz GR, “Tuning characteristics of low-frequency EEG to positions and velocities in visuomotor and oculomotor tracking tasks,” (in eng), Sci Rep, vol. 8, no. 1, p. 17713, December 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [134].Sburlea AI and Müller-Putz GR, “Exploring representations of human grasping in neural, muscle and kinematic signals,” (in eng), Sci Rep, vol. 8, no. 1, p. 16669, November 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [135].Lopes Dias C, Sburlea AI, and Müller-Putz GR, “Masked and unmasked error-related potentials during continuous control and feedback,” (in eng), J Neural Eng, vol. 15, no. 3, p. 036031, June 2018. [DOI] [PubMed] [Google Scholar]
  • [136].Bogue R, “Robotic exoskeletons: a review of recent progress,” Industrial Robot: An International Journal, vol. 42, pp. 5–10, 2015. [Google Scholar]
  • [137].He Y, Eguren D, Azorín JM, Grossman RG, Luu TP, and Contreras-Vidal JL, “Brain-machine interfaces for controlling lower-limb powered robotic systems,” (in eng), J Neural Eng, vol. 15, no. 2, p. 021004, April 2018. [DOI] [PubMed] [Google Scholar]
  • [138].Millan JD et al. , “Combining Brain-Computer Interfaces and Assistive Technologies: State-of-the-Art and Challenges,” Frontiers in neuroscience, vol. 4, p. 161, 2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [139].Lee K, Liu D, Perroud L, Chavarriaga R, and Millán J. d. R., “A brain-controlled exoskeleton with cascaded event-related desynchronization classifiers,” Robotics and Autonomous Systems, vol. 90, pp. 15–23, 2017. [Google Scholar]
  • [140].Contreras-Vidal JL, Kilicarslan A, Huang HH, and Grossman RG, “Human-centered design of wearable neuroprostheses and exoskeletons,” AI Magazine, vol. 36, pp. 12–22, 2015. [Google Scholar]
  • [141].Liu D et al. , “Brain-actuated gait trainer with visual and proprioceptive feedback,” (in eng), J Neural Eng, vol. 14, no. 5, p. 056017, October 2017. [DOI] [PubMed] [Google Scholar]
  • [142].Do AH, Wang PT, King CE, Chun SN, and Nenadic Z, “Brain-computer interface controlled robotic gait orthosis,” (in eng), J Neuroeng Rehabil, vol. 10, p. 111, December 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [143].García-Cossio E et al. , “Decoding Sensorimotor Rhythms during Robotic-Assisted Treadmill Walking for Brain Computer Interface (BCI) Applications,” (in eng), PLoS One, vol. 10, no. 12, p. e0137910, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [144].Zhang Y, Prasad S, Kilicarslan A, and Contreras-Vidal JL, “Multiple Kernel Based Region Importance Learning for Neural Classification of Gait States from EEG Signals,” (in eng), Front Neurosci, vol. 11, p. 170, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [145].Kilicarslan A, Prasad S, Grossman RG, and Contreras-Vidal JL, “High accuracy decoding of user intentions using EEG to control a lower-body exoskeleton,” (in eng), Conf Proc IEEE Eng Med Biol Soc, vol. 2013, pp. 5606–9, 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [146].He Y et al. , “An integrated neuro-robotic interface for stroke rehabilitation using the NASA X1 powered lower limb exoskeleton,” (in eng), Conf Proc IEEE Eng Med Biol Soc, vol. 2014, pp. 3985–8, 2014. [DOI] [PubMed] [Google Scholar]
  • [147].Kwak NS, Müller KR, and Lee SW, “A lower limb exoskeleton control system based on steady state visual evoked potentials,” (in eng), J Neural Eng, vol. 12, no. 5, p. 056009, October 2015. [DOI] [PubMed] [Google Scholar]
  • [148].Belda-Lois JM et al. , “Rehabilitation of gait after stroke: a review towards a top-down approach,” (in eng), J Neuroeng Rehabil, vol. 8, p. 66, December 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [149].Pons JL, “Rehabilitation exoskeletal robotics. The promise of an emerging field,” (in eng), IEEE Eng Med Biol Mag, vol. 29, no. 3, pp. 57–63, May–June 2010. [DOI] [PubMed] [Google Scholar]
  • [150].Do AH, Wang PT, King CE, Abiri A, and Nenadic Z, “Brain-computer interface controlled functional electrical stimulation system for ankle movement,” (in eng), J Neuroeng Rehabil, vol. 8, p. 49, August 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [151].Xu R et al. , “A closed-loop brain-computer interface triggering an active ankle-foot orthosis for inducing cortical neural plasticity,” (in eng), IEEE Trans Biomed Eng, vol. 61, no. 7, pp. 2092–101, July 2014. [DOI] [PubMed] [Google Scholar]
  • [152].Donati AR et al. , “Long-Term Training with a Brain-Machine Interface-Based Gait Protocol Induces Partial Neurological Recovery in Paraplegic Patients,” (in eng), Sci Rep, vol. 6, p. 30383, August 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [153].Collinger JL et al. , “High-performance neuroprosthetic control by an individual with tetraplegia,” (in eng), Lancet, vol. 381, no. 9866, pp. 557–64, February 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [154].Wodlinger B, Downey JE, Tyler-Kabara EC, Schwartz AB, Boninger ML, and Collinger JL, “Ten-dimensional anthropomorphic arm control in a human brain-machine interface: difficulties, solutions, and limitations,” Journal of neural engineering, vol. 12, no. 1, p. 016011, 2015. [DOI] [PubMed] [Google Scholar]
  • [155].Hochberg LR et al. , “Reach and grasp by people with tetraplegia using a neurally controlled robotic arm,” Nature, vol. 485, no. 7398, pp. 372–375, 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [156].Bouton CE et al. , “Restoring cortical control of functional movement in a human with quadriplegia,” (in eng), Nature, vol. 533, no. 7602, pp. 247–50, May 2016. [DOI] [PubMed] [Google Scholar]
  • [157].Ajiboye AB et al. , “Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration,” (in eng), Lancet, vol. 389, no. 10081, pp. 1821–1830, May 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [158].Colachis SC et al. , “Dexterous Control of Seven Functional Hand Movements Using Cortically-Controlled Transcutaneous Muscle Stimulation in a Person With Tetraplegia,” (in eng), Front Neurosci, vol. 12, p. 208, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [159].Bensmaia SJ and Miller LE, “Restoring sensorimotor function through intracortical interfaces: progress and looming challenges,” (in eng), Nat Rev Neurosci, vol. 15, no. 5, pp. 313–25, May 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [160].Collinger J, Gaunt R, and Schwartz A, “Progress towards restoring upper limb movement and sensation through intracortical brain-computer interfaces,” Current Opinion in Biomedical Engineering, vol. 8, pp. 84–92, 2018. [Google Scholar]
  • [161].Schiefer MA, Graczyk EL, Sidik SM, Tan DW, and Tyler DJ, “Artificial tactile and proprioceptive feedback improves performance and confidence on object identification tasks,” (in eng), PLoS One, vol. 13, no. 12, p. e0207659, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [162].Graczyk EL, Resnik L, Schiefer MA, Schmitt MS, and Tyler DJ, “Home Use of a Neural-connected Sensory Prosthesis Provides the Functional and Psychosocial Experience of Having a Hand Again,” (in eng), Sci Rep, vol. 8, no. 1, p. 9866, June 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [163].Tan DW, Schiefer MA, Keith MW, Anderson JR, Tyler J, and Tyler DJ, “A neural interface provides long-term stable natural touch perception,” (in eng), Sci Transl Med, vol. 6, no. 257, p. 257ra138, October 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [164].Davis TS et al. , “Restoring motor control and sensory feedback in people with upper extremity amputations using arrays of 96 microelectrodes implanted in the median and ulnar nerves,” (in eng), J Neural Eng, vol. 13, no. 3, p. 036001, June 2016. [DOI] [PubMed] [Google Scholar]
  • [165].Valle G et al. , “Biomimetic Intraneural Sensory Feedback Enhances Sensation Naturalness, Tactile Sensitivity, and Manual Dexterity in a Bidirectional Prosthesis,” (in eng), Neuron, vol. 100, no. 1, pp. 37–45.e7, October 2018. [DOI] [PubMed] [Google Scholar]
  • [166].Saal HP and Bensmaia SJ, “Touch is a team effort: interplay of submodalities in cutaneous sensibility,” (in eng), Trends Neurosci, vol. 37, no. 12, pp. 689–97, December 2014. [DOI] [PubMed] [Google Scholar]
  • [167].Collins KL, Guterstam A, Cronin J, Olson JD, Ehrsson HH, and Ojemann JG, “Ownership of an artificial limb induced by electrical brain stimulation,” (in eng), Proc Natl Acad Sci U S A, vol. 114, no. 1, pp. 166–171, January 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [168].Armenta Salas M et al. , “Proprioceptive and cutaneous sensations in humans elicited by intracortical microstimulation,” (in eng), Elife, vol. 7, April 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [169].Flesher S et al. , “Intracortical Microstimulation as a Feedback Source for Brain-Computer Interface Users,” in Proceedings of the 6th International Brain-Computer Interface Meeting, Pacific Grove, California, 2016, p. 138. [Google Scholar]
  • [170].Delhaye BP, Long KH, and Bensmaia SJ, “Neural Basis of Touch and Proprioception in Primate Cortex,” (in eng), Compr Physiol, vol. 8, no. 4, pp. 1575–1602, September 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [171].Bensmaia SJ, “Biological and bionic hands: natural neural coding and artificial perception,” (in eng), Philos Trans R Soc Lond B Biol Sci, vol. 370, no. 1677, p. 20140209, September 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [172].London BM, Jordan LR, Jackson CR, and Miller LE, “Electrical stimulation of the proprioceptive cortex (area 3a) used to instruct a behaving monkey,” (in eng), IEEE Trans Neural Syst Rehabil Eng, vol. 16, no. 1, pp. 32–6, February 2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [173].Milgram P and Colquhoun H, “A taxonomy of real and virtual world display integration,” Mixed reality: Merging real and virtual worlds, vol. 1, pp. 1–26, 1999. [Google Scholar]
  • [174].Putze F, Scherer M, and Schultz T, “Starring into the void?: Classifying Internal vs. External Attention from EEG,” presented at the 9th Nordic Conference on Human-Computer Interaction, 2016. [Google Scholar]
  • [175].Putze F, Herff C, Tremmel C, Schultz Tanja, and Krusienski D, “Decoding Mental Workload in Virtual Environments: A fNIRS Study using an Immersive n-back Task,” presented at the 41st Engineering Medicine and Biology Conference, Berlin, Germany, 2019. [DOI] [PubMed] [Google Scholar]
  • [176].Mladenović J, Frey J, Bonnet-Save M, Mattout Jérémie, and Lotte F, “The Impact of Flow in an EEG-based Brain Computer Interface,” presented at the 7th International BCI Conference, Graz, Austria, 2017. [Google Scholar]
  • [177].Faller J, Saproo S, Shih V, and Sajda P, “Closed-loop regulation of user state during a boundary avoidance task,” presented at the IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016. [Google Scholar]
  • [178].Si-Mohammed H, Argelaguet F, Casiez G, Roussel Nicolas, and Lécuyer A, “Brain-Computer Interfaces and Augmented Reality: A State of the Art,” presented at the 7th International BCI Conference, Graz, Austria, 2017. [Online]. Available: https://hal.inria.fr/hal-01625167. [Google Scholar]
  • [179].Tremmel C, Herff C, and Krusienski D, “EEG Movement Artifact Suppression in Interactive Virtual Reality,” presented at the 41st Engineering Medicine and Biology Conference, Berlin, Germany, 2019. [DOI] [PubMed] [Google Scholar]
  • [180].Vourvopoulos A, Ferreira A, and Bermudez i Badia S, “NeuRow: An Immersive VR Environment for Motor-Imagery Training with the Use of Brain-Computer Interfaces and Vibrotactile Feedback,” presented at the 3rd International Conference on Physiological Computing Systems, Lisbon, Portugal, 2016. [Google Scholar]
  • [181].Mullen T and Kothe C, NeuroPype [Computer software], Intheon, 2018. [Google Scholar]
  • [182].King JL, Art Therapy, Trauma, and Neuroscience: Theoretical and Practical Perspectives. [Google Scholar]
  • [183].Zickler C, Halder S, Kleih SC, Herbert C, and Kubler A, “Brain Painting: Usability testing according to the user-centered design in end users with severe motor paralysis,” Artificial Intelligence in Medicine, vol. 59, no. 2, pp. 99–110, 2013. [DOI] [PubMed] [Google Scholar]
  • [184].Leslie G. (2019).
  • [185].Makeig S, Leslie G, Mullen T, Sarma D, Bigdely-Shamlo N, and Kothe C, “First Demonstration of a Musical Emotion BCI,” in Affective Computing and Intelligent Interaction. ACII 2011. Lecture Notes in Computer Science, vol. 6975, D’Mello S, Graesser A, Schuller B, and Martin JC, Eds. Berlin, Heidelberg: Springer, 2011. [Google Scholar]
  • [186].Mullen T et al. , “MindMusic: Playful and Social Installations at the Interface Between Music and the Brain,” in More Playful User Interfaces. Interfaces that Invite Social and Physical Interaction. Gaming Media and Social Effects series, Nijholt A, Ed. Singapore: Springer, 2015, pp. 197–229. [Google Scholar]
  • [187].Tan DS and Nijholt A, Brain-computer interfaces: Applying our minds to human- computer interaction (Human-Computer Interaction Series). London: Springer-Verlag, 2010. [Google Scholar]
  • [188].Todd DA, McCullagh PJ, Mulvenna MD, and Lightbody G, “Investigating the use of brain-computer interaction to facilitate creativity,” presented at the 3rd Augmented Human International Conference, 2012. [Google Scholar]
  • [189].Morone G et al. , “Proof of principle of a brain-computer interface approach to support poststroke arm rehabilitation in hospitalized patients: design, acceptability, and usability,” (in eng), Arch Phys Med Rehabil, vol. 96, no. 3 Suppl, pp. S71–8, March 2015. [DOI] [PubMed] [Google Scholar]
  • [190].Dhindsa K, Carcone D, and Becker S, “Toward an Open-Ended BCI: A User-Centered Coadaptive Design,” (in eng), Neural Comput, vol. 29, no. 10, pp. 2742–2768, October 2017. [DOI] [PubMed] [Google Scholar]
  • [191].Müller-Putz G et al. , “Towards non-invasive Hybrid Brain-Computer Interfaces: framework, practice, clinical application and beyond,” Proceedings of the IEEE, vol. 103, no. 6, pp. 926–943, 2015. [Google Scholar]
  • [192].Anderson C, Besio W, and A. S., “Comparison of conventional and tripolar EEG electrodes in BCI paradigms,” presented at the Seventh International Brain-Computer Interface Meeting, Pacific Grove, California, USA, 2018. [Online]. Available: http://bcisociety.org/wp-content/uploads/2018/05/BCI2018AbstractBook.pdf. [Google Scholar]
  • [193].Besio WG et al. , “High-Frequency Oscillations Recorded on the Scalp of Patients With Epilepsy Using Tripolar Concentric Ring Electrodes,” (in eng), IEEE J Transl Eng Health Med, vol. 2, p. 2000111, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [194].Scott SM and Gehrke L, “Neurofeedback during creative expression as a therapeutic tool,” in Mobile Brain–Body Imaging and the Neuroscience of Art, Innovation and Creativity, Contreras-Vidal JL, Robleto D, Cruz-Garza JG, Azorin JM, and Nam CS, Eds. Bio- and Neurosystems series. Springer Nature, 2019, in press. [Google Scholar]
  • [195].Scott SM, Raftery C, and Anderson C, “Advancing the rehabilitative and therapeutic potentials of BCI and noninvasive systems,” in Brain art: Brain-computer interfaces for artistic expression, Nijholt A, Ed. Switzerland: Springer International Publishing, 2019. [Google Scholar]
  • [196].Kübler A, Holz E, Sellers EW, and Vaughan TM, “Toward Independent Home Use of BCI: a Decision Algorithm for Selection of Potential End-Users,” Arch Phys Med Rehabil, 2014. [DOI] [PubMed] [Google Scholar]
  • [197].Nijholt A, “Competing and Collaborating Brains: Multi-Brain Computer Interfacing,” in Brain-Computer Interfaces: Current Trends and Applications. Intelligent Systems Reference Library series, Vol. 74, Hassanien AE and Azar AT, Eds. Berlin: Springer, 2015, pp. 313–335. [Google Scholar]
  • [198].Valeriani D and Matran-Fernandez A, “Past and Future of Multi-mind Brain-computer Interfaces,” in Brain-Computer Interfaces Handbook: Technological and Theoretical Advances, Nam C, Nijholt A, and Lotte F, Eds. Oxford, UK: CRC Press, Taylor & Francis Group, 2018, pp. 685–700. [Google Scholar]
  • [199].Nijholt A, Brain Art. Brain-Computer Interfaces and Artistic Expression (Human-Computer Interaction Series). London, UK: Springer, 2019. [Google Scholar]
  • [200].Babiloni F and Astolfi L, “Social neuroscience and hyperscanning techniques: past, present and future,” (in eng), Neurosci Biobehav Rev, vol. 44, pp. 76–93, July 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [201].Stevens RH, Galloway TL, Wang P, and Berka C, “Cognitive neurophysiologic synchronies: what can they contribute to the study of teamwork?,” (in eng), Hum Factors, vol. 54, no. 4, pp. 489–502, August 2012. [DOI] [PubMed] [Google Scholar]
  • [202].Stone B, Skinner A, Stikic M, and Johnson R, “Assessing Neural Synchrony in Tutoring Dyads,” in Advancing Human Performance and Decision-Making through Adaptive Systems, vol. 8534, Schmorrow D and Fidopiastis C, Eds. Lecture Notes in Computer Science, Foundations of Augmented Cognition. Springer, 2014, pp. 167–178. [Google Scholar]
  • [203].Berka C and Stikic M, “On the Road to Autonomy: Evaluating and Optimizing Hybrid Team Dynamics,” in Autonomy and Artificial Intelligence: A Threat or Savior? Springer, 2017, pp. 245–262. [Google Scholar]
  • [204].Poli R, Valeriani D, and Cinel C, “Collaborative brain-computer interface for aiding decision-making,” (in eng), PLoS One, vol. 9, no. 7, p. e102693, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [205].Valeriani D, Poli R, and Cinel C, “Enhancement of Group Perception via a Collaborative Brain-Computer Interface,” (in eng), IEEE Trans Biomed Eng, vol. 64, no. 6, pp. 1238–1248, June 2017. [DOI] [PubMed] [Google Scholar]
  • [206].Bonnet L, Lotte F, and Lécuyer A, “Two Brains, One Game: Design and Evaluation of a Multi-User BCI Video Game Based on Motor Imagery,” IEEE Transactions on Computational Intelligence and AI in Games (IEEE T-CIAIG), vol. 5, no. 2, pp. 185–198, 2013. [Google Scholar]
  • [207].Zhu L, Lotte F, Cui G, Li J, Zhou C, and Cichocki A, “Neural Mechanisms of Social Emotion Perception: An EEG Hyper-scanning Study,” presented at the Cyberworlds, New York, 2018. [Google Scholar]
  • [208].Brouwer AM, Hogervorst M, Reuderink B, van der Werf Y, and van Erp J, “Physiological signals distinguish between reading emotional and non-emotional sections in a novel,” Brain-Computer Interfaces, vol. 2, no. 2-3, pp. 76–89, 2015. [Google Scholar]
  • [209].Van Erp J, Hogervorst M, and Van der Werf Y, “Toward physiological indices of emotional state driving future ebook interactivity,” PeerJ Comput. Sci., vol. 2, p. e60, 2016. [Google Scholar]
  • [210].Presidential Commission for the Study of Bioethical Issues, Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society. Washington, D.C., 2014. [Google Scholar]
  • [211].Glannon W, “Stimulating brains, altering minds,” (in eng), J Med Ethics, vol. 35, no. 5, pp. 289–92, May 2009. [DOI] [PubMed] [Google Scholar]
  • [212].Farah MJ, “An ethics toolbox for neurotechnology,” (in eng), Neuron, vol. 86, no. 1, pp. 34–7, April 2015. [DOI] [PubMed] [Google Scholar]
  • [213].Presidential Commission for the Study of Bioethical Issues, Gray Matters: Topics at the Intersection of Neuroscience, Ethics, and Society. Washington, D.C., 2015.
  • [214].Nijboer F, “Technology transfer of brain-computer interfaces as assistive technology: barriers and opportunities,” (in eng), Ann Phys Rehabil Med, vol. 58, no. 1, pp. 35–8, February 2015. [DOI] [PubMed] [Google Scholar]
  • [215].Suryotrisongko H and Samopa F, “Evaluating OpenBCI Spiderclaw V1 Headwear’s Electrodes Placements for Brain-Computer Interface (BCI) Motor Imagery Application,” Procedia Computer Science, vol. 72, pp. 398–405, 2015. [Google Scholar]
  • [216].Spicer R, Anglin J, Krum DM, and Liew SL, “REINVENT: A low-cost, virtual reality brain-computer interface for severe stroke upper limb motor recovery,” presented at the Virtual Reality (VR), 2017. [Google Scholar]
  • [217].Samson VRR, Kitti BP, Kumar SP, Babu DS, and Monica C, “Electroencephalogram-Based OpenBCI Devices for Disabled People,” presented at the 2nd International Conference on Micro-Electronics, Electromagnetics and Telecommunications, Singapore, 2018. [Google Scholar]
  • [218].Hohmann MR et al. , “MYND: A Platform for Large-scale Neuroscientific Studies,” in CHI’19 Extended Abstracts on Human Factors in Computing Systems. Glasgow, UK: ACM, 2019. [Google Scholar]
  • [219].Bergman E and Johnson E, “Towards accessible human-computer interaction,” Advances in human-computer interaction, vol. 5, no. 1, pp. 87–114, 1995. [Google Scholar]
  • [220].Boven L, Jiang QL, and Moss HE, “Diffuse Colour Discrimination as Marker of Afferent Visual System Dysfunction in Amyotrophic Lateral Sclerosis,” (in eng), Neuroophthalmology, vol. 41, no. 6, pp. 310–314, December 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [221].Byrne S, Pradhan F, Ni Dhubhghaill S, Treacy M, Cassidy L, and Hardiman O, “Blink rate in ALS,” (in eng), Amyotroph Lateral Scler Frontotemporal Degener, vol. 14, no. 4, pp. 291–3, May 2013. [DOI] [PubMed] [Google Scholar]
  • [222].Brumberg JS, Nguyen A, Pitt KM, and Lorenz SD, “Examining sensory ability, feature matching and assessment-based adaptation for a brain-computer interface using the steady-state visually evoked potential,” (in eng), Disabil Rehabil Assist Technol, pp. 1–9, January 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [223].Chen SHK and O’Leary M, “Eye Gaze 101: What Speech-Language Pathologists Should Know About Selecting Eye Gaze Augmentative and Alternative Communication Systems,” Perspectives of the ASHA Special Interest Groups, vol. 3, no. 12, pp. 24–32, 2018. [Google Scholar]
  • [224].Graber M, Challe G, Alexandre MF, Bodaghi B, LeHoang P, and Touitou V, “Evaluation of the visual function of patients with locked-in syndrome: Report of 13 cases,” (in eng), J Fr Ophtalmol, vol. 39, no. 5, pp. 437–40, May 2016. [DOI] [PubMed] [Google Scholar]
  • [225].Moss HE et al. , “Cross-sectional evaluation of clinical neuro-ophthalmic abnormalities in an amyotrophic lateral sclerosis population,” (in eng), J Neurol Sci, vol. 314, no. 1-2, pp. 97–101, March 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [226].Speier W, Deshpande A, Cui L, Chandravadia N, Roberts D, and Pouratian N, “A comparison of stimulus types in online classification of the P300 speller using language models,” (in eng), PLoS One, vol. 12, no. 4, p. e0175382, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [227].Kalika D, Collins L, Caves K, and Throckmorton C, “Fusion of P300 and eye-tracker data for spelling using BCI2000,” (in eng), J Neural Eng, vol. 14, no. 5, p. 056010, October 2017. [DOI] [PubMed] [Google Scholar]
  • [228].Higger M et al. , “Recursive Bayesian Coding for BCIs,” (in eng), IEEE Trans Neural Syst Rehabil Eng, vol. 25, no. 6, pp. 704–714, June 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [229].Peters B et al. , “Effects of simulated visual acuity and ocular motility impairments on SSVEP brain-computer interface performance: An experiment with Shuffle Speller,” (in eng), Brain Comput Interfaces (Abingdon), vol. 5, no. 2-3, pp. 58–72, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [230].Nezamfar H, Mohseni Salehi SS, Higger M, and Erdogmus D, “Code-VEP vs. Eye Tracking: A Comparison Study,” (in eng), Brain Sci, vol. 8, no. 7, July 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [231].Aflalo T et al. , “Neurophysiology. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human,” Science (New York, N.Y.), vol. 348, no. 6237, pp. 906–910, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [232].King CE, Wang PT, McCrimmon CM, Chou CC, Do AH, and Nenadic Z, “The feasibility of a brain-computer interface functional electrical stimulation system for the restoration of overground walking after paraplegia,” Journal of NeuroEngineering and Rehabilitation, vol. 12, p. 80, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [233].McCrimmon CM, King CE, Wang PT, Cramer SC, Nenadic Z, and Do AH, “Brain-controlled functional electrical stimulation therapy for gait rehabilitation after stroke: a safety study,” Journal of NeuroEngineering and Rehabilitation, vol. 12, p. 57, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [234].Anderson KD, “Targeting recovery: priorities of the spinal cord-injured population,” Journal of neurotrauma, vol. 21, no. 10, pp. 1371–1383, 2004. [DOI] [PubMed] [Google Scholar]
  • [235].Tran T et al. , “Electrocorticographic Activity of the Brain During Micturition,” presented at the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2018. [DOI] [PubMed] [Google Scholar]
  • [236].Chaudhary U, Xia B, Silvoni S, Cohen LG, and Birbaumer N, “Brain-Computer Interface-Based Communication in the Completely Locked-In State,” (in eng), PLoS Biol, vol. 15, no. 1, p. e1002593, January 2017. [DOI] [PMC free article] [PubMed] [Google Scholar] [Retracted]
  • [237].Pels EGM, Aarnoutse EJ, Ramsey NF, and Vansteensel MJ, “Estimated Prevalence of the Target Population for Brain-Computer Interface Neurotechnology in the Netherlands,” (in eng), Neurorehabil Neural Repair, vol. 31, no. 7, pp. 677–685, July 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [238].Hill K, Kovacs T, and Shin S, “Critical issues using brain-computer interfaces for augmentative and alternative communication,” Archives of Physical Medicine and Rehabilitation, vol. 96, no. 3 Suppl, pp. S8–15, 2015. [DOI] [PubMed] [Google Scholar]
  • [239].Blain-Moraes S, Schaff R, Gruis KL, Huggins JE, and Wren PA, “Barriers to and Mediators of Brain-Computer Interface User Acceptance: Focus Group Findings,” Ergonomics, vol. 55, no. 5, pp. 516–525, 2012. [DOI] [PubMed] [Google Scholar]
  • [240].Huggins JE, Moinuddin AA, Chiodo AE, and Wren PA, “What would brain-computer interface users want: opinions and priorities of potential users with spinal cord injury,” Archives of Physical Medicine and Rehabilitation, vol. 96, no. 3 Suppl, pp. S38–45.e1-5, 2015. [DOI] [PubMed] [Google Scholar]
  • [241].Huggins JE, Wren PA, and Gruis KL, “What would brain-computer interface users want? Opinions and priorities of potential users with amyotrophic lateral sclerosis,” Amyotrophic Lateral Sclerosis, vol. 12, no. 5, pp. 318–324, 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [242].Münßinger JI et al. , “Brain Painting: First Evaluation of a New Brain-Computer Interface Application with ALS-Patients and Healthy Volunteers,” Frontiers in Neuroscience, vol. 4, p. 182, 2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [243].Holz EM, Botrel L, Kaufmann T, and Kubler A, “Long-term independent brain-computer interface home use improves quality of life of a patient in the locked-in state: a case study,” Archives of Physical Medicine and Rehabilitation, vol. 96, no. 3 Suppl, pp. S16–26, 2015. [DOI] [PubMed] [Google Scholar]
  • [244].van den Boom M et al. , “Utrecht Neuroprosthesis: from Brain Signal to Independent Control,” presented at the 7th Graz Brain-Computer Interface Conference, Graz, Austria, 2017. [Google Scholar]
  • [245].Leinders S et al. , “Using a One-Dimensional Control Signal for Two Different Output Commands in an Implanted BCI,” presented at the 7th Graz Brain-Computer Interface Conference, Graz, Austria, 2017. [Google Scholar]
  • [246].DiGiovine CP et al. , “Rehabilitation engineers, technologists, and technicians: Vital members of the assistive technology team,” (in eng), Assist Technol, pp. 1–12, June 2018. [DOI] [PubMed] [Google Scholar]
  • [247].Allison BZ, “Toward ubiquitous BCIs,” in Brain-Computer Interfaces. Berlin, Heidelberg: Springer, 2009, pp. 357–387. [Google Scholar]
  • [248].Allison BZ, Dunne S, Leeb R, Millán JDR, and Nijholt A, “Recent and upcoming BCI progress: overview, analysis, and recommendations,” in Towards Practical Brain-Computer Interfaces. Berlin, Heidelberg: Springer, 2012, pp. 1–13. [Google Scholar]
  • [249].Brunner C et al. , “BNCI Horizon 2020: towards a roadmap for the BCI community,” Brain-computer Interfaces, vol. 2, no. 1, pp. 1–10, 2015. [Google Scholar]
  • [250].Nam CS, Nijholt A, and Lotte F, Brain–Computer Interfaces Handbook: Technological and Theoretical Advances. CRC Press, 2018. [Google Scholar]
  • [251].Jin J, Zhang H, Daly I, Wang X, and Cichocki A, “An improved P300 pattern in BCI to catch user’s attention,” (in eng), J Neural Eng, vol. 14, no. 3, p. 036001, June 2017. [DOI] [PubMed] [Google Scholar]
  • [252].Jin J et al. , “P300 Chinese input system based on Bayesian LDA,” (in eng), Biomed Tech (Berl), vol. 55, no. 1, pp. 5–18, February 2010. [DOI] [PubMed] [Google Scholar]
  • [253].Huang M, Jin J, Zhang Y, Hu D, and Wang X, “Usage of drip drops as stimuli in an auditory P300 BCI paradigm,” Cognitive neurodynamics, vol. 12, no. 1, pp. 85–94, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [254].Cheng J et al. , “Effect of a combination of flip and zooming stimuli on the performance of a visual brain-computer interface for spelling,” Biomedical Engineering/Biomedizinische Technik, 2018. [DOI] [PubMed] [Google Scholar]
  • [255].Vujic A, Starner T, and Jackson M, “MoodLens: towards improving nonverbal emotional expression with an in-lens fiber optic display,” presented at the 2016 ACM International Symposium on Wearable Computers, 2017. [Google Scholar]
  • [256].Vujic A, Starner T, and Jackson M, “Towards Gut-Brain Computer Interfacing: Gastric Myoelectric Activity as an Index of Subcortical Phenomena,” presented at the 3rd International Mobile Brain/Body Imaging Conference, 2018. [Google Scholar]
  • [257].Guger C, Krausz G, Allison BZ, and Edlinger G, “Comparison of dry and gel based electrodes for p300 brain-computer interfaces,” Frontiers in neuroscience, vol. 6, p. 60, 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [258].Guger C, Allison BZ, and Lebedev MA, “Recent Advances in Brain-Computer Interface Research—A Summary of the BCI Award 2016 and BCI Research Trends,” in Brain-Computer Interface Research: Springer, 2017, pp. 127–134. [Google Scholar]
  • [259].IEEE Standards Association, “Neurotechnologies for Brain-Machine Interfacing,” 2019. [Online]. Available: https://standards.ieee.org/industry-connections/neurotechnologies-for-brain-machine-interfacing.html.
  • [260].Bigdely-Shamlo N et al. , “Hierarchical Event Descriptors (HED): Semi-Structured Tagging for Real-World Events in Large-Scale EEG,” (in eng), Front Neuroinform, vol. 10, p. 42, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [261].Contreras-Vidal JL et al. , “Powered exoskeletons for bipedal locomotion after spinal cord injury,” (in eng), J Neural Eng, vol. 13, no. 3, p. 031001, June 2016. [DOI] [PubMed] [Google Scholar]
  • [262].He Y, Eguren D, Luu TP, and Contreras-Vidal JL, “Risk management and regulations for lower limb medical exoskeletons: a review,” (in eng), Med Devices (Auckl), vol. 10, pp. 89–107, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [263].Quitadamo LR, Marciani MG, Cardarilli GC, and Bianchi L, “Describing different brain computer interface systems through a unique model: a UML implementation,” (in eng), Neuroinformatics, vol. 6, no. 2, pp. 81–96, 2008. [DOI] [PubMed] [Google Scholar]
  • [264].Bianchi L, Quitadamo LR, Garreffa G, Cardarilli GC, and Marciani MG, “Performances evaluation and optimization of brain computer interface systems in a copy spelling task,” IEEE Trans Neural Syst Rehabil Eng, vol. 15, no. 2, pp. 207–216, 2007. [DOI] [PubMed] [Google Scholar]
  • [265].Mason SG, Jackson MM, and Birch GE, “A general framework for characterizing studies of brain interface technology,” (in eng), Ann Biomed Eng, vol. 33, no. 11, pp. 1653–70, November 2005. [DOI] [PubMed] [Google Scholar]
  • [266].“Understanding How Technical Standards are Made & Maintained,” IEEE Innovation at Work, April 10, 2018.
  • [267].“IEEE P2794 - Standard for Reporting of In Vivo Neural Interface Research,” 2019.