Journal of NeuroEngineering and Rehabilitation. 2014 Dec 17;11:168. doi: 10.1186/1743-0003-11-168

Non-invasive control interfaces for intention detection in active movement-assistive devices

Joan Lobo-Prat 1, Peter N Kooren 2, Arno HA Stienen 1,3, Just L Herder 4,5, Bart FJM Koopman 1, Peter H Veltink 6
PMCID: PMC4459663  PMID: 25516421

Abstract

Active movement-assistive devices aim to increase the quality of life for patients with neuromusculoskeletal disorders. This technology requires interaction between the user and the device through a control interface that detects the user’s movement intention. Researchers have explored a wide variety of invasive and non-invasive control interfaces. To summarize the wide spectrum of strategies, this paper presents a comprehensive review focused on non-invasive control interfaces used to operate active movement-assistive devices. A novel systematic classification method is proposed to categorize the control interfaces based on: (I) the source of the physiological signal, (II) the physiological phenomena responsible for generating the signal, and (III) the sensors used to measure the physiological signal. The proposed classification method can successfully categorize all the existing control interfaces, providing a comprehensive overview of the state of the art. Each sensing modality is briefly described in the body of the paper following the same structure used in the classification method. Furthermore, we discuss several design considerations, challenges, and future directions of non-invasive control interfaces for active movement-assistive devices.

Electronic supplementary material

The online version of this article (doi:10.1186/1743-0003-11-168) contains supplementary material, which is available to authorized users.

Keywords: Non-invasive control interface, Active movement-assistive devices, Motion intention detection, Biomechatronics, Human movement control system

Introduction

The ability to move in a controlled and stable manner is an essential capability of the human body. The Human Movement Control System (HMCS) can be modeled as in Figure 1A. The HMCS consists of a mechanical structure, the plant, which represents the skeleton and passive tissues; the actuators, which represent the muscles; and a controller, which represents the central nervous system and receives sensory feedback from the physiological sensors [1, 2]. Movement impairments, due to disease or trauma, can occur at various levels of the HMCS, affecting one or several components of the system: blindness affects the “sensors,” while muscular dystrophy affects the “actuators.”

Figure 1.


Schematic block diagram of the Human Movement Control System (A) in parallel with the Artificial Movement Control System (B). Three kinds of interactions between the HMCS and AMCS can be distinguished: (I) detection of the motion intention of the user; (II) provision of feedback to the user regarding the state of the AMCS, the HMCS or the environment; and (III) exchange of mechanical power between plants. Both the human and the artificial systems are depicted as dynamic systems in which both the human muscles and artificial actuators generate forces to transfer power and hence move the combined plant composed of the mechanical structure of the assistive device, the human musculoskeletal system and the environment (depicted as “external load”). Note that the interaction between the actuators and the plant is pictured with a bond graph that represents the energy exchange between them (i.e., movement and force in this particular case). The power 1-junction imposes a common velocity on all connected components. The reader is referred to [3] for further information on bond graphs. Modified from [1].

Advances in neuroscience, engineering, and computer science have led to an acceleration in the development of biomechatronic systems capable of actively assisting the impaired motor functions of patients affected by neuromusculoskeletal disorders [2, 4, 5]. Assistive devices have been classified by the International Organization for Standardization (ISO) in the standard ISO 9999:2011 according to their main function. Artificial Movement Control Systems (AMCSs) function in parallel to the impaired HMCS and can be modeled with the same components as the HMCS: a plant representing the mechanical structure and passive elements, such as springs or dampers; artificial actuators; artificial sensors; and an artificial controller that receives the data measured by the sensors and generates control signals to operate the actuators (Figure 1B).

Three kinds of interactions between the HMCS and the AMCS can be distinguished [2]: (I) detection of the movement intention of the user; (II) provision of feedback to the user regarding the state of the AMCS, the HMCS or the environment; and (III) exchange of mechanical power between both plants. Note that providing feedback to the user is especially relevant when the human sensory system is disturbed.

Several physiological phenomena occur in every subsystem of the HMCS. Some of these phenomena can be measured and associated with the motion intention of the user, and can therefore be exploited for the effective control of an AMCS. Neural signals from the central nervous system, neural activation of the muscles, muscle contraction forces, and small movements and forces of the human plant are some examples of signals that are implicitly related to the motion intention. Motion intention can also be derived from explicit commands of the user, for example by pressing command switches, through speech, or through head, tongue or eye movements. Explicit commands are generated by Parallel Systems, which are defined as systems that function in parallel to the supported system (see Figure 1A).

Researchers have explored a wide variety of invasive and non-invasive methods to derive the user’s movement intention. However, a comprehensive overview of these methods is not available, which hampers the efficient and well-founded selection of a suitable control interface for a given application. To illustrate the wide spectrum of strategies, this paper presents a comprehensive review of non-invasive sensing modalities for motion intention detection in active movement-assistive devices. The strategies are classified in a systematic way that provides a clear overview of the state of the art and a framework in which new sensing modalities can be included. This review is limited to non-invasive interfaces (meaning, specifically, that the sensors are not implanted in the human body), as these are easier to use with a wide variety of patients [6–8].

The paper is structured as follows. The Review section introduces the classification method used to categorize the control interfaces, briefly describes each interface, and discusses several design considerations. Finally, the Conclusions section presents our conclusions and future directions.

Review

Classification method

The inventory of control interfaces for motion intention detection resulting from the literature search was stratified through a four-level classification (see Table 1 and Figure 2). The first level was defined according to the subsystems of the HMCS (controller, actuators, plant, and parallel systems), and the second level was defined according to the physiological phenomena that occur in each subsystem. The set of signals that can be measured for each physiological phenomenon defines the third level of classification, and the sensors used to measure these signals define the fourth level. For each sensor/signal, the transduction principle, interface with the body, area of application, and key references are indicated. Note that the inventory of control interfaces using parallel systems is illustrative rather than complete; parallel systems have been added as an additional subsystem of the HMCS in the first level of the classification.

Table 1.

Stratified inventory of non-invasive control interfaces used for motion intention detection in active movement-assistive devices

Human system | Physiological phenomenon | Signal | Sensor | Transduction principle | Interface with body | Application | Key references

Controller | Brain activity | Electric current (EEG) | Electrode | - | Skin contact | C/P/O/E | [9–14]
Controller | Brain activity | Electric current (MEG) | MEG machine | Induction | No contact | C/O | [15, 16]
Controller | Brain activity | Hemodynamics (fMRI) | MRI machine | Induction | No contact | C/E | [17, 18]
Controller | Brain activity | Hemodynamics (NIRS) | Spectrometer | Photoelectric | Near-infrared illumination of the brain | C/E | [19, 20]
Actuators | Muscle activation | Electric current (EMG) | Electrode | - | Skin contact | O/P/E | [21–24]
Actuators | Muscle activation | Electric current (EMG) | Electrode | - | Targeted muscle reinnervation* | P | [25–27]
Actuators | Muscle contraction | Vibration (MMG) | Microphone | Induction/Piezoelectric | Skin contact | P | [28–31]
Actuators | Muscle contraction | Vibration (MMG) | Accelerometer | Piezoelectric | Skin contact | P | [30–32]
Actuators | Muscle contraction | Dimensional change (MK) | Hall-effect sensor | Induction | Magnet on the skin | P | [33, 34]
Actuators | Muscle contraction | Dimensional change (MK) | Pneumatic sensor | Resistive/Capacitive | Skin contact | P | [35, 36]
Actuators | Muscle contraction | Dimensional change | Encoder | Photoelectric | Skin contact | O | [37]
Actuators | Muscle contraction | Dimensional change (SMG) | Ultrasound scanner | Piezoelectric | Skin contact | P | [38–40]
Actuators | Muscle contraction | Dimensional change (electric impedance) | Electrode | - | Electric current to skin | P | [41]
Actuators | Muscle contraction | Radial force and stiffness (MT/MKT) | Force-sensitive resistor | Resistive | Skin contact | O/P | [42–44]
Actuators | Muscle contraction | Radial force and stiffness (MT/MKT) | Piezoelectric transducer | Piezoelectric | Skin contact | O/P | [45]
Actuators | Muscle contraction | Force (deformation) | Strain gauges | Resistive | Tunnel muscle cineplasty* | P | [46]
Actuators | Muscle contraction | Hemodynamics (NIRS) | Spectrometer | Photoelectric | Near-infrared illumination of the muscle | P | [47–50]
Plant | Body segment movement | Movement | IMU | Piezoelectric | Skin contact | P/O | [51, 52]
Plant | Body segment movement | Movement | Camera | Photoelectric | No contact | E | [53]
Plant | Relative joint movement | Joint rotations | Goniometer/potentiometer | Resistive | Skin contact | P/O | [54–57]
Plant | Relative joint movement | Joint rotations | Bending sensor | Resistive | Skin contact | P/O | [58]
Plant | Relative joint movement | Joint rotations | Encoder | Photoelectric | Skin contact/No contact | P/O | [59]
Plant | Force/Pressure | Deformation | Force/torque sensor (strain gauges) | Resistive | No contact | P/O/E | [60–62]
Plant | Force/Pressure | Deformation | Pressure sensor (force-sensitive resistor) | Resistive | Skin contact | P/O/E | [56, 63–66]
Parallel systems | Eye movement | Corneal reflection | Video camera | Photoelectric | Near-infrared illumination of the cornea | P/E | [67]
Parallel systems | Eye movement | EOG | Electrode | - | Skin contact | P/E | [68, 69]
Parallel systems | Head movement | Inclination | Accelerometer | Piezoelectric | Skin contact | E | [70–72]
Parallel systems | Head movement | Inclination | Video camera | Photoelectric | No contact | E | [73]
Parallel systems | Head movement | Inclination | Ultrasonic sensor | Piezoelectric | Skin contact | E | [74]
Parallel systems | Tongue movement | Contact with palate/Movement | Induction coil | Induction | Ferromagnetic material at the tip of the tongue | E/C | [75–77]
Parallel systems | Speech | Sound | Microphone | Induction/Piezoelectric | No contact | P/E | [78–80]
Parallel systems | Hand movement | Angle | Joystick (potentiometers) | Resistive | Skin contact | O/E | [81–83]

C: communication; P: prosthesis; O: orthosis; E: external devices; (*) indicates one-time invasive method.

Figure 2.


Classification Method. Example of the classification method illustrated with a schematic block diagram. See Table 1 for a full overview.

Interfacing with the controller: brain computer interfaces

Current non-invasive brain-computer interfaces (BCIs) derive the user’s movement intention from the electrical and hemodynamic activity of the brain.

Electrical brain activity

Electroencephalography (EEG) and magnetoencephalography (MEG) are well-established non-invasive methods that measure the average dendritic currents of a large population of cells from the scalp [84]. Several brain signals have been used for BCIs, including slow cortical potentials, low-frequency changes in field potentials (such as the P300), and α and β rhythms. Furthermore, BCIs can exploit signals related to external sensory stimulation, such as auditory or visual stimuli (i.e., evoked potentials), or to voluntary mental activity, such as imagining a movement. Even though MEG provides a higher signal quality than EEG and does not require the attachment of scalp electrodes, EEG is portable (i.e., it does not require a shielded room) and less expensive. Consequently, EEG-based BCIs are currently commercially available (e.g., intendiX®, g.tec medical engineering GmbH, Schiedlberg, Austria) for personal use to operate spelling and domestic devices.

While most current research on EEG- and MEG-based BCIs focuses on providing basic communication control to people suffering from severe motor impairments [9, 15], researchers have also been exploring their capabilities for the control of orthotic [12, 13, 16, 85–87] (see Figure 3 and Additional file 1), prosthetic [10], and external movement-assistive devices [8, 11, 88]. The main drawbacks of current EEG-based BCIs include the long training periods needed to learn to modulate specific brain potentials, the need to attach multiple electrodes to the scalp (both a time and a cosmetic issue), the low information-transmission rate due to the filtering properties of the skull, low spatial resolution, and high variability of the brain signals due to changes in background activity (e.g., motor, sensory, and cognitive activity) and learning processes [89, 90]. All these factors limit the applicability of BCIs as control interfaces for active movement-assistive devices. Today, BCI research focuses on the development of practical systems and their evaluation outside the laboratory environment by end-users [14, 91, 92].


Additional file 1: EEG-based interface. An EEG-based BCI used for the control of the Mindwalker lower-extremity exoskeleton [12, 87]. In this setup the BCI is controlled using Steady State Visually Evoked Potentials (note the flickering lights on the inside of the glasses). Video courtesy of the Mindwalker project. (MP4 8 MB)

Figure 3.


EEG-based interface. An EEG-based BCI used for the control of the Mindwalker lower-extremity exoskeleton [12, 87]. In this setup the BCI is controlled using Steady State Visually Evoked Potentials (SSVEP). The glasses that the user is wearing stimulate the retina with several lights flashing at different frequencies; depending on which flashing light the user looks at, the brain generates electrical activity at the same frequency as the visual stimulus (or a multiple of it). With this method, different control states are assigned to electrical brain signals with specific frequencies. Additional file 1 shows this EEG-based BCI controlling the Mindwalker lower-extremity exoskeleton. Figure courtesy of the Mindwalker project.
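To make the frequency-tagging idea concrete, the sketch below scores each candidate flicker frequency by the spectral power it (and its harmonics) evokes in a single occipital EEG channel, then picks the strongest one. This is a minimal illustration of SSVEP detection, not the Mindwalker pipeline; the channel choice, window function, band width, and number of harmonics are assumptions.

```python
import numpy as np

def detect_ssvep_target(eeg, fs, stim_freqs, harmonics=2):
    """Pick the stimulus frequency with the most spectral power in an EEG epoch.

    eeg: 1-D array from one occipital channel (e.g., Oz), band-pass filtered.
    fs: sampling rate in Hz.
    stim_freqs: flicker frequencies of the visual targets, e.g., [10.0, 12.0, 15.0].
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    scores = []
    for f0 in stim_freqs:
        score = 0.0
        for h in range(1, harmonics + 1):
            # Sum power in a narrow band around each harmonic of the flicker.
            band = (freqs > h * f0 - 0.3) & (freqs < h * f0 + 0.3)
            score += spectrum[band].sum()
        scores.append(score)
    return stim_freqs[int(np.argmax(scores))]
```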

Brain hemodynamics

Beyond electrical activity, hemodynamic signals from the brain are also used in BCIs. These signals are measured with functional magnetic resonance imaging (fMRI) or near-infrared spectroscopy (NIRS). Both methods rely on the measurement of the task-induced blood oxygen level-dependent (BOLD) response, which has been shown to correlate strongly with electrical brain activity [93]. Most studies using fMRI- and NIRS-based BCIs focused on their application to neurofeedback [94–96], and only a few studies have aimed to develop interfaces for communication [17, 97, 98], cursor control [99], environmental control [19, 100], and external robotic arm control [18, 20]. Like EEG, NIRS is portable and both less expensive and less cumbersome than fMRI. Furthermore, in contrast to fMRI, subjects examined with NIRS can sit or stand without movement constraints. However, the depth of brain tissue that can be measured using NIRS is 10 to 30 mm, which limits the measurements to cortical regions [93]. While these functional imaging methods are promising for non-invasive recording of activity across the entire brain (or cortical regions in the case of NIRS) at high spatial resolution [93], they suffer from a very poor information transfer rate [101, 102], which limits their functionality. NIRS-based BCIs are still in the early phases of research and development, and therefore their future potential value as control interfaces remains uncertain [103].

Interfacing with the actuators: muscle activation interfaces

The recording of the electrical signals from muscle activation is known as electromyography (EMG). From a biomechatronic perspective, the muscle can be conceived as a biological signal amplifier of the low-amplitude electric potential from the motor neurons. The large majority of active prostheses that exist today, including commercially available devices, are controlled using surface EMG signals [89, 104].

Commercially available myoelectric upper-extremity prostheses are generally operated using proportional or on/off control, either by measuring EMG from two independent residual muscles or by distinguishing different activation levels of one residual muscle. Switching techniques, such as muscle co-contraction or the use of mechanical switches or force-sensitive resistors, are commonly implemented to enable the sequential operation of several degrees of freedom (DOF) [89, 104, 105]. Lower-extremity prostheses and upper- and lower-extremity orthoses are commonly controlled by estimating the joint angles or torques from EMG signals of muscles that mainly contribute to the supported motion [21, 22, 106, 107].

EMG-based control interfaces are widely used because the signals are easy to access and generate, and correlate directly with movement intention. Nevertheless, an EMG-based interface presents several drawbacks: it requires significant signal processing before it can be used as a control signal, due to its broad bandwidth and low amplitude; many patients have difficulties generating isolated and repeatable contractions [89]; and the filtering properties of the limb tissue and the movement of the skin beneath the electrode notably affect long-term recordings [108]. All these factors make the control challenging for the user. To overcome these limitations, researchers are developing muscle synergy-based algorithms, motor-neuron spike-train decomposition algorithms, and new surgical procedures.
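As an illustration of the signal processing such an interface needs, the following sketch conditions raw surface EMG into a smooth envelope (band-pass filter, rectification, low-pass filter) and maps it to a proportional command with an on/off threshold. The sampling rate, cut-off frequencies, and calibration levels are assumed values, not those of any specific device.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 1000.0  # sampling rate in Hz (assumed)

def emg_envelope(raw):
    """Band-pass, rectify, and low-pass raw surface EMG to get a control envelope."""
    band = butter(4, [20, 450], btype="bandpass", fs=FS, output="sos")
    lp = butter(2, 4, btype="lowpass", fs=FS, output="sos")  # ~4 Hz envelope
    return sosfilt(lp, np.abs(sosfilt(band, raw)))

def proportional_command(envelope, rest_level, mvc_level):
    """Map the envelope onto [0, 1]; values below the rest level count as 'off'."""
    cmd = (envelope - rest_level) / (mvc_level - rest_level)
    return np.clip(cmd, 0.0, 1.0)
```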

The increasing number of DOF in active prosthetic and orthotic devices to achieve higher functionality has been pushing the scientific community to develop EMG-based control interfaces capable of controlling multiple DOF in an intuitive and natural way. The identification of interactions between multiple muscles, commonly known as muscle synergies, using pattern recognition or regression algorithms is showing promising results toward achieving natural multifunctional control of prosthetic and orthotic devices [109].

EMG pattern-recognition algorithms are based on the assumption that humans can generate different yet repeatable muscle-activation patterns that are associated with specific movements, such as different grasping configurations. During the training of the algorithm, each programmed movement is linked to a stable muscle-activation pattern that is described by a set of features. These features should be repeatable across trials of the same movement and discriminative between different movements [110]. Once the training is completed, the algorithm extracts features from windowed raw EMG data and classifies them into the programmed movements. Many variations of each of these steps have been investigated [111] in search of a suitable tradeoff between speed and performance [112]. Pattern-recognition algorithms recently became commercially available (Coapt LLC, Chicago, USA). This technique has the potential to eliminate the need for isolated muscle activation and to allow the control of multiple degrees of freedom [89]. However, an important limitation of pattern-recognition algorithms is that they can only classify movements sequentially, not simultaneously, and they lack proportional controllability, which limits user acceptance [110].
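A minimal sketch of this pipeline is given below: each analysis window is reduced to the classic Hudgins time-domain features, and a toy nearest-centroid classifier stands in for the LDA or SVM classifiers typically used in the literature. The feature set is standard; the classifier and the zero-crossing threshold are simplifying assumptions.

```python
import numpy as np

def hudgins_features(window, zc_thresh=0.01):
    """Classic time-domain feature set for one analysis window of one EMG channel."""
    mav = np.mean(np.abs(window))                       # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))                # waveform length
    s = np.sign(window)
    zc = np.sum((s[:-1] != s[1:]) &                     # zero crossings above a
                (np.abs(np.diff(window)) > zc_thresh))  # noise threshold
    d = np.diff(window)
    ssc = np.sum(d[:-1] * d[1:] < 0)                    # slope-sign changes
    return np.array([mav, wl, zc, ssc])

class NearestCentroid:
    """Toy classifier standing in for the LDA/SVM classifiers used in practice."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[np.argmin(d, axis=1)]
```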

EMG regression algorithms, which are based on non-negative matrix factorization, artificial neural networks, or linear regression, present a functional improvement since they allow simultaneous and proportional control of multifunctional prostheses [105]. Regression algorithms require a training data set for which a continuous relationship between EMG signals and plant kinematics or dynamics is known. Several studies have reported proportional and simultaneous control of two or more DOF in upper-extremity prosthetic [23, 113, 114] and orthotic [24, 115] devices. An additional movie file shows an amputee operating an upper-extremity prosthesis using both simultaneous and sequential EMG-based control during functional tasks (see Additional file 2).


Additional file 2: Simultaneous and sequential EMG-based control. An amputee controlling the elbow and hand movements of an upper-extremity prosthesis with simultaneous and sequential EMG-based control. Video reused from [178]. (MP4 10 MB)
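The core of such a regression approach can be sketched in a few lines: a least-squares fit maps training envelopes to recorded joint velocities, after which every new EMG sample yields velocity commands for all DOF at once. This is an illustrative linear-regression instance under assumed data shapes, not the exact algorithm of the cited studies.

```python
import numpy as np

def train_proportional_map(E, V):
    """Least-squares linear map from EMG envelopes to joint velocities.

    E: (n_samples, n_channels) envelope features from the training set.
    V: (n_samples, n_dof) simultaneously recorded (or prompted) joint velocities.
    Returns W such that  v ≈ e @ W  for a new envelope sample e.
    """
    W, *_ = np.linalg.lstsq(E, V, rcond=None)
    return W

# At run time, each envelope sample e (n_channels,) yields simultaneous,
# proportional velocity commands for all DOF at once:  v_cmd = e @ W
```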

Targeted muscle reinnervation (TMR) is a surgical procedure developed by Kuiken et al. [25] that consists of rerouting the nerves that originally innervated the amputated limb to residual chest or upper-arm muscles (Figure 4). TMR is a one-time invasive method that allows more intuitive control of a larger number of DOF [116] than standard EMG methods, since the prosthesis is controlled by EMG signals from residual muscles that are activated by the nerves that previously controlled the amputated limb. Moreover, there is evidence that cutaneous sensory feedback of the amputated hand can be regained by reinnervating the skin near or overlying the target muscles with residual afferent nerve fibers of the amputated hand [117]. TMR appears to be more suitable for high-level amputations [118], and current implementations still experience some difficulties separating the EMG signals from the different chest muscles. Recent studies aim at combining TMR with muscle synergy-based algorithms [26, 119], developing new targeted sensory reinnervation techniques [120], applying TMR to the control of lower-extremity prostheses [27, 121, 122], and implementing intramuscular EMG electrodes [123].

Figure 4.


EMG-based interface. An amputee using a Targeted Muscle Reinnervation (TMR) EMG-based interface for the control of an active prosthetic arm [116]. With the TMR EMG-based interface, the patient could control a 6-DOF prosthesis with shoulder flexion, humeral rotation, elbow flexion, wrist rotation, wrist flexion, and hand opening/closing. The movement performance of this 6-DOF prosthesis (right arm) controlled with the TMR EMG-based interface was evaluated and compared to a commercially available 3-DOF prosthesis (left arm; body-powered elbow and wrist rotation, and active terminal device) during several timed tasks: A) cubbies, B) cups, C) box and blocks, and D) clothespin relocation. The participant could control up to 4 DOF simultaneously, reach a larger workspace, and perform some of the timed tasks faster using the TMR EMG-based interface. Figure reused with permission from Elsevier.

Finally, a recent approach that may lead to a new EMG-based control strategy is the use of high-density electrode grids that can extract spike trains of motor neurons and provide information on muscle-discharge patterns. The decomposition of EMG signals into spike trains of motor neurons can be used for proportional control of assistive devices and has shown a higher accuracy when estimating muscle force compared to conventional surface EMG recordings [110].

Interfacing with the actuators: muscle-contraction interfaces

Several signals derived from the muscle-contraction phenomenon have been used to detect motion intention: muscle vibration, dimensional change, radial muscle force and stiffness, longitudinal muscle force, and muscle hemodynamics. The main advantage of muscle-contraction interfaces (MCIs) is that they are free from electromagnetic noise and may have a lower cost compared to EMG-based interfaces. So far, MCIs have only been used for the control of prosthetic devices.

Muscle vibration

The mechanical vibration that is generated when the muscles contract can be measured with microphones [28, 29], accelerometers [32], or a combination of both [30, 31]. This method is known as phonomyography (PMG), acoustic myography (AMG), or mechanomyography (MMG).

Orizio et al. reported a linear relationship between the root-mean-square (RMS) value of MMG signals recorded from the biceps brachii and the contraction force between 20 and 80 percent of the maximum voluntary contraction (MVC) [124], which makes MMG potentially suitable for prosthesis control. MMG offers several advantages over conventional EMG, including robustness to changes in skin impedance, less stringent sensor placement, and reduced sensor cost [28, 30]. However, microphones and especially accelerometers are highly prone to limb-movement artifacts, which compromise signal detection and classification [29]. To overcome this major problem, Silva and Chau [31] developed a coupled microphone-accelerometer sensor (embedded in silicone) that fuses data from both transducers to reduce dynamic noise. In their design, the accelerometer is used as a dynamic reference sensor to determine the source of the measured vibration (muscle contraction or limb movement). Recently, Posatskiy and Chau [125] developed a novel microphone with cylindrical and conical acoustic chambers that improves the robustness to limb-movement artifacts.
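The RMS-to-force relation reported by Orizio et al. suggests a very simple estimator, sketched below under assumed parameters: a fixed window length and two calibration contractions recorded at 20% and 80% MVC.

```python
import numpy as np

def mmg_rms(signal, fs, win_s=0.2):
    """Windowed RMS of an MMG signal; per Orizio et al., RMS scales roughly
    linearly with contraction force between ~20 and 80% MVC."""
    n = int(win_s * fs)
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(windows ** 2, axis=1))

def force_estimate(rms, rms_20, rms_80):
    """Linear interpolation between calibration points at 20% and 80% MVC.

    Returns an estimated force as a fraction of MVC, clipped to [0, 1].
    """
    frac = 0.2 + 0.6 * (rms - rms_20) / (rms_80 - rms_20)
    return np.clip(frac, 0.0, 1.0)
```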

Muscle dimensional change

When the muscle contracts, dimensional changes also occur: the muscle shortens and consequently its cross-section area increases. The measurable signals resulting from this phenomenon are known as myokinemetric (MK) signals. MK signals have been measured with Hall-effect sensors, tendon-activated pneumatic (TAP) foam sensors, angular encoders, ultrasound scanners, and with electrodes that detect changes in electrical impedance.

Evidence from [33, 34] suggests that MK signals are inherently low in noise and that their magnitude can be used directly for control, avoiding elaborate signal processing. Kenny et al. [33] developed a sensor that measured radial displacements of the muscle bulge with a Hall-effect transducer. They found that six upper-limb amputees could generate MK signals with sufficient accuracy to perform a one-dimensional tracking task with a low tracking error, and therefore that the signals had potential for the control of active upper-limb prostheses. However, the MK signals were also found to be susceptible to socket slippage over time, as well as to socket re-donning. Heath [34] improved the sensor interface and proved the feasibility of using MK signals for proportional position control of prosthetic fingers.

The TAP sensors developed by Abboudi et al. [35] measured pressure differences at the skin surface generated by tendon displacements when finger flexor or extensor muscles contracted. TAP sensors presented a linear relation to finger force. Trials on three upper-limb amputees showed that the TAP sensors could provide effective control of individual finger movement and grasping. Since the performance of the TAP control interface is highly dependent upon accurate sensor placement and specific movement resolution at each sensor location, Curcie et al. [36] developed a pressure-vector decoder able to discriminate specific finger-flexion commands in real time. This decoder decreased the dependence on sensor location, offering more robust and reliable control of the TAP-based interface.

Kim et al. [37] developed a sensor that measures changes in muscle circumference using an angular encoder attached to an elastic armband with a wire. The authors developed a calibration procedure and a biomechanical model to estimate elbow torques from the muscle-circumference measurements.

Recent studies propose the use of ultrasound scanners [38–40, 126] to measure changes in muscle thickness for the control of prosthetic devices. This method is known as sonomyography (SMG). SMG presents a linear relationship between ultrasound image features of the human forearm and the hand and wrist kinematic configurations [38, 127], suggesting that simple proportional control could be implemented for the operation of active movement-assistive devices. A recent study by González and Castellini [40] shows that a linear relationship also exists between specific image features and fingertip forces. Furthermore, the study by Shi et al. [126] shows that SMG can be implemented in real time using a two-dimensional logarithmic search algorithm. While it has been demonstrated that SMG could potentially be used for the control of active prosthetic devices, the current system used for SMG measurements (i.e., a standard ultrasound scanner) is not suitable for practical implementation because it is expensive and cumbersome [39].
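For illustration, a simplified variant of such a logarithmic search is sketched below: it tracks a reference region of the previous ultrasound frame in the current one by repeatedly halving the search step around the best sum-of-squared-differences match. The step schedule and cost function are assumptions, not the exact algorithm of Shi et al.

```python
import numpy as np

def log_search(prev, curr, top, left, h, w, step=8):
    """2-D logarithmic search: track a muscle-thickness ROI between frames.

    prev, curr: consecutive grayscale ultrasound frames (2-D arrays).
    (top, left, h, w): reference region in the previous frame.
    Returns the new (top, left) of the best-matching region in curr.
    """
    ref = prev[top:top + h, left:left + w].astype(float)
    best = (top, left)
    while step >= 1:
        # Evaluate the 3x3 neighborhood of the current best at this step size.
        candidates = [(best[0] + dy * step, best[1] + dx * step)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        def cost(p):
            y, x = p
            if y < 0 or x < 0 or y + h > curr.shape[0] or x + w > curr.shape[1]:
                return np.inf  # candidate falls outside the image
            patch = curr[y:y + h, x:x + w].astype(float)
            return np.sum((patch - ref) ** 2)  # sum of squared differences
        best = min(candidates, key=cost)
        step //= 2  # refine the search around the new best position
    return best
```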

Another signal derived from dimensional changes of the muscle was examined in the early 1970s by Kadefors and Olsson [41]. The study investigated electrical impedance as a measure of motion intent for the control of a robotic hand. The electrical impedance measured on the skin above a muscle varied when the dimensions of the muscle changed due to its contraction (among other factors). A recent study by Silva et al. [128] shows evidence that relates muscle activity and tissue resistivity changes and suggests that this information could be used for the control of assistive devices.

Radial muscle force and muscle stiffness

The stiffness of muscle tissue increases when it contracts. The measured force signals resulting from this phenomenon are known as myotonic (MT) [34] or myokinetic (MKT) [42] signals. These signals have been measured using arrays of force-sensitive resistors (FSRs) [42] or strain gauges [43, 129]. A recent study by Castellini and Koiva [44] used a tactile sensor placed between the arm and the table to measure changes in pressure distribution at the ventral side of the forearm while performing a variable force task with the fingertip. The authors report that the fingertip forces could be estimated with a high degree of accuracy from the pressure distribution measured at the forearm.

Recently, a study by Han et al. [45] presented a novel muscle stiffness sensor that could be worn over clothing, based on the measurement of the muscle resonance frequency. This sensor measured muscle stiffness by generating and sensing resonance vibrations using a piezoelectric transducer: As the muscle became stiffer, the resonance frequency became higher.

Muscle-stiffness-based interfaces are still in the early phases of research and development, and therefore their potential value as control interfaces for active movement-assistive devices remains unclear. None of the aforementioned studies reported results from experiments implementing this sensing modality in an assistive device and testing it with people with movement impairments.

Muscle force

Direct muscle force control by muscle tunnel cineplasty (MTC; Figure 5) was first performed in the early 1900s and became popular after World War II [46, 130]. The major advantage of this one-time invasive method is that it is capable of providing tactile and proprioceptive feedback from the terminal device back to the user, complying with the concept of extended physiological proprioception (EPP) [46, 131]. However, such procedures lost favor in the 1970s with the advent of clinically available myoelectric prostheses, which do not require any surgery. Moreover, MTC presented the disadvantage that patients sometimes lacked sufficient muscle force to power their prosthesis [130].

Figure 5.


Muscle-force-based interface. The prosthesis is controlled by pulling on cables that mechanically link the tendons attached to the tunnel muscle cineplasty to a force transducer mounted to the thumb of the prosthetic hand. An artificial controller measures the tendon force produced by the muscle to operate the opening and closing of the hand. A) schematic representation of the prosthesis and the control interface, B) an early version of the prototype without a cosmetic hand glove, C) the final prototype of the prosthesis with a cosmetic hand glove. Figure modified from [46].

While traditionally MTCs provided both the control and the actuation of the prosthetic prehensor [132], Weir et al. [46] proposed a hybrid method in which the MTC provided the control signal for the prosthetic prehensor while the grasping force was supplied by an external power source. The authors suggested that multiple miniature forearm MTCs could potentially provide control signals for independent multi-finger control. However, to the best of our knowledge, no further advancements of this method have been made to date.

Muscle hemodynamics

NIRS has also been used to measure optical changes due to muscle contraction for the operation of active prostheses. Several physiological phenomena have been reported as responsible for the muscle optical changes, including blood perfusion [47, 133], muscle shortening [48], and muscle fiber direction [49]. Numerous studies show that NIRS signals are qualitatively similar to EMG signals in both magnitude and duration for different levels of contraction [47, 48, 50], with the particular difference that isometric and isotonic contractions can be distinguished from NIRS signals [49]. Furthermore, Herrmann et al. [133] proposed to combine EMG and NIRS signals to improve the pattern classification algorithm for the control of a prosthetic hand.

Interfacing with the plant: movement interfaces

The human body moves as a result of the interaction between the forces generated by the muscles and the configuration of the skeletal system. Measurements of relative joint rotations and motion of body segments with respect to a fixed reference have been used to detect motion intention.

Body segment movement

Body segment motion has been measured with inertial measurement units (IMUs) and camera-based systems. In a study by Thomas and Simon [51] and in one by Moreno et al. [52], IMUs were used to control an active knee joint during walking. The study by Jiang et al. [53] presented a hand-gesture recognition interface based on two Microsoft Kinect cameras (Microsoft; Redmond, Washington) for the control of external robotic arms. The interface was specifically developed for individuals with upper-level spinal cord injuries (SCIs) to perform a variety of simple object-retrieval tasks. One camera was used to interpret the hand gestures and locate the operator’s face for object positioning; the other was used to automatically recognize different daily-living objects for test subjects to select. A similar interface was developed by Martin et al. [134], who used several infrared sensors to measure hand movements for the control of an active arm support for patients suffering from muscular weakness.

Relative joint movement

Angular displacement between two adjacent body segments has been measured using electrogoniometers, which are attached to the two segments and produce an electrical signal proportional to the joint angle. Several kinds of electrogoniometers have been used to measure angular displacements. Early goniometers used simple angular potentiometers; Doubler and Childress [54] and Gibbons et al. [55] used these sensors to investigate the EPP concept in the control of active upper-extremity prostheses. Recent studies also investigated the performance of controlling prosthetic arms with the residual shoulder motion measured with a two-DOF joystick [56, 57] (see Figure 6 and Additional file 3). Although potentiometers can measure rotations about only one axis and the accuracy of the measurements depends on their alignment with the human joint, these sensors are still a common component in active movement-assistive devices [135, 136]. Another common solution for measuring angular displacement to control active prosthetic and orthotic devices is the use of angular encoders [59] or bending sensors [58, 137]. The control strategy developed by Wang et al. [138] estimates the location of the body center of mass in the sagittal and frontal planes, using IMUs and angular encoders to measure the weight shift and trigger the stepping movement of a lower-extremity exoskeleton.

Figure 6.


Joint-rotation-based interface. 2 DOF joystick used for the control of a prosthetic arm (shoulder flexion-extension, shoulder internal-external rotation) with the residual shoulder motion. A) Diagram of the control interface used to measure the residual shoulder motion, B) close-up of the prototype, C) shoulder elevation produces shoulder flexion, D) shoulder depression produces shoulder extension, E) shoulder protraction produces internal shoulder rotation, F) shoulder retraction produces external shoulder rotation. Additional file 3 shows an amputee using this shoulder-joystick-based interface to control the shoulder motions of an active prosthesis. Figure modified from [56].

Interfacing with the plant: force interfaces

The human plant can exert forces on the environment that provide information about the motion intention of the user. Force-based interfaces have been implemented using force-torque sensors [60–62] or simple FSRs for the control of active upper-extremity orthoses [63–66] and prostheses [56]. These kinds of interfaces generally implement control strategies in which the output motion is related to the input force [139]. An advantage of force-based interfaces is that force sensors can be embedded in the mechanical structure of the assistive device, avoiding any preparation for the placement of sensors on the user. Recently, Ragonesi et al. [140] reported measurements of gravitational and joint-stiffness torques in patients with muscle weakness, with the end goal of obtaining a robust model for the control of an active arm support. The authors found that the voluntary forces of individuals with muscular weakness were very hard to measure, since the gravitational forces were much larger. Subject-specific models were suggested as a strategy to improve the identification of voluntary forces. The study by Lobo-Prat et al. [62] demonstrated that an adult man with Duchenne muscular dystrophy who had no remaining arm function could successfully control an active elbow orthosis using the low-amplitude force (and EMG) signals that were still measurable (Figure 7). In that study, the gravitational and joint-stiffness forces were measured during a calibration procedure in which the orthosis, together with the relaxed forearm of the participant, slowly moved across its range of motion; the forces measured during this procedure were then subtracted from the actual measured force to compensate for them. Additional movie files show the participant performing the calibration procedure used to measure the gravitational and joint-stiffness forces (see Additional file 4) and using the force-based control interface to perform a discrete position-tracking task (see Additional file 5).


Additional file 4: Force calibration procedure. A critical aspect for the usability of force-based interfaces in individuals with severe muscular weakness is the accurate identification of the gravitational and joint stiffness forces. This video shows the force calibration procedure used to measure the pose-dependent gravitational and joint stiffness forces of the participant. The absence of EMG activity shows that the participant was relaxed during the measurement. Video courtesy of Flextension project. (MP4 9 MB)


Additional file 5: Force-based interface. An adult man with Duchenne muscular dystrophy with no arm function left using a force-based interface to operate an active elbow orthosis and performing a discrete position task. The plot in the bottom of the video shows the interaction force between the participant and the orthosis used as a control input for the admittance controller. The reader is referred to [62] for further information on the design and control of this active elbow support. Video courtesy of Flextension project. (MP4 17 MB)

Figure 7.


Force-based interface. An adult man with Duchenne muscular dystrophy with no arm function left using a force-based interface to operate an active elbow orthosis [62]. The force sensor measures the interaction force between the orthosis and the user, which is used as a control input for an admittance controller. A critical aspect for the usability of this interface is the accurate identification of the gravitational and joint stiffness forces (which are pose-dependent) required to distinguish the low-amplitude voluntary forces of the user. Additional file 4 shows the force calibration procedure used to identify the gravitational and joint stiffness forces. Additional file 5 shows a man with Duchenne muscular dystrophy performing a discrete tracking task using the force-based interface to control the active elbow orthosis. Figure courtesy of Flextension project.
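A minimal sketch of this calibration-plus-admittance scheme is given below: a look-up table recorded during the calibration procedure supplies the pose-dependent passive torque, and the remaining voluntary torque drives a virtual mass-damper (admittance) model whose output is the velocity setpoint for the motor. The parameter values and the one-dimensional look-up table are placeholders, not those of the cited orthosis.

```python
import numpy as np

class CalibratedAdmittance:
    """Admittance controller that first subtracts pose-dependent passive torques."""

    def __init__(self, angles, passive_torques, mass=0.1, damping=0.5):
        # Look-up table recorded while the relaxed arm was moved through its
        # range of motion: passive_torques[i] is the gravity + joint-stiffness
        # torque at angles[i]. Angles must be sorted in ascending order.
        self.angles = np.asarray(angles)
        self.passive = np.asarray(passive_torques)
        self.m, self.b = mass, damping  # virtual inertia and damping (assumed)
        self.vel = 0.0

    def step(self, measured_torque, joint_angle, dt):
        passive = np.interp(joint_angle, self.angles, self.passive)
        voluntary = measured_torque - passive           # low-amplitude user effort
        acc = (voluntary - self.b * self.vel) / self.m  # virtual mass-damper
        self.vel += acc * dt
        return self.vel                                 # velocity setpoint for the motor
```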

For patients with severe movement and force limitations, movement- and force-based interfaces are very challenging to use. These types of control interfaces are more often implemented in rehabilitation robots, where patients need training to regain mobility and strength [107, 141].

Interfacing with parallel systems

Apart from deriving the motion intention from signals that originate from the supported systems, several methods have been proposed that exploit signals from parallel systems such as the eyes, the mouth or the head to derive the movement intention of the user. This section reviews six groups of common interfaces that derive the intent of the user through signals generated by parallel systems.

Eye interfaces

Eye-tracking systems are a common method for the control of spelling devices or computer cursors by patients with severe movement impairments. Several eye-trackers have been developed, including camera-based methods, which measure changes in corneal reflection while infrared light is projected onto the eye [142], and electrical methods, which measure the electrooculographic (EOG) potential from surface electrodes.

Duvinage et al. [68] proposed an innovative system based on EOG and a programmable central pattern generator to control a lower-limb prosthesis. The control method was composed of two steps: First, an EOG-based eye-tracking system generated high-level control commands (such as faster, slower, or stop), according to specific eye movement sequences executed by the user; and second, a pattern generator, following the high-level commands derived from the user’s eye motion, provided the low-level commands for the control of the actuators.

In the study by Chen and Newman [69], EOG was used to control two-dimensional movements of an external robotic arm that resembled the human arm configuration. Eye-movement patterns such as saccades, fixations, or blinks were detected from the raw eye-gaze data by a pattern-recognition algorithm and converted into control signals according to predefined protocols. The authors suggested that one option for extending the movement control to three-dimensional space was to switch between predefined action planes in which the EOG control would still be two-dimensional.

While eye-movement interfaces have proven very accurate in two-dimensional space, three-dimensional gaze tracking is more challenging [143]. The three-dimensional gaze-tracking problem consists of mapping the pupil coordinates of the left and right eye to a three-dimensional point referenced to the user’s head coordinate frame. A recent study by Abbott and Faisal [67] presents an ultra-low-cost binocular three-dimensional gaze-tracking system, which the authors plan to use for wheelchair navigation or end-point control of robotic arms.

Tongue interfaces

Tongue movement has been measured using electrical switches [144], Hall-effect sensors [145], pressure sensors [146], and air-cored induction coils whose inductance changes when a ferromagnetic material attached to the tongue moves into the core of the coil [75, 77, 147].

The tongue-movement-based control interface developed by the research group of Prof. M. Ghovanloo at the GT-Bionics Lab (Georgia Institute of Technology, USA) is a wireless, unobtrusive, and wearable assistive technology that enables SCI patients to control computers and powered wheelchairs [75, 76] (see Figure 8 and Additional file 6). This system has an inductive coil mounted at the lateral part of the mouth and a ferromagnetic material attached at the tip of the tongue. The users generate control commands by moving the tongue to one of the user-defined locations, such as touching a particular tooth with the tongue’s tip. This control interface was tested in 13 high-level SCI patients during a navigation task with a powered wheelchair. The study reported that the subjects were able to perform the experimental task with 82 percent accuracy [148].


Additional file 6: Tongue-movement-based interface. A man with SCI performing a driving task with a wheelchair that is controlled with the tongue-movement-based interface developed by [76]. Video courtesy of Dr. Maysam Ghovanloo. (MP4 8 MB)

Figure 8.


Tongue-movement-based interfaces. The tongue-movement-based interface developed by [76]. This system has two inductive coils mounted at the lateral parts of the mouth and a ferromagnetic material attached at the tip of the tongue. The users generate control commands by moving the tongue to one of the user-defined locations. Tongue-movement interfaces take advantage of the fact that highly paralyzed patients generally retain tongue control and can move the tongue rapidly and accurately within the oral space. Additional file 6 shows an SCI patient performing a driving task with a wheelchair that is controlled with this tongue-movement-based interface. Figure courtesy of Dr. Maysam Ghovanloo.

The tongue interface developed by Struijk et al. [77] integrated the induction coils under the palate, where 18 sensors allowed real-time proportional control of both speed and direction similar to a conventional joystick. The system’s functionality was demonstrated in a pilot experiment with one healthy subject, where a typing rate of up to 70 characters per minute was obtained with an error rate of 3 percent. Recently, two alternative sensor designs based on the previously described system have been proposed in order to reduce the size of the sensor pad and increase the comfort of the oral interface [149].

Head interfaces

Head movements are generally measured using accelerometers and used to control powered wheelchairs [70–72, 150, 151]. The direction of the head inclination controls the wheelchair’s direction, and the velocity of the wheelchair is proportional to the inclination angle. Artificial neural networks are usually implemented in the control interface to detect the movement intention of the user with higher accuracy [70]. However, all the studies found were tested with healthy subjects, which does not provide reliable evidence of the actual usability of these interfaces for patients with severe movement impairments.
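The mapping from head inclination to wheelchair commands can be illustrated with a short sketch: static accelerometer readings give pitch and roll angles, a dead zone suppresses small involuntary head motion, and the remaining tilt is scaled to forward speed and turn rate. The dead-zone and saturation angles are assumed values, not taken from the cited systems.

```python
import math

DEAD_ZONE = 5.0   # degrees; ignore small involuntary head motion (assumed)
MAX_TILT = 25.0   # degrees of tilt mapped to full speed/turn rate (assumed)

def tilt_to_command(ax, ay, az):
    """Map static accelerometer readings (gravity direction) to drive commands."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))  # forward/back tilt
    roll = math.degrees(math.atan2(ay, az))                   # left/right tilt

    def scale(angle):
        # Zero inside the dead zone, then linear up to saturation at MAX_TILT.
        if abs(angle) < DEAD_ZONE:
            return 0.0
        s = (abs(angle) - DEAD_ZONE) / (MAX_TILT - DEAD_ZONE)
        return math.copysign(min(s, 1.0), angle)

    return scale(pitch), scale(roll)  # (forward speed, turn rate), each in [-1, 1]
```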

Alternative sensors include ultrasonic sensors [74] and camera-based interfaces [64, 73]. An important disadvantage of camera-based interfaces is that their functionality largely depends on the lighting conditions, resulting in the need for repeated calibration during the day [152].

Speech interfaces

In speech-based interfaces, the voice commands of the user are recorded using conventional microphones and translated into control signals through speech-recognition algorithms. Generally, speech-recognition requires training, which consists of the recording of the voice commands and their subsequent manual classification.

Fan and Li [78] developed a speech-based interface for the control of an upper-extremity prosthesis that could recognize 15 different voice commands with an accuracy of 96 percent. Speech has also been used to control powered wheelchairs [79] and external robotic arms [80]. The main drawback of speech-based interfaces is their high sensitivity to ambient noise, which compromises signal detection and classification. The recognition accuracy of the speech-based interface developed by Lv et al. [153] for the control of a robot decreased by 30 percent in the presence of ambient noise.

Hand interfaces

Hand joysticks are generally implemented for the control of powered wheelchairs [82] and external robotic arms [83, 154]. Johnson et al. [81] controlled the end-point position of a five-DOF upper-extremity orthosis with a joystick operated by the contralateral hand.

Other interfaces with parallel systems

Some of the control interfaces described previously in the sections Interfacing with the Plant and Interfacing with the Actuators have also been used to measure signals from parallel systems. For instance, EMG signals measured from muscles of the upper extremity have been used to control electrically powered wheelchairs [155–157]. Another example is the force-based interface used to control power-assisted wheelchairs [158], in which the wheelchair detects and amplifies the force applied by the user’s arm on the wheel.

Hybrid control interfaces

Each of the methods described in this review presents unique capabilities and limitations. A recent approach to take advantage of this functional heterogeneity is the development of hybrid control interfaces, which fuse data from several sensors. The concept behind hybrid control interfaces is to exploit the advantages and diminish the limitations of each individual system by combining them. Hybrid approaches can improve accuracy, reliability, and robustness compared to the individual use of control interfaces [159]. Moreover, hybrid control interfaces have the potential advantage of adapting better to the needs and capabilities of the user, since they can rely on several information sources. For instance, if the user becomes fatigued and the EMG signals degrade, the hybrid system could adapt and give priority to other information sources. EEG-EMG-based control interfaces are an example of a hybrid system that has been extensively used to operate active prostheses, orthoses, and robotic wheelchairs [159]. In active lower-extremity prostheses, hybrid interfaces combine EMG and mechanical inputs to identify the phase of the gait cycle and actions like sitting or standing [160–162]. Other examples include the combination of EMG with foot switches [163], IMUs [164], and video cameras [165].
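One simple way to realize this adaptive weighting is sketched below: class posteriors from two interfaces are fused with weights derived from per-channel signal-quality estimates, so that a degrading channel (e.g., fatigued EMG) loses authority. This is an illustrative fusion rule under assumed inputs, not a method taken from the cited studies.

```python
import numpy as np

def fuse_posteriors(p_emg, p_eeg, q_emg, q_eeg):
    """Confidence-weighted fusion of class posteriors from two interfaces.

    p_emg, p_eeg: class-probability vectors from each interface's classifier.
    q_emg, q_eeg: scalar signal-quality estimates in [0, 1]; q_emg could drop
                  as fatigue degrades the EMG, shifting authority to the EEG.
    Returns the fused class index and the fused probability vector.
    """
    w_emg = q_emg / (q_emg + q_eeg)
    fused = w_emg * np.asarray(p_emg) + (1.0 - w_emg) * np.asarray(p_eeg)
    return int(np.argmax(fused)), fused
```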

Evaluation of control interfaces

While there is a large variety of control interfaces, only a small number of studies have focused on their formal performance evaluation and comparison [166–173]. The performance of control interfaces is rarely evaluated consistently, which prevents objective evaluation and comparison. The information provided in most cases is insufficient to determine the advantages and limitations of a given control interface relative to the current state of the art. A significant step toward the standardization of outcome measures has been made by the American Academy of Orthotics & Prosthetics, which provides comprehensive recommendations for the evaluation of upper-limb prostheses [174].

A better understanding of the limitations and capabilities of the different control interfaces, through objective and quantitative evaluations during functional tasks, can provide relevant information for the selection of the most suitable control interface for a specific application. One-dimensional screen-based position-tracking experiments [166, 173] have been used to evaluate the performance of EMG-, force-, joystick-, and wrist-angle-based control interfaces in terms of tracking error, information transmission rate, human-operator bandwidth or crossover frequency, and effort. Guo et al. [167] compared SMG-, EMG-, force-, and wrist-angle-based interfaces during a series of screen-based discrete tracking tasks with and without a simultaneous auditory attention task. Even though these studies do not evaluate interface performance during functional tasks, they can provide insight into their potential value as control interfaces for active movement-assistive devices. Other common evaluation methods are tests based on the Fitts’ law paradigm in two- or three-dimensional space, used to compare the performance of different control strategies [168, 172, 175]. Recently, Simon et al. [176] developed the Target Achievement Control test, a real-time virtual test environment that closely resembles reality and allows the evaluation of upper-extremity movement control of multifunctional prostheses. The Target Achievement Control test is being used by several research groups to compare the performance of different control algorithms [177, 178].
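For reference, the Shannon formulation of Fitts’ law used in such tests reduces to two lines of arithmetic: an index of difficulty computed from target distance and width, and a throughput obtained by dividing it by the movement time. The numbers in the usage comment are an invented example.

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Index of difficulty (Shannon form) and throughput for one reach."""
    idx = math.log2(distance / width + 1.0)  # index of difficulty, in bits
    return idx, idx / movement_time          # (bits, bits per second)

# Example: a 0.20 m reach to a 0.025 m target completed in 1.5 s gives
# ID = log2(9) ≈ 3.17 bits and a throughput of ≈ 2.11 bit/s.
```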

Physiological and artificial learning

A common issue across all of the interfacing modalities described in this review is that the user must learn how to use the control interface. Artificial learning is an alternative approach that aims at relieving the user from the training period by rapidly identifying individualized signal patterns that are associated with a specific intention. Casadio et al. [179] developed a control interface for highly paralyzed patients by optimizing a subject-specific mapping between residual voluntary movements and the control inputs of assistive devices. In practice, this approach translates into a mutual adaptation in which both user and assistive device learn from each other [90]. Several studies have shown that including artificial learning in BCIs accelerates the adaptation process [11, 180, 181]. Artificial learning/intelligence has also been used for the development of shared-control algorithms that aim to combine the intelligence of the human and the assistive device to reduce the concentration effort [90, 182]. With this approach, low-level functions are controlled by the assistive device, and the role of the user is reduced to giving high-level commands, supervising, and fine-tuning the functioning of the system.

In addition to artificial learning, researchers have also been investigating the physiological learning capabilities of the human controller when subjects are asked to interact with control interfaces that have intuitive or non-intuitive mappings between EMG signals and cursor or device movement. Radhakrishnan et al. [183] and Pistohl et al. [184] found that subjects could learn non-intuitive mappings with a final performance nearly equal to that of the intuitive mappings, in both virtual and real prosthetic control environments. In addition, the recent study by Antuvan et al. [185] extended the aforementioned studies by investigating the influence of previously learned mappings on new control tasks. They found that learning transferred across subsequent trials that shared the same mapping, independent of the tasks to be executed, which suggests that maximal performance may be achieved by learning a constant, arbitrary mapping rather than using the common approach of subject- and task-specific mappings.

Design considerations

Intuitiveness

A control interface should be intuitive, enabling the user to operate the active movement-assistive device subconsciously (i.e., the way unimpaired people control their limbs), with a short training period, and to think about and do other things while using the device. Control interfaces may require proprioceptive feedback (i.e., EPP [131]) in cases where sensory feedback has been lost (such as in amputees), so that the user does not have to rely on visual feedback alone, which requires considerable mental effort [186].

The reviewed literature shows that most control interfaces are tested in laboratory environments in which users can concentrate on the experimental task with minimal distractions. However, “real-world” users must deal with much more complex situations, where mental effort cannot be entirely (or even primarily) dedicated to the control of the assistive device, as they have to interact with other people and the environment. Therefore, considering that many processes run simultaneously in the brain, it is plausible to conjecture that non-invasive BCIs would require a considerable effort of concentration to generate the specific brain signals required for the control of an active movement-assistive device. Peripheral signals such as force or EMG, on the other hand, are more closely linked to the control of movement; consequently, control interfaces using these signals would appear more natural and intuitive to the user than non-invasive BCIs. Nevertheless, as presented in the section Physiological and Artificial Learning, shared-control strategies can reduce the concentration effort of BCIs.

Information source for the control interface

A second consideration in the design of a control interface, required to gain both user and clinical acceptance of active movement-assistive devices, is that the interface should not sacrifice “useful” body functions of parallel systems (e.g., eye, head, or tongue movement) to derive the motion intention of the user. Nevertheless, signals from parallel systems can be used for “supplementary control,” such as tuning control settings or switching control modalities on and off. Note that supplementary control is used on an occasional basis; it therefore does not require the continuous attention of the user and never implies sacrificing the functionality of the parallel system.

Response time

Another essential feature of the control interface, with a determinative effect on the performance of the assistive device, is the response time or delay [187]. A tradeoff between speed and accuracy exists with respect to the control delay: larger delays increase the accuracy of motion intention detection but, at the same time, decrease the responsiveness (and therefore the performance) of the active assistive device. Farrell and Weir [187] concluded that controller delays should be kept between 100 and 175 ms for proper control of myoelectric prostheses.
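The sketch below illustrates this tradeoff with a simple moving-average EMG envelope: a longer smoothing window suppresses spurious fluctuations, and hence false activations, at the cost of a processing delay of roughly half the window length. The sampling rate and synthetic signal are assumptions made for illustration, not data from [187].

import numpy as np

fs = 1000                                          # Hz, assumed sampling rate

def emg_envelope(emg, window_ms):
    """Rectify and moving-average the EMG; the window adds ~window/2 delay."""
    n = max(1, int(fs * window_ms / 1000))
    return np.convolve(np.abs(emg), np.ones(n) / n, mode="same")

# Synthetic signal: background noise with a voluntary burst starting at t = 1 s.
rng = np.random.default_rng(1)
t = np.arange(2 * fs) / fs
emg = rng.standard_normal(t.size) * (0.1 + 0.9 * (t > 1.0))

for window_ms in (50, 150, 300):
    env = emg_envelope(emg, window_ms)
    print(f"window {window_ms:3d} ms -> delay ~{window_ms / 2:.0f} ms, "
          f"envelope ripple before burst: {env[:fs].std():.3f}")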

Robustness

The system should be robust to disturbances, so that variability in sensor placement during donning, slow signal changes due to varying environmental conditions (e.g., temperature, humidity, or light), user fatigue, and external forces of low (e.g., gravity) or high frequency (e.g., automobile vibrations) do not compromise its performance. Several researchers have identified signal stability and robustness as among the most critical requirements of control interfaces [89, 110, 188, 189].
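As a minimal example of one such robustness measure (our illustration, not a method taken from the cited studies), the sketch below subtracts an exponentially adapting baseline from the sensor signal, so that slow drift due to, for example, temperature changes or fatigue is removed while fast, intention-related changes pass through. The time constant and signals are assumptions.

import numpy as np

def remove_slow_drift(signal, fs, tau_s=30.0):
    """Subtract an exponentially adapting baseline (time constant tau_s)."""
    alpha = 1.0 / (tau_s * fs)
    baseline = float(signal[0])
    out = np.empty(len(signal))
    for i, x in enumerate(signal):
        baseline += alpha * (x - baseline)         # slowly tracks the drift
        out[i] = x - baseline                      # fast changes pass through
    return out

# Hypothetical sensor signal: fast intention-related bursts on a slow drift.
fs = 100.0
t = np.arange(0, 120, 1.0 / fs)
drift = 0.5 * t / t[-1]                            # slow environmental drift
bursts = (np.sin(2 * np.pi * 2.0 * t) > 0.99).astype(float)
cleaned = remove_slow_drift(drift + bursts, fs)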

Coordination of the degrees of freedom

From a functional point of view, a control interface for active movement-assistive devices should be able to coordinate multiple DOF simultaneously in an effective way. The importance and feasibility of simultaneous control have been investigated with several control modalities, such as EEG- [190], EMG- [23, 168], and head movement-based interfaces [172]. Apart from the functional advantages, coordinated movements give a more natural appearance than controlling every DOF separately. Nevertheless, the user may also need to control an individual DOF when performing a precision task such as unlocking a door or clicking the buttons of a computer mouse.
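The sketch below illustrates the basic idea behind simultaneous proportional control: a single linear regressor decodes the velocities of all DOF from EMG features at once, instead of selecting one DOF at a time. It is a generic illustration on synthetic data, not the algorithms of [23, 168, 190].

import numpy as np

def fit_simultaneous_decoder(X, Y, lam=1e-2):
    """Ridge regression from EMG features X (n, f) to DOF velocities Y (n, d)."""
    f = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(f), X.T @ Y)   # (f, d)

# Synthetic training data: 8 EMG features, 2 DOF velocities, 500 samples.
rng = np.random.default_rng(7)
X_train = rng.standard_normal((500, 8))
W_true = rng.standard_normal((8, 2))
Y_train = X_train @ W_true + 0.05 * rng.standard_normal((500, 2))

W = fit_simultaneous_decoder(X_train, Y_train)
new_features = rng.standard_normal(8)
dof_velocities = new_features @ W                  # both DOF decoded at once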

Independence

Ideally, an active movement-assistive device should not require assistance from other people (e.g., caretakers or family) to prepare the equipment, such as installing the sensors in the correct location or calibrating the system. Control interfaces that use implantable sensors or sensors integrated in the artificial plant inherently offer a clear advantage in this respect. Most active prosthetic devices can be self-donned [104], and recent studies focus on simplifying the training of control methods such as pattern recognition by developing interfaces that guide the user through the training of the algorithm [191].

Customization

Control interfaces for active movement-assistive devices need to be customized to the characteristics of the pathology, the available physiological signals, the morphology of the body, and the mechanical configuration of the device. In this respect, the control interface (and possibly the rest of the device) should be able to adapt to the changing needs and capabilities of the user. Note that changes can occur over the short term (within a day) as well as over the longer term (years). Monitoring specific biomechanical descriptors could indicate such changes and allow the assistive device to adapt to the new situation; moreover, clinicians could use this information to evaluate the progression of a disease.
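As a purely hypothetical sketch of such long-term adaptation, the code below logs a biomechanical descriptor (here, the median voluntary EMG amplitude per session, an assumed choice) and rescales the assistance gain when the descriptor drifts below a tolerance band; all names and thresholds are assumptions.

import numpy as np

def update_assistance_gain(gain, descriptor_log, reference, tolerance=0.1):
    """Increase assistance when the descriptor drops below a reference band."""
    current = float(np.median(descriptor_log[-7:]))    # recent sessions only
    if current < reference * (1.0 - tolerance):
        gain *= reference / max(current, 1e-9)         # compensate the decline
    return gain

# Hypothetical log of a user's voluntary EMG amplitude over five sessions.
log = [0.52, 0.50, 0.47, 0.44, 0.40]
gain = update_assistance_gain(gain=1.0, descriptor_log=log, reference=0.55)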

Conclusions

This paper presented a comprehensive review of the existing non-invasive control interfaces used to operate active movement-assistive devices. A novel systematic classification method was presented to categorize the inventory of existing control interfaces. This method classifies the control interfaces according to the source of the signal (that is, the subsystem of the HMCS the signal comes from), the physiological phenomenon that generates the signal, and the sensors used to measure the signal. We found that the classification method could successfully categorize all existing control interfaces, providing a comprehensive overview of the state of the art. The classification method also represents a framework in which new sensing modalities can be included. Each method was briefly described in the body of the paper following the same structure as the classification method.

Current non-invasive BCIs appear to be limited in speed, accuracy, and reliability for the control of active movement-assistive devices. However, researchers are improving their performance by implementing artificial learning algorithms and shared-control strategies, and by combining BCIs with other sensing modalities. Myoelectric interfaces are nowadays the most common method for operating active prosthetic and orthotic devices. Advanced myoelectric techniques, such as TMR and muscle synergy-based algorithms, are overcoming important functional limitations of standard myoelectric control. Although muscle contraction-based interfaces are not implemented in today’s clinical practice, they are advancing significantly and may in the future compete with well-established myoelectric control. While movements and forces are the natural way for humans to interact with the environment, interfaces based on these phenomena are very challenging to use for patients with severe movement and force limitations; movement- and force-based control interfaces are therefore more often implemented in rehabilitation robots, where patients need to train to regain mobility and strength. Finally, control interfaces based on signals from parallel systems are generally a suitable solution for deriving the movement intention of highly paralyzed patients. Recently, interfaces based on parallel systems have also been incorporated into hybrid sensing modalities to expand the sources of information. In this respect, artificial learning is a promising strategy for exploiting these extra sources of information and for giving some degree of autonomy and adaptability to the artificial controller. In addition, a better understanding of how the human controller works and learns will have a great impact on how control interfaces are designed.

Regarding the comparison and evaluation of the existing control interfaces, we found that while many different control interfaces have been developed, their respective performance capabilities and limitations remain unclear. A basic consensus on evaluation methodology is needed to compare control interfaces and select the most suitable one for a specific application. An important step toward the standardization of evaluation methods is being made especially in the field of upper-extremity prostheses by the American Academy of Orthotists & Prosthetists. Furthermore, most studies presenting novel control interfaces lack tests outside the laboratory environment. In this context, it is important for future studies to validate new developments by conducting experiments with real end users in everyday-life situations.

All the reviewed control interfaces (in combination with the rest of the assistive system) still perform far below the human movement control system, which inevitably hinders user acceptance. However, the high rate at which these technologies are advancing is a clear indication of the global interest in developing natural and intuitive control interfaces for active movement-assistive devices.

Consent

Written informed consent was obtained from the patient for the publication of this report and any accompanying images.

Electronic supplementary material


Additional file 3: Joint-rotation-based interface. An amputee controlling shoulder flexion-extension and internal-external rotations of an active upper-extremity prosthesis with a two-DOF joystick that measures residual shoulder motion. Video reused from [56]. (MP4 1 MB)

Acknowledgements

This research is supported by the Flextension Foundation through the Duchenne Parent Project, Spieren voor Spieren, Prinses Beatrix Spierfonds, Johanna Kinderfonds and Rotterdams Kinderrevalidatie Fonds Adriaanstichting, Focal Meditech, OIM Orthopedie, Ambroise, InteSpring and the Dutch Technology Foundation STW, which is part of the Netherlands Organisation for Scientific Research (NWO), and which is partly funded by the Ministry of Economic Affairs. Project Number: 11832.

Abbreviations

HMCS: Human movement control system
AMCS: Artificial movement control system
EPP: Extended physiological proprioception
TMR: Targeted muscle reinnervation
BCI: Brain-computer interface
EMG: Electromyography
EOG: Electrooculography
DOF: Degree of freedom
EEG: Electroencephalography
SMG: Sonomyography
ISO: International Organization for Standardization
MEG: Magnetoencephalography
fMRI: Functional magnetic resonance imaging
NIRS: Near-infrared spectroscopy
PMG: Phonomyography
AMG: Acoustic myography
MMG: Mechanomyography
RMS: Root-mean-square
MK: Myokinemetric
TAP: Tendon activated pneumatic
MT: Myotonic
MKT: Myokinetic
MTC: Muscle tunnel cineplasty
IMU: Inertial measurement unit
SCI: Spinal cord injury
FSR: Force-sensitive resistor

Footnotes

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JLP performed the literature search and review of the papers and was the lead manuscript writer and editor. PK contributed to the structure of the review and provided several revisions of all sections of the manuscript. PV provided the theoretical framework for the classification method and provided several revisions of the manuscript. AS, JH, and BK were involved in revising the manuscript and made significant contributions to the discussion in the design considerations section and the conclusions. All authors read and approved the final manuscript.

Contributor Information

Joan Lobo-Prat, Email: j.loboprat@utwente.nl.

Peter N Kooren, Email: p.kooren@vumc.nl.

Arno HA Stienen, Email: arnostienen@gmail.com.

Just L Herder, Email: j.l.herder@tudelft.nl.

Bart FJM Koopman, Email: h.f.j.m.koopman@utwente.nl.

Peter H Veltink, Email: p.h.veltink@utwente.nl.

References

  • 1.Veltink PH. Sensory feedback in artificial control of human mobility. Technol Health Care. 1999;7:383–391. [PubMed] [Google Scholar]
  • 2.Veltink PH, Koopman HFJM, van der Helm FCT, Nene AV. Biomechatronics – assisting the impaired motor system. Arch Physiol Biochem. 2001;109:1–9. doi: 10.1076/apab.109.1.1.4282. [DOI] [PubMed] [Google Scholar]
  • 3.Borutzky W. Bond Graph Methodology: Development and Analysis of Multidisciplinary Dynamic System Models. Switzerland: Springer International publisher; 2010. [Google Scholar]
  • 4.Dollar AM, Herr H. Lower extremity exoskeletons and active orthoses: challenges and state of the art. IEEE Trans Robot. 2008;24:144–158. doi: 10.1109/TRO.2008.915453. [DOI] [Google Scholar]
  • 5.Pons JL. Wearable Robots: Biomechatronic Exoskeletons. Chichester, UK: John Wiley & Sons; 2008. [Google Scholar]
  • 6.Nicolas-Alonso LF, Gomez-Gil J. Brain computer interfaces, a review. Sensors. 2012;12:1211–1279. doi: 10.3390/s120201211. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Zecca M, Micera S, Carrozza MC, Dario P. Control of multifunctional prosthetic hands by processing the electromyographic signal. Crit Rev Biomed Eng. 2002;30:459–485. doi: 10.1615/CritRevBiomedEng.v30.i456.80. [DOI] [PubMed] [Google Scholar]
  • 8.McFarland DJ, Wolpaw JR. Brain-computer interface operation of robotic and prosthetic devices. Computer. 2008;41:52–56. doi: 10.1109/MC.2008.409. [DOI] [Google Scholar]
  • 9.Kübler A, Birbaumer N. Brain–computer interfaces and communication in paralysis: extinction of goal directed thinking in completely paralysed patients? Clin Neurophysiol. 2008;119:2658–2666. doi: 10.1016/j.clinph.2008.06.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Muller-Putz GR, Pfurtscheller G. Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans Biomed Eng. 2008;55:361–364. doi: 10.1109/TBME.2007.897815. [DOI] [PubMed] [Google Scholar]
  • 11.Galán F, Nuttin M, Lew E, Ferrez PW, Vanacker G, Philips J, Millán J del R. A brain-actuated wheelchair: asynchronous and non-invasive Brain–computer interfaces for continuous control of robots. Clin Neurophysiol. 2008;119:2159–2169. doi: 10.1016/j.clinph.2008.06.001. [DOI] [PubMed] [Google Scholar]
  • 12.Gancet J, Ilzkovitz M, Motard E, Nevatia Y, Letier P, de Weerdt D, Cheron G, Hoellinger T, Seetharaman K, Petieau M, Ivanenko Y, Molinari M, Pisotta I, Tamburella F, Labini FS, D’ Avella A, van der Kooij H, Wang L, van der Helm F, Wang S, Zanow F, Hauffe R, Thorsteinsson F. 4th IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob): 24-27 June 2012. 2012. MINDWALKER: going one step further with assistive lower limbs exoskeleton for SCI condition subjects; pp. 1794–1800. [Google Scholar]
  • 13.Looned R, Webb J, Xiao ZG, Menon C. Assisting drinking with an affordable BCI-controlled wearable robot and electrical stimulation: a preliminary investigation. J Neuroeng Rehabil. 2014;11:51. doi: 10.1186/1743-0003-11-51. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Leeb R, Perdikis S, Tonin L, Biasiucci A, Tavella M, Creatura M, Molina A, Al-Khodairy A, Carlson T, Millán JDR. Transferring brain-computer interfaces beyond the laboratory: successful application control for motor-disabled users. Artif Intell Med. 2013;59:121–132. doi: 10.1016/j.artmed.2013.08.004. [DOI] [PubMed] [Google Scholar]
  • 15.Mellinger J, Schalk G, Braun C, Preissl H, Rosenstiel W, Birbaumer N, Kübler A. An MEG-based brain-computer interface (BCI) NeuroImage. 2007;36:581–593. doi: 10.1016/j.neuroimage.2007.03.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Buch E, Weber C, Cohen LG, Braun C, Dimyan MA, Ard T, Mellinger J, Caria A, Soekadar S, Fourkas A, Birbaumer N. Think to move: a neuromagnetic brain-computer interface (BCI) system for chronic stroke. Stroke. 2008;39:910–917. doi: 10.1161/STROKEAHA.107.505313. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Sorger B, Reithler J, Dahmen B, Goebel R. A real-time fMRI-based spelling device immediately enabling robust motor-independent communication. Curr Biol. 2012;22:1333–1338. doi: 10.1016/j.cub.2012.05.022. [DOI] [PubMed] [Google Scholar]
  • 18.Lee J-H, Ryu J, Jolesz FA, Cho Z-H, Yoo S-S. Brain-machine interface via real-time fMRI: preliminary study on thought-controlled robotic arm. Neurosci Lett. 2009;450:1–6. doi: 10.1016/j.neulet.2008.11.024. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Sagara K, Kido K. Evaluation of a 2-channel NIRS-based optical brain switch for motor disabilities’ communication tools. IEICE Trans Inf Syst. 2012;E95-D:829–834. doi: 10.1587/transinf.E95.D.829. [DOI] [Google Scholar]
  • 20.Misawa T, Goto K, Takano S, Hirobayashi S. A development of NIRS-based brain-computer interface for robot control. IEEJ Transact Sensors Micromachines. 2012;132:355–361. doi: 10.1541/ieejsmas.132.355. [DOI] [Google Scholar]
  • 21.Hargrove LJ, Simon AM, Lipschutz R, Finucane SB, Kuiken TA. Non-weight-bearing neural control of a powered transfemoral prosthesis. J Neuroeng Rehabil. 2013;10:62. doi: 10.1186/1743-0003-10-62. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Lenzi T, De Rossi SMM, Vitiello N, Carrozza MC. Intention-based EMG control for powered exoskeletons. IEEE Trans Biomed Eng. 2012;59:2180–2190. doi: 10.1109/TBME.2012.2198821. [DOI] [PubMed] [Google Scholar]
  • 23.Jiang N, Rehbaum H, Vujaklija I, Graimann B, Farina D. Intuitive, online, simultaneous, and proportional myoelectric control over two degrees-of-freedom in upper limb amputees. IEEE Trans Neural Syst Rehabil Eng. 2014;22:501–510. doi: 10.1109/TNSRE.2013.2278411. [DOI] [PubMed] [Google Scholar]
  • 24.Artemiadis PK, Kyriakopoulos KJ. EMG-based control of a robot arm using low-dimensional embeddings. IEEE Trans Robot. 2010;26:393–398. doi: 10.1109/TRO.2009.2039378. [DOI] [Google Scholar]
  • 25.Kuiken TA, Miller LA, Lipschutz RD, Lock BA, Stubblefield K, Marasco PD, Zhou P, Dumanian GA. Targeted reinnervation for enhanced prosthetic arm function in a woman with a proximal amputation: a case study. Lancet. 2007;369:371–380. doi: 10.1016/S0140-6736(07)60193-7. [DOI] [PubMed] [Google Scholar]
  • 26.Kuiken TA, Li G, Lock BA, Lipschutz RD, Miller LA, Stubblefield KA, Englehart K. Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms. JAMA. 2009;301:619–628. doi: 10.1001/jama.2009.116. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Hargrove LJ, Simon AM, Young AJ, Lipschutz RD, Finucane SB, Smith DG, Kuiken TA. Robotic leg control with EMG decoding in an amputee with nerve transfers. N Engl J Med. 2013;369:1237–1242. doi: 10.1056/NEJMoa1300126. [DOI] [PubMed] [Google Scholar]
  • 28.Barry DT, Leonard JA, Jr, Gitter AJ, Ball RD. Acoustic myography as a control signal for an externally powered prosthesis. Arch Phys Med Rehabil. 1986;67:267–269. [PubMed] [Google Scholar]
  • 29.Posatskiy AO, Chau T. The effects of motion artifact on mechanomyography: a comparative study of microphones and accelerometers. J Electromyogr Kinesiol. 2012;22:320–324. doi: 10.1016/j.jelekin.2011.09.004. [DOI] [PubMed] [Google Scholar]
  • 30.Silva J, Heim W, Chau T. A self-contained, mechanomyography-driven externally powered prosthesis. Arch Phys Med Rehabil. 2005;86:2066–2070. doi: 10.1016/j.apmr.2005.03.034. [DOI] [PubMed] [Google Scholar]
  • 31.Silva J, Chau T. Coupled microphone-accelerometer sensor pair for dynamic noise reduction in MMG signal recording. Electron Lett. 2003;39:1496–1498. doi: 10.1049/el:20031003. [DOI] [Google Scholar]
  • 32.Antonelli MG, Beomonte Zobel P, Giacomin J. Use of MMG signals for the control of powered orthotic devices: development of a rectus femoris measurement protocol. Assist Technol. 2009;21:1–12. doi: 10.1080/10400430902945678. [DOI] [PubMed] [Google Scholar]
  • 33.Kenney LP, Lisitsa I, Bowker P, Heath G, Howard D. Dimensional change in muscle as a control signal for powered upper limb prostheses: a pilot study. Med Eng Phys. 1999;21:589–597. doi: 10.1016/S1350-4533(99)00089-2. [DOI] [PubMed] [Google Scholar]
  • 34.Heath GH. Control of proportional grasping using a myokinemetric signal. Technol Disabil. 2003;15:73–83. [Google Scholar]
  • 35.Abboudi RL, Glass CA, Newby NA, Flint JA, Craelius W. A biomimetic controller for a multifinger prosthesis. IEEE Trans Rehabil Eng. 1999;7:121–129. doi: 10.1109/86.769401. [DOI] [PubMed] [Google Scholar]
  • 36.Curcie DJ, Flint JA, Craelius W. Biomimetic finger control by filtering of distributed forelimb pressures. IEEE Trans Neural Syst Rehabil Eng. 2001;9:69–75. doi: 10.1109/7333.918278. [DOI] [PubMed] [Google Scholar]
  • 37.Kim WS, Lee HD, Lim DH, Han JS, Shin KS, Han CS. Development of a muscle circumference sensor to estimate torque of the human elbow joint. Sensors Actuators A Phys. 2014;208:95–103. doi: 10.1016/j.sna.2013.12.036. [DOI] [Google Scholar]
  • 38.Zheng YP, Chan MMF, Shi J, Chen X, Huang QH. Sonomyography: monitoring morphological changes of forearm muscles in actions with the feasibility for the control of powered prosthesis. Med Eng Phys. 2006;28:405–415. doi: 10.1016/j.medengphy.2005.07.012. [DOI] [PubMed] [Google Scholar]
  • 39.Chen X, Zheng Y-P, Guo J-Y, Shi J. Sonomyography (smg) control for powered prosthetic hand: a study with normal subjects. Ultrasound Med Biol. 2010;36:1076–1088. doi: 10.1016/j.ultrasmedbio.2010.04.015. [DOI] [PubMed] [Google Scholar]
  • 40.González DS, Castellini C. A realistic implementation of ultrasound imaging as a human-machine interface for upper-limb amputees. Front Neurorobot. 2013;7:17. doi: 10.3389/fnbot.2013.00017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Kadefors R, Olsson T. Electrical impedance as a source of information in man–machine systems. Proc IEEE. 1972;60:724–725. doi: 10.1109/PROC.1972.8732. [DOI] [Google Scholar]
  • 42.Wininger M, Kim N-H, Craelius W. Pressure signature of forearm as predictor of grip force. J Rehabil Res Dev. 2008;45:883–892. doi: 10.1682/JRRD.2007.11.0187. [DOI] [PubMed] [Google Scholar]
  • 43.Moromugi S, Koujina Y, Ariki S, Okamoto A, Tanaka T, Feng MQ, Ishimatsu T. Muscle stiffness sensor to control an assistance device for the disabled. Artif Life Robotics. 2004;8:42–45. doi: 10.1007/s10015-004-0286-8. [DOI] [Google Scholar]
  • 44.Castellini C, Koiva R. IEEE International Conference on Rehabilitation Robotics (ICORR): 24-26 June 2013. Seattle, WA; 2013. Using a high spatial resolution tactile sensor for intention detection; pp. 1–7. [DOI] [PubMed] [Google Scholar]
  • 45.Han H, Han H, Kim J. Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC): Aug. 28 2012-Sept. 1 2012. San Diego, CA; 2012. Development of real-time muscle stiffness sensor based on resonance frequency for physical Human Robot Interactions; pp. 2367–2370. [DOI] [PubMed] [Google Scholar]
  • 46.Weir RF, Heckathorne CW, Childress DS. Cineplasty as a control input for externally powered prosthetic components. J Rehabil Res Dev. 2001;38:357–363. [PubMed] [Google Scholar]
  • 47.Bianchi T, Zambarbieri D, Beltrami G, Verni G. NIRS monitoring of muscle contraction to control a prosthetic device. Proc SPIE 3570, Biomed Sensors, Fibers, Opt Deliv Syst. 1999;3570:157–163. doi: 10.1117/12.336926. [DOI] [Google Scholar]
  • 48.Cen L, Han H, Kim J. IEEE Instrumentation and Measurement Technology Conference (I2MTC): 13-16 May 2012. 2011. Optical muscle activation sensors for estimating upper limb force level; pp. 1–4. [Google Scholar]
  • 49.Chianura A, Giardini ME. An electrooptical muscle contraction sensor. Med Biol Eng Comput. 2010;48:731–734. doi: 10.1007/s11517-010-0626-x. [DOI] [PubMed] [Google Scholar]
  • 50.Schneider DM, Ripplinger CM, Gilbertson K, Katti R, Schroeder MJ. Optically-Based Control of a Prosthetic Device. Key Biscayne, FL; 2003. [Google Scholar]
  • 51.Thomas G, Simon D. Proceedings of the IASTED International Symposia on Imaging and Signal Processing in Health Care and Technology, ISPHT 2012: May 14-16, 2012. Baltimore, USA; 2012. Inertial thigh angle sensing for a semi-active knee prosthesis; pp. 119–124. [Google Scholar]
  • 52.Moreno JC, de Lima ER, Ruíz AF, Brunetti FJ, Pons JL. Design and implementation of an inertial measurement unit for control of artificial limbs: application on leg orthoses. Sensors Actuators B Chem. 2006;118:333–337. doi: 10.1016/j.snb.2006.04.039. [DOI] [Google Scholar]
  • 53.Jiang H, Wachs JP, Duerstock BS. 2013 IEEE International Conference on Rehabilitation Robotics (ICORR): 24-26 June 2013. Seattle, WA; 2013. Integrated vision-based robotic arm interface for operators with upper limb mobility impairments; pp. 1–6. [DOI] [PubMed] [Google Scholar]
  • 54.Doubler JA, Childress DS. An analysis of extended physiological proprioception as a prosthesis-control technique. J Rehabil Res Dev. 1984;21:5–18. [PubMed] [Google Scholar]
  • 55.Gibbons DT, O’riain MD, Philippe-Auguste S. An above-elbow prosthesis employing programmed linkages. IEEE Trans Biomed Eng. 1987;BME-34:493–498. doi: 10.1109/TBME.1987.325978. [DOI] [PubMed] [Google Scholar]
  • 56.Lipschutz RD, Lock B, Sensinger J, Schultz AE, Kuiken TA. Use of two-axis joystick for control of externally powered shoulder disarticulation prostheses. J Rehabil Res Dev. 2011;48:661–667. doi: 10.1682/JRRD.2010.04.0072. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Losier Y, Englehart K, Hudgins B. Evaluation of shoulder complex motion-based input strategies for endpoint prosthetic-limb control using dual-task paradigm. J Rehabil Res Dev. 2011;48:669–678. doi: 10.1682/JRRD.2010.08.0165. [DOI] [PubMed] [Google Scholar]
  • 58.Takagi M, Iwata K, Takahashi Y, Yamamoto S-I, Koyama H, Komeda T. IEEE International Conference on Robotics and Automation, 2009 ICRA’09: 12-17 May 2009. 2009. Development of a grip aid system using air cylinders; pp. 2312–2317. [Google Scholar]
  • 59.Font-Llagunes JM, Pàmies-Vilà R, Alonso J, Lugrís U. Simulation and design of an active orthosis for an incomplete spinal cord injured subject. Procedia IUTAM. 2011;2:68–81. doi: 10.1016/j.piutam.2011.04.007. [DOI] [Google Scholar]
  • 60.Ragonesi D, Agrawal S, Sample W, Rahman T. Series elastic actuator control of a powered exoskeleton. Conf Proc IEEE Eng Med Biol Soc. 2011;2011:3515–3518. doi: 10.1109/IEMBS.2011.6090583. [DOI] [PubMed] [Google Scholar]
  • 61.Rahman T, Ramanathan R, Stroud S, Sample W, Seliktar R, Harwin W, Alexander M, Scavina M. Towards the control of a powered orthosis for people with muscular dystrophy. Proc Inst Mech Eng H J Eng Med. 2001;215:267–274. doi: 10.1243/0954411011535858. [DOI] [PubMed] [Google Scholar]
  • 62.Lobo-Prat J, Kooren PN, Keemink AQL, Paalman MI, Hekman EEG, Veltink PH, Stienen AHA, Koopman BFJM. 5th IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob): 12-15 Aug. 2014. Sao Paulo, Brazil; 2014. Design and control of an experimental active elbow support for adult Duchenne Muscular Dystrophy patients; pp. 187–192. [Google Scholar]
  • 63.Abbruzzese K, Lee D, Swedberg A, Talasan H, Paliwal M. Bioengineering Conference (NEBEC), 2011 IEEE 37th Annual Northeast: 1-3 April 2011. Troy, NY; 2011. An innovative design for an Assistive Arm Orthosis for stroke and muscle dystrophy; pp. 1–2. [Google Scholar]
  • 64.Baklouti M, Guyot P-A, Monacelli E, Couvet S. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008 IROS 2008: 22-26 Sept. 2008. 2008. Force controlled upper-limb powered exoskeleton for rehabilitation; p. 4202. [Google Scholar]
  • 65.Huo W, Huang J, Wang Y, Wu J. International Symposium on Micro-NanoMechatronics and Human Science (MHS): 7-10 Nov. 2010. Nagoya, Japan; 2010. Control of a rehabilitation robotic exoskeleton based on intentional reaching direction; pp. 357–362. [Google Scholar]
  • 66.Nilsson M, Ingvast J, Wikander J, von Holst H. 2012 IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES): 17-19 Dec. 2012. 2012. The Soft Extra Muscle system for improving the grasping capability in neurological rehabilitation; pp. 412–417. [Google Scholar]
  • 67.Abbott WW, Faisal AA. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain–machine interfaces. J Neural Eng. 2012;9:046016. doi: 10.1088/1741-2560/9/4/046016. [DOI] [PubMed] [Google Scholar]
  • 68.Duvinage M, Castermans T, Dutoit T. IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB): 11-15 April 2011. 2011. Control of a lower limb active prosthesis with eye movement sequences; pp. 1–7. [Google Scholar]
  • 69.Chen Y, Newman WS. IEEE International Conference on Robotics and Automation, 2004 Proceedings ICRA’04: April 26-May 1, 2004. 2004. A human-robot interface based on electrooculography; pp. 243–248. [Google Scholar]
  • 70.Craig DA, Nguyen HT. 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE-EMBS 2005): 17-18 Jan. 2006. 2006. Wireless real-time head movement system using a personal digital assistant (PDA) for control of a power wheelchair; pp. 772–775. [DOI] [PubMed] [Google Scholar]
  • 71.Fusco DA, Balbinot A. Prototype for managing the wheelchair movements by accelerometry. Sens Transducers. 2011;126:31–41. [Google Scholar]
  • 72.Hinkel JB., III . ASSETS’10 - Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility: 25-27 Oct. 2010. 2010. Head-guided wheelchair control system; pp. 313–314. [Google Scholar]
  • 73.Baklouti M, Monacelli E, Guitteny V, Couvet S. Intelligent assistive exoskeleton with vision based interface. In: Helal S, Mitra S, Wong J, Chang CK, Mokhtari M, editors. Smart Homes and Health Telematics. Berlin Heidelberg: Springer; 2008. pp. 123–135. [Google Scholar]
  • 74.Coyle ED. IEE Colloquium on New Developments in Electric Vehicles for Disabled Persons: 7 Mar 1997. 1995. Electronic wheelchair controller designed for operation by hand-operated joystick, ultrasonic noncontact head control and utterance from a small word-command vocabulary. [Google Scholar]
  • 75.Huo X, Wang J, Ghovanloo M. A magneto-inductive sensor based wireless tongue-computer interface. IEEE Trans Neural Syst Rehabil Eng. 2008;16:497–504. doi: 10.1109/TNSRE.2008.2003375. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Kim J, Park H, Bruce J, Sutton E, Rowles D, Pucci D, Holbrook J, Minocha J, Nardone B, West D, Laumann A, Roth E, Jones M, Veledar E, Ghovanloo M. The tongue enables computer and wheelchair control for people with spinal cord injury. Sci Transl Med. 2013;5:213ra166. doi: 10.1126/scitranslmed.3006296. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Struijk JJ, Lontis ER, Bentsen B, Christensen HV, Caltenco HA, Lund ME. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2009 EMBC 2009: 3-6 Sept. 2009. Minneapolis, MN; 2009. Fully integrated wireless inductive tongue computer interface for disabled people; pp. 547–550. [DOI] [PubMed] [Google Scholar]
  • 78.Fan BH, Li KY. Proceedings - 2010 2nd WRI Global Congress on Intelligent Systems, IEEE - GCIS 2010: 16-17 Dec. 2010. 2010. The speech control system of intelligent robot prosthesis; pp. 407–409. [Google Scholar]
  • 79.Simpson RC, Levine SP. Voice control of a powered wheelchair. IEEE Trans Neural Syst Rehabil Eng. 2002;10:122–125. doi: 10.1109/TNSRE.2002.1031981. [DOI] [PubMed] [Google Scholar]
  • 80.Rogalla O, Ehrenmann M, Zollner R, Becher R, Dillmann R. 11th IEEE International Workshop on Robot and Human Interactive Communication, 2002 Proceedings: 27-27 Sept. 2002. 2002. Using gesture and speech control for commanding a robot assistant; pp. 454–459. [Google Scholar]
  • 81.Johnson GR, Carus DA, Parrini G, Scattareggia Marchese S, Valeggi R. The design of a five-degree-of-freedom powered orthosis for the upper limb. Proc Inst Mech Eng H J Eng Med. 2001;215:275–284. doi: 10.1243/0954411011535867. [DOI] [PubMed] [Google Scholar]
  • 82.Dicianno BE, Cooper RA, Coltellaro J. Joystick control for powered mobility: current state of technology and future directions. Phys Med Rehabil Clin N Am. 2010;21:79–86. doi: 10.1016/j.pmr.2009.07.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Maheu V, Frappier J, Archambault PS, Routhier F. 2011 IEEE International Conference on Rehabilitation Robotics (ICORR): June 29 2011-July 1 2011. 2011. Evaluation of the JACO robotic arm: clinico-economic study for powered wheelchair users with upper-extremity disabilities; pp. 1–5. [DOI] [PubMed] [Google Scholar]
  • 84.Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain–computer interfaces for communication and control. Clin Neurophysiol. 2002;113:767–791. doi: 10.1016/S1388-2457(02)00057-3. [DOI] [PubMed] [Google Scholar]
  • 85.Pfurtscheller G, Guger C, Müller G, Krausz G, Neuper C. Brain oscillations control hand orthosis in a tetraplegic. Neurosci Lett. 2000;292:211–214. doi: 10.1016/S0304-3940(00)01471-3. [DOI] [PubMed] [Google Scholar]
  • 86.Do AH, Wang PT, King CE, Chun SN, Nenadic Z. Brain-computer interface controlled robotic gait orthosis. J Neuroeng Rehabil. 2013;10:111. doi: 10.1186/1743-0003-10-111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 87.Wang S, Wang L, Meijneke C, van Asseldonk E, Hoellinger T, Cheron G, Ivanenko Y, La Scaleia V, Sylos-Labini F, Molinari M, Tamburella F, Pisotta I, Thorsteinsson F, Ilzkovitz M, Gancent J, Nevatia Y, Hauffe R, Zanow F, van der Kooij H. Design and Control of the MINDWALKER Exoskeleton. IEEE Trans Neural Syst Rehabil Eng. 2014;99:1. doi: 10.1109/TNSRE.2014.2341275. [DOI] [PubMed] [Google Scholar]
  • 88.Kaufmann T, Herweg A, Kübler A. Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials. J Neuroeng Rehabil. 2014;11:7. doi: 10.1186/1743-0003-11-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Schultz AE, Kuiken TA. Neural interfaces for control of upper limb prostheses: the state of the art and future possibilities. PM&R. 2011;3:55–67. doi: 10.1016/j.pmrj.2010.06.016. [DOI] [PubMed] [Google Scholar]
  • 90.Millán J del R. Brain–Computer Interfaces. In: Farina D, Jensen W, Akay M, editors. Introduction to Neural Engineering for Motor Rehabilitation. Hoboken, NJ, USA: John Wiley & Sons, Inc; 2013. pp. 237–252. [Google Scholar]
  • 91.Allison BZ, Dunne S, Leeb R, Millán J del R, Nijholt A. Recent and Upcoming BCI Progress: Overview, Analysis, and Recommendations. In: Allison BZ, Dunne S, Leeb R, Millán JDR, Nijholt A, editors. Towards Practical Brain-Computer Interfaces. Berlin Heidelberg: Springer; 2013. pp. 1–13. [Google Scholar]
  • 92.Perdikis S, Leeb R, Williamson J, Ramsay A, Tavella M, Desideri L, Hoogerwerf E-J, Al-Khodairy A, Murray-Smith R, Millán JDR. Clinical evaluation of BrainTree, a motor imagery hybrid BCI speller. J Neural Eng. 2014;11:036003. doi: 10.1088/1741-2560/11/3/036003. [DOI] [PubMed] [Google Scholar]
  • 93.Sitaram R, Caria A, Birbaumer N. Hemodynamic brain–computer interfaces for communication and rehabilitation. Neural Netw. 2009;22:1320–1328. doi: 10.1016/j.neunet.2009.05.009. [DOI] [PubMed] [Google Scholar]
  • 94.Kanoh S, Murayama YM, Miyamoto K, Yoshinobu T, Kawashima R. A NIRS-based brain-computer interface system during motor imagery: system development and online feedback training. Conf Proc IEEE Eng Med Biol Soc. 2009;2009:594–597. doi: 10.1109/IEMBS.2009.5333710. [DOI] [PubMed] [Google Scholar]
  • 95.Weiskopf N. Real-time fMRI and its application to neurofeedback. NeuroImage. 2012;62:682–692. doi: 10.1016/j.neuroimage.2011.10.009. [DOI] [PubMed] [Google Scholar]
  • 96.Weiskopf N, Veit R, Erb M, Mathiak K, Grodd W, Goebel R, Birbaumer N. Physiological self-regulation of regional brain activity using real-time functional magnetic resonance imaging (fMRI): methodology and exemplary data. NeuroImage. 2003;19:577–586. doi: 10.1016/S1053-8119(03)00145-9. [DOI] [PubMed] [Google Scholar]
  • 97.Naito M, Michioka Y, Ozawa K, Ito Y, Kiguchi M, Kanazawa T. A communication means for totally locked-in ALS patients based on changes in cerebral blood volume measured with near-infrared light. IEICE Trans Inf Syst. 2007;E90-D:1028–1037. doi: 10.1093/ietisy/e90-d.7.1028. [DOI] [Google Scholar]
  • 98.Sorger B, Dahmen B, Reithler J, Gosseries O, Maudoux A, Laureys S, Goebel R. Another kind of “BOLD Response”: answering multiple-choice questions via online decoded single-trial brain signals. Prog Brain Res. 2009;177:275–292. doi: 10.1016/S0079-6123(09)17719-1. [DOI] [PubMed] [Google Scholar]
  • 99.Yoo S-S, Fairneny T, Chen N-K, Choo S-E, Panych LP, Park H, Lee S-Y, Jolesz FA. Brain-computer interface using fMRI: spatial navigation by thoughts. NeuroReport. 2004;15:1591–1595. doi: 10.1097/01.wnr.0000133296.39160.fe. [DOI] [PubMed] [Google Scholar]
  • 100.Ayaz H, Shewokis PA, Bunce S, Onaral B. Engineering in Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE: Aug. 30 2011-Sept. 3 2011. Boston, MA; 2011. An optical brain computer interface for environmental control; pp. 6327–6330. [DOI] [PubMed] [Google Scholar]
  • 101.Ward BD, Mazaheri Y. Information transfer rate in fMRI experiments measured using mutual information theory. J Neurosci Methods. 2008;167:22–30. doi: 10.1016/j.jneumeth.2007.06.027. [DOI] [PubMed] [Google Scholar]
  • 102.Power SD, Kushki A, Chau T. Towards a system-paced near-infrared spectroscopy brain-computer interface: differentiating prefrontal activity due to mental arithmetic and mental singing from the no-control state. J Neural Eng. 2011;8:066004. doi: 10.1088/1741-2560/8/6/066004. [DOI] [PubMed] [Google Scholar]
  • 103.Shih JJ, Krusienski DJ, Wolpaw JR. Brain-computer interfaces in medicine. Mayo Clin Proc. 2012;87:268–279. doi: 10.1016/j.mayocp.2011.12.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 104.Spires MC, Kelly BM, Davis AJ. Prosthetic Restoration and Rehabilitation of the Upper and Lower Extremity. New York, USA: Demos Medical Publishing; 2013. [Google Scholar]
  • 105.Roche AD, Rehbaum H, Farina D, Aszmann OC. Prosthetic myoelectric control strategies: a clinical perspective. Curr Surg Rep. 2014;2:1–11. doi: 10.1007/s40137-013-0044-8. [DOI] [Google Scholar]
  • 106.Jiménez-Fabián R, Verlinden O. Review of control algorithms for robotic ankle systems in lower-limb orthoses, prostheses, and exoskeletons. Med Eng Phys. 2012;34:397–408. doi: 10.1016/j.medengphy.2011.11.018. [DOI] [PubMed] [Google Scholar]
  • 107.Anam K, Al-Jumaily AA. Active exoskeleton control systems: state of the art. Proc Eng. 2012;41:988–994. doi: 10.1016/j.proeng.2012.07.273. [DOI] [Google Scholar]
  • 108.Muzumdar A, editor. Powered Upper Limb Prostheses: Control, Implementation and Clinical Application. 1. Berlin Heidelberg: Springer; 2004. [Google Scholar]
  • 109.Ison M, Artemiadis P. The role of muscle synergies in myoelectric control: trends and challenges for simultaneous multifunction control. J Neural Eng. 2014;11:051001. doi: 10.1088/1741-2560/11/5/051001. [DOI] [PubMed] [Google Scholar]
  • 110.Farina D, Jiang N, Rehbaum H, Holobar A, Graimann B, Dietl H, Aszmann OC. The extraction of neural information from the surface EMG for the control of upper-limb prostheses: emerging avenues and challenges. IEEE Trans Neural Syst Rehabil Eng. 2014;22:797–809. doi: 10.1109/TNSRE.2014.2305111. [DOI] [PubMed] [Google Scholar]
  • 111.Scheme E, Englehart K. Electromyogram pattern recognition for control of powered upper-limb prostheses: state of the art and challenges for clinical use. J Rehabil Res Dev. 2011;48:643–659. doi: 10.1682/JRRD.2010.09.0177. [DOI] [PubMed] [Google Scholar]
  • 112.Smith LH, Hargrove LJ, Lock B, Kuiken T. Determining the optimal window length for pattern recognition-based myoelectric control: balancing the competing effects of classification error and controller delay. IEEE Trans Neural Syst Rehabil Eng. 2011;19:186–192. doi: 10.1109/TNSRE.2010.2100828. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 113.Muceli S, Jiang N, Farina D. Extracting signals robust to electrode number and shift for online simultaneous and proportional myoelectric control by factorization algorithms. IEEE Trans Neural Syst Rehabil Eng. 2014;22:623–633. doi: 10.1109/TNSRE.2013.2282898. [DOI] [PubMed] [Google Scholar]
  • 114.Rehbaum H, Jiang N, Paredes L, Amsuess S, Graimann B, Farina D. 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC): Aug. 28 2012-Sept. 1 2012. San Diego, CA; 2012. Real time simultaneous and proportional control of multiple degrees of freedom from surface EMG: preliminary results on subjects with limb deficiency; pp. 1346–1349. [DOI] [PubMed] [Google Scholar]
  • 115.Cavallaro EE, Rosen J, Perry JC, Burns S. Real-time myoprocessors for a neural controlled powered exoskeleton arm. IEEE Trans Biomed Eng. 2006;53:2387–2396. doi: 10.1109/TBME.2006.880883. [DOI] [PubMed] [Google Scholar]
  • 116.Miller LA, Lipschutz RD, Stubblefield KA, Lock BA, Huang H, Williams TW, III, Weir RF, Kuiken TA. Control of a six degree of freedom prosthetic arm after targeted muscle reinnervation surgery. Arch Phys Med Rehabil. 2008;89:2057–2065. doi: 10.1016/j.apmr.2008.05.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 117.Kuiken TA, Marasco PD, Lock BA, Harden RN, Dewald JPA. Redirection of cutaneous sensation from the hand to the chest skin of human amputees with targeted reinnervation. PNAS. 2007;104:20061–20066. doi: 10.1073/pnas.0706525104. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 118.Bueno RA, Jr, French B, Cooney D, Neumeister MW. Targeted muscle reinnervation of a muscle-free flap for improved prosthetic control in a shoulder amputee: case report. J Hand Surg. 2011;36:890–893. doi: 10.1016/j.jhsa.2011.02.020. [DOI] [PubMed] [Google Scholar]
  • 119.Tkach DC, Young AJ, Smith LH, Rouse EJ, Hargrove LJ. Real-time and offline performance of pattern recognition myoelectric control using a generic electrode grid with targeted muscle reinnervation patients. IEEE Trans Neural Syst Rehabil Eng. 2014;22:727–734. doi: 10.1109/TNSRE.2014.2302799. [DOI] [PubMed] [Google Scholar]
  • 120.Hebert JS, Olson JL, Morhart MJ, Dawson MR, Marasco PD, Kuiken T, Chan KM. Novel targeted sensory reinnervation technique to restore functional hand sensation after transhumeral amputation. IEEE Trans Neural Syst Rehabil Eng. 2014;22:765–773. doi: 10.1109/TNSRE.2013.2294907. [DOI] [PubMed] [Google Scholar]
  • 121.Souza JM, Fey NP, Cheesborough JE, Agnew SP, Hargrove LJ, Dumanian GA. Advances in transfemoral amputee rehabilitation: early experience with targeted muscle reinnervation. Curr Surg Rep. 2014;2:1–9. doi: 10.1007/s40137-014-0051-4. [DOI] [Google Scholar]
  • 122.Hargrove LJ, Simon AM, Lipschutz RD, Finucane SB, Kuiken TA. Real-time myoelectric control of knee and ankle motions for transfemoral amputees. JAMA. 2011;305:1542–1544. doi: 10.1001/jama.2011.465. [DOI] [PubMed] [Google Scholar]
  • 123.Smith LH, Hargrove LJ. 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER): 6-8 Nov. 2013. San Diego, CA; 2013. Intramuscular EMG after targeted muscle reinnervation for pattern recognition control of myoelectric prostheses; pp. 1155–1158. [Google Scholar]
  • 124.Orizio C. Muscle sound: bases for the introduction of a mechanomyographic signal in muscle studies. Crit Rev Biomed Eng. 1993;21:201–243. [PubMed] [Google Scholar]
  • 125.Posatskiy AO, Chau T. Design and evaluation of a novel microphone-based mechanomyography sensor with cylindrical and conical acoustic chambers. Med Eng Phys. 2012;34:1184–1190. doi: 10.1016/j.medengphy.2011.12.007. [DOI] [PubMed] [Google Scholar]
  • 126.Shi J, Chang Q, Zheng Y-P. Feasibility of controlling prosthetic hand using sonomyography signal in real time: preliminary study. J Rehabil Res Dev. 2010;47:87–98. doi: 10.1682/JRRD.2009.03.0031. [DOI] [PubMed] [Google Scholar]
  • 127.Castellini C, Passig G. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): 25-30 Sept. 2011. San Francisco, CA; 2011. Ultrasound image features of the wrist are linearly related to finger positions; pp. 2108–2114. [Google Scholar]
  • 128.Silva OL, Sousa THS, Hoffman IO, de Camargo EDLB, de Moura FS, Martins ARC, Biasi C, Fantoni DT, Lima RG. 5th IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob): 12-15 Aug. 2014. Sao Paulo, Brazil; 2014. A proposal to monitor muscle contraction through the change of electrical impedance inside a muscle; pp. 763–767. [Google Scholar]
  • 129.Moromugi S, Kumano S, Ueda M, Ishimatsu T, Feng MQ, Tanaka T. 5th IEEE Conference on Sensors, 2006: 14-16 Aug. 2006. 2006. A sensor to measure hardness of human tissue; pp. 388–391. [Google Scholar]
  • 130.Childress DS. Presentation highlights: tunnel cineplasty. J Rehabil Res Dev. 2002;39(3 SUPPL.):9–10. [Google Scholar]
  • 131.Simpson DC. The choice of control system for the multimovement prosthesis: extended physiological proprioception (EPP). In: The Control of Upper-Extremity Prostheses and Orthoses. 1974. pp. 146–150. [Google Scholar]
  • 132.Brav EA, Spittler AW, Lusombe HB, Kuitert JH, Macdonald WF, Vulteejr FE, Woodard GH, Fletcher MJ, Leonard F. Cineplasty: an end-result study. J Bone Joint Surg Am. 1957;39:59–76. [PubMed] [Google Scholar]
  • 133.Herrmann S, Attenberger A, Buchenrieder K. Prostheses control with combined near-infrared and myoelectric signals. In: Moreno-Díaz R, Pichler F, Quesada-Arencibia A, editors. Computer Aided Systems Theory – EUROCAST 2011. Berlin Heidelberg: Springer; 2012. pp. 601–608. [Google Scholar]
  • 134.Martin H, Chevallier S, Monacelli E. ESANN 2012 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Bruges (Belgium): ESANN; 2012. Fast calibration of hand movement-based interface; pp. 573–578. [Google Scholar]
  • 135.Blaya JA, Herr H. Adaptive control of a variable-impedance ankle-foot orthosis to assist drop-foot gait. IEEE Trans Neural Syst Rehabil Eng. 2004;12:24–31. doi: 10.1109/TNSRE.2003.823266. [DOI] [PubMed] [Google Scholar]
  • 136.Herr H, Wilkenfeld A. User-adaptive control of a magnetorheological prosthetic knee. Ind Robot: Int J. 2003;30:42–55. doi: 10.1108/01439910310457706. [DOI] [Google Scholar]
  • 137.Orengo G, Giovannini L, Latessa G, Saggio G, Giannini F. 1st International Conference on Wireless Communication, Vehicular Technology, Information Theory and Aerospace Electronic Systems Technology, 2009 Wireless VITAE 2009: 17-20 May 2009. 2009. Characterization of piezoresistive sensors for goniometric glove in hand prostheses; pp. 684–687. [Google Scholar]
  • 138.Wang L, Wang S, van Asseldonk EHF, Van Der Kooij H. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): 3-7 Nov. 2013. 2013. Actively controlled lateral gait assistance in a lower limb exoskeleton; pp. 965–970. [Google Scholar]
  • 139.Zeng G, Hemami A. An overview of robot force control. Robotica. 1997;15:473–482. doi: 10.1017/S026357479700057X. [DOI] [Google Scholar]
  • 140.Ragonesi D, Agrawal S, Sample W, Rahman T. Quantifying anti-gravity torques in the design of a powered exoskeleton. Conf Proc IEEE Eng Med Biol Soc. 2011;2011:7458–7461. doi: 10.1109/IEMBS.2011.6091749. [DOI] [PubMed] [Google Scholar]
  • 141.Lo HS, Xie SQ. Exoskeleton robots for upper-limb rehabilitation: state of the art and future prospects. Med Eng Phys. 2012;34:261–268. doi: 10.1016/j.medengphy.2011.10.004. [DOI] [PubMed] [Google Scholar]
  • 142.Betke M, Gips J, Fleming P. The Camera Mouse: visual tracking of body features to provide computer access for people with severe disabilities. IEEE Trans Neural Syst Rehabil Eng. 2002;10:1–10. doi: 10.1109/TNSRE.2002.1021581. [DOI] [PubMed] [Google Scholar]
  • 143.Morimoto CH, Mimica MRM. Eye gaze tracking techniques for interactive applications. Comput Vis Image Underst. 2005;98:4–24. doi: 10.1016/j.cviu.2004.07.010. [DOI] [Google Scholar]
  • 144.Clayton C, Platts RGS, Steinberg M, Hennequin JR. Palatal tongue controller. J Microcomput Appl. 1992;15:9–12. doi: 10.1016/0745-7138(92)90042-4. [DOI] [Google Scholar]
  • 145.Buchhold N. Apparatus for controlling peripheral devices through tongue movement, and method of processing control signals. 1995. US 5460186 A. [Google Scholar]
  • 146.Wakumoto M, Masaki S, Honda K, Ohue T. Proceedings of the 5th Int. Conf. on Spoken Language Processing: Nov. 30-Dec. 4 1998. 1998. A pressure sensitive palatography: application of new pressure sensitive sheet for measuring tongue-palatal contact pressure; pp. 3151–3154. [Google Scholar]
  • 147.Struijk LNS. An inductive tongue computer interface for control of computers and assistive devices. IEEE Trans Biomed Eng. 2006;53:2594–2597. doi: 10.1109/TBME.2006.880871. [DOI] [PubMed] [Google Scholar]
  • 148.Huo X, Ghovanloo M. Evaluation of a wireless wearable tongue–computer interface by individuals with high-level spinal cord injuries. J Neural Eng. 2010;7:026008. doi: 10.1088/1741-2560/7/2/026008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 149.Lontis ER, Andreasen Struijk LNS. 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC): Aug. 28 2012-Sept. 1 2012. San Diego, CA; 2012. Alternative design of inductive pointing device for oral interface for computers and wheelchairs; pp. 3328–3331. [DOI] [PubMed] [Google Scholar]
  • 150.Joseph T, Nguyen H. Engineering in Medicine and Biology Society, 1998 Proceedings of the 20th Annual International Conference of the IEEE: 29 Oct-1 Nov 1998. 1998. Neural network control of wheelchairs using telemetric head movement; pp. 2731–2733. [Google Scholar]
  • 151.Taylor PB, Nguyen HT. Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2003: 17-21 Sept. 2003. 2003. Performance of a head-movement interface for wheelchair control; pp. 1590–1593. [Google Scholar]
  • 152.Bergasa LM, Mazo M, Gardel A, Barea R, Boquete L. 15th International Conference on Pattern Recognition, 2000 Proceedings: 3-7 Sept. 2000. 2000. Commands generation by face movements applied to the guidance of a wheelchair for handicapped people; pp. 660–663. [Google Scholar]
  • 153.Lv X, Zhang M, Li H. IEEE International Conference on Automation and Logistics, 2008 ICAL 2008: 1-3 Sept. 2008. 2008. Robot control based on voice command; pp. 2490–2494. [Google Scholar]
  • 154.Romer GRBE, Stuyt HJA, Peters A. 9th International Conference on Rehabilitation Robotics, 2005 ICORR 2005: 28 June-1 July 2005. Chicago, IL; 2005. Cost-savings and economic benefits due to the assistive robotic manipulator (ARM); pp. 201–204. [Google Scholar]
  • 155.Felzer T, Freisleben B. Proc ASSETS 2002. 2002. HaWCoS: the “Hands-free” wheelchair control system; pp. 127–134. [Google Scholar]
  • 156.Han J-S, Zenn Bien Z, Kim D-J, Lee H-E, Kim J-S. Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2003: 17-21 Sept. 2003. 2003. Human-machine interface for wheelchair control with EMG and its evaluation; pp. 1602–1605. [Google Scholar]
  • 157.Moon I, Lee M, Chu J, Mun M. Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005 ICRA 2005: 18-22 April 2005. 2005. Wearable EMG-based HCI for Electric-Powered Wheelchair Users with Motor Disabilities; pp. 2649–2654. [Google Scholar]
  • 158.Cooper RA, Corfman TA, Fitzgerald SG, Boninger ML, Spaeth DM, Ammer W, Arva J. Performance assessment of a pushrim-activated power-assisted wheelchair control system. IEEE Trans Control Syst Technol. 2002;10:121–126. doi: 10.1109/87.974345. [DOI] [Google Scholar]
  • 159.Lalitharatne TD, Teramoto K, Hayashi Y, Kiguchi K. Towards hybrid EEG-EMG-based control approaches to be used in Bio-robotics applications: current status, challenges and future directions. Paladyn, J Behav Robotics. 2013;0:11–18. [Google Scholar]
  • 160.Simon AM, Fey NP, Ingraham KA, Young AJ, Hargrove LJ. 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER): 6-8 Nov. 2013. San Diego, CA; 2013. Powered prosthesis control during walking, sitting, standing, and non-weight bearing activities using neural and mechanical inputs; pp. 1174–1177. [Google Scholar]
  • 161.Huang H, Zhang F, Hargrove LJ, Dou Z, Rogers DR, Englehart KB. Continuous locomotion-mode identification for prosthetic legs based on neuromuscular-mechanical fusion. IEEE Trans Biomed Eng. 2011;58:2867–2875. doi: 10.1109/TBME.2011.2161671. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 162.Zhang F, Huang H. 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC): 3-7 July 2013. 2013. Decoding movement intent of patient with multiple sclerosis for the powered lower extremity exoskeleton; pp. 4957–4960. [DOI] [PubMed] [Google Scholar]
  • 163.Resnik L, Borgia M, Latlief G, Sasson N, Smurr-Walters L. Self-reported and performance-based outcomes using DEKA Arm. J Rehabil Res Dev. 2014;51:351–362. doi: 10.1682/JRRD.2013.08.0180. [DOI] [PubMed] [Google Scholar]
  • 164.Carbonaro N, Anania G, Bacchereti M, Donati G, Ferretti L, Pellicci G, Parrini G, Vitetta N, Rossi DD, Tognetti A. An Innovative Multisensor Controlled Prosthetic Hand. In: Romero LMR, editor. XIII Mediterranean Conference on Medical and Biological Engineering and Computing 2013. 2014. pp. 93–96. [Google Scholar]
  • 165.Došen S, Cipriani C, Kostić M, Controzzi M, Carrozza MC, Popovič DB. Cognitive vision system for control of dexterous prosthetic hands: experimental evaluation. J Neuroeng Rehabil. 2010;7:42. doi: 10.1186/1743-0003-7-42. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 166.Corbett EA, Perreault EJ, Kuiken TA. Comparison of electromyography and force as interfaces for prosthetic control. J Rehabil Res Dev. 2011;48:629. doi: 10.1682/JRRD.2010.03.0028. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 167.Guo J-Y, Zheng Y-P, Kenney LPJ, Bowen A, Howard D, Canderle JJ. A comparative evaluation of sonomyography, electromyography, force, and wrist angle in a discrete tracking task. Ultrasound Med Biol. 2011;37:884–891. doi: 10.1016/j.ultrasmedbio.2011.03.008. [DOI] [PubMed] [Google Scholar]
  • 168.Wurth SM, Hargrove LJ. A real-time comparison between direct control, sequential pattern recognition control and simultaneous pattern recognition control using a Fitts’ law style assessment procedure. J Neuroeng Rehabil. 2014;11:91. doi: 10.1186/1743-0003-11-91. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 169.Kaufmann T, Holz EM, Kubler A. Comparison of tactile, auditory, and visual modality for brain-computer interface use: a case study with a patient in the locked-in state. Front Neurosci. 2013;7:129. doi: 10.3389/fnins.2013.00129. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 170.Simon AM, Stern K, Hargrove LJ. A comparison of proportional control methods for pattern recognition control. Conf Proc IEEE Eng Med Biol Soc. 2011;2011:3354–3357. doi: 10.1109/IEMBS.2011.6090909. [DOI] [PubMed] [Google Scholar]
  • 171.Koyama S, Chase SM, Whitford AS, Velliste M, Schwartz AB, Kass RE. Comparison of brain–computer interface decoding algorithms in open-loop and closed-loop control. J Comput Neurosci. 2010;29:73–87. doi: 10.1007/s10827-009-0196-9. [DOI] [PubMed] [Google Scholar]
  • 172.Williams MR, Kirsch RF. Evaluation of head orientation and neck muscle EMG signals as command inputs to a human–computer interface for individuals with high tetraplegia. IEEE Trans Neural Syst Rehabil Eng. 2008;16:485–496. doi: 10.1109/TNSRE.2008.2006216. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 173.Lobo-Prat J, Keemink AQ, Stienen AH, Schouten AC, Veltink PH, Koopman BH. Evaluation of EMG, force and joystick as control interfaces for active arm supports. J Neuroeng Rehabil. 2014;11:68. doi: 10.1186/1743-0003-11-68. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 174.Hill W, Kyberd P, Norling Hermansson L, Hubbard S, Stavdahl Ø, Swanson S. Upper limb prosthetic outcome measures (ULPOM): a working group and their findings. JPO J Prosthet Orthot. 2009;21(Supplement):P69–P82. doi: 10.1097/JPO.0b013e3181ae970b. [DOI] [Google Scholar]
  • 175.Scheme EJ, Englehart KB. Validation of a selective ensemble-based classification scheme for myoelectric control using a three-dimensional Fitts’ Law test. IEEE Trans Neural Syst Rehabil Eng. 2013;21:616–623. doi: 10.1109/TNSRE.2012.2226189. [DOI] [PubMed] [Google Scholar]
  • 176.Simon AM, Hargrove LJ, Lock BA, Kuiken TA. Target achievement control test: evaluating real-time myoelectric pattern-recognition control of multifunctional upper-limb prostheses. J Rehabil Res Dev. 2011;48:619–627. doi: 10.1682/JRRD.2010.08.0149. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 177.Ortiz-Catalan M, Håkansson B, Brånemark R. Real-time and simultaneous control of artificial limbs based on pattern recognition algorithms. IEEE Trans Neural Syst Rehabil Eng. 2014;22:756–764. doi: 10.1109/TNSRE.2014.2305097. [DOI] [PubMed] [Google Scholar]
  • 178.Young AJ, Smith LH, Rouse EJ, Hargrove LJ. A comparison of the real-time controllability of pattern recognition to conventional myoelectric control for discrete and simultaneous movements. J Neuroeng Rehabil. 2014;11:5. doi: 10.1186/1743-0003-11-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 179.Casadio M, Pressman A, Acosta S, Danzinger Z, Fishbach A, Mussa-Ivaldi FA, Muir K, Tseng H, Chen D. Body machine interface: remapping motor skills after spinal cord injury. IEEE Int Conf Rehabil Robot. 2011;2011:5975384. doi: 10.1109/ICORR.2011.5975384. [DOI] [PubMed] [Google Scholar]
  • 180.Bradberry TJ, Gentili RJ, Contreras-Vidal JL. Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals. J Neurosci. 2010;30:3432–3437. doi: 10.1523/JNEUROSCI.6107-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 181.Kim HK, Biggs SJ, Schloerb DW, Carmena JM, Lebedev MA, Nicolelis MAL, Srinivasan MA. Continuous shared control for stabilizing reaching and grasping with brain-machine interfaces. IEEE Trans Biomed Eng. 2006;53:1164–1173. doi: 10.1109/TBME.2006.870235. [DOI] [PubMed] [Google Scholar]
  • 182.Cowan RE, Fregly BJ, Boninger ML, Chan L, Rodgers MM, Reinkensmeyer DJ. Recent trends in assistive technology for mobility. J Neuroeng Rehabil. 2012;9:20. doi: 10.1186/1743-0003-9-20. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 183.Radhakrishnan SM, Baker SN, Jackson A. Learning a novel myoelectric-controlled interface task. J Neurophysiol. 2008;100:2397–2408. doi: 10.1152/jn.90614.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 184.Pistohl T, Cipriani C, Jackson A, Nazarpour K. Abstract and proportional myoelectric control for multi-fingered hand prostheses. Ann Biomed Eng. 2013;41:2687–2698. doi: 10.1007/s10439-013-0876-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 185.Antuvan CW, Ison M, Artemiadis P. Embedded human control of robots using myoelectric interfaces. IEEE Trans Neural Syst Rehabil Eng. 2014;22:820–827. doi: 10.1109/TNSRE.2014.2302212. [DOI] [PubMed] [Google Scholar]
  • 186.Farrell TR, Weir RF, Heckathorne CW, Childress DS. The effects of static friction and backlash on extended physiological proprioception control of a powered prosthesis. J Rehabil Res Dev. 2005;42:327–341. doi: 10.1682/JRRD.2004.05.0052. [DOI] [PubMed] [Google Scholar]
  • 187.Farrell TR, Weir RF. The optimal controller delay for myoelectric prostheses. IEEE Trans Neural Syst Rehabil Eng. 2007;15:111–118. doi: 10.1109/TNSRE.2007.891391. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 188.Gijsberts A, Bohra R, Sierra Gonzalez D, Werner A, Nowak M, Caputo B, Roa MA, Castellini C. Stable myoelectric control of a hand prosthesis using non-linear incremental learning. Front Neurorobot. 2014;8:8. doi: 10.3389/fnbot.2014.00008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 189.Collinger JL, Boninger ML, Bruns TM, Curley K, Wang W, Weber DJ. Functional priorities, assistive technology, and brain-computer interfaces after spinal cord injury. J Rehabil Res Dev. 2013;50:145–160. doi: 10.1682/JRRD.2011.11.0213. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 190.McFarland DJ, Sarnacki WA, Wolpaw JR. Electroencephalographic (EEG) control of three-dimensional movement. J Neural Eng. 2010;7:036007. doi: 10.1088/1741-2560/7/3/036007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 191.Lock BA, Simon AM, Stubblefield K, Hargrove LJ. Prosthesis-guided training for practical use of pattern recognition control of prostheses. J Prosthet Orthot. 2012;24(2):56–64. doi: 10.1097/JPO.0b013e3182515437. [DOI] [PMC free article] [PubMed] [Google Scholar]
