Abstract
Surface electromyography (sEMG) is a promising computer access method for individuals with motor impairments. However, optimal sensor placement is a tedious task requiring trial-and-error by an expert, particularly when recording from facial musculature likely to be spared in individuals with neurological impairments. We sought to reduce sEMG sensor configuration complexity by using quantitative signal features extracted from a short calibration task to predict human-machine interface (HMI) performance. A cursor control system allowed individuals to activate specific sEMG-targeted muscles to control an onscreen cursor and navigate a target selection task. The task was repeated for a range of sensor configurations to elicit a range of signal qualities. Signal features were extracted from the calibration of each configuration and examined via a principal component factor analysis in order to predict HMI performance during subsequent tasks. Feature components most influenced by energy and complexity of the EMG signal and muscle activity between sensors were significantly predictive of HMI performance. However, configuration order had a greater effect on performance than the configurations, suggesting that non-experts can place sEMG sensors in the vicinity of usable muscle sites for computer access and healthy individuals will learn to efficiently control the HMI system.
Index Terms: electromyography, feature extraction, human-machine interfaces, myoelectric control
I. Introduction
Facial electromyography has been demonstrated as a robust input modality for assistive technology devices [1–10]. Using surface electromyography (sEMG), the electrical activity of face and neck musculature can be detected by electrodes placed on the surface of the skin. Individuals can then volitionally activate certain muscles to control a human-machine interface (HMI), such as a computer pointing device [1, 2, 4–7, 11] or even a power wheelchair [9–11]. Unlike the use of eye-tracking systems for computer access, sEMG is insensitive to lighting conditions and is suitable for all types of skin with proper skin preparation [12–14].
However, when using facial EMG as a computer access method, control is dependent on appropriate muscle coordination. Sensor configuration affects this coordination because the signal recorded using sEMG is mediated by source characteristics as a function of position (i.e., what is being recorded) and tissues separating the sources from recording electrodes [15–17]. Electrode configuration is therefore a crucial factor when using sEMG [17]. As such, the Surface Electromyography for Non-Invasive Muscle Assessment (SENIAM) project developed recommendations for sEMG sensor configuration as a concerted effort to standardize sensors, sensor configurations, signal processing methods, and modeling methods within the field of sEMG [14, 18]. However, SENIAM does not include facial musculature, for which the methodology of sensor configuration grows more complex: these small muscles interdigitate and overlap, and there is considerable variation in facial structure that can impact the approximate location, thickness, and use of these muscles [19–21]. Not only do face and neck muscles vary from person-to-person due to different facial structures [19, 22], but the ability to use these muscles also varies. For instance, individuals with neurological deficits may have volitional control over their face and neck musculature, but this control may be incomplete. Even though these individuals are an ideal target population for facial EMG-based augmentative systems, sensor configuration must be conducted on a subjective macro-anatomical basis due to the complexity of facial configuration [20].
It is therefore a tedious task for a trained operator to place sEMG sensors according to facial anatomy such that sensor configuration is optimized on a person-to-person basis. The operator must place each sensor based on knowledge of optimal sensor configuration, qualitatively evaluate the amplitude and stability of the signal extracted from this configuration, and repeat until a signal of reasonable amplitude and stability during contraction is subjectively achieved. When placing multiple sensors, the time required to configure the sensors increases. Additionally, when using multiple sensors, complexity arises from the possibility of co-activation, or the presence of multiple contracted muscles near each sEMG sensor. A method of reducing this tedious, time-consuming task of configuring sEMG sensors on a user-specific basis is crucial.
Pattern recognition—an advanced signal processing method for reliably controlling multiple degrees-of-freedom (DOF) based on user intent—has been evaluated for EMG-based HMI systems. However, these systems require that performance remain invariant over time in order to accurately classify muscle contraction patterns [23], and ultimately, performance has been found to degrade within hours after initial classifier training [24]. Therefore, although EMG-based augmentative systems are capable of targeting spared musculature, such systems that take advantage of classification algorithms are not necessarily ideal for individuals suffering from a loss of complete functional movement.
Instead, direct control methods are an attractive alternative to classification methods for adequate computer access. Most notably, direct control via facial sEMG has been shown to enable 360° cursor movements with four direction-based sEMG sources [2], while pattern recognition techniques would only enable cursor movement in the four cardinal directions with the same number of sensors. Yet, previous studies using sEMG cursor control (e.g., [1–3, 5, 7, 8]) have not evaluated small adjustments in sensor configuration, but instead relied on trained operators and trial-and-error to arrive at an “optimal” sensor configuration. Therefore, we investigated quantitative sEMG signal features extracted from a short calibration process in order to determine if user control performance within an HMI could be quickly and accurately predicted. Using this short calibration process to predict HMI performance would bypass the laborious and subjective sensor configurations obtained by a trained operator, thus minimizing configuration time and maximizing HMI performance.
II. Current Investigation
This study sought to determine if there are quantitative signal features that accurately and rapidly predict future HMI performance. We extracted EMG signal features across a range of sensor configurations in order to create a range of signal qualities. Sensor configuration was varied with respect to location (the position of the sensor on a muscle site) and orientation (the direction of the sensor compared to that of the muscle fibers) [14, 16, 18, 21]. For each sensor configuration, we then used a multidirectional tapping paradigm to isolate and assess psychomotor performance when using a sEMG-controlled cursor system as an input device for an HMI. In this task, participants continuously activated individual facial muscles to control the four directional cursor actions (i.e., left, right, up, down) to navigate to a specified target, and the click action to select the target. We recorded performance during the task in terms of speed, accuracy, and movement efficiency through outcome measures of information transfer rate (ITR) and path efficiency (PE).
We hypothesized that creating a range of sensor configurations would result in a range of signal qualities and ensuing HMI performance. This would allow us to determine which quantitative features of signal quality best predict HMI performance. The development of quick and accurate HMI prediction methods will mitigate the tedious, trial-and-error calibration process for EMG-based HMI access modalities.
III. Methods
A. Participants
Eighteen healthy adults (6 male, 12 female; M = 20.4 years, SD = 1.3 years) with no history of motor impairments or facial sEMG control participated in the study. All participants completed written consent in compliance with the Boston University Institutional Review Board and were compensated for their participation.
B. Experimental Design
Participants completed one experimental session that lasted up to one hour. In the session, participants first carried out a Fitts’ law-based multidirectional tapping task (MTT), described in C.4. Multidirectional Tapping Task, with an ordinary computer mouse in order to become familiarized with the task. Then, participants underwent skin preparation and sEMG sensor application (see C.1. Sensor Preparation & Configuration), followed by a short calibration process (described in detail in C.3. Calibration). Afterward, participants completed the MTT again, this time by controlling the on-screen cursor using facial sEMG rather than a computer mouse.
Sensor application (section C.1), calibration (section C.3), and task execution (section C.4) were repeated for three qualitatively-evaluated sEMG sensor configurations: poor, satisfactory, and optimal. The order of presentation of each sensor configuration was randomized per subject, as explained in C.1. Sensor Preparation & Configuration. The calibration and MTT were carried out using custom Python software.
A principal component factor analysis (PCFA) was performed on a battery of quantitative signal features extracted from the calibration process in order to reduce multicollinearity. Resulting feature components were then implemented into general linear mixed models (GLMMs) to determine their usefulness in predicting the outcome measures of ITR and PE.
C. Data Acquisition
1) Sensor Preparation & Configuration
All sEMG was recorded using the Delsys Trigno™ Wireless EMG System (Delsys, Boston, MA) using factory default settings. The Trigno™ sensors are active sEMG sensors and could be used continuously for the duration of the session without the need for recharging. Prior to sensor placement, the surface of each participant’s skin was slightly abraded using alcohol wipes, then exfoliated with tape to remove excess skin cells, oils, and hairs [12–14]. Single differential MiniHead sensors (25×12×7 mm) were then placed over the fibers of the following muscles: (1) left risorius and orbicularis oris, (2) right risorius and orbicularis oris, (3) frontalis, (4) mentalis, and (5) orbicularis oculi (see Fig. 1). The corresponding enclosures (27×37×15 mm) were attached to the following areas (in order): (R1) left clavicle, (R2) right clavicle, (R3) glabella, and (R4 & R5) mastoid processes (see Fig. 1). Each enclosure was employed as a reference to the collected sEMG signals, where any bioelectrical noise common to the reference enclosure and its respective sensor (i.e., common mode voltages) was rejected. Table I provides an overview of the sensor locations.
Fig. 1.

Depiction of sensor configurations, with sensors 1–5 located over the muscle site of interest and sensors R1–R5 as the respective references. Sensors were placed over the (1) left risorius and orbicularis oris, (2) right risorius and orbicularis oris, (3) frontalis, (4) mentalis, and (5) orbicularis oculi. Sensors 3 and 5 were placed contralateral to each other according to each participant’s winking preference. If the participant had no preference, sensor configuration defaulted to that shown in the figure.
Table I.
Description of Sensor Locations
| Sensor Number | Sensor Position | Muscle Group | Reference Position | Facial Gesture | Cursor Action |
|---|---|---|---|---|---|
| 1 | Left of mouth | Left risorius & orbicularis oris | Left clavicle | Contract left cheek | Move left |
| 2 | Right of mouth | Right risorius & orbicularis oris | Right clavicle | Contract right cheek | Move right |
| 3 | Above eyebrow | Frontalis | Glabella | Eyebrow raise | Move up |
| 4 | Chin | Mentalis | Mastoid process | Contract chin | Move down |
| 5 | Angled below eye | Orbicularis oculi | Mastoid process | Hard wink or blink | Click |
Sensors 1–4, controlling the directional movements (i.e., move left, right, up, and down), were placed in either “optimal” or “suboptimal” locations and/or orientations by a trained operator. These sensors were configured by qualitatively balancing specific factors: 1) direction of electrode bars with respect to muscle fiber orientation, 2) position of electrode bars with respect to the approximate location of the muscle belly, 3) ease of electrode attachment to the desired site (e.g., avoiding bony prominences or excessive skin), 4) location of electrode bars necessary to minimize crosstalk, and 5) ability of the participant to volitionally produce and isolate each facial gesture [16, 25]. Resulting signal quality was qualitatively judged based on operator review of the raw sEMG signals and knowledge of general anatomical structures of the face. “Optimal” referred to a configuration in which sensor placement was parallel to the underlying muscle fibers (with electrode bars lying perpendicular to the muscle fibers), centered on isolated muscle tissue [14, 16, 18]. “Suboptimal” referred to location and/or orientation manipulations from what the trained operator considered to be optimal. Specifically, suboptimal location corresponded to a configuration offset from the optimal site, while suboptimal orientation corresponded to a configuration rotated away from the optimal angle. We chose to manipulate the sensors at a 12 mm distance (i.e., one sensor width) and 45° angle, respectively, with manipulation direction depending on the facial gesture (e.g., the “right” sensor was manipulated 12 mm distal to the mouth and/or 45° counter-clockwise to avoid sensor placement over the lips). Sensor locations and orientations were randomized for each participant; however, the fifth sensor, controlling the “click” of the cursor, was consistently placed in an optimal configuration to preserve target selection abilities.
Each participant experienced three sensor configurations, described in detail in Table II, termed optimal, satisfactory, and poor according to their qualitative placements. These three sensor configurations were implemented to elicit a range of signal qualities. Participants were pseudorandomly assigned to one of six potential combinations of configuration orders.
Table II.
Sensor Configurations Used to Vary Signal Quality
| Configuration | Description |
|---|---|
| Optimal | All five sensors (i.e., four directional and one click) were placed at optimal locations and orientations. |
| Satisfactory | One randomly chosen directional sensor was placed at a suboptimal location and/or orientation, while the remaining three directional sensors were placed at optimal locations and orientations. |
| Poor | Four directional sensors were placed at suboptimal locations and/or orientations. |
The sEMG signals were recorded at 2000 Hz, band-pass filtered with roll-off frequencies of 20 and 450 Hz, and amplified by a gain of 300 using the open-source PyGesture [26] and custom Python software.
2) Cursor Movement
Combinations of different facial gestures allowed for complete 360° movement within the two-dimensional onscreen interface. The five sEMG sensors were mapped to the following cursor actions: move left, move right, move up, move down, and click (see Table I; adapted from Cler and Stepp [2]). Participants used the four directional sensors (i.e., 1–4) to navigate to the target, and the click sensor (i.e., 5) to select the target. Each directional source enabled a 1-degree of freedom (DOF) cursor movement; specifically, x (horizontal) and y (vertical) movements were determined by Eq. (1) and (2), respectively, as adapted from [2, 6].
$$x = G\left[\left(\frac{\mathrm{RMS}_{right}}{T_{right}}\right)^{2} - \left(\frac{\mathrm{RMS}_{left}}{T_{left}}\right)^{2}\right] \quad (1)$$

$$y = G\left[\left(\frac{\mathrm{RMS}_{up}}{T_{up}}\right)^{2} - \left(\frac{\mathrm{RMS}_{down}}{T_{down}}\right)^{2}\right] \quad (2)$$
The root-mean-square (RMS) value calculated from each directional electrode (i.e., left, right, up, down) was divided by a threshold, T, measured for the particular sensor during the calibration period. The threshold for each sensor was calculated as a set percentage of the maximum value of the RMS; these threshold multipliers, adopted from Cler and Stepp [2], were as follows: 0.3 for left, 0.3 for right, 0.5 for up, 0.3 for down, and 0.7 for click. The resulting normalized values were squared, and opposing directions were subtracted from one another (i.e., left from right and down from up). We adopted the gain factor G used in Cler and Stepp [2] to define the magnitude of cursor movement. This enabled the strength of the muscle contraction of each facial gesture to be proportional to cursor movement velocity. Concurrent activation of non-opposing directional sources enabled 2-DOF movements. For instance, simultaneous activation of left and down sources would produce a cursor movement towards the bottom-left of the screen; the magnitude and speed of this movement would be determined by the strength of each contraction.
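As a concrete illustration of this mapping, the sketch below (Python, with illustrative channel names, threshold values, and gain) computes a 2-DOF velocity from per-channel RMS values following Eq. (1) and (2); the gating of sub-threshold activity shown here is an assumption about how the calibration thresholds were applied, not the study’s exact implementation.

```python
def cursor_velocity(rms, thresholds, gain=1.0):
    """Map per-channel RMS values to a 2-DOF cursor velocity (Eq. 1 and 2).

    rms and thresholds are dicts keyed by 'left', 'right', 'up', 'down'.
    Opposing directions are squared and subtracted; sub-threshold activity
    is gated to zero (assumed behavior).
    """
    def drive(direction):
        ratio = rms[direction] / thresholds[direction]
        return ratio ** 2 if ratio > 1.0 else 0.0

    vx = gain * (drive('right') - drive('left'))
    vy = gain * (drive('up') - drive('down'))
    return vx, vy

# Example: a strong right-cheek contraction moves the cursor rightward only.
vx, vy = cursor_velocity(
    rms={'left': 0.01, 'right': 0.09, 'up': 0.02, 'down': 0.02},
    thresholds={'left': 0.03, 'right': 0.03, 'up': 0.05, 'down': 0.03},
)
print(vx, vy)  # vx > 0, vy = 0
```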
3) Calibration
The sEMG system was calibrated per participant and per configuration prior to executing each MTT. The first of the three calibrations lasted 5–20 minutes, while the remaining two calibrations lasted approximately five minutes. The first calibration process varied in length according to the amount of time participants needed to learn how to produce and isolate each facial gesture. Participants were instructed to contract each muscle to construct the following calibration sequence: left, left, right, right, up, up, down, down, click, click. An example calibration can be seen in Fig. 2. The calibration sequence was delimited by the participant using mouse clicks, such that individual maximal voluntary contractions (MVCs) were isolated from the baseline and from other contractions (see Fig. 2 – “Mouse Log”). After the participant completed the calibration sequence, a three-second quiet period was recorded, during which the participant was instructed to “not move your head or neck, or swallow” in order to obtain a recording of the physiological baseline activity recorded by each sensor.
Fig. 2.

Schematic of an example calibration. Participants were instructed to contract twice at each sensor site in the following order: left, right, up, down, click. In between contractions, participants were asked to click the mouse (“Mouse Log”), which was used to delimit each contraction. Yellow lines represent raw sEMG traces, while purple lines represent the RMS.
The RMS was calculated over 50 ms windows for each sensor. Within each channel, the minimum amount of activation required by the participant for the sEMG system to recognize the gesture as a deliberate movement was defined by a set percentage of the maximum RMS value (adopted from Cler and Stepp [2]). This prevented involuntary or unintentional muscle contractions (e.g., normal blinking) from being recognized as an intent of movement within the psychomotor task.
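A minimal sketch of this calibration thresholding, assuming non-overlapping 50 ms RMS windows and the per-channel multipliers listed above; the exact windowing of the original Python software may differ.

```python
import numpy as np

FS = 2000                      # sampling rate (Hz)
WIN = int(0.050 * FS)          # 50 ms RMS window = 100 samples
MULTIPLIERS = {'left': 0.3, 'right': 0.3, 'up': 0.5, 'down': 0.3, 'click': 0.7}

def windowed_rms(x, win=WIN):
    """RMS over consecutive, non-overlapping 50 ms windows."""
    x = np.asarray(x, dtype=float)
    n = len(x) // win
    frames = x[:n * win].reshape(n, win)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def channel_threshold(calibration_signal, channel):
    """Activation threshold = multiplier x maximum windowed RMS for the channel."""
    return MULTIPLIERS[channel] * windowed_rms(calibration_signal).max()
```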
4) Multidirectional Tapping Task
Following calibration, each participant completed a multidirectional tapping task (MTT), as schematized in Fig. 3. This paradigm was developed using Fitts’ law, which describes the limitation of controlled body movements by the information processing capacity of the human nervous system [27, 28]. In particular, the time it takes to select a target via a rapidly-aimed movement is described by the distance to the target and the individual’s precision in selecting the correct target. Here, movement time and target selection precision were used to calculate the outcome measures of ITR and PE.
Fig. 3.

Multidirectional tapping task schematic. a) Task design, with five circles of equivalent diameter W, spaced evenly within a larger circle at distance D apart. b) Task execution, in which participants navigated from the centroid of the screen to the first target (“1”), clicked it (shown here by black rings), then moved to the next highlighted target (“2”) on the diametric opposite side of the large circle depicted in a). Participants then selected the second target and repeated until all five targets were selected. Ideal cursor movements are depicted as black lines.
The MTT lasted between 5 and 15 minutes and was completed three times for each participant. The design of the task was identical for each of the three rounds. Specifically, a circle was displayed at the center of a monitor with a resolution of 1920×1080 pixels. Five circles of equal diameter were placed equidistantly around the perimeter of this larger circle (see Fig. 3a). Four of the circles were uniformly colored, while one highlighted circle was designated as the target (depicted in Fig. 3b as a red circle). The user was instructed to navigate from the center of the screen to the highlighted circle and select it (i.e., with a hard wink or blink). Once the participant successfully navigated to and selected the target, a new circle was designated as the target. The participant then navigated to this new target, located diametrically opposite to the previous target; this eliminated the cognitive task of assessing where the next target would appear prior to navigating to it, thereby isolating psychomotor performance. The sequence of selection is schematized in Fig. 3b, while Fig. 4 shows an example cursor trajectory from a participant starting at the first target.
Fig. 4.

Example of a cursor trace for a trial where index of difficulty = 2 bits, information transfer rate = 52.7 bits/min, and path efficiency = 85.7%. Yellow rings designate where the participant clicked the target. Numbers adjacent to each yellow circle define the order in which targets were selected. Cursor starting location is labeled by “Start.”
The starting position of the target was randomized in each block, defined below. Each block was defined by five targets (“trials”) of a specific index of difficulty (ID). ID is a measure of the distance between targets (D) and width of targets (W), as demonstrated in Eq. (3) using the Shannon formulation of the ID from Fitts’ law [29].
$$\mathrm{ID} = \log_2\left(\frac{D}{W} + 1\right) \quad (3)$$
The ratio of distance-to-width was altered such that participants were presented with seven ID blocks per configuration (see Table III). The distance and width between targets of each block were determined during pilot testing. Each block was initiated when the user navigated from the centroid of the screen to the first target.
Table III.
Target Width and Distance for Each Index of Difficulty
| Index of Difficulty (ID; bits) | Distance between targets (D; pixels) | Width of targets (W; pixels) |
|---|---|---|
| 1.67 | 218 | 100 |
| 2.00 | 225 | 75 |
| 2.33 | 299 | 73 |
| 2.67 | 380 | 71 |
| 3.00 | 490 | 70 |
| 3.33 | 545 | 60 |
| 3.67 | 585 | 50 |
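As a quick numerical check, applying Eq. (3) to the distance and width values of Table III reproduces the listed IDs to within rounding; the snippet below is purely illustrative.

```python
import math

# (D, W) pairs from Table III; ID = log2(D/W + 1), Eq. (3)
blocks = [(218, 100), (225, 75), (299, 73), (380, 71), (490, 70), (545, 60), (585, 50)]
for d, w in blocks:
    print(f"D={d:3d} px, W={w:3d} px -> ID = {math.log2(d / w + 1):.2f} bits")
# approximately 1.67, 2.00, 2.35, 2.67, 3.00, 3.33, 3.67 bits
```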
IV. Data Analysis
A. MTT Performance Metrics
ITR was used to compare the speed and accuracy of cursor movement during the MTT within each configuration. ITR (bits/min) was calculated using Wolpaw’s method as in Eq. (4) on a trial-to-trial basis within a block as a function of accuracy (α), number of targets (NT), number of selections (NS), and movement time (t, min) [30].
$$\mathrm{ITR} = \frac{N_S}{t}\left[\log_2 N_T + \alpha\,\log_2 \alpha + (1-\alpha)\,\log_2\!\left(\frac{1-\alpha}{N_T-1}\right)\right] \quad (4)$$
The number of targets available to click was set to a constant value of five within the study. The total number of selections corresponded to the number of clicks the user made within a trial to attempt to successfully select the target. Total movement time corresponded to the amount of time the user took to complete each trial. Accuracy was equal to zero or one for each trial: 100% accuracy (α = 1) was used to calculate ITR when participants were able to successfully select the correct target, but 0% accuracy (α = 0) was used if participants failed to select it. In addition to clicking the correct target, movement time could not exceed 180 seconds and the number of clicks had to be fewer than 10 for a trial to be considered 100% accurate; these criteria were determined via preliminary testing and were implemented in the current paradigm to limit user interaction time with the MTT. Each ITR was calculated using custom MATLAB 8.2 (Mathworks, Natick, MA) scripts, and was averaged across trials in a block. These ITRs were averaged within sensor configuration for additional comparison and analysis.
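The sketch below illustrates a trial-level evaluation of Eq. (4), assuming bits per selection follow Wolpaw’s formulation and are scaled by selections per minute (NS/t), with the α·log2(α) term taken as zero in the limit α → 0; the study’s MATLAB scripts may handle these details differently.

```python
import math

def wolpaw_itr(accuracy, n_targets, n_selections, movement_time_min):
    """Trial-level ITR (bits/min) following Wolpaw's formulation (Eq. 4)."""
    a, n = accuracy, n_targets
    bits = math.log2(n)
    bits += a * math.log2(a) if a > 0 else 0.0
    bits += (1 - a) * math.log2((1 - a) / (n - 1)) if a < 1 else 0.0
    return bits * n_selections / movement_time_min

# A correct trial with 5 targets, 2 clicks, and 0.25 min of movement:
print(wolpaw_itr(1.0, 5, 2, 0.25))  # log2(5) * 2 / 0.25 = ~18.6 bits/min
```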
PE, which evaluated user movement over time, was calculated per trial as the ratio of the ideal path between targets to the actual path traveled by the participant (see Eq. 5) [6]. The coordinates where the participant “clicked” (i.e., winked or blinked) on the previous target were designated as (x0, y0) and the coordinates that were clicked within the current target were defined as (xn, yn). The Euclidean distance between these start and end points was then divided by the actual distance traveled, which was calculated by summing the distance between each coordinate and its previous coordinate along the path traveled [6]:

$$\mathrm{PE} = \frac{\sqrt{\left(x_n - x_0\right)^2 + \left(y_n - y_0\right)^2}}{\sum_{i=1}^{n}\sqrt{\left(x_i - x_{i-1}\right)^2 + \left(y_i - y_{i-1}\right)^2}} \times 100\% \quad (5)$$

PE values were averaged within sensor configuration for further comparison and analysis.
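A short sketch of the PE computation, assuming the cursor trajectory is available as a sequence of (x, y) samples between the two click locations:

```python
import numpy as np

def path_efficiency(xy):
    """PE (%) per Eq. (5): straight-line distance between the first and last
    cursor samples divided by the actual distance traveled."""
    xy = np.asarray(xy, dtype=float)
    ideal = np.linalg.norm(xy[-1] - xy[0])
    actual = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()
    return 100.0 * ideal / actual if actual > 0 else 0.0

# A slightly curved path from (0, 0) to (10, 0):
print(path_efficiency([(0, 0), (3, 1), (6, 1), (10, 0)]))  # ~97%
```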
B. Feature Extraction
A variety of feature extraction methods were applied to the raw sEMG signals captured during the calibrations of each configuration. First, each contraction was segmented into a 256 ms window using the cursor delimiter signals on each side of the contraction (see Fig. 2 – “Mouse Log”). A window size of 256 ms was chosen in order to adhere to the constraint of real-time engineering applications in which the response time should be no greater than 300 ms [31, 32]. Contractions were segmented by locating the index of the maximum RMS value within the window and taking 128 ms sections to the left and right of this index from the raw sEMG signals. Since there were two contractions for each of five channels, the contractions were concatenated to produce five 512-sample chunks.
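A sketch of this segmentation step, assuming the windowed RMS is computed over non-overlapping 50 ms windows that are mapped back to raw-signal sample indices; the bookkeeping in the original software may differ.

```python
import numpy as np

FS = 2000
HALF = int(0.128 * FS)  # 128 ms on each side of the RMS peak -> 256 ms segment

def segment_contraction(raw, rms, fs=FS, rms_win_s=0.050):
    """Extract a ~256 ms raw-signal segment centered on the maximum RMS window
    of a delimited contraction."""
    raw = np.asarray(raw, dtype=float)
    peak_win = int(np.argmax(rms))                   # index of the max-RMS window
    center = int((peak_win + 0.5) * rms_win_s * fs)  # map window index to a raw sample
    start = max(center - HALF, 0)
    return raw[start:min(start + 2 * HALF, len(raw))]
```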
When applying feature extraction methods, detailed below, only time-domain features were considered, as frequency-domain EMG features have been shown to perform inadequately in EMG signal classification [32]. Six features were ultimately considered to maximize processing efficiency and minimize feature redundancy [31, 33]. The first author developed custom MATLAB software to extract (1) mean absolute value, (2) zero crossing, (3) slope sign change, (4) Willison amplitude, (5) waveform length, and a (6) coactivation percentage. The thresholds described in section C.3. Calibration were used in calculations for the zero crossing, slope sign change, Willison amplitude, and coactivation percentage. Within each extracted feature, an epoch length of 64 ms was used to compromise between time sensitivity and quality of the estimated features [21, 32].
1) Mean Absolute Value
Mean absolute value (MAV) is the time-windowed average of the absolute value of the EMG signal [32–34]. MAV was preferred over other amplitude detectors (e.g., RMS) because previous studies indicate that MAV has a smaller variance in predicting amplitude [35]. The MAV of an EMG segment is defined in Eq. (6), in which N corresponds to the window length and xi corresponds to the i-th sample within segment s of S total segments in the signal. MAV was calculated as a signal-to-noise ratio with units of decibels, in which the MAV of each contraction was compared to the MAV of the physiological baseline (measured during the quiet period discussed in C.3. Calibration).
$$\mathrm{MAV}_{s} = \frac{1}{N}\sum_{i=1}^{N}\left|x_{i}\right|, \qquad s = 1, \ldots, S \quad (6)$$
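A minimal sketch of the MAV computation and its conversion to a decibel signal-to-noise ratio against the quiet-period baseline; the 20·log10 amplitude-ratio convention is an assumption.

```python
import numpy as np

def mav(x):
    """Mean absolute value of a signal segment (Eq. 6)."""
    return float(np.mean(np.abs(x)))

def mav_db(contraction, baseline):
    """MAV expressed as an SNR in decibels relative to the quiet-period baseline."""
    return 20.0 * np.log10(mav(contraction) / mav(baseline))
```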
2) Zero Crossing
Zero crossing (ZC) is a time-domain measure that contains frequency information of the EMG signal [31–34]. The number of times the amplitude of the raw EMG signal crossed the zero-amplitude level within a time-window was summed. In addition to crossing the zero-amplitude level, the signal must also exceed a threshold (T) in order to mitigate noise-induced ZCs. ZC was calculated for an EMG time-window by:
$$\mathrm{ZC} = \sum_{i=1}^{N-1} f\left(x_{i}, x_{i+1}\right), \qquad f\left(x_{i}, x_{i+1}\right) = \begin{cases} 1, & x_{i}\,x_{i+1} < 0 \ \text{and} \ \left|x_{i} - x_{i+1}\right| \geq T \\ 0, & \text{otherwise} \end{cases} \quad (7)$$
3) Slope Sign Change
Similar to ZC, slope sign change (SSC) is a measure of frequency information of the EMG signal, as calculated in the time domain [31–34]. Specifically, SSC is calculated as a count of the number of times the slope of the raw EMG signal changes sign within a time-window. The signal must also exceed a threshold (T) in order to mitigate noise-induced SSCs. SSC can be defined for an EMG time-window as follows:
$$\mathrm{SSC} = \sum_{i=2}^{N-1} f\left[\left(x_{i} - x_{i-1}\right)\left(x_{i} - x_{i+1}\right)\right], \qquad f(v) = \begin{cases} 1, & v \geq T \\ 0, & \text{otherwise} \end{cases} \quad (8)$$
4) Willison Amplitude
Willison amplitude (WAMP), similar to ZC and SSC, is a time-domain measure of frequency information of the EMG signal and is related to muscle contraction level [32, 36, 37]. WAMP is a summation of the number of times the difference between two adjoining segments of a time-window of the raw EMG signal exceeds a threshold (T). WAMP is defined by:
$$\mathrm{WAMP} = \sum_{i=1}^{N-1} f\left(\left|x_{i} - x_{i+1}\right|\right), \qquad f(v) = \begin{cases} 1, & v \geq T \\ 0, & \text{otherwise} \end{cases} \quad (9)$$
5) Waveform Length
Waveform length (WFL) is a measure of the complexity of the EMG signal regarding time, frequency, and amplitude [31–34]. It is the summed absolute difference between two adjoining segments of a time-window of the raw EMG signal, described as the cumulative length of the EMG waveform over a time-window [31–33]. WFL can be calculated as follows:
$$\mathrm{WFL} = \sum_{i=1}^{N-1}\left|x_{i+1} - x_{i}\right| \quad (10)$$
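The four threshold-dependent features defined in Eq. (7)–(10) can be sketched as follows, with the per-channel threshold T taken from the calibration step; the vectorized details are illustrative.

```python
import numpy as np

def zero_crossings(x, thresh):
    """ZC (Eq. 7): sign changes whose amplitude step also exceeds the threshold."""
    x = np.asarray(x, dtype=float)
    return int(np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) >= thresh)))

def slope_sign_changes(x, thresh):
    """SSC (Eq. 8): slope reversals whose product of differences exceeds the threshold."""
    x = np.asarray(x, dtype=float)
    return int(np.sum((x[1:-1] - x[:-2]) * (x[1:-1] - x[2:]) >= thresh))

def willison_amplitude(x, thresh):
    """WAMP (Eq. 9): count of adjacent-sample differences exceeding the threshold."""
    return int(np.sum(np.abs(np.diff(np.asarray(x, dtype=float))) >= thresh))

def waveform_length(x):
    """WFL (Eq. 10): cumulative length of the waveform over the window."""
    return float(np.sum(np.abs(np.diff(np.asarray(x, dtype=float)))))
```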
6) Coactivation Percentage
We developed coactivation percentage (CAP) as a measure to quantify the degree of simultaneous activation of two channels using the raw EMG signal. Unique from the other features we selected, CAP compares muscle activity between sensors via energy and time. Quantifying the CAP for each channel was a multi-step process that required comparison of the activity in each channel to that of the other channels. Fig. 5 exemplifies this process using channels 1 and 4 when channel 1 is voluntarily activated (i.e., participant is instructed to contract the muscle site).
Fig. 5.

Example of the process used to compute the CAP between two channels within a 150 ms interval during which channel 1 (C1) is active and channel 4 (C4) is inactive, where any contraction in C4 represents the participant involuntarily contracting the muscle site. a) Channel-based thresholds (dotted black line), distinguish when channels are considered active. b) Thresholds are applied to respective channels to produce a logical channel activation vector for each channel. c) C1 and C4 activation vectors are multiplied together to produce a logical coactivation vector, which is 1 when both channels are active and 0 otherwise. d) Rectified sEMG signals for C1 and C4 are each multiplied by the logical coactivation vector, then normalized to produce channel coactivation vectors, which only contain the signal from a) at time points where both channels are active. e) Channel coactivation vectors for C1 and C4 are added together to produce an additive coactivation vector.
Thresholds for each of the five channels were implemented as a way to distinguish contraction activity from physiological baseline activity, in which a rectified signal above the defined threshold is considered “active,” (Fig. 5a). A logical array was computed using methods adapted from Roy et al. [38] in which the value was one if the rectified signal was above the threshold for that channel, and zero otherwise (Fig. 5b).
The activation vector for each channel was then multiplied by that of each of the other four channels to produce logical coactivation vectors comparing the five channels, two at a time. Each logical coactivation vector contained a one whenever the two compared channels were simultaneously active, and a zero otherwise (Fig 5c).
When comparing two channels at a time, each of the two rectified sEMG signals (Fig. 5a) was multiplied by the resulting logical coactivation vector (Fig. 5c) and then normalized. This produced a normalized channel coactivation vector (NCCV), as shown in Fig. 5d, in which the signal was zero whenever the two channels were not simultaneously active. The NCCVs were then summed together to produce an additive coactivation vector (ACV), as displayed in Fig. 5e, containing the summed signal of the two channels only during simultaneous activation. The CAP of each ACV was then calculated as follows:
$$\mathrm{CAP}_{C_X:C_Y} = \frac{\sum \mathrm{ACV}_{C_X,C_Y}}{2\sum \mathrm{NCCV}_{C_X}} \times 100\% \quad (11)$$
Equation 11 shows the calculation for CAP between channel X (CX) and channel Y (CY), in which X ≠ Y and CX is voluntarily activated as instructed. The sum of the ACV between CX and CY is divided by twice the sum of the NCCV for CX. This effectively creates a percent of coactivation between two channels (e.g., CX and CY in Eq. 11) within the 512-sample window of the voluntarily activated channel (e.g., CX in Eq. 11). The total degree of coactivation between two channels is then calculated by averaging each CAP and its complement CAP (e.g., CAPCX:CY and CAPCY:CX). The resulting CAP estimates the overlap of muscle activation of two distinct channels in time and magnitude. Comparing each set of distinct channels generated 10 CAP values; these values were averaged together to produce one CAP per configuration.
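A sketch of the CAP computation following Fig. 5 and Eq. (11). The normalization convention (each rectified signal scaled by its own maximum) and the denominator term (activity of the voluntarily activated channel over its active samples) are interpretive assumptions rather than details taken verbatim from the original implementation.

```python
import numpy as np

def coactivation_percentage(rect_x, rect_y, thresh_x, thresh_y):
    """CAP between a voluntarily activated channel X and another channel Y.

    rect_x, rect_y: rectified sEMG for the two channels over the same samples.
    """
    rect_x = np.asarray(rect_x, dtype=float)
    rect_y = np.asarray(rect_y, dtype=float)
    act_x, act_y = rect_x > thresh_x, rect_y > thresh_y   # activation vectors (Fig. 5b)
    coact = act_x & act_y                                  # coactivation vector (Fig. 5c)
    norm_x = rect_x / rect_x.max() if rect_x.max() > 0 else rect_x
    norm_y = rect_y / rect_y.max() if rect_y.max() > 0 else rect_y
    acv = norm_x * coact + norm_y * coact                  # additive coactivation (Fig. 5d-e)
    denom = 2.0 * np.sum(norm_x * act_x)                   # assumed denominator (see Eq. 11)
    return 100.0 * acv.sum() / denom if denom > 0 else 0.0
```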
C. Statistical Analysis
Data analysis was performed using Minitab 18 Statistical Software (Minitab Inc., State College, PA; [39]). A multivariate PCFA was conducted on the sEMG features (CAP, MAV, ZC, SSC, WAMP, WFL) extracted from the optimal configuration calibration period. Principal component analysis was chosen as the extraction method in order to mitigate multicollinearity, and varimax rotation was performed on the factor loadings to maximize the variable loadings according to the factor (“feature component”) on which each variable exerted the largest degree of influence [40]. The criterion for the number of retained feature components was that the extracted components cumulatively explain at least 90% of the variation in the data [41–43]. Feature component scores were then computed by applying the factor score coefficient matrix to the centered, standardized features extracted from all sensor configurations. The resulting scores were used in subsequent GLMM analyses.
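For illustration, the component-selection step could be sketched as follows; the study used Minitab with varimax rotation, which is omitted here, so the snippet only shows standardization, PCA, and the 90%-variance criterion.

```python
import numpy as np

def select_components(features, variance_target=0.90):
    """Standardize the feature matrix (rows = calibrations; columns = CAP, MAV,
    ZC, SSC, WAMP, WFL), run a PCA, and keep the fewest components whose
    cumulative explained variance reaches the target."""
    X = np.asarray(features, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(explained, variance_target)) + 1
    return k, Z @ eigvecs[:, :k]   # number of components and their scores
```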
Two GLMMs—one for ITR, one for PE—were constructed to evaluate the effects of participant, configuration, configuration order, and selected feature components in predicting HMI performance across each configuration. Order of configuration presentation was used to account for variance due to the potential effects of learning. The restricted maximum likelihood estimation method was implemented in each model. An alpha level of 0.05 was used for significance testing in each linear regression analysis. Effect sizes for the factors were calculated using a squared partial curvilinear correlation (ηp2).
The first GLMM analysis was constructed with ITR as a response when using participant, configuration, and configuration order as factors and the selected feature components as covariates. Configuration and order were considered to be fixed factors, whereas participant was considered a random factor. Resulting statistically significant factors were used to identify the feature components that were most relevant in predicting control over the sEMG-controlled cursor system. A second GLMM analysis was then run in the same manner using the outcome measure of PE. Tukey’s simultaneous tests were performed to compare each outcome measure as a function of the different configurations and configuration orders, which were significant in each GLMM.
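A Python analogue of this model structure, shown with synthetic placeholder data purely for illustration (the analysis itself was run in Minitab); the column names and random-intercept specification are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
configs = ["poor", "satisfactory", "optimal"]

# Synthetic long-format data: one row per participant x configuration,
# with presentation order randomized per participant.
rows = []
for pid in range(18):
    for cfg, order in zip(configs, rng.permutation([1, 2, 3])):
        rows.append({"participant": pid, "configuration": cfg, "order": order,
                     "FC1": rng.normal(), "FC2": rng.normal(), "FC3": rng.normal()})
df = pd.DataFrame(rows)
df["ITR"] = 15 + 2 * df["order"] + rng.normal(scale=5, size=len(df))

# ITR as the response; configuration and order as fixed factors; FC1-FC3 as
# covariates; participant as a random intercept; fit by REML.
model = smf.mixedlm("ITR ~ C(configuration) + C(order) + FC1 + FC2 + FC3",
                    data=df, groups=df["participant"])
print(model.fit(reml=True).summary())
```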
V. Results
A. Feature Selection
Six features were selected to predict outcome measures of ITR and PE: (1) CAP, (2) MAV, (3) ZC, (4) SSC, (5) WAMP, and (6) WFL. Results of the PCFA indicated that three feature components were necessary to explain approximately 92% of the variation in the data (see Table IV). Therefore, only three feature components were selected for subsequent processing.
Table IV.
Varimax-Rotated Factor Loadings and Communalities
| Feature | Feature Component 1 | Feature Component 2 | Feature Component 3 | Communality |
|---|---|---|---|---|
| CAP | 0.115 | 0.055 | **0.947** | .913 |
| MAV | 0.375 | **0.689** | −0.425 | .796 |
| ZC | **0.898** | 0.360 | −0.167 | .964 |
| SSC | **0.982** | 0.081 | −0.076 | .976 |
| WAMP | **0.824** | 0.493 | 0.169 | .951 |
| WFL | 0.203 | **0.916** | 0.193 | .917 |
| Variance | 2.645 | 1.695 | 1.177 | 5.517 |
| % Variance | .441 | .283 | .196 | .919 |
NOTE. Bolded numbers signify the loadings ≥ 0.5, regardless of sign.
Table IV shows the varimax-rotated factor loadings of each feature for the first three feature components. Here, the factor loadings describe the relationship between each feature component and the underlying sEMG features. Feature component 1 was most strongly associated with the time-domain frequency information features: ZC, SSC, and WAMP. Feature component 2 was influenced to the largest degree by MAV and WFL, while feature component 3 was most strongly influenced by CAP. According to the resulting variable communalities, SSC was represented to the largest degree, and MAV to the lowest degree. In particular, 97.6% of the variance in SSC was explained by the three components, while only 79.6% of the variance in MAV was explained.
B. Predicting ITR and PE using sEMG Features
Table V displays the model summaries constructed for ITR and PE. Approximately 30% of the variance (29% adjusted) of the data was explained by the model for ITR. Only the third feature component had a significant effect on ITR (p = 0.048). More than 48% of the variance (47% adjusted) of the data was explained by the model for PE. Feature components 2 (p = 0.025) and 3 (p = 0.039) had significant effects on PE, but feature component 1 did not. In both GLMMs, configuration (ITR: p = 0.028, PE: p = 0.007) and order of configuration presentation (ITR: p < 0.001, PE: p < 0.001) had statistically significant effects on the outcome measures; order of configuration presentation had a larger effect size (ITR: ηp2 = 0.10, PE: ηp2 = 0.09) than configuration (ηp2 = 0.03 for ITR and PE) in both models [44].
Table V.
Results of GLMMs on ITR and PE
| Model | Effect | df | ηp2 | F | p |
|---|---|---|---|---|---|
| ITR | Configuration | 2 | 0.03 | 4.18 | .016 |
| | Order | 2 | 0.10 | 14.44 | <.001 |
| | FC1 | 1 | 0.01 | 1.55 | .214 |
| | FC2 | 1 | 0.01 | 2.40 | .122 |
| | FC3 | 1 | 0.01 | 3.94 | .048 |
| PE | Configuration | 2 | 0.03 | 5.00 | .007 |
| | Order | 2 | 0.09 | 17.37 | <.001 |
| | FC1 | 1 | 0 | 0.72 | .396 |
| | FC2 | 1 | 0.01 | 5.09 | .025 |
| | FC3 | 1 | 0.01 | 4.27 | .039 |
NOTE. Order = configuration order, FC = feature component.
Post-hoc Tukey tests for both ITR and PE indicated that participants demonstrated an improved performance in the task for configurations presented second or third when compared to the first configuration (p < 0.001 for each comparison for both ITR and PE; see Fig. 6 and Table VI). When averaged across participants (N = 18) and configurations (i.e., poor, satisfactory, optimal), the mean ITR was 17.0 bits/min (SD = 12.9). ITRs elicited from the satisfactory configuration (M = 19.0 bits/min, SD = 13.2) were significantly larger than those of the optimal (M = 16.3 bits/min, SD = 12.9) and poor (M = 15.6 bits/min, SD = 11.9) configurations (optimal and satisfactory: p = 0.036; poor and satisfactory: p = 0.031). Mean PE was measured to be 56.7% (SD = 13.9%) when averaged across participants (N = 18) and configurations. The satisfactory configuration PEs were significantly larger than those of the poor configuration (p = 0.005); however, optimal and poor configurations and optimal and satisfactory configurations were not statistically different.
Fig. 6.

Results of Tukey post-hoc tests comparing difference in means (± SE; CI = 95%) of a) ITR and b) PE with respect to configuration order. *p < 0.05
Table VI.
Post-Hoc Analysis on ITR and PE
| Model | Comparison | Difference of Means | SE | 95% CI | p |
|---|---|---|---|---|---|
| Config(ITR) | O – S | −0.24 | 1.11 | (0.01, 0.52) | .036 |
| | O – P | −0.02 | 1.11 | (−0.24, 0.28) | .998 |
| | S – P | 0.27 | 1.11 | (−0.51, 0.02) | .031 |
| Order(ITR) | 2 – 1 | 0.57 | 0.11 | (0.30, 0.84) | <.001 |
| | 3 – 1 | 0.51 | 0.11 | (0.24, 0.77) | <.001 |
| | 3 – 2 | −0.06 | 0.11 | (−0.32, 0.20) | .885 |
| Config(PE) | O – S | −0.16 | 0.10 | (−0.39, 0.07) | .201 |
| | O – P | 0.14 | 0.10 | (−0.09, 0.36) | .388 |
| | S – P | 0.30 | 0.09 | (0.08, 0.52) | .005 |
| Order(PE) | 2 – 1 | 0.52 | 0.10 | (0.28, 0.75) | <.001 |
| | 3 – 1 | 0.53 | 0.10 | (0.31, 0.76) | <.001 |
| | 3 – 2 | 0.02 | 0.09 | (−0.21, 0.24) | .983 |
NOTE. Config = configuration, Order = configuration order, O = optimal, S = satisfactory, P = poor.
VI. Discussion
Surface electromyography is a simple and non-invasive way to quantitatively assess muscle activity. Within the past decade, sEMG has been proposed as an attractive choice for HMI control, as it provides an inexpensive and accessible means of communication and movement for those who suffer from disorders or impairments that limit their daily life [3–9]. EMG sensor placement is an enigmatic, yet crucial factor in detecting signals from desired muscles and in recording these signals with maximum fidelity [14–16, 21]. Not only does the quality of a sEMG signal depend on the physical and electrical characteristics of the sensors used to record muscle activity, but it also depends on the characteristics of the person whose muscles are being recorded [14, 16, 20]. Calibrating sEMG sensors can thus be a complex and time-consuming endeavor.
The present study sought to determine if quantitative features exist that are capable of accurately and rapidly predicting HMI performance when using sEMG as an input modality. The identification of these features was desired in order to mitigate the qualitative nature of sEMG sensor calibration, which requires trained operators and can still lead to day-to-day differences in cursor performance. For example, if a specific combination of features was highly predictive of ITR, a calibration procedure could be used to quantify the quality of the sensor configuration and suggest possible configuration adjustments. Performance was assessed via ITR and PE. Six features were extracted from each calibration signal: MAV, ZC, SSC, WAMP, WFL, and CAP [33, 45]. A PCFA was performed using these features to reduce the dimensionality of the model from six features to three feature components (FCs). The scores of these three FCs were then implemented in GLMMs to predict ITR and PE.
Only the third FC was significant in predicting ITR, yet the second and third FCs each had a significant effect on predicting PE. The time-domain frequency information measures (i.e., ZC, SSC, and WAMP) had the largest association with the first FC. The MAV and WFL measures were the most influential features comprising the second FC, while CAP influenced the third FC to the greatest degree. MAV is a measure of the signal energy, whereas WFL is a measure of signal complexity (i.e., time, amplitude, and frequency). Yet, CAP was the only feature we selected that represents a quantified comparison of muscle activity between sensors via energy and time. Thus, the FCs most associated with the energy and complexity of the EMG signal, in addition to the muscle activity between sensors, had a significant effect, albeit a small effect size [44], in estimating HMI performance via PE within the sEMG cursor system.
In each model, configuration had a small effect size. Upon examining mean performance across configuration in post-hoc analysis, the difference between satisfactory and poor configurations was significant in both models, while the difference between optimal and satisfactory configurations was significant only for ITR. Since sEMG sensor configuration is considered to be a precise and complex task [14–16, 20], these results were unexpected for such sensitive and specific recording of facial muscles during typical movement (it must be taken into account that the present study is co-opting sEMG for non-homologous control). These findings may be a result of recording from muscle groups rather than individual muscles due to the relatively small and interdigitating nature of facial musculature. The sEMG sensors employed here are configured with detection surfaces spaced only 1 cm apart to create a differential capable of recording from difficult-to-isolate muscles. Still, perhaps the comparatively large size of the sEMG sensors relative to that of facial musculature meant that small changes in configuration did not substantially affect the signal. This, however, may represent one benefit of using sEMG as opposed to invasive options: although our sensor manipulations may not have affected the quantitative measures of the resulting signals, gross muscle group activity was such that an individual could still exercise adequate control when source configurations were within the vicinity of the targeted muscle. In contrast, the order of configuration presentation had a larger effect size than the configurations themselves in each model. An improvement in task performance was observed during the second and third rounds when compared to the first, no matter the configuration of the sensors. Configuration order was included in the GLMMs to minimize the effects of learning; indeed, the results of each model show that, even over a very short period of time, learning occurred.
Our findings suggest that when sEMG sensors are configured within the vicinity of usable muscle sites, the duration of time that a healthy individual is exposed to using the sEMG system will have a greater effect on HMI performance than the precise placement of the sEMG sensors. As such, our results support less rigid recommendations for configuring facial sEMG sensors when used by healthy individuals to control a HMI. These results are encouraging for individuals requiring an alternate access method (e.g., adaptive scanners, brain interfaces, eye- or head-trackers) [46–48]. At present, one of the main clinically-used augmentative systems is eye-tracking [2, 49]; however, the calibration and setup processes for such systems are laborious, in that the level of support required to provide adequate control over the system is prohibitive [49, 50]. Our results further support sEMG as a promising input modality [3–5, 7, 8, 11], specifically as an alternative to eye-tracking. The sEMG sensors must only be calibrated once per session, whereas eye trackers must be recalibrated multiple times per session [51, 52]. Eye-tracking systems also require experienced operators or competent users to ensure that the user’s point-of-regard accurately corresponds to the location on the display [51, 53, 54]; conversely, sEMG is simple enough that a trained operator can place sensors within the vicinity of usable muscle sites and the participant will learn how to control the system.
One limitation in this study is that it was not designed to characterize the ability of EMG features to predict control performance over time; yet, it is possible that EMG signal characteristics were altered with respect to time due to changes in the factors that control muscular force production. For instance, a study by Tkach, Huang, and Kuiken (2010) found that although muscle fatigue did not significantly affect EMG characteristics over time, variation in level of effort had a substantial, significant effect [23]. Future studies should therefore investigate the ability of EMG features to predict control performance as a result of such changes in the facial sEMG signal characteristics. Also, the threshold multipliers for each sEMG sensor were static throughout this study in order to determine the relationship between sensor configuration and HMI performance; however, it is unclear whether different thresholds could systematically change the results. Thus, more work is needed to examine the effects of changing these threshold multipliers on resulting HMI performance. Moreover, the present study demonstrates that trained operators can place sEMG sensors in the vicinity of usable muscle sites and healthy individuals will rapidly learn to control the system using the gross measure of electrical activity. Additional work should focus on instructing untrained operators to configure sEMG sensors in a similar paradigm and evaluating cursor control performance. This would provide insight into streamlining sensor application so that healthy individuals could perform at an adequate level of control with minimal setup and calibration time, and without the need for a skilled operator to configure the sensors. Finally, the identification of quantitative features was assessed in a healthy population, whereas the methods we developed aim to advance the use of sEMG as a computer access method for individuals with neurological impairments. Previous work by Cler, et al. [1] included a case study of an individual with Guillain-Barré Syndrome who used the same sEMG-controlled cursor system and calibration task; the results of this study showed promise in the ability of handicapped persons to perform the calibration task. However, given the diverse manifestation of neurological disorders, it is difficult to generalize the ability of all handicapped persons to perform the calibration. As such, the ability of individuals with neurological impairments to perform sensor configuration and calibration tasks must be assessed. Therefore, the current study should be repeated using similar methodology in these users in order to fully assess the effects of our configuration prediction methods in the target population.
VII. Conclusion
We have presented a method of predicting optimal facial sEMG configurations for HMI control. Six features were extracted from the EMG signals during participant calibration: MAV, ZC, SSC, WAMP, WFL, and CAP. A principal component factor analysis was performed using these features to develop a set of feature components influenced by the time-domain frequency information, energy, and complexity of the EMG signal, as well as the muscle activity between sensors. Subsequently, three feature components were incorporated into general linear mixed models to predict HMI performance. Feature components most influenced by the energy and complexity of the EMG signal, in addition to the muscle activity between sensors, were significant in predicting HMI performance, while the component influenced by frequency information within the time domain was not. Three sensor configurations were evaluated; however, the order of configuration presentation was more predictive of HMI performance than the individual configurations. In this regard, our results show that sEMG sensors can be configured approximately in the vicinity of a usable muscle site, and healthy individuals will learn to efficiently control the system. Future development will focus on repeating the present study in a disordered population, such as in those with severe speech and motor impairments who rely on augmentative devices.
Acknowledgments
This work was supported by the National Science Foundation under grant 1452169, National Science Foundation Graduate Research Fellowship under grant 1247312, and the National Institutes of Health’s National Institute on Deafness and Other Communication Disorders under grant DC014872.
The authors would like to thank Jacob Noordzij, Jr. for his assistance with data processing.
Contributor Information
Jennifer M. Vojtech, Department of Biomedical Engineering, Boston University, Boston, MA, 02215 USA and the Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, 02215 USA
Gabriel J. Cler, Graduate Program for Neuroscience – Computational Neuroscience and the Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, 02215 USA
Cara E. Stepp, Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, 02215 USA, the Department of Biomedical Engineering, Boston University, Boston, MA, 02215 USA, and the Department of Otolaryngology–Head and Neck Surgery, Boston University School of Medicine, Boston, MA, 02118 USA
References
- 1. Cler MJ, Nieto-Castañon A, Guenther FH, Fager SK, Stepp CE. Surface electromyographic control of a novel phonemic interface for speech synthesis. Augmentative and Alternative Communication. 2016;32(2):120–30. doi: 10.3109/07434618.2016.1170205.
- 2. Cler MJ, Stepp CE. Discrete versus continuous mapping of facial electromyography for human-machine interface control: Performance and training effects. IEEE Trans Neural Syst Rehabil Eng. 2015;23(4):571–80. doi: 10.1109/TNSRE.2015.2391054.
- 3. Huang CN, Chen CH, Chung HY. Application of facial electromyography in computer mouse access for people with disabilities. Disability and Rehabilitation. 2006;28(4):231–237. doi: 10.1080/09638280500158349.
- 4. Williams MR, Kirsch RF. Evaluation of head orientation and neck muscle EMG signals as three-dimensional command sources. J Neuroeng Rehabil. 2015;12(1):25. doi: 10.1186/s12984-015-0016-6.
- 5. Williams MR, Kirsch RF. Case study: Head orientation and neck electromyography for cursor control in persons with high cervical tetraplegia. J Rehabil Res Dev. 2016;53(4):519–530. doi: 10.1682/JRRD.2014.10.0244.
- 6. Williams MR, Kirsch RF. Evaluation of head orientation and neck muscle EMG signals as command inputs to a human-computer interface for individuals with high tetraplegia. IEEE Trans Neural Syst Rehabil Eng. 2008;16(5):485–96. doi: 10.1109/TNSRE.2008.2006216.
- 7. Andrade AO, Pereira AA, Pinheiro CG, Kyberd PJ. Mouse emulation based on facial electromyogram. Biomedical Signal Processing and Control. 2013;8(2):142–152.
- 8. Pinheiro CG, Andrade AO. The simulation of click and double-click through EMG signals. 34th Annual Int Conf Proc IEEE Eng Med Biol Soc; 2012; pp. 1984–1987.
- 9. Silva AN, Morére Y, Naves ELM, RdSá AA, Soares AB. Virtual electric wheelchair controlled by electromyographic signals. ISSNIP Biosignals and Biorobotics Conference; 2013; pp. 1–5.
- 10. Bastos-Filho TF, et al. Towards a new modality-independent interface for a robotic wheelchair. IEEE Trans Neural Syst Rehabil Eng. 2014;22(3):567–84. doi: 10.1109/TNSRE.2013.2265237.
- 11. José MA, de Deus Lopes R. Human-computer interface controlled by the lip. IEEE Journal of Biomedical and Health Informatics. 2015;19(1):302–308. doi: 10.1109/JBHI.2014.2305103.
- 12. Roy SH, De Luca G, Cheng MS, Johansson A, Gilmore LD, De Luca CJ. Electro-mechanical stability of surface EMG sensors. Med Bio Eng Comput. 2007;45(5):447–57. doi: 10.1007/s11517-007-0168-z.
- 13. de Talhouet H, Webster JG. The origin of skin-stretch-caused motion artifacts under electrodes. Physiol Meas. 1996;17(2):81–93. doi: 10.1088/0967-3334/17/2/003.
- 14. Hermens HJ, Freriks B, Disselhorst-Klug C, Rau G. Development of recommendations for SEMG sensors and sensor placement procedures. J Electromyogr Kinesiol. 2000;10(5):361–74. doi: 10.1016/s1050-6411(00)00027-4.
- 15. Campanini I, Merlo A, Degola P, Merletti R, Vezzosi G, Farina D. Effect of electrode location on EMG signal envelope in leg muscles during gait. Journal of Electromyography and Kinesiology. 2006;17(4):515–526. doi: 10.1016/j.jelekin.2006.06.001.
- 16. De Luca CJ. The use of surface electromyography in biomechanics. Journal of Applied Biomechanics. 1997;13:135–63.
- 17. De Luca CJ. Surface Electromyography: Detection and Recording. Delsys Incorporated; 2002. [Online]. Available: delsys.com/Attachments_pdf/WP_SEMGintro.pdf.
- 18. Stegeman D, Hermens HJ. Standards for surface electromyography: The European project Surface EMG for non-invasive assessment of muscles (SENIAM). 2007.
- 19. Şatıroğlu F, Arun T, Işık F. Comparative data on facial morphology and muscle thickness using ultrasonography. European Journal of Orthodontics. 2005;27:562–567. doi: 10.1093/ejo/cji052.
- 20. Lapatki BG, Oostenveld R, Van Dijk JP, Jonas IE, Zwarts MJ, Stegeman DF. Optimal placement of bipolar surface EMG electrodes in the face based on single motor unit analysis. Psychophysiology. 2009;47(2):299–314. doi: 10.1111/j.1469-8986.2009.00935.x.
- 21. Stepp CE. Surface electromyography for speech and swallowing systems: measurement, analysis, and interpretation. J Speech Lang Hear Res. 2012;55(4):1232–46. doi: 10.1044/1092-4388(2011/11-0214).
- 22. Thakur A, BA, Parmar SK. Multiple variations in neck musculature and their surgical implications. Int J Ana Var. 2011;4:171–173.
- 23. Tkach D, Huang H, Kuiken TA. Study of stability of time-domain features for electromyographic pattern recognition. J Neuroeng Rehabil. 2010;7(1):21. doi: 10.1186/1743-0003-7-21.
- 24. Sensinger JW, Lock BA, Kuiken TA. Adaptive pattern recognition of myoelectric signals: exploration of conceptual framework and practical algorithms. IEEE Trans Neural Syst Rehabil Eng. 2009;17(3):270–8. doi: 10.1109/TNSRE.2009.2023282.
- 25. Fridlund AJ, Cacioppo JT. Guidelines for human electromyographic research. Psychophysiology. 1986;23(5):567–89. doi: 10.1111/j.1469-8986.1986.tb00676.x.
- 26. Lyons KR. PyGesture: Gesture recording and recognition via surface electromyography. 2015.
- 27. Fitts PM. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology. 1954;47(6):381–91.
- 28. Soukoreff RW, MacKenzie IS. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts’ law research in HCI. Int J Hum-Comput Stud. 2004;61(6):751–789.
- 29. Shannon CE, Weaver W. The Mathematical Theory of Communication. Urbana, IL: The University of Illinois Press; 1964.
- 30. Wolpaw JR, et al. Brain-computer interface technology: A review of the first international meeting. IEEE Trans Neural Syst Rehabil Eng. 2000;8(2):164–73. doi: 10.1109/tre.2000.847807.
- 31. Englehart K, Hudgins B. A robust, real-time control scheme for multifunction myoelectric control. IEEE Trans Biomed Eng. 2003;50(7):848–854. doi: 10.1109/TBME.2003.813539.
- 32. Phinyomark A, Phukpattaranont P, Limsakul C. Feature reduction and selection for EMG signal classification. Expert Systems with Applications. 2012;39(8):7420–31.
- 33. Hudgins B, Parker P, Scott RN. A new strategy for multifunction myoelectric control. IEEE Trans Biomed Eng. 1993;40(1):82–94. doi: 10.1109/10.204774.
- 34. Ahsan MR, Ibrahimy MI, Khalifa OO. Neural network classifier for hand motion detection from EMG signal. 5th Kuala Lumpur Int Conf Proc Biomed Eng; Kuala Lumpur, Malaysia. Berlin, Heidelberg: Springer; 2011; pp. 536–541.
- 35. Clancy EA, Hogan N. Probability density of the surface electromyogram and its relation to amplitude detectors. IEEE Trans Biomed Eng. 1999;46(6):730–9. doi: 10.1109/10.764949.
- 36. Willison RG. A method of measuring motor unit activity in human muscle. Journal of Physiology. 1963;168:35P, 36P.
- 37. Phinyomark A, Limsakul C, Phukpattaranont P. EMG feature extraction for tolerance of 50 Hz interference. 4th PSU-UNS Int Conf Eng Tech; 2009; pp. 289–293.
- 38. Roy SH, et al. A combined sEMG and accelerometer system for monitoring functional activity in stroke. IEEE Trans Neural Syst Rehabil Eng. 2009;17(6):585–594. doi: 10.1109/TNSRE.2009.2036615.
- 39. Minitab 18 Statistical Software. State College, PA: Minitab, Inc; 2010.
- 40. Forina M, Armanino C, Lanteri S, Leardi R. Methods of varimax rotation in factor analysis with applications in clinical and food chemistry. Journal of Chemometrics. 1989;3(S1):115–125.
- 41. Enders H, Maurer C, Baltich J, Nigg BM. Task-oriented control of muscle coordination during cycling. Med Sci Sports Exerc. 2013;45(12):2298–305. doi: 10.1249/MSS.0b013e31829e49aa.
- 42. Matrone GC, Cipriani C, Secco EL, Magenes G, Carrozza MC. Principal components analysis based control of a multi-DoF underactuated prosthetic hand. J Neuroeng Rehabil. 2010;7:16. doi: 10.1186/1743-0003-7-16.
- 43. Soechting JF, Flanders M. Sensorimotor control of contact force. Curr Opin Neurobiol. 2008;18(6):565–72. doi: 10.1016/j.conb.2008.11.006.
- 44. Witte RS, Witte JS. Statistics. Hoboken, NJ: Wiley; 2010.
- 45. Phinyomark A, Limsakul C, Phukpattaranont P. A novel feature extraction for robust EMG pattern recognition. J Comput. 2010;1(1):71–80.
- 46. Garcia LJ, Laroche C, Barrette J. Work integration issues go beyond the nature of the communication disorder. Journal of Communication Disorders. 2002;35(2):187–211. doi: 10.1016/s0021-9924(02)00064-3.
- 47. Hegde MN, Freed DB. Assessment of Communication Disorders in Adults. Plural Publishing; 2011.
- 48. Lúcio GS, Perilo TV, Vicente LC, Friche AA. The impact of speech disorders quality of life: a questionnaire proposal. CoDAS. 2013;25:610–613. doi: 10.1590/S2317-17822013.05000011.
- 49. Higginbotham DJ, Shane H, Russell S, Caves K. Access to AAC: Present, past, and future. Augmentative and Alternative Communication. 2007;23(3):243–257. doi: 10.1080/07434610701571058.
- 50. Donegan M, et al. Understanding users and their needs. Universal Access in the Information Society. 2009;8(4):259.
- 51. Goldberg JH, Wichansky AM. Eye tracking in usability evaluation: a practitioner’s guide. In: The Mind’s Eyes: Cognitive and Applied Aspects of Eye Movements. Oxford: Elsevier Science; 2002.
- 52. Schnipke SK, Todd MW. Trials and tribulations of using an eye-tracking system. In: CHI’00 Extended Abstracts on Human Factors in Computing Systems. ACM; 2000; pp. 273–274.
- 53. Duchowski AT. Eye Tracking Methodology: Theory and Practice. 2007.
- 54. Nyström M, Andersson R, Holmqvist K, van de Weijer J. The influence of calibration method and eye physiology on eyetracking data quality. Behavior Research Methods. 2013;45(1):272–288. doi: 10.3758/s13428-012-0247-4.
