Abstract
Prosthetic devices for hand difference have advanced considerably in recent years, to the point where the mechanical dexterity of a state-of-the-art prosthetic hand approaches that of the natural hand. Control options for users, however, have not kept pace, meaning that the new devices are not used to their full potential. Promising developments in control technology reported in the literature have met with limited commercial and clinical success. We have previously described a biomechanical model of the hand that could be used for prosthesis control. The goal of this study was to evaluate the feasibility of this approach in terms of the kinematic fidelity of model-predicted finger movement and the computational performance of the model. We show the performance of the model in replicating recorded hand and finger kinematics, finding correlations of at least 0.89 between modelled and recorded motions; we show that the computational performance of the simulations is fast enough to achieve real-time control with a robotic hand in the loop; and we describe the use of the model for controlling object gripping. Despite some limitations in accessing sufficient driving signals, the model performance shows promise as a controller for prosthetic hands when driven with recorded EMG signals. User-in-the-loop testing with amputees is necessary in future work to evaluate the suitability of available driving signals, and to examine translation of offline results to online performance.
I. Introduction
DEXTEROUS and natural finger movement, including manipulation of objects, is an important goal for upper limb prosthesis users [1]. Increasingly sophisticated prosthetic devices have become available over the last few years, offering individual digit movement and degrees of freedom approaching those of the natural hand [2]. However, a lack of sophisticated control options for users limits full exploitation of these devices, and control is characterized by predefined patterns of grasp and sequential actions [3].
With access to more input signals from muscles [4] or nerves [5], the potential for natural and simultaneous control of multiple degree-of-freedom (DOF) movement is increasing. However, for this to become a reality, an intuitive means of control for these sophisticated devices is needed. We have proposed the use of a biomechanical hand model as a controller for the prosthetic device whereby control signals based on electromyography (EMG) recorded from the user’s residual muscles drive a dynamic simulation of hand motion [6]. The resulting modelled digit actions, based on the biomechanics of the natural hand, can be replicated in real time by the prosthesis, producing natural hand movements.
The current state-of-the-art in myoelectric prosthesis control is dominated by machine learning techniques, whereby a decoding algorithm maps residual muscle signals to desired actions. This approach presupposes no particular relationship between the muscle signals and desired actions, but trains the controller by recordings made from prosthesis users attempting to carry out desired actions [7]. Training on data recorded during dynamic movements, rather than being limited to static postures, has been shown to improve the robustness of these systems [8], [9] and users have been shown to adapt to the dynamics of a physical device as well as controlling kinematic signals [10]. In addition to mapping different grip postures, recent work has also attempted to map surface EMG signals to individual finger movement using various pattern recognition algorithms [4], [11], [12]. In some cases, where access to the recording sites is lost as a consequence of the amputation, targeted muscle reinnervation may be used to transfer residual nerves into alternative muscles. These can then be used as recording sites for EMG sensors to produce the control command [13].
Some commercial systems are available using this type of nerve transfer technology, however its use requires significant training on the part of the user. More commonly seen in clinical or commercially available devices is proportional control, where a user can modulate the degree of movement (speed, angle, force) by controlling the amplitude of generated muscle signals [14]. In order to achieve multiple grip patterns, these systems require mode switching between grips, or sequential control of single degrees of freedom.
In our approach, we take advantage of the known biomechanics of the limb to simulate the actions that would result from particular muscle activation patterns if the limb were still present. As long as the biomechanical model is a reasonable approximation of the missing limb, the EMG-driven, simulated movements should be a good approximation of the desired movements. Using the biomechanics of the limb in this way to interpret residual muscle signals means that the predicted movements are constrained to occur in a physiologically feasible space. Moreover, the dynamics of the limb may help to reduce the effects of noisy measurements of muscle activation, reducing the ambiguity of intended actions. A further benefit of this approach is that the explicit representation of muscle elements in the model, in contrast to pure machine learning approaches, allows the generation of proprioceptive signals that can be fed back to the prosthesis user to provide truly closed-loop control of movement [15]. Muscle length and velocity information provided by modelled muscle spindle output, and tendon force feedback generated by a simple model of the Golgi Tendon Organ (GTO), could be given to the user via peripheral nerve stimulation.
Several recent studies have shown the potential for this model-based approach in controlling wrist and hand movement from recorded EMG signals. Crouch & Huang [16] used a realtime, two-DOF musculoskeletal model to control the fingertip of a virtual hand with EMG signals from four forearm muscles. Sartori et al. [17] included an EMG-driven musculoskeletal model that decoded joint moments in a control scheme for wrist movement and hand opening-closing and Kapelner et al. [18] have attempted to improve the human-machine interface by decomposing recorded EMG signals into the underlying neural drive. This neuromechanical approach shows promise for real-world applications due to the robustness of the control to movement artifacts and the physiologically-constrained solution space for control signals.
We have previously shown stand-alone biomechanical hand model simulations of individual finger movements that run faster than real time [6]. In this study, we attempt to answer three key questions that will enable translation of the theoretical model approach to actual device implementation:
Are simulated model movements the same as those intended by the user?
Can a model of realistic complexity, with hardware in the loop, run fast enough to enable real-time control?
For interaction with the environment, could the model be used to control object gripping as well as open-chain movements?
To answer these three questions, we describe this study in three parts: part 1 compares the kinematics of natural hand movement against those of an EMG-driven biomechanical model in a group of normally-limbed individuals; part 2 drives a robotic hand with the EMG-controlled biomechanical model, and assesses the model in terms of its computational speed; and part 3 demonstrates the use of the model in simulated gripping of a cup being filled with liquid.
II. Methods
A. Biomechanical hand model
The biomechanical hand model is a modified version of the one we have presented previously [6]. OpenSim [19] is used to modify and visualise the structure of the model; the metacarpophalangeal (MCP) joints are modelled as two orthogonal hinge joints, the proximal and distal interphalangeal (PIP, DIP) joints are simple hinges, and the muscle lines of action are represented by elements passing from origin to insertion via wrapping objects to achieve the correct moment arms for the muscles across a range of postures. The multibody dynamics are described by the equation of motion:
M(q)q̈ + c(q, q̇) = C(q)τ        (1)
where M is the mass matrix; the second term, c, accounts for centrifugal, Coriolis and gravitational forces; and the final term includes the effects of joint moments τ via the coefficient matrix C. τ is the sum of muscle moments and passive joint moments. To speed up the computational performance of the model, the muscle lengths and lines of action are preprocessed into a polynomial representation to avoid the runtime calculation of wrapping paths [20].
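As an illustration of this preprocessing step, a minimal sketch is given below (Python). The basis structure, parameter choices and function names are illustrative only and are not those of [20]; moment arms are recovered as the negative partial derivative of muscle-tendon length with respect to a joint angle.

```python
import numpy as np

def fit_length_poly(q_samples, length_samples, degree=2):
    """Least-squares fit of muscle-tendon length as a polynomial in joint angles.

    q_samples: (n_samples, n_dof) joint angles spanning the workspace
    length_samples: (n_samples,) muscle-tendon lengths from the geometric model
    Returns the coefficient vector and the basis-building function.
    """
    def basis(q):
        # monomials up to 'degree' for each DOF, plus pairwise products (illustrative choice)
        cols = [np.ones(len(q))]
        for d in range(q.shape[1]):
            for p in range(1, degree + 1):
                cols.append(q[:, d] ** p)
        for i in range(q.shape[1]):
            for j in range(i + 1, q.shape[1]):
                cols.append(q[:, i] * q[:, j])
        return np.column_stack(cols)

    A = basis(q_samples)
    coeffs, *_ = np.linalg.lstsq(A, length_samples, rcond=None)
    return coeffs, basis

def moment_arm(coeffs, basis, q, dof, eps=1e-6):
    """Moment arm about one DOF: the negative partial derivative of length with
    respect to that joint angle (central finite difference for illustration)."""
    q_plus, q_minus = q.copy(), q.copy()
    q_plus[:, dof] += eps
    q_minus[:, dof] -= eps
    L_plus = basis(q_plus) @ coeffs
    L_minus = basis(q_minus) @ coeffs
    return -(L_plus - L_minus) / (2 * eps)
```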
To match the structure of the robotic hand used as part of the study (Section II-C), the model wrist was fixed, all finger abduction/adduction degrees of freedom were removed, and all muscles crossing only the wrist were removed. This simplifies the model somewhat compared to the full model published in Blana et al. [6], and makes control with surface EMG recordings more feasible. The resulting model has 16 degrees of freedom (four at the thumb, three at each of the fingers) and 18 muscles (see Fig. 1).
Fig. 1.

OpenSim visualisation of the hand model, showing muscle lines of action and included joints (CMC, MCP, IP at the thumb, MCP, PIP and DIP for the fingers).
Muscle dynamics are simulated with a first order delay, and integration of system dynamics is carried out using an implicit formulation to allow for larger stable integration step sizes:
f(x, ẋ, u) = 0        (2)
The state vector x contains 68 variables: 16 joint angles q, 16 angular velocities q̇, 18 muscle contraction state variables s, and 18 muscle active states a; u contains the 18 muscle excitations. The implicit formulation of the system dynamics enables faster-than-real-time, forward-dynamic simulation, but requires explicit calculation of the Jacobians ∂f/∂x, ∂f/∂ẋ and ∂f/∂u. The multibody terms were generated with Autolev, and the muscle-dynamics terms were hand-coded. Real-time simulations were run in Matlab (Mathworks, Inc., Natick, MA) with a fixed time step of 4ms, using the semi-implicit integrator described in [21]. Model code and parameters can be found at https://github.com/dasproject.
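To make the structure of one fixed-step update concrete, the sketch below (Python) shows a simplified, linearised backward-Euler update of the implicit dynamics using the state Jacobians, together with a first-order activation-dynamics component written in residual form. This is an illustrative sketch only, not the exact integrator of [21]; all function names and the time constant are assumptions.

```python
import numpy as np

def semi_implicit_step(f, dfdx, dfdxdot, x, u, h):
    """One fixed-step update of the implicit dynamics f(x, xdot, u) = 0.

    A single linearised solve per step, approximating the backward-Euler
    condition f(x_new, (x_new - x)/h, u) = 0 by expanding around (x, xdot=0).
    f       : residual function, returns shape (n,)
    dfdx    : Jacobian of f with respect to x, shape (n, n)
    dfdxdot : Jacobian of f with respect to xdot, shape (n, n)
    h       : fixed time step (e.g. 0.004 s)
    """
    xdot0 = np.zeros_like(x)                    # expansion point for the residual
    F = f(x, xdot0, u)
    J = dfdx(x, xdot0, u) + dfdxdot(x, xdot0, u) / h
    dx = np.linalg.solve(J, -F)                 # x_new = x + dx satisfies the linearised equation
    return x + dx

def activation_residual(a, adot, u, tau_act=0.01):
    """Illustrative first-order activation dynamics in implicit (residual) form:
    residual = adot - (u - a) / tau. The time constant is a placeholder."""
    return adot - (u - a) / tau_act
```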
To simulate proprioceptive feedback signals, we added the computationally efficient models of proprioception developed by Williams & Constandinou [15]. The muscle spindle model is based on Mileusnic et al. [22] and is composed of three intrafusal fibre models (bag1, bag2 and chain) and two afferent firing models, primary (Ia) and secondary (II). This generates an output signal modulated by both muscle length and velocity. The Golgi tendon organ is modelled as a force sensor with three components: a saturation non-linearity, a phase-lead filter, and a threshold [23], giving rise to an output signal related to muscle force that is physiologically reasonable. In this study we include these to assess their impact on model computational performance, but they are not further used for feedback at this time, either to the model or to the user.
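For illustration, a minimal sketch of the three-stage GTO structure (saturating static response, phase-lead filter, threshold) is given below (Python). The parameter values and the backward-Euler discretisation of the filter are placeholders chosen for illustration and are not the values used in [15] or [23].

```python
import numpy as np

def gto_output(tendon_force, dt, gain=60.0, f_sat=50.0,
               t_lead=0.15, t_lag=0.05, threshold=10.0):
    """Illustrative Golgi tendon organ model applied to a tendon-force time series.

    tendon_force : array of tendon force samples (N)
    dt           : sample interval (s)
    All other parameters are placeholder values.
    """
    # 1) saturating static non-linearity mapping force to a drive signal
    drive = gain * tendon_force / (tendon_force + f_sat)

    # 2) phase-lead filter (1 + s*t_lead)/(1 + s*t_lag), discretised with backward Euler
    y = np.zeros_like(drive)
    x_prev, y_prev = drive[0], drive[0]
    for k, x_k in enumerate(drive):
        y_k = (t_lag * y_prev + t_lead * (x_k - x_prev) + dt * x_k) / (t_lag + dt)
        y[k] = y_k
        x_prev, y_prev = x_k, y_k

    # 3) threshold: no firing below a minimum drive level
    return np.where(y > threshold, y - threshold, 0.0)
```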
B. Experiment 1: Kinematic fidelity of model-simulated hand motions
In the first experiment, the biomechanical model was driven with surface EMG signals recorded from normally-limbed individuals, and the simulated motions were compared with those recorded from the participants using 3D motion analysis. EMG activity was recorded from four key muscles, enabling the simulation of basic postures by the model. Participants were a convenience sample of six normally-limbed individuals with no history of injury to the measured limb (mean age 29.0 ± 6.2 years; three male, three female). After giving informed consent for participation in the study (Keele Ethics Committee reference ERP390 and Newcastle Ethics Committee reference 17-NAZ-056), markers for motion capture and EMG recording electrodes were attached to the hand and forearm of each participant (shown in Fig. 2).
Fig. 2.

EMG electrodes were placed over four key muscle areas (top image) allowing independent control of the five postures (including rest). These were: the lateral part of EDC, the medial part of EDC, the FDS (just distal to the superficial wrist flexors), and the EPB. The middle row of images shows the locations of markers for the kinematic analysis, and the bottom row shows the four target postures presented.
For each participant, a static posture was recorded with all digits extended. They were then asked to repeatedly open and close their hand at a self-selected, comfortable speed for 30 seconds. Finally, they were asked to copy the hand movements presented to them in a demonstration video. The postures were presented in a randomised order over the course of 60 seconds, returning to a loosely closed posture in between; thirty postures in total were attempted. The postures chosen were the fully open hand, pointing with the index finger, thumbs up, and an L-shape (pointing and thumbs up together); the resting posture was a loosely closed hand. The target postures are shown in the bottom row of Fig. 2, and all postures maintained a neutral wrist position. These postures were chosen because they were easy to achieve through activation of superficial muscles that could feasibly be recorded with surface EMG. They are not intended to be a functionally comprehensive set, simply a feasible subset that demonstrates the performance of the real-time model as a controller for robotic hand movement.
We recorded the motions of the participants’ hands with a 3D motion analysis system (6-camera Vicon Bonita, Oxford Metrics Ltd), to compare the simulated motions with the recorded motions. Retroreflective markers of 6mm diameter were attached to the posterior surface of the palm, and the dorsal surface of the phalanges of the thumb, index and middle fingers. We did not include markers on the fourth and fifth digits, as they were not needed to identify the postures and were frequently occluded during the movements. This was a reduced version of the marker set described by Metcalf et al. [24]. Marker position data were captured at 100Hz.
The Datalink EMG system (Biometrics Ltd, Newport, Wales) was used to record muscle activity, and bipolar surface electrodes were placed over the following four muscles and muscle areas: extensor digitorum communis (EDC) lateral, approximately covering third, fourth and fifth digit; extensor digitorum communis (EDC) medial, capturing extension of the index finger; flexor digitorum superficialis (FDS) just distal to the wrist flexor bellies, capturing flexion of the fingers; extensor pollicis brevis (EPB), capturing extension of the thumb. Again, this is a reduced but feasible set of muscles that can be recorded independently with surface EMG electrodes, allowing us to demonstrate the performance of the model.
The EMG signals were sampled at 2000Hz, then rectified and smoothed with a moving-average filter with a window of 150ms (as described in Blana et al. [25]). EMG amplitudes were scaled to the maximum contraction recorded for each posture to estimate normalised muscle activation from 0 to 1. The normalised EMG signal was then mapped to a combination of representative muscle-tendon units (MTUs), and these were used as inputs to the model; the outputs were the set of joint angles for all five digits. These mappings were selected to best achieve the desired movements with the limited surface EMG recordings, while keeping the control as intuitive as possible. The EMG-to-MTU mapping is shown in Table I. The trials with the static posture were used to scale the dimensions of all the segments in the hand model to fit each participant, using the OpenSim scaling method described in Delp et al. [19]. The dynamic trials were then used as inputs to inverse kinematic simulations with the scaled OpenSim model to obtain the recorded joint angles.
TABLE I.
Mapping of recorded EMG signals to model MTUs
| Source EMG | Modelled MTU |
|---|---|
| Extensor Digitorum Communis (lateral) | EDC (digits 3–5) |
| Extensor Digitorum Communis (medial) | EDC (digit 2) & EI |
| Flexor Digitorum Superficialis | FDS (digits 2–5) |
| Extensor Pollicis Brevis | EPB & EPL |
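For illustration, the envelope extraction and the Table I mapping can be sketched as follows (Python). The channel and MTU identifiers are illustrative placeholders rather than the names used in the model files, and the offset-removal step is an assumption about the preprocessing detail.

```python
import numpy as np

FS_EMG = 2000                      # Hz, EMG sampling rate used in the study
WINDOW_S = 0.150                   # 150 ms moving-average window

# Table I mapping from surface-EMG channels to model muscle-tendon units.
# MTU identifiers here are placeholders; actual names depend on the model file.
EMG_TO_MTU = {
    "EDC_lateral": ["EDC3", "EDC4", "EDC5"],
    "EDC_medial":  ["EDC2", "EI"],
    "FDS":         ["FDS2", "FDS3", "FDS4", "FDS5"],
    "EPB":         ["EPB", "EPL"],
}

def emg_envelope(raw, fs=FS_EMG, window_s=WINDOW_S, max_contraction=None):
    """Remove offset, full-wave rectify, smooth with a moving average, and
    normalise to [0, 1] using a reference maximum contraction."""
    rectified = np.abs(raw - np.mean(raw))
    n = max(1, int(window_s * fs))
    envelope = np.convolve(rectified, np.ones(n) / n, mode="same")
    if max_contraction is None:
        max_contraction = envelope.max()
    return np.clip(envelope / max_contraction, 0.0, 1.0)

def map_to_mtu_excitations(envelopes):
    """Copy each channel's normalised envelope to its mapped MTUs (Table I)."""
    excitations = {}
    for channel, mtus in EMG_TO_MTU.items():
        for mtu in mtus:
            excitations[mtu] = envelopes[channel]
    return excitations
```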
As prosthetic hands typically have a single flexion/extension degree of freedom for each digit, we used a single joint angle to estimate the open/close movement of each digit: the thumb CMC joint, index finger MCP joint and middle finger MCP joint angles. The range of the three angles was found from the repeated open/close trial, and we used these to normalise the angles in the randomised posture trials, from zero (digit fully open) to one (digit fully closed).
Processed EMG data from the dynamic trials were input to the biomechanical model. The processing of the simulated angles was the same as for the angles calculated from the Vicon recordings: the thumb CMC, index finger MCP and middle finger MCP model joint angles were normalised between zero and one using the range of simulated angles from the open/close trial.
We compared the recorded and simulated (normalised) movements throughout the randomised posture trials by counting the postures achieved by each participant that were accurately replicated by the model. To do this, we assumed that a digit was open if its normalised angle was below 0.3 and closed if it was above 0.7, and that a posture was achieved if it was held for more than 0.4s. This allows for the variation in individual joint angles for the same posture that naturally occurs from trial to trial and between subjects, while remaining close enough to identify the posture. We then quantified the success rate as the ratio of the postures successfully adopted by the model to the postures achieved by the participant. Where the participant did not adopt the correct posture (or this was not clear from the Vicon data), we excluded that posture attempt from the success-rate analysis. In addition, we report Pearson's correlation coefficients for all movements, regardless of whether the posture was achieved by participant or model, to give an indication of the similarity of the movements made from one posture to another.
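A minimal sketch of this posture-matching step is given below (Python). The digit-state definitions assigned to each posture and the sampling rate of the angle traces are assumptions made for illustration.

```python
import numpy as np

# Assumed digit states per posture, in the order (thumb, index, middle); 0 = open, 1 = closed.
POSTURES = {
    "open":      (0, 0, 0),
    "pointing":  (1, 0, 1),
    "thumbs_up": (0, 1, 1),
    "L":         (0, 0, 1),
}

def classify_postures(thumb, index, middle, fs=100,
                      open_th=0.3, closed_th=0.7, min_hold_s=0.4):
    """Label each sample with a posture when all three normalised digit angles are
    unambiguously open (<0.3) or closed (>0.7), then keep only postures held for
    at least min_hold_s seconds. fs is the sampling rate of the angle traces."""
    angles = np.column_stack([thumb, index, middle])
    labels = []
    for row in angles:
        if np.all((row < open_th) | (row > closed_th)):
            state = tuple((row > closed_th).astype(int))
            name = next((p for p, s in POSTURES.items() if s == state), None)
        else:
            name = None
        labels.append(name)

    # keep only labelled runs held for at least min_hold_s
    min_hold = int(min_hold_s * fs)
    held = []
    k = 0
    while k < len(labels):
        j = k
        while j < len(labels) and labels[j] == labels[k]:
            j += 1
        if labels[k] is not None and (j - k) >= min_hold:
            held.append(labels[k])
        k = j
    return held
```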
C. Experiment 2: Real-time control of a robotic hand
In the second experiment, EMG recorded from a single, normally-limbed participant was used to drive a desktop robotic hand, via the biomechanical model, to assess the use of the real-time model with both robotic hardware and the user in the loop. In this context, the model runs in real time if it can simulate a movement in less computational time than the integration time step. For example, if the computational time needed to simulate one integration step is 4ms, then an integration time step of more than 4ms must be used to allow real-time simulation. This is described fully in [21] and [6].
Surface EMG signals were recorded using a Trigno EMG system (Delsys Inc., Natick, MA) from the same four muscles used in Experiment 1. Processed EMG signals were used to drive a forward-dynamic simulation of hand dynamics, and the kinematic output from the model was then passed to the robotic hand so that the device mimicked the movements of the user. The robotic hand posture was updated every 100ms. The same set of movements was recorded as those used in Experiment 1; the EMG recording and processing were also the same, except for the addition of a minimum EMG threshold to remove low-level EMG activity. Low-level EMG fluctuations generate small forces in the model that cause the robotic hand to quiver: normalised EMG values below 0.05 were therefore set to zero to prevent this.
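For illustration, the outer update cycle can be sketched as follows (Python). The acquisition, model and hand-interface functions are placeholders standing in for the actual EMG system, the biomechanical model and the serial interface to the robotic hand; only the timing structure and the 0.05 excitation floor follow the description above.

```python
import time

UPDATE_PERIOD = 0.100      # s, robotic hand update interval
MODEL_STEP = 0.004         # s, fixed integration time step
EMG_FLOOR = 0.05           # normalised EMG below this is set to zero

def control_loop(read_emg, advance_model, send_hand_positions, run_time_s=60.0):
    """Outer control loop. read_emg, advance_model and send_hand_positions are
    placeholder callables for EMG acquisition, the forward-dynamic model and the
    robotic hand interface respectively."""
    t_end = time.monotonic() + run_time_s
    while time.monotonic() < t_end:
        t0 = time.monotonic()

        excitations = read_emg()                              # normalised envelopes per channel
        excitations = {m: (0.0 if e < EMG_FLOOR else e)       # suppress low-level activity
                       for m, e in excitations.items()}

        for _ in range(int(UPDATE_PERIOD / MODEL_STEP)):      # 25 model steps per cycle
            joint_angles = advance_model(excitations, MODEL_STEP)

        send_hand_positions(joint_angles)                     # position command to the hand

        # sleep off the remainder of the 100 ms cycle (computation took ~16 ms, Table IV)
        time.sleep(max(0.0, UPDATE_PERIOD - (time.monotonic() - t0)))
```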
The robotic hand used was the Prensilia IH2 Azzurra. The hand has five degrees of freedom (thumb flexion/extension, thumb rotation, and flexion/extension of the second, third and combined fourth and fifth digits). It is cable actuated and features a built-in PID controller that receives input via serial commands. At each update step, a position control command was sent to the hand specifying the desired position xi of each degree of freedom i.
The participant was shown a series of target postures by the experimenter, the same postures used in Experiment 1 (shown in Fig. 2), and asked to reproduce these with the robotic hand. The participant was prevented from seeing the robotic hand so that visual feedback did not influence the muscle activation patterns produced; the robotic hand was driven by the natural muscle activation patterns produced as a result of copying the experimenter’s target hand postures.
The performance of the model and hardware setup was quantified in terms of the time taken for each step in the data acquisition and processing cycle. The fidelity of the reproduced movements was not quantified in this phase of the study; see Section II-B for those metrics.
D. Experiment 3: Control of simulated gripping
In the final experiment, we created a simulation of object gripping to explore the model's use as a controller of grip force. In this simulation, a virtual (smooth-sided) cup was placed in the modelled hand and slowly filled with water. Ideally, a user would modulate the EMG commands to the model to maintain grip on the object. However, since it was not feasible to carry out such user-in-the-loop experiments, in this study a PD controller was included in the simulation in place of the user, to maintain grip force at a level sufficient to prevent slipping. The controller adjusted grip force by modulating the input muscle excitation level to the model during a forward-dynamic simulation (Fig. 3).
Fig. 3.

Schematic of simulated object gripping. The weight of the cup is altered to simulate filling, and the resultant shear force, Fshear, on the finger is used to estimate the minimum contact force, Fmin, necessary to prevent slip. Muscle excitations, u, are input to the model and the resultant finger displacement, x, is output, from which the cup stiffness is used to estimate fingertip contact force, Fnorm. A PD controller modulates the muscle excitation to ensure the contact force is kept above the minimum.
The presence of the object (a cup) was simulated by applying a force to the tips of the fingers in the model. A linear stiffness was assumed for the cup, allowing fingertip forces to be calculated from fingertip displacement as the cup was squeezed. Fingertip forces were fed back to the model, and the grip force was modulated by controlling the muscle excitations of the deep flexor muscles. Grip was maintained with the minimum force necessary to prevent slipping of the cup. The normal force required to achieve this was calculated by monitoring the weight of the cup, and hence the shear (friction) force on the fingers, using an assumed value of µ = 0.3 for the coefficient of friction. The cup had a stiffness of 10kN/m, and the weight was varied to simulate filling with water.
The weight of the cup was evenly distributed between the four fingers on one side and the thumb on the other; the four fingers thus equally shared 50% of the weight. We focus on the index finger here for illustration, but the simulation involved all the fingers. The neural excitation of the Flexor Digitorum Profundus Indicis (FDPI) muscle was continually updated by a PD controller with proportional gain Kp = 0.01N⁻¹ and derivative gain Kd = 0.0005N⁻¹. The controller was tuned to give a short rise time and minimal oscillation.
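A minimal sketch of one control update for the index finger is given below (Python), using the weight sharing, friction coefficient, cup stiffness and gains stated above. How the PD output maps onto muscle excitation (directly, as shown here, or incrementally) is an implementation detail not specified in the text, so this is illustrative only.

```python
MU = 0.3                 # assumed coefficient of friction
CUP_STIFFNESS = 10e3     # N/m, assumed linear cup stiffness
KP, KD = 0.01, 0.0005    # PD gains per newton of force error, as stated above

def grip_update(cup_weight, fingertip_disp, prev_error, dt):
    """One PD update of the FDPI excitation for the index finger.

    cup_weight     : current weight of the cup (N), increasing as it fills
    fingertip_disp : index fingertip displacement into the cup wall (m)
    prev_error     : force error from the previous update (N)
    dt             : control update interval (s)
    Returns the new excitation (clipped to [0, 1]) and the current force error.
    """
    # four fingers equally share half the weight; the thumb carries the other half
    shear_force = 0.5 * cup_weight / 4.0

    # minimum normal force so that friction can support the shear load
    f_min = shear_force / MU

    # fingertip normal force from the assumed linear cup stiffness
    f_norm = CUP_STIFFNESS * fingertip_disp

    # PD law on the force error, mapped onto FDPI excitation
    error = f_min - f_norm
    excitation = KP * error + KD * (error - prev_error) / dt
    excitation = min(max(excitation, 0.0), 1.0)
    return excitation, error
```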
The results of this experiment were assessed in terms of the computational performance of the model: to achieve real-time performance, the maximum stable step size for the forward-dynamic simulation needs to be larger than the time needed to complete the computation of the system dynamics. In addition, a qualitative assessment of the grip force modulation behaviour in relation to normal grip was undertaken.
III. Results
A. Kinematic fidelity of model-simulated hand motions
Fig. 4 shows the muscle excitation signals used as inputs to the model, plotted over the raw EMG signals recorded from the corresponding muscles. The segment shown covers the ‘thumbs up’, ‘hand open’, ‘pointing with index finger’ and ‘L-shape’ postures.
Fig. 4.

Example of the muscle excitation signals used as model inputs, together with the raw EMG signals from which they are derived. The segment shown includes the ‘thumbs up’, ‘hand open’, ‘pointing with index finger’ and ‘L-shape’ postures.
Fig. 5 shows a comparison of the measured hand kinematics against those estimated by the model for the same sequence of movements. The hand angles are normalised to the minimum and maximum values encountered during full opening and closing of the hand.
Fig. 5.

The normalised angles for both the measured hand kinematics and the model-predicted joint postures. These are for the same segment of data as shown in Fig. 4, normalized using the range estimated from the repeated open-close trial.
The success rate for the model matching the postures achieved by the participants is shown in Table II. In a few cases, a participant was not able to make all 30 target postures presented; the failed postures have been excluded from the analysis, as the model posture could not be evaluated in those cases. Pearson’s correlation coefficients between the recorded and model-estimated angles for all movements were also used to estimate the fidelity of model-predicted movements; these are shown in Table III.
TABLE II.
The success rate of the simulated posture matching the recorded posture. In some cases, the participant failed to achieve the target posture; this is shown by the denominator in the success rate calculation, which is the total number of target postures achieved by the participant. For example, even though 6 “Thumbs up” postures were included in the demonstration video, participant S2 only achieved 5. Of those, the model matched 3, so the success rate was 3/5.
| Participant | Open | Pointing | Thumbs up | “L” | Success rate |
|---|---|---|---|---|---|
| S1 | 10 | 7 | 4 | 4 | 25/30 = 0.83 |
| S2 | 10 | 5/5 | 3/5 | 6 | 24/26 = 0.92 |
| S3 | 6 | 2 | 0 | 2 | 10/30 = 0.33 |
| S4 | 8 | 0/7 | 5 | 1/3 | 14/26 = 0.54 |
| S5 | 9/9 | 6/7 | 1/5 | 0/0 | 22/28 = 0.79 |
| S6 | 10 | 8 | 1 | 3/4 | 22/29 = 0.76 |
| Total shown | 10 | 8 | 6 | 6 | 124/169 = 0.73 |
TABLE III.
Pearson’s Correlation Coefficient for the model-predicted angles for each subject and each digit across all movements
| Participant | Thumb | Index | Middle |
|---|---|---|---|
| S1 | 0.96 | 0.95 | 0.99 |
| S2 | 0.96 | 0.96 | 0.99 |
| S3 | 0.89 | 0.98 | 0.93 |
| S4 | 0.93 | 0.92 | 0.98 |
| S5 | 0.89 | 0.97 | 0.99 |
| S6 | 0.96 | 0.91 | 0.99 |
| mean±std | 0.93±0.03 | 0.95±0.03 | 0.98±0.02 |
Finally, the addition of proprioceptive feedback in the form of the muscle spindle model allowed us to estimate the spindle firing rates from both primary and secondary afferents associated with these movements. Although they were not further used in this study, they are shown in Fig. 6 for reference.
Fig. 6.

The model also simulates proprioceptive feedback. This figure shows the muscle spindle primary and secondary afferent firing rates for the same segment of data, alongside the normalised fibre length.
B. Real-time control of a robotic hand
In the second experiment, EMG signals recorded in the same way as in Section II-B were again used to drive the forward-dynamic model, and the kinematic outputs from the model were passed to the robotic hand (Prensilia IH2 Azzurra). This allowed the participant to directly control the movements of a physical, robotic hand in real time using forearm EMG signals. A small time delay associated with the EMG-envelope calculation was observed, but otherwise robotic hand movements mimicked those of the natural hand.
Fig. 7 shows the sequence of movements made by the user in controlling the hand, together with the actual posture adopted by the robotic hand. A full video of this is available in the Supplementary Material.
Fig. 7.

The Prensilia IH2 Azzurra robotic hand and the participant’s hand, shown in the various postures encountered during the trial. A video of the control achieved using this hand is available in the Supplementary Material.
Table IV shows the time required for each prosthesis control component, including EMG acquisition and processing, dynamic simulation with the biomechanical hand model, and updating the robotic hand position. Of the 100ms update cycle, these processes take 16.2ms in total. The calculation of the proprioceptive feedback takes 2% of the time of the dynamic simulations.
TABLE IV.
The computation time required for specific tasks in the control of robotic hand hardware.
| Process | Execution time (ms) |
|---|---|
| EMG acquisition and processing | 1.6 |
| Simulation of hand dynamics (25 timesteps) | 12.3 |
| Updating robotic hand position | 2.3 |
| Total | 16.2 |
C. Control of simulated gripping
Fig. 8 shows the results of simulated gripping, where the amount of liquid in the cup is steadily increased. The initial value of 2N is the weight of the empty cup; the weight increases as the cup fills (Panel A). Panel B shows the activation of the FDPI muscle resulting from the changing force feedback. The initial spike is a response to the applied step load when the weight of the cup is placed in the hand. There is then a slow rise in activation as the cup fills. Panel C shows the fingertip force (solid line) maintained to just exceed the minimum force necessary to prevent slip (dashed line). Panel D shows the force in the FDPI tendon and Panel E the simulated output from the Golgi Tendon Organ Model.
Fig. 8.

Panel A shows weight of the cup as liquid is added. Panel B shows activation in the FDPI muscle to ensure that the fingertip force just exceeds the minimum necessary to prevent slipping. Panel C shows the resulting fingertip force, Panel D the force in the finger flexor and Panel E the simulated GTO output.
Table V shows the computational performance achieved during the simulation of grip force control. The gripping task includes the simulation of the deformation of the cup, used to estimate the fingertip contact force. Note that in actual use this would be measured rather than simulated, so the key value in this table is the time taken to simulate the hand dynamics. Since this is 3.3ms and the integration step size is 4ms, the simulation is fast enough to run in real time.
TABLE V.
The computation time required for specific tasks in the simulation of grip force control.
| Process | Execution time (ms) |
|---|---|
| Simulation of hand dynamics | 3.34 |
| PD Control execution | 0.009 |
| Simulated gripping task | 3.22 |
| Total | 6.57 |
IV. Discussion
The aim of this study was to extend our initial work on real-time biomechanical simulation of hand dynamics to answer key questions regarding the feasibility of this method for prosthesis control. We have conducted three experiments intended to demonstrate (i) the fidelity of EMG-driven model movements in reproducing actions produced by normally-limbed volunteers; (ii) the computational performance of the model and its suitability for robotic hand control with hardware and the user in the loop; and (iii) the potential of the model-based control method to produce controlled gripping of objects in the hand.
In the first experiment, the model-predicted movements matched the recorded movements made by the subjects with good fidelity (Pearson’s correlation coefficient greater than 0.89 for all subjects) when continuous movement is compared. When comparing the final postures of the subjects’ actions with those from the model, the agreement is weaker, with an overall success rate of 0.70 on average, dropping as low as 0.33 for one subject. In some cases this can be explained by the final posture falling just short of the threshold for classification of a given movement, even though the continuous angle comparison indicates better performance. Although a correlation coefficient of 0.89 is considered good and might be expected to lead to a close match between actual and desired movements in practice, the effect on actual use of a prosthetic hand will need to be assessed in terms of functional, rather than kinematic, performance in future work. Related, preliminary work from our group has shown that control is achievable with correlations in excess of approximately 0.55.
Furthermore, for some subjects with smaller arms, it proved difficult to separate the EMG signals into distinct functional movements; this is an unsurprising limitation of the approach when using discrete surface EMG recordings. It is likely that the resulting EMG crosstalk is a substantial contributor to the occasional discrepancies between measured and modelled movements (for example, the first second of Fig. 4, Panel B). The longer-term goal of this work is to make use of recorded motor control signals with greater spatial resolution, by means of implanted electrodes or even nerve recordings, in which case EMG crosstalk will be less of an issue. In this experiment we have focussed on a deliberately limited subset of movements, as well as limiting wrist motion, to demonstrate the potential of the model-based approach, acknowledging that not all functional movements can be completed in this way.
In the second experiment, model simulations were amply fast, and in fact would have allowed a much faster control loop than the one we chose (100ms). Kinematic outputs produced by the model were transmitted to the robotic hand via serial communication, and the postures were adopted almost instantaneously thanks to the robotic hand’s built-in controller. The time taken to update the robotic hand’s position was very small, as was the time to read and process the EMG data. As expected, most of the time was spent simulating the musculoskeletal dynamics. However, even this was well below the available budget, allowing for a significant increase in model complexity if required, or a significant reduction in the robotic hand update interval. This could give more responsive behaviour in real-world use, as well as allowing faster and more complex movements as prosthetic device hardware and interfacing technology continue to advance.
In the final experiment, we demonstrated the potential of the model for modulating finger flexor force in a simulation of closed-loop control of gripping. This shows that the model could be used to regulate muscle activity in response to a changing demand for grip force. The inclusion of the muscle model, with its first-order delays and elastic tendon, gives rise to a compliant, human-like grip that may lead to more natural control. This demonstration features feedback-only control of grip, whereas human grip relies heavily on feedforward mechanisms, and recent work has shown the importance of both feedforward and feedback mechanisms for improving control of grip in upper limb prosthesis users [26]. In our long-term vision, descending commands recorded from the user would provide the feedforward element, and force-related signals from the simulated GTOs and from haptic sensors in the fingertips of the prosthetic device would be fed back. The user would then modulate the descending commands controlling muscle activation, removing the need for the PD controller. This brings the control of slip prevention to the user, rather than leaving the prosthetic device to autonomously control grip.
The use of a biomechanical model driven by cognate muscle activity suggests that the movement of the robotic or prosthetic hand should match the naturally intended movement of the user, and indeed to some extent we have demonstrated that to be the case. However, it should be noted that no attempt at model customisation beyond simple scaling was made in this case, so some degree of learning or adaptation to the differences between the natural hand and the model may be expected. Since our goal is to enable prosthesis control in someone without a natural hand, this may be of secondary importance compared to the effect of learned non-use in that person. Indeed, supporting this, recent studies have shown that an amputee may have more difficulty in controlling coordinated movement than normally-limbed individuals [16], [27]. Nonetheless, scaling to the contralateral limb could be carried out to reduce initial differences in control signals and minimise the learning required.
Recently published work [17] has shown the effectiveness of a similar biomechanical model approach to prosthesis control in user testing with amputees, albeit focusing on whole hand movement and not individual finger control. In that study, the simulation stopped short of full rigid-body kinematics, but transferred joint torques directly to the prosthetic device by controlling joint velocity. Thus the need for numerical integration of the state dynamics was effectively replaced with a physical model in the form of the prosthesis. This is an elegant solution that allows robust, pragmatic control. Our approach uses numerical simulation of prosthesis motion via a real-time model, and therefore allows simulation of prosthesis function also in the absence of a physical device. This may allow user training and system optimisation to take place ahead of fitting, minimising the learning time for the user.
Our approach uses knowledge of limb biomechanics to provide control signals for a prosthetic device via a real-time model. In our long-term vision, recordings of descending commands made either from residual muscle and nerve, or muscles innervated through targeted reinnervation [13], will produce the driving signals for the model. Furthermore, we have shown the ability to include physiologically meaningful simulations of proprioceptive output of both joint kinematics (via muscle spindle output) and force feedback (GTO outputs) that could be fed back to the user by peripheral nerve stimulation in future work. This will allow truly closed-loop control of hand function for prosthesis users, enabling state-of-the-art devices to be used to their full potential.
V. Conclusions and future work
In this study, we have demonstrated the feasibility of real-time biomechanical simulation of hand function for prosthesis control by showing good fidelity between model-predicted and measured human kinematics, faster-than-real-time computational performance, and the potential for model-based grip-force control. A number of limitations of the model have been identified in terms of access to driving signals, but the potential for enabling greater dexterity as device-human interfacing improves is clear.
Future work will involve testing the functional performance of this approach in upper limb amputees both in simulation and with state-of-the-art prosthetic devices. Many questions remain regarding access to and user control of suitable signals, as well as users’ ability to learn the complex dynamics of multi-joint movement. We have also shown the potential to simulate meaningful feedback signals, and future work will need to assess the viability of delivering and interpreting these.
To facilitate shorter term translation of this work to current devices where invasive recordings may not be desirable or possible, further work investigating the improved extraction of information from surface EMG recordings should be pursued. This could include combining the musculoskeletal modelling approach with computational intelligence to infer information on deeper muscles that are not amenable to surface recordings.
Supplementary Material
Acknowledgment
The authors gratefully acknowledge the support of the Engineering and Physical Sciences Research Council (EP/M025977/1) and the National Institutes of Health (NIH-5R01EB011615) in this research.
Contributor Information
Dimitra Blana, University of Aberdeen, UK; formerly with the Institute for Science and Technology in Medicine, Keele University, UK.
Antonie J. van den Bogert, Cleveland State University, USA.
Wendy M. Murray, Northwestern University and the Shirley Ryan AbilityLab, USA.
Amartya Ganguly, University of Heidelberg; formerly with the Institute for Science and Technology in Medicine, Keele University, UK.
Agamemnon Krasoulis, Newcastle University, UK.
Kianoush Nazarpour, Newcastle University, UK.
Edward K. Chadwick, University of Aberdeen, UK; formerly with the Institute for Science and Technology in Medicine, Keele University, UK.
References
- [1] Biddiss E and Chau T, “Upper-Limb Prosthetics: Critical Factors in Device Abandonment,” American Journal of Physical Medicine & Rehabilitation, vol. 86, no. 12, pp. 977–987, December 2007.
- [2] Waryck B, “Comparison of Two Myoelectric Multi-Articulating Prosthetic Hands,” in Proceedings of the 2011 MyoElectric Controls/Powered Prosthetics Symposium, New Brunswick, Canada, 2011, p. 4.
- [3] Farina D, Jiang N, Rehbaum H, Holobar A, Graimann B, Dietl H, and Aszmann OC, “The Extraction of Neural Information from the Surface EMG for the Control of Upper-Limb Prostheses: Emerging Avenues and Challenges,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, no. 4, pp. 797–809, July 2014.
- [4] Cipriani C, Segil JL, Birdwell JA, and Weir RF, “Dexterous Control of a Prosthetic Hand Using Fine-Wire Intramuscular Electrodes in Targeted Extrinsic Muscles,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, no. 4, pp. 828–836, July 2014.
- [5] Noce E, Dellacasa Bellingegni A, Ciancio AL, Sacchetti R, Davalli A, Guglielmelli E, and Zollo L, “EMG and ENG-envelope pattern recognition for prosthetic hand control,” Journal of Neuroscience Methods, vol. 311, pp. 38–46, January 2019.
- [6] Blana D, Chadwick EK, van den Bogert AJ, and Murray WM, “Real-time simulation of hand motion for prosthesis control,” Computer Methods in Biomechanics and Biomedical Engineering, vol. 20, no. 5, pp. 540–549, April 2017.
- [7] Atzori M and Müller H, “Control Capabilities of Myoelectric Robotic Prostheses by Hand Amputees: A Scientific Research and Market Overview,” Frontiers in Systems Neuroscience, vol. 9, November 2015.
- [8] Krasoulis A, Kyranou I, Erden MS, Nazarpour K, and Vijayakumar S, “Improved prosthetic hand control with concurrent use of myoelectric and inertial measurements,” Journal of NeuroEngineering and Rehabilitation, vol. 14, no. 1, p. 71, July 2017.
- [9] Batzianoulis I, Krausz NE, Simon AM, Hargrove L, and Billard A, “Decoding the grasping intention from electromyography during reaching motions,” Journal of NeuroEngineering and Rehabilitation, vol. 15, no. 1, p. 57, December 2018.
- [10] Pistohl T, Cipriani C, Jackson A, and Nazarpour K, “Abstract and Proportional Myoelectric Control for Multi-Fingered Hand Prostheses,” Annals of Biomedical Engineering, vol. 41, no. 12, pp. 2687–2698, December 2013.
- [11] Smith RJ, Tenore F, Huberdeau D, Etienne-Cummings R, and Thakor NV, “Continuous decoding of finger position from surface EMG signals for the control of powered prostheses,” in 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, August 2008, pp. 197–200.
- [12] Ngeo JG, Tamei T, and Shibata T, “Continuous and simultaneous estimation of finger kinematics using inputs from an EMG-to-muscle activation model,” Journal of NeuroEngineering and Rehabilitation, vol. 11, no. 1, p. 122, August 2014.
- [13] Kuiken TA, Li G, Lock BA, Lipschutz RD, Miller LA, Stubblefield KA, and Englehart KB, “Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms,” JAMA, vol. 301, no. 6, pp. 619–628, February 2009.
- [14] Fougner A, Stavdahl O, Kyberd PJ, Losier YG, and Parker PA, “Control of Upper Limb Prostheses: Terminology and Proportional Myoelectric Control - A Review,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 5, pp. 663–677, September 2012.
- [15] Williams I and Constandinou TG, “Computationally efficient modeling of proprioceptive signals in the upper limb for prostheses: a simulation study,” Frontiers in Neuroscience, vol. 8, June 2014.
- [16] Crouch DL and Huang H, “Lumped-parameter electromyogram-driven musculoskeletal hand model: A potential platform for real-time prosthesis control,” Journal of Biomechanics, vol. 49, no. 16, pp. 3901–3907, December 2016.
- [17] Sartori M, Durandau G, Došen S, and Farina D, “Robust simultaneous myoelectric control of multiple degrees of freedom in wrist-hand prostheses by real-time neuromusculoskeletal modeling,” Journal of Neural Engineering, vol. 15, no. 6, p. 066026, December 2018.
- [18] Kapelner T, Vujaklija I, Jiang N, Negro F, Aszmann OC, Principe J, and Farina D, “Predicting wrist kinematics from motor unit discharge timings for the control of active prostheses,” Journal of NeuroEngineering and Rehabilitation, vol. 16, no. 1, p. 47, April 2019.
- [19] Delp SL, Anderson FC, Arnold AS, Loan P, Habib A, John CT, Guendelman E, and Thelen DG, “OpenSim: Open-Source Software to Create and Analyze Dynamic Simulations of Movement,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 11, pp. 1940–1950, November 2007.
- [20] Chadwick EK, Blana D, Kirsch RF, and van den Bogert AJ, “Real-Time Simulation of Three-Dimensional Shoulder Girdle and Arm Dynamics,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 7, pp. 1947–1956, July 2014.
- [21] van den Bogert AJ, Blana D, and Heinrich D, “Implicit methods for efficient musculoskeletal simulation and optimal control,” Procedia IUTAM, vol. 2, pp. 297–316, 2011.
- [22] Mileusnic MP, Brown IE, Lan N, and Loeb GE, “Mathematical Models of Proprioceptors. I. Control and Transduction in the Muscle Spindle,” Journal of Neurophysiology, vol. 96, no. 4, pp. 1772–1788, October 2006.
- [23] Lin C-CK and Crago PE, “Neural and Mechanical Contributions to the Stretch Reflex: A Model Synthesis,” Annals of Biomedical Engineering, vol. 30, no. 1, pp. 54–67, January 2002.
- [24] Metcalf C, Notley S, Chappell P, Burridge J, and Yule V, “Validation and Application of a Computational Model for Wrist and Hand Movements Using Surface Markers,” IEEE Transactions on Biomedical Engineering, vol. 55, no. 3, pp. 1199–1210, March 2008.
- [25] Blana D, Kyriacou T, Lambrecht JM, and Chadwick EK, “Feasibility of using combined EMG and kinematic signals for prosthesis control: A simulation study using a virtual reality environment,” Journal of Electromyography and Kinesiology, vol. 29, pp. 21–27, August 2016.
- [26] Saunders I and Vijayakumar S, “The role of feed-forward and feedback processes for closed-loop prosthesis control,” Journal of NeuroEngineering and Rehabilitation, vol. 8, no. 1, p. 60, 2011.
- [27] Pan L, Crouch DL, and Huang H, “Myoelectric Control Based on a Generic Musculoskeletal Model: Toward a Multi-User Neural-Machine Interface,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 7, pp. 1435–1442, July 2018.