Abstract
Robotic fingers and arms that augment the motor abilities of non-disabled individuals are increasingly feasible yet face neurocognitive barriers and hurdles in efferent motor control.
Bionic limbs were science fiction a few decades ago. Today, individuals with motor disabilities can use brain–machine interfaces to control bionic arms by thought alone1, as exemplified by an invasive brain–machine interface that allowed an individual with paralyzed arms to control an anthropomorphic robotic arm with 10 degrees of freedom (DOF) (ref. 2). These achievements have resulted from innovations in hardware for prosthetic limbs and in decoding and encoding algorithms, from more robust and sustainable bidirectional neural interfaces3, and from new surgical techniques4,5 to connect prosthetic technology to the body. Proofs of concept in small-scale clinical tests have raised expectations as to whether such innovations, procedures and devices can be leveraged for the development of assistive technologies1,6.
Indeed, beyond the restoration of lost function, there is growing interest in the exploitation of similar (neuro)technologies for motor augmentation — that is, the extension, rather than the restoration, of motor capabilities. One notable example, which caught the public’s attention in 2002, consisted of a multielectrode array implanted in the median nerve7. The motor signals that the array recorded allowed the user (Kevin Warwick, Reading University) to remotely actuate devices and to crudely control the movements of a prosthetic hand. Other examples are exoskeletons to increase a user’s endurance or strength beyond those of a biological limb (these devices are often featured in action movies). Supported by biological body parts, motor augmentation can involve increased strength, dexterity, manipulation versatility, biomechanical efficiency (net force or torque), range of motion and other potential capabilities.
In this Comment, we discuss the challenges of applying restorative and assistive technologies to motor augmentation. We focus on additional bionic digits (X-digits) and bionic arms (X-arms; or, more generally, X-limbs) and consider what physiological and cognitive resources could be harnessed to control a new body part (for example, is it possible to control X-limbs simultaneously and independently of the biological limbs?). We also discuss alternative physiological interfaces for the control of X-limbs, such as sharing motor resources from another (task-irrelevant) body part or splitting the motor resources of the biological arm to independently control real limbs and X-limbs. Moreover, we consider the technological feasibility of non-invasive and invasive interfaces for motor augmentation, as well as inherent neurocognitive constraints that may be present regardless of the nature of the interface. We posit that, in the near term, approaches based on restorative technology are unlikely to be feasible for X-limbs. To fully realize the potential of bionic augmentation, engineers and designers will need to draw on strategies beyond those available in the assistive-technology toolbox.
The promise of motor augmentation
Technological development efforts for the restoration and enhancement of motor capabilities are closely related (Fig. 1). Enhancing a body function, for example by making the human hand more dexterous (via the da Vinci surgical system8) or the arm stronger (through the Guardian XO exoskeleton9), is already possible in some cases. Unlike traditional tools (such as million-year-old stone-age tools), an X-arm is meant to provide additional function without hindering existing biological functionality. Yet, what are the technical and cognitive challenges involved in fitting users without a disability with a robotic device (such as a bionic X-arm or an X-digit) that can be controlled independently and that provides new motor abilities in synergy with the user’s existing arms and digits? As with traditional prosthetic devices, an X-limb might aim to mimic limb functionality, for example, providing an extra opposable thumb10 or a jointed arm11. By incorporating additional fully functional limbs into the motor repertoire, we could, in principle, augment our motor capabilities and take better advantage of opportunities for multitasking. For example, an X-limb might allow an individual to hammer a nail while holding a joist in place or to flip the pages of a book while holding on to a coffee cup, or allow a surgeon to resect a tumour while holding retractors. In addition, by allowing for the control of multiple arms, carpenters, surgeons and straphangers might be able to perform some tasks more efficiently while forgoing the need for assistance (and for coordination efforts with assistants). Yet, how feasible is it to harness restorative technology for motor augmentation?
Fig. 1 ∣. Motor augmentation for enhanced motor capabilities.
Broadly defined, motor augmentation (blue circle) refers to any technology that enhances normal motor capabilities. Some augmentation technologies may overlap with restorative technologies (yellow circle). For example, a soft sixth finger can be designed to give a patient with a paretic hand an added ability to grasp objects. Other augmentation technologies are designed to transform hand function by sharing features with the use of traditional tools (pink circle). One such example is the da Vinci surgical system, a teleoperated robot that scales down the movements of a surgeon’s hand to allow for fine microdissection. Technologies strictly designed for motor augmentation can also work independently of the user’s limbs or in collaboration with them. For example, the Third Thumb (Dani Clode Design) is toe-controlled and specifically designed to add motor capabilities to a fully functional hand. The blurred boundaries of the circles indicate that specific technological designs and use cases cannot always be neatly classified according to the augmentation–restoration–tool-use framework. Image credits: da Vinci surgical system, ref. 32, reproduced under a Creative Commons licence CC BY 4.0; prosthetic arm, Defense Advanced Research Projects Agency (DARPA); third thumb, © 2012–2021 Danielle Clode Design. All Rights Reserved; soft sixth finger, adapted with permission from SIRSLab.
Although many of the technical requirements for augmentation technologies and restorative technologies are shared, the neural and cognitive resources needed for the successful control of an X-limb and a replacement limb are different. Fluid and intuitive control of a restorative bionic limb requires motor and sensory resources that previously were devoted to controlling the missing limb. In this case, the central challenge is technical: a bidirectional interface between the device and available biological resources must be formed. However, when designing an X-limb, new ways would need to be devised to move the new bionic body part and to provide sensory feedback from it. To realize bionic augmentation, the main challenge will be achieving fluid and intuitive control without decreasing the functionalities of existing body parts.
Challenges of efferent motor control
Current technology for X-limbs is generally non-invasive, and the control of the X-limb relies on signals from electroencephalography (EEG) or electromyography (EMG), or on actual movements (Fig. 2). To control an upper-limb-like device, a typical design principle is to borrow the motor output from a body part (for example, the foot or the torso) that would not normally contribute to manual coordination12. This approach is effective for the proportional control of a few DOF, which together with autonomous robotic control could provide enhanced functionality13. For example, with two DOF (one controlled by each toe), a user might simultaneously flex or extend and adduct or abduct an opposable X-thumb, affording the grasping of objects of various shapes and sizes10. Naive users can achieve basic levels of functionality within minutes14, and increased dexterity through additional customized training10. But this operating principle comes at a cost: the toe’s functionality is ‘hijacked’ by the X-limb and cannot be used for its native purpose while the X-limb is being controlled. This cost would be modest for a seated surgeon, but it could seriously compromise the toe’s normal function in body balance (which could be essential for a straphanger). Hence, this approach reduces the overall usability of the X-limb.
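To make this design principle concrete, the sketch below shows a minimal proportional mapping from two toe-pressure readings to the two DOF of a hypothetical X-thumb. It is an illustrative sketch only: the sensor range, joint limits and linear mapping are assumptions, not the control law of any published device.

```python
# Minimal sketch of proportional X-thumb control from two toe-pressure sensors.
# Sensor ranges, joint limits and the linear mapping are illustrative assumptions,
# not the control law of any published device.

FLEX_RANGE_DEG = (0.0, 90.0)    # assumed flexion/extension range of the X-thumb
ABD_RANGE_DEG = (-30.0, 30.0)   # assumed adduction/abduction range
PRESSURE_MAX = 100.0            # assumed full-scale sensor reading (arbitrary units)


def normalise(pressure: float) -> float:
    """Clamp a raw pressure reading to [0, 1] of full scale."""
    return max(0.0, min(pressure / PRESSURE_MAX, 1.0))


def toe_to_thumb(big_toe_pressure: float, second_toe_pressure: float) -> tuple[float, float]:
    """Proportionally map two toe pressures to two X-thumb DOF (degrees)."""
    flex = FLEX_RANGE_DEG[0] + normalise(big_toe_pressure) * (FLEX_RANGE_DEG[1] - FLEX_RANGE_DEG[0])
    abd = ABD_RANGE_DEG[0] + normalise(second_toe_pressure) * (ABD_RANGE_DEG[1] - ABD_RANGE_DEG[0])
    return flex, abd


if __name__ == "__main__":
    # Example: half pressure under the big toe, light pressure under the second toe.
    print(toe_to_thumb(50.0, 20.0))  # -> (45.0, -18.0)
```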
Fig. 2 ∣. Body parts used in current interfaces for the motor control of X-limbs.
The easiest approach to controlling an X-limb is to hijack movement of a body part. This can be done by pressure sensors attached to the toes or via EMG recordings of muscles; however, this compromises the independent functionality of the hijacked body part. Alternatively, EMG can be used to recruit specific muscle patterns that are designed to control the X-limb without impacting fluency of movement of the native body part. This might be done by splitting motor resources to control both the body part and the X-limb; however, this approach may only be viable for controlling a few DOF. Another option is to take advantage of known redundancies in the cortex to split the neural control of the arm and the X-limb; this approach will most likely require invasive techniques. Figure adapted with permission from ref. 12, Springer Nature Ltd.
To sidestep this problem, one might try to tap into the apparent redundancies of the nervous system to create a split controller for the augmentation device while minimally interrupting the biological action (Fig. 3). For example, a given muscle in the arm may be innervated by anywhere from hundreds to tens of thousands of motor neurons, each controlling a separate set of muscle fibres or ‘motor unit’. It is now possible to decompose surface EMG recordings into as many as 30–35 motor units, each representing the activity of a single motor neuron15. Motor units are generally recruited in a strict order that depends on their size and fibre type, and there is evidence that it is possible to learn to activate a small subset of motor units independently16,17 (although this was questioned in ref. 18) (Fig. 3). However, any activity pattern that can be recorded with surface EMG will also cause a muscle twitch, which might interfere with the independent movement of the biological arm. Furthermore, motor-unit discrimination is susceptible to movement; in fact, the participants in these first studies were asked not to make overt movements, and their limbs were stabilized with orthoses.
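The decomposition step can be illustrated with a generic blind-source-separation routine. The sketch below applies independent component analysis (FastICA from scikit-learn) to simulated multichannel EMG as a stand-in for the dedicated convolutive decomposition algorithms used in the studies cited above; the signal model and all parameters are assumptions chosen for illustration.

```python
# Illustrative blind-source separation of simulated multichannel surface EMG.
# FastICA is used as a generic stand-in for the dedicated convolutive
# decomposition algorithms used in real motor-unit studies; all signal
# parameters are assumptions for illustration only.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs, duration, n_units, n_channels = 2048, 5.0, 3, 16
t = np.arange(int(fs * duration)) / fs

# Simulate sparse motor-unit spike trains and a crude action-potential shape.
spikes = (rng.random((n_units, t.size)) < 10 / fs).astype(float)   # ~10 Hz firing
mu_ap = np.diff(np.exp(-np.linspace(0, 4, 40)))                    # crude MUAP waveform
sources = np.array([np.convolve(s, mu_ap, mode="same") for s in spikes])

# Mix the sources onto the electrode grid and add noise (instantaneous mixing assumed).
mixing = rng.random((n_channels, n_units))
emg = mixing @ sources + 0.01 * rng.standard_normal((n_channels, t.size))

# Recover (up to order and scale) one independent component per motor unit.
ica = FastICA(n_components=n_units, random_state=0)
estimated_sources = ica.fit_transform(emg.T).T
print(estimated_sources.shape)  # (3, 10240): one estimated motor-unit trace per row
```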
Fig. 3 ∣. High-density arrays of recording electrodes for EMG can be used to extract the activity of single motor units using blind-source separation techniques.
By using real-time feedback from these recordings, the user can learn to separately control individual motor units to move a cursor on a screen. Image credits: EMG electrodes, EMG signals, decoding of motor units, ref. 17, adapted under a Creative Commons licence CC BY 4.0; selection of specific motor units, reproduced with permission from ref. 33, © 2021 the American Physiological Society; prosthetic arm, Defense Advanced Research Projects Agency (DARPA).
EEG has been used as a communication interface for individuals with amyotrophic lateral sclerosis19 and, hence, may constitute another possible non-invasive approach for motor augmentation. Because of its noisy nature, most EEG communication interfaces have used event-related potentials — that is, the responses of the brain to specific sensory, cognitive or motor events — as well as steady-state visually evoked potentials. By leveraging visually evoked potentials to detect the differential flicker frequency and phase imposed across a menu of 40 characters, individuals have reached an impressive 60 characters per minute in a spelling task20. Low-frequency EEG signals have also been used to classify types of grasp across the several phases of grasping. However, such EEG methods require explicit eye movements (for character selection, for example) or visual attention (for grasping) and, hence, are unlikely to be directly applicable to X-limb control. Continuous control of a prosthetic device or a cursor via EEG is also much more challenging. It typically requires days or weeks of training while the user learns to imagine substantially different motor behaviours (such as contractions of the left and right arms or of the legs) tied to one or two DOF21. Differences in the methods and metrics used across studies make quantitative comparisons difficult, yet the information rates involved are well under an order of magnitude lower than those of the best invasive approaches. Therefore, for most users, EEG and EMG are unlikely to scale much beyond a few DOF or to enable the full realization of motor augmentation.
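For readers unfamiliar with frequency-coded spellers, the sketch below illustrates one standard way of identifying the attended flicker frequency: canonical correlation analysis (CCA) between the multichannel EEG and sinusoidal reference signals. This is a simplified illustration under assumed parameters, not the pipeline of the cited study, which additionally exploits flicker phase and user-specific calibration.

```python
# Minimal sketch of SSVEP target identification via canonical correlation
# analysis (CCA) with sinusoidal reference signals. Illustrative only: the cited
# speller additionally exploits phase and templates; all parameters are assumed.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, duration, n_channels = 250, 1.0, 8
t = np.arange(int(fs * duration)) / fs
candidate_freqs = [8.0, 10.0, 12.0, 15.0]          # assumed flicker frequencies (Hz)

# Simulate EEG dominated by a 12 Hz steady-state response plus noise.
rng = np.random.default_rng(1)
eeg = (0.5 * np.sin(2 * np.pi * 12.0 * t)[None, :] * rng.random((n_channels, 1))
       + 0.3 * rng.standard_normal((n_channels, t.size)))


def reference_set(freq: float, harmonics: int = 2) -> np.ndarray:
    """Sine/cosine references at the flicker frequency and its harmonics."""
    refs = []
    for h in range(1, harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.array(refs)


def cca_score(signal: np.ndarray, refs: np.ndarray) -> float:
    """First canonical correlation between EEG channels and the reference set."""
    cca = CCA(n_components=1)
    x, y = cca.fit_transform(signal.T, refs.T)
    return float(np.corrcoef(x[:, 0], y[:, 0])[0, 1])


scores = {f: cca_score(eeg, reference_set(f)) for f in candidate_freqs}
print(max(scores, key=scores.get))  # -> 12.0, the attended flicker frequency
```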
Intracortical recordings have enabled the continuous decoding of handwriting from a person with tetraplegia at a remarkable 90 characters per minute6. It is tempting to postulate that invasive brain–machine interfaces may offer the greatest promise for the control of an X-limb. The area of the motor cortex that holds a representation of the arm has more than one million neurons with highly overlapping activity patterns for the control of about a hundred muscles. Transferring even a fraction of the neural resources controlling the biological arm to the independent control of an X-arm might be a powerful solution to the neural-allocation problem.
Decades-old evidence has shown that monkeys can learn to voluntarily modulate the firing rate of single neurons if they are given a suitable form of biofeedback22. Similarly, the particular neurons used to control an intracortical brain–computer interface become more responsive and informative as a monkey learns to use the interface, while the activity patterns of neurons recorded at the same time but not used for control change relatively little23. Because there are many orders of magnitude more neurons than muscles, there are many patterns of cortical activity that do not give rise directly to altered motor output and that are thought to represent extra flexibility for movement planning and other functions24. In principle, this ‘null-space activity’ might also make it possible to control an X-limb without interfering with the simultaneous control of natural limbs. Some evidence from experiments with monkeys indicates that this is more than a theoretical possibility: the animals learned to control simple cursor movements through an intracortical brain–computer interface while simultaneously controlling their own arm contralateral to the implanted array (with, however, a total of only two to three controlled DOF25; Fig. 4).
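The notion of null-space activity can be made concrete with a toy linear model: if muscle output is approximated as a linear readout of neural activity, then any change in activity that lies in the null space of the readout matrix leaves the muscles unaffected and is, in principle, available for other uses. The sketch below illustrates only this linear-algebra point, with invented dimensions and a random readout; it is not a model of cortical physiology.

```python
# Toy illustration of 'null-space activity': with a linear neural-to-muscle
# readout, activity changes lying in the readout's null space leave muscle
# output unchanged. Dimensions and the random readout are illustrative only.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
n_neurons, n_muscles = 50, 5

W = rng.standard_normal((n_muscles, n_neurons))      # linear readout: muscles = W @ neurons
baseline = rng.standard_normal(n_neurons)            # some baseline neural activity

# Any combination of null-space basis vectors changes neural activity
# without changing the muscle command.
N = null_space(W)                                    # shape (n_neurons, n_neurons - n_muscles)
null_pattern = N @ rng.standard_normal(N.shape[1])

muscles_before = W @ baseline
muscles_after = W @ (baseline + null_pattern)
print(np.allclose(muscles_before, muscles_after))    # True: muscle output unaffected

# Such output-null directions could, in principle, be read out by a separate
# (hypothetical) decoder to drive an X-limb without perturbing the biological arm.
x_limb_decoder = N.T
print(np.linalg.norm(x_limb_decoder @ null_pattern) > 0)  # True: the X-limb 'sees' it
```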
Fig. 4 ∣. A monkey performing an isometric-force-generation task with their right arm, and a centre-out reaching task using a decoder of the primary motor cortex driven by input from the left contralateral arm.
The monkey drives the brain–machine interface cursor (orange; centre-out reaching task) while maintaining an applied force (isometric-force-generation task), which they control by keeping the diameter of the force cursor (dark blue ring) within a force target (light blue ring). Figure adapted with permission from ref. 25, © 2014 Elsevier Inc. All rights reserved.
At present, ethical considerations prevent the use of invasive cortical implants to control augmentation devices in healthy humans. It is possible, however, that ethical concerns will soften, particularly if efforts to develop fully implanted wireless interfaces with thousands of recording channels (as promised by the company Neuralink) come to fruition. Still, invasive approaches may not make motor augmentation fully feasible. First, individual neurons exist within functional networks, anatomically and functionally bound to each other in a manner that creates stable covariation between neurons and that greatly reduces the actual DOF of the neural population26. Consequently, it may not be possible to arbitrarily recruit neurons to fulfil a new function. In fact, monkeys learning to control a brain–computer interface designed to require altered patterns of covariation between neurons needed weeks of incremental coaching27. It is therefore reasonable to assume that learning to operationalize a subset of neurons for distinct additional functions will require a large training effort. Yet the ambition for motor augmentation is immediate, intuitive control.
Neurocognitive challenges
Learning to take advantage of the extended abilities afforded by an X-limb will require the user to update their internal limb models and existing motor repertoires to adapt their previous motor plans to the new augmented configuration — a process that, presumably, is not unlike normal motor learning. Although the training needed to learn to control an X-limb might be a welcome challenge to some users, for most it would be costly. Training takes time and cognitive effort and can be frustrating when progress is slow. One way to mitigate barriers to training is to simplify the control demands of an X-limb. For example, in a reach-to-grasp task — a key upper-limb function — multiple joints do not need to be explicitly articulated; some of the low-level control can in fact be automated13. It is nevertheless important to provide some level of autonomy to the user to avoid the frustration of having limited options and the risk of robotic ‘alien-hand syndrome’. Soft robotics solutions might also provide a functional X-limb while maintaining a low number of DOF. For example, inspired by octopus tentacles or suction cups, X-limbs could be used to reach and grasp a large range of objects using only one or two DOF.
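To illustrate how low-level control can be automated while keeping the user-facing control space small, the sketch below maps a single grasp command onto a coordinated multi-joint posture (a grasp ‘synergy’). The joint names, ranges and interpolation are invented for illustration and do not describe any specific device.

```python
# Minimal sketch of synergy-based control: one user command drives a coordinated
# multi-joint grasp, so the low-level articulation is handled by the device.
# Joint names, ranges and the synergy weights are invented for illustration.
import numpy as np

JOINTS = ["mcp_flex", "pip_flex", "dip_flex", "thumb_opposition", "wrist_flex"]

# Fully open and fully closed postures (degrees); the 'synergy' interpolates between them.
OPEN_POSTURE = np.array([0.0, 0.0, 0.0, 0.0, 0.0])
CLOSED_POSTURE = np.array([80.0, 90.0, 60.0, 50.0, 20.0])


def grasp_posture(command: float) -> dict[str, float]:
    """Map a single grasp command in [0, 1] to all joint angles."""
    c = float(np.clip(command, 0.0, 1.0))
    angles = OPEN_POSTURE + c * (CLOSED_POSTURE - OPEN_POSTURE)
    return dict(zip(JOINTS, angles))


if __name__ == "__main__":
    # A half-closed power grasp from a single scalar input (e.g. one EMG or toe channel).
    print(grasp_posture(0.5))
```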
When developing a motor interface, it is important to consider that normal motor control relies heavily on sensory inputs to plan and execute movements. Somatosensation is particularly important. Without it, people with otherwise intact motor and musculoskeletal systems struggle to complete even basic actions. Research on artificial limbs has shown the benefits of artificial somatosensory feedback for prosthetic devices28. Sensor technologies, such as flexible organic systems that mimic the function of a sensory nerve and stretchable optical waveguides, have been developed to provide tactile-like feedback from artificial limbs. Beyond touch, artificial skin can also convey temperature, humidity and positional information. Although these technologies could be repurposed to support users of X-limbs, the neurocognitive integration of this rich sensory feedback is not straightforward. It is indeed challenging to deliver additional sensory feedback from the X-limb without interfering with the sensory flow from the biological limbs and without increasing the cognitive load12. Current technology for X-limbs does not typically include artificial sensory feedback; however, the control of such devices may benefit from alternative sources of natural somatosensory feedback14 (as is the case for mechanical prostheses29). Beyond somatosensation, X-limb control may also present unexpected challenges for users by requiring altered eye–hand coordination and a hand-related visuospatial representation of movement. As such, successful integration and control of extra body parts will require thoughtful consideration of the various motor, sensory and cognitive elements that comprise intuitive motor control30.
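As a toy illustration of how feedback might be remapped rather than integrated into the hands’ own sensory channels, the sketch below converts simulated contact forces at the X-digits into vibrotactile intensities that could be delivered at a site away from the hands (for example, the upper arm). The force range, logarithmic mapping and channel count are assumptions, not a published feedback scheme.

```python
# Toy sketch of sensory substitution for an X-limb: contact force at each X-digit
# is remapped to the intensity of a vibrotactile channel worn elsewhere on the body.
# The force range, logarithmic mapping and channel layout are illustrative
# assumptions, not a published feedback scheme.
import numpy as np

FORCE_MAX_N = 20.0          # assumed full-scale contact force per X-digit (newtons)
N_CHANNELS = 3              # assumed vibrotactile channels, e.g. on the upper arm


def force_to_vibration(forces_n: np.ndarray) -> np.ndarray:
    """Map per-digit contact forces (N) to vibration intensities in [0, 1].

    A logarithmic mapping compresses the dynamic range, loosely mimicking the
    compressive nature of natural tactile intensity coding.
    """
    f = np.clip(np.asarray(forces_n, dtype=float), 0.0, FORCE_MAX_N)
    return np.log1p(f) / np.log1p(FORCE_MAX_N)


if __name__ == "__main__":
    # Light, medium and firm contact on three X-digits.
    print(force_to_vibration(np.array([1.0, 5.0, 15.0])))
```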
The burden of training could also be mitigated by leveraging insights from meta-learning (that is, learning to learn). Cognitive strategies are helpful when learning new skills, and learning to control an X-limb could therefore involve multiple higher-level cognitive processes, such as reinforcement learning, decision making, perceptual learning and metacognition. Engineers developing control algorithms for X-limbs could make them more accessible for intuitive learning by leveraging knowledge from cognitive neuroscience, sports psychology and rehabilitation.
Identifying adequate solutions to train users to develop proficient motor control of an X-device might not be sufficient. Natural limits on executive control might become a bottleneck. Indeed, we might already be pushing the limits of our cognitive capabilities, as one can intuitively experience when attempting to write a manuscript in the presence of continuous inputs from news feeds, social-media feeds, background music and ongoing text conversations. We might find that our limited attentional resources, rather than limited motor control, thwart us from taking full advantage of opportunities to carry out more tasks simultaneously. Many bilateral arm movements, such as holding a nail and striking it with a hammer, or inserting a suture needle through tissue held with forceps, become second nature only after long periods of learning. Other movements, such as maintaining four separate rhythms with arms and legs on a drum kit, are the skills of experts. It is difficult to establish where within this spectrum (or beyond it) the coordination of movements from natural limbs and X-limbs might fall.
Outlook
Attempting to create multi-purpose X-arms that operate similarly to (and independently of) biological arms without relying on invasive interfaces with broader bandwidth might be too ambitious. In the short term, non-invasive interfacing technology could provide solutions for specific tasks (such as industrial automation), particularly when adding low-level control. In the long term, X-limbs leveraging invasive interfaces with broader bandwidth and allowing for the splitting of neural control may be able to benefit from the ongoing development of more dexterous hands and more sophisticated neuromorphic sensors mimicking those in native skin. The simplification of the cognitive and motor demands of X-limb control through embedded intelligence and semi-autonomous control strategies13 may nevertheless be necessary. New technologies for X-limbs will also generate a host of regulatory, ethical and legal issues (as reviewed in ref. 12) associated with the use of invasive implants in healthy users and with the long-term impact on the user’s body representation, social implications and agency over the artificial body part31.
Acknowledgements
We thank D. Clode for her assistance with creating Figs. 1 and 2. T.R.M. was supported by an ERC Starting Grant (715022 EmbodiedTech), a Wellcome Trust Senior Research Fellowship (215575/Z/19/Z) and MRC funding award G116768 at the MRC Cognition and Brain Sciences Unit. S.M. was partly funded by the Bertarelli Foundation and the Swiss National Competence Center Research in Robotics. L.E.M. was funded by the National Institute of Neurological Disorders and Stroke (R01 NS095251, R01 NS053603 and R01 NS109257).
Footnotes
Competing interests
The authors declare no competing interests.
References
- 1. Collinger JL et al. Lancet 381, 557–564 (2013).
- 2. Wodlinger B et al. J. Neural Eng. 12, 016011 (2015).
- 3. Bensmaia SJ, Tyler DJ & Micera S Nat. Biomed. Eng. 10.1038/s41551-020-00630-8 (2020).
- 4. Kuiken T, Feuser A & Barlow A (eds) Targeted Muscle Reinnervation: A Neural Interface for Artificial Limbs 1st edn (CRC Press, 2013).
- 5. Srinivasan SS et al. Sci. Robotics 2, eaan2971 (2017).
- 6. Willett FR et al. Nature 593, 249–254 (2021).
- 7. Warwick K et al. Arch. Neurol. 60, 1369 (2003).
- 8. Maeso S et al. Ann. Surg. 252, 254–262 (2010).
- 9. Bogue R Industrial Robot Int. J. 45, 585–590 (2018).
- 10. Kieliba P et al. Sci. Robotics 6, eabd7935 (2021).
- 11. Sasaki T et al. SIGGRAPH '17: ACM SIGGRAPH 2017 Emerging Technologies (ACM, 2017); 10.1145/3084822.3084837
- 12. Dominijanni G et al. Nat. Mach. Intell. 3, 850–860 (2021).
- 13. Zhuang KZ et al. Nat. Mach. Intell. 1, 400–411 (2019).
- 14. Amoruso E et al. J. Neural Eng. 19, 016006 (2022).
- 15. Farina D et al. Nat. Biomed. Eng. 1, 0025 (2017).
- 16. Bräcklein M et al. J. Neural Eng. 18, 016001 (2021).
- 17. Formento E, Botros P & Carmena JM J. Neural Eng. 18, 066019 (2021).
- 18. Bräcklein M et al. eLife 11, e72871 (2022).
- 19. Wolpaw JR et al. Neurology 91, e258–e267 (2018).
- 20. Chen X et al. Proc. Natl Acad. Sci. USA 112, E6058–E6067 (2015).
- 21. Abiri R et al. J. Neural Eng. 16, 011001 (2019).
- 22. Fetz EE & Finocchio DV Science 174, 431–435 (1971).
- 23. Ganguly K et al. Nat. Neurosci. 14, 662–667 (2011).
- 24. Kaufman MT et al. Nat. Neurosci. 17, 440–448 (2014).
- 25. Orsborn AL et al. Neuron 82, 1380–1393 (2014).
- 26. Gallego JA et al. Neuron 94, 978–984 (2017).
- 27. Oby ER et al. Proc. Natl Acad. Sci. USA 116, 15210–15215 (2019).
- 28. Flesher SN et al. Science 372, 831–836 (2021).
- 29. Childress DS Ann. Biomed. Eng. 8, 293–303 (1980).
- 30. Gallego JA, Makin TR & McDougle SD Trends Neurosci. 45, 176–183 (2022).
- 31. de Vignemont F in The Oxford Handbook of the Philosophy of Consciousness (ed. Kriegel U) 81–101 (2020).
- 32. Longmore SK, Naik G & Gargiulo GD Robotics 9, 42 (2020).
- 33. Ting JE et al. J. Neurophysiol. 126, 2104–2118 (2021).