PLOS One. 2020 Aug 5;15(8):e0226052. doi: 10.1371/journal.pone.0226052

Controlling a robotic arm for functional tasks using a wireless head-joystick: A case study of a child with congenital absence of upper and lower limbs

Sanders Aspelund 1,*,#, Priya Patel 2,#, Mei-Hua Lee 2, Florian A Kagerer 2,3, Rajiv Ranganathan 1,2, Ranjan Mukherjee 1
Editor: Imre Cikajlo
PMCID: PMC7406178  PMID: 32756553

Abstract

Children with movement impairments needing assistive devices for activities of daily living often require novel methods for controlling these devices. Body-machine interfaces, which rely on body movements, are particularly well-suited for children as they are non-invasive and have high signal-to-noise ratios. Here, we examined the use of a head-joystick to enable a child with congenital absence of all four limbs to control a seven degree-of-freedom robotic arm. Head movements were measured with a wireless inertial measurement unit and used to control a robotic arm to perform two functional tasks—a drinking task and a block stacking task. The child practiced these tasks over multiple sessions; a control participant performed the same tasks with a manual joystick. Our results showed that the child was able to successfully perform both tasks, with movement times decreasing by ~40–50% over 6–8 sessions of training. The child’s performance with the head-joystick was also comparable to the control participant using a manual joystick. These results demonstrate the potential of using head movements for the control of high degree-of-freedom tasks in children with limited movement repertoire.

Introduction

According to the 2010 American Census, there were approximately 300,000 children with disabilities requiring some form of assistance with activities of daily living [1]. In this context, assistive devices such as wheelchairs and robotic arms are vital for activities requiring mobility and manipulation. Importantly, these devices are not only critical from a sensorimotor perspective, but they also support psychosocial development by providing children with greater independence [2].

Among methods of controlling assistive devices, manual joysticks are the most popular [3]; however, because they require upper limb functionality, they are not suited for individuals with severe motor impairments such as high-level spinal cord injury or congenital limb absence. For these individuals, interfaces have been developed based on signals from the brain [4–6], or the body [7–9]. For children in particular, interfaces based on brain signals, invasive or non-invasive, are less than ideal for long-term use because of issues related to risks of surgery, signal quality, signal drift and longevity [10, 11]. These limitations highlight the need for developing body-machine interfaces that are non-invasive and robust, and, importantly, also have form factors that make them inconspicuous during interaction with peers [12].

A specific class of body-machine interfaces that addresses these requirements are interfaces based on head movements [13]. Head movements are typically preserved in individuals with severe motor impairments, often making such interfaces the only viable option. Two common approaches based on head movements are head arrays and head joysticks. Head arrays rely on a series of switches that are physically activated by contact with the head. Although they are commercially available and have been used for wheelchair control, they are not well-suited for high degree-of-freedom (DOF) tasks because of the binary nature of the switches. Head joysticks, in contrast, mimic manual joysticks and provide a continuous method of controlling degrees of freedom [14, 15], and thus have the potential to be used for high-DOF tasks. In addition, head joysticks based on commercially available inertial measurement units (IMUs) are non-invasive, wireless, and have high signal-to-noise ratios. Previous research has shown the utility of head joysticks for low-DOF tasks such as wheelchair control [14, 16, 17], but evidence of control of high-DOF tasks using head joysticks is limited [18, 19], especially in children.

In this study, we investigated the use of an IMU-based head joystick for controlling a robotic arm to perform high-DOF functional tasks. In a child with congenital absence of all four limbs, we examined the child’s ability to perform two tasks related to activities of daily living: (i) picking up a cup and drinking using a straw, and (ii) manipulating objects placed on a table. We show that the child can use the head-joystick to successfully perform these complex tasks and improve over time to a level that is comparable to that of an unimpaired individual using a manual joystick.

Materials and methods

Participants

Our main participant was a 14-year-old male with congenital absence of all four limbs—see Fig 1a. He had participated in two previous studies with our group, which involved position control of a cursor [20] and 2-DOF velocity control of the end-effector of a robotic arm [21]. These prior studies involved controlling the devices using shoulder and torso movements. In the current study, he used his head as a joystick to control the robotic arm, as shown in Fig 1a. Initially, there were 4 unstructured sessions, each lasting about 30–45 minutes. We used these sessions to calibrate the interface and ensure that the head movements required to control the robot were in a comfortable range. During each session, the participant was asked to perform exploratory movements of the head to understand how movements of each DOF controlled the robot, and to learn the operation of the switches (which were used to toggle between the translation/orientation modes and control the end-effector). In addition, the participant was free to perform any tasks of his liking using the robot arm, such as trying to pick up an object from a table. The child was paid $10 per visit.

Fig 1. Interfaces for controlling the robot and experimental setup for the drinking and stacking tasks.

Fig 1

(a) Interface for main participant using the head-joystick. A head mounted IMU was used to control the robotic arm, and switches (SW1, SW2, SW3) placed behind the shoulder were used to toggle between different control modes and for control of the grasper. (b) Interface for control participant using the manual joystick. (c) Initial layout of drinking task. Participants had to use the robotic arm to grasp the cup and bring it to the mouth. (d) Initial layout of stacking task. Participants had to use the robotic arm to stack the five blocks on top of each other in order of decreasing size with the biggest block at the base.

Our control participant was an able-bodied college-aged male volunteer (21 years old)—see Fig 1b. He controlled the robot with its accompanying manual joystick. He had no prior experience interacting with the system or observing its use.

All participants provided informed consent or assent (including parental consent in the case of the child) and experimental protocols were approved by the IRB at Michigan State University. The individuals pictured in Fig 1 (and the S1 and S2 Videos) have provided written informed consent (or parental consent when appropriate, as outlined in the PLOS consent form) to publish their images and videos alongside the manuscript.

Apparatus

Robot: We used a 7-DOF robotic arm (JACO v2 arm, KINOVA robotics, Boisbriand QC, Canada) mounted on a table for performing the object manipulation tasks. The robotic arm, shown in Fig 1, is anthropomorphic with 2 DOFs at the shoulder, 1 DOF at the elbow, 3 DOFs at the wrist, and a 1 DOF gripper; specifications of the robot can be found at www.kinovarobotics.com.

Head-Joystick: For the child with congenital limb absence, the robotic arm was controlled via signals generated by a wireless inertial measurement unit (IMU) (YEI Technologies Inc., Ohio) worn on the top of a baseball cap with the bill removed—see Figs 1a and 2a. A second IMU, shown in Fig 1a, was placed on the table to determine the relative orientation between the participant and the robot reference frame. This second IMU, although redundant in the current study (because the table was always fixed), is necessary for preventing unintended movement of the robotic arm when the robot reference frame moves along with the participant (for example, when the robot is mounted on the wheelchair of the participant). Together, these IMUs were used to control the six DOFs of the robot end-effector—three position DOFs and three orientation DOFs. The head-joystick was not used to control the seventh DOF—opening and closing of the end-effector, which was performed by operation of switches described below.

Fig 2. Configurations of head-joystick and manual joystick.

Fig 2

(a) Configuration of the head-joystick. It consisted of a three DOF wireless inertial measurement unit (IMU) (YEI Technologies Inc.) on top of a baseball cap with the bill removed. (b) Configuration of the manual joystick. The joystick could move forward or backward, left or right, or be twisted clockwise or counterclockwise. Buttons on the joystick enabled the user to switch control modes as indicated by the light-emitting diodes (LEDs) at the top.

Switches: In addition to the two IMUs, there were three switches. The first switch (SW1) enabled toggling between position control of the end-effector, orientation control of the end-effector (both using the head-joystick), and a no-movement mode. SW1 was a small button-type off-the-shelf switch (requiring a 1 N activation force) placed below the primary participant’s left shoulder—see Fig 1a. The three modes allowed the participant to use the head-joystick to control all six DOFs of the robot, and the no-movement mode allowed the participant to freely move his body when not intending to control the robot—see Fig 3a. Two additional switches (SW2 and SW3—see Fig 1a) were attached to the chair’s backrest, behind the shoulders of the participant, and controlled the opening and closing of the end-effector. These switches were custom-made with a diameter of 70 mm, a throw of 1 mm, and an activation force of approximately 10 N. Pressing only SW2 caused the grasper to close, while pressing only SW3 caused it to open. Pressing neither or both switches resulted in the current grasp being maintained—see Fig 3b. The participant was able to determine the state of the grasper from LEDs placed on the table—see Fig 1a.

Fig 3. Description of switches used to toggle between modes.

Fig 3

(a) The three modes of end-effector control as toggled by switch one (SW1). (b) Switches SW2 and SW3 were used to close and open the grasper. The truth table shows how pressing and releasing each switch affects the grasper.
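The SW2/SW3 truth table described above can be sketched as a small function. This is a minimal illustration; the function name and string commands are ours, not the authors’ implementation:

```python
def grasper_command(sw2_pressed: bool, sw3_pressed: bool) -> str:
    """Map the two shoulder switches to a grasper command.

    Pressing only SW2 closes the grasper, pressing only SW3 opens it,
    and pressing neither or both maintains the current grasp.
    """
    if sw2_pressed and not sw3_pressed:
        return "close"
    if sw3_pressed and not sw2_pressed:
        return "open"
    return "hold"
```

Treating the ambiguous both-pressed case as "hold" (rather than as an error) matches the behavior in Fig 3b and avoids unintended grasp changes.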

Traditional Joystick: The able-bodied adult participant controlled the robotic arm using the manual joystick (shown in Figs 1b and 2b). This allowed for 3-DOF end-effector position control, 3-DOF end-effector orientation control, and 1-DOF opening and closing of the end-effector grasper. These three modes were toggled using buttons on the joystick while LEDs (located on the joystick) signaled the active mode. This functionality inspired the design of the head-joystick and switches used by our primary participant.

Controlling the six DOFs of the robot using the head-joystick

The head-joystick has three independent DOFs associated with the head tilting up and down (neck flexion/extension, see Fig 4a), the head turning right and left (rotation, see Fig 4b), and the head tilting right and left (lateral flexion/extension, see Fig 4c). These three DOFs were measured by the IMU on the head and mapped to control either the three DOFs of the end-effector position or the three DOFs of the end-effector orientation. We used a velocity-control mode where the IMU signals were mapped to velocity commands. The robot internally performed inverse kinematic and inverse dynamic computations for joint angle velocities and actuator torques to produce the commanded end-effector velocities. Velocity commands for the end-effector position were computed with respect to the base frame of the robot (XYZ frame in Fig 4d), while velocity commands for the end-effector orientation were computed with respect to the body-fixed frame of the end-effector (e1e2e3 frame in Fig 4e).

Fig 4. Mapping between head motion and robotic arm motion for the head joystick.

Fig 4

Three sets of possible motions of the head for controlling the three DOFs of the head-joystick: (a) tilting the head up and down, (b) turning the head right and left, (c) tilting the head right and left. (d) Coordinate frame fixed to the robot base for controlling the position of the end-effector showing the corresponding change in position of the end-effector for each set of head motions. (e) Coordinate frame fixed to the robot end-effector for controlling the orientation of the end-effector showing the corresponding orientation of the end-effector for each set of head motions.

In the end-effector position control mode (achieved by toggling SW1), tilting the head upwards relative to a neutral home head orientation, as shown in Fig 4a, resulted in an upward (+Z) motion of the end-effector—see Fig 4d. Through the same process, tilting the head downwards (see Fig 4a) resulted in downward motion (-Z). The magnitude of the velocity command was proportional to the angle of head tilt. Similarly, turning the head right and left (see Fig 4b) resulted in the end-effector motion towards the right (+X) and left (-X), respectively—see Fig 4d. Finally, following the right-hand-rule, tilting the head to the right and left (see Fig 4c) resulted in end-effector movement forward (+Y) and backwards (-Y)—see Fig 4d.

In the end-effector orientation control mode (achieved by toggling SW1—see Fig 3a), the IMU signals were translated to rotational velocities of the end-effector about its body-fixed frame. Tilting the head up and down (see Fig 4a) resulted in the end-effector pitching about its e1 axis in the positive and negative direction (see Fig 4e). Similarly, turning the head right and left (see Fig 4b) resulted in the end-effector rotating about its e3 axis in the negative and positive direction. Finally, tilting the head to the right and left (see Fig 4c) caused the end-effector to rotate about the e2 axis in the positive and negative direction, respectively.
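The two mappings described above can be sketched as follows. This is a sketch under assumptions: the axis assignments and signs follow the description of Fig 4, but the gain value and function interface are illustrative, not the authors’ implementation:

```python
import numpy as np

GAIN = 1.0  # illustrative proportional gain (velocity per radian of head tilt)

def head_to_velocity(pitch: float, yaw: float, roll: float, mode: str) -> np.ndarray:
    """Map head angles (rad) to a three-component velocity command.

    Position mode (robot base frame): yaw -> X (right/left),
    roll -> Y (forward/backward), pitch -> Z (up/down).
    Orientation mode (end-effector frame): pitch -> +e1,
    roll -> +e2, yaw -> -e3, per the sign conventions in Fig 4.
    """
    if mode == "position":
        return GAIN * np.array([yaw, roll, pitch])  # [vX, vY, vZ]
    return GAIN * np.array([pitch, roll, -yaw])     # [w_e1, w_e2, w_e3]
```

For example, tilting the head up (positive pitch) in position mode yields a purely +Z (upward) command, while the same motion in orientation mode yields a pure rotation about e1.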

Because even small unintentional deviations from the resting posture could be captured by the IMUs and potentially affect the velocities, we implemented a ‘dead-zone’ of 0.1 radians (≈6 deg) so that the robot started moving only when the IMU roll, pitch, or yaw angles exceeded this threshold. When a measured angle exceeded this threshold, we subtracted the value of 0.1 from the magnitude of the angle before computing the commanded velocity to maintain smooth control (e.g. an IMU yaw angle of 0.15 rad would only cause the end-effector to move at 0.05 m/s in the X direction in Cartesian mode). The dead-zone not only provided the user with a larger range of rest postures, but also helped the user generate distinct commands along a single direction, as velocities in the other two directions would be under the threshold.
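The dead-zone rule can be written as a short shaping function. This is a minimal sketch; the unit gain is consistent with the paper’s worked example (0.15 rad producing 0.05 m/s), but the actual gain used is not stated:

```python
def apply_dead_zone(angle: float, threshold: float = 0.1, gain: float = 1.0) -> float:
    """Zero the command inside the dead-zone; above it, subtract the
    threshold from the magnitude so the command grows smoothly from zero."""
    if abs(angle) <= threshold:
        return 0.0
    sign = 1.0 if angle > 0 else -1.0
    return gain * sign * (abs(angle) - threshold)
```

Subtracting the threshold (rather than passing the raw angle through once it exceeds the threshold) avoids a discontinuous jump in commanded velocity at the dead-zone boundary.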

Tasks

We used two tasks that mimicked activities of daily living to assess the participants’ abilities to control all 7 DOFs of the robotic arm: a drinking task and a stacking task.

Drinking task

The first task involved drinking from a cup. The participant was required, as quickly as possible, to reach for and grasp a cup containing liquid, bring the cup towards the mouth, and drink from it using a straw. The paper cup was semi-rigid: it provided enough resistance to be firmly grasped while also deforming enough to allow the participant to visually confirm the strength of the grasp. The robotic arm was always initialized in the same starting position (X: 0.40 m, Y: 0.30 m, Z: 0.30 m from the base of the robot with the gripper open enough to grasp the cup and facing towards the right) while the cup was in front of and to the right of the participant on the table (mean X: 0.58 m, mean Y: 0.25 m, Z: 0.00 m)—see Fig 1c. The straw was also kept consistently in the same orientation. To impose similar constraints on both participants, the control participant was instructed to minimize trunk movement and to move the straw to their mouth (and not move their upper body towards the straw). The final position of the grasper holding the cup when the main participant took a drink was (mean X: 0.35 m, mean Y: -0.04 m, Z: 0.10 m) with the grasper facing towards the participant.

Stacking task

The second task involved stacking five cube-shaped blocks of decreasing size (see Table 1), each with an open face, on top of each other—see Fig 1d. From the same initial end-effector position as the drinking task, each block had to be grasped, reoriented, and placed on the previous larger block to build a tower. Each block had a lip that protruded 8–10 mm outside the base of the next larger block, thereby determining the accuracy required to successfully stack the blocks. The location of the tower was chosen by the participant (mean X: 0.39 m, mean Y: 0.28 m) to allow him to have a clear view of the remaining blocks for the rest of the task.

Table 1. Dimensions of five blocks used in the stacking task.
Block   Outer dimension of closed face (mm)   Inner dimension of open face (mm)
1       67                                    84
2       60                                    77
3       53                                    68
4       46                                    61
5       44                                    54

The difference between the outer dimension of the previous block and the inner dimension of the subsequent block defined the precision to which the block had to be placed to be secure. For example, the precision requirement when stacking the 2nd block on top of the 1st was 77–67 = 10 mm, and that for stacking the 5th on the 4th block was 54–46 = 8 mm.

The blocks were placed directly across the table from the participant in five orientations not matching the target orientation—see Fig 1d. The first block had its opening facing upwards; the second block had its opening facing away from the participant; the third block had its opening facing towards the participant; the fourth block had its opening facing to the right of the participant; the fifth block had its opening facing to the left of the participant. These starting positions were standardized throughout trials (position of first block with respect to the base of the robot: mean X: 0.31 m, mean Y: 0.49 m). Each block required a different approach strategy and subsequent placement strategy. As a result, this task was more difficult than the drinking task.

Protocol

Child with congenital limb absence

The amount and distribution of practice for both participants are shown in Table 2. We made nine visits to the main participant’s school to test him over a period of two months. Visits were during the participant’s free period and were not evenly spaced, as they were subject to scheduling constraints such as school breaks and exams. Each session was no longer than 60 minutes, as constrained by the participant’s class schedule. He performed a total of 19 drinking task trials and 11 stacking task trials over the course of all the sessions. The number of trials during each session was variable and dependent on the tasks performed: a stacking task generally took longer than the drinking task. For both participants, the study was concluded when their performance plateaued on each task.

Table 2. Experimental protocol.
                       MAIN PARTICIPANT                    CONTROL PARTICIPANT
Visit                  1   2   3   4   5   6   7   8   9   1   2   3   4
Day                    1   4   8   11  45  50  52  57  64  1   5   8   12
Drinking task trials   2   7   0   0   5   4   0   1   0   2   7   0   2
Stacking task trials   0   0   1   3   1   0   2   2   2   0   2   3   3

Experimental protocol showing the amount and distribution of practice across days for the main participant (i.e. the child with congenital limb absence) and the control participant. For the main participant, there were a total of 19 trials on the drinking task and 11 trials on the stacking task. For the control participant, there were a total of 11 trials on the drinking task and 8 trials on the stacking task.

During each session, the main participant sat in his personal wheelchair with the table at his navel level, the robot to his front left, and his eyes at the level of the robot’s shoulder joint. He wore a cap with an IMU attached on top. The IMUs sampled at 125 Hz, the same rate at which the signals were processed and sent as commands to the JACO arm. The states of both the head IMU and the chair IMU were polled continuously, and their difference was sent to the robotic arm as either Cartesian velocity commands or rotational velocity commands, depending on the current mode. All IMU values were taken relative to a comfortable base position defined by the participant before starting each trial.
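One cycle of this 125 Hz loop can be sketched as follows. The function and command names here are ours; the actual command format of the JACO arm’s API is not described in the paper:

```python
SAMPLE_HZ = 125  # both IMUs polled at this rate

def control_step(head_angles, chair_angles, mode):
    """One control cycle: command the head orientation relative to the
    chair/table IMU, routed according to the current SW1 mode."""
    rel = tuple(h - c for h, c in zip(head_angles, chair_angles))
    if mode == "position":
        return ("cartesian_velocity", rel)
    if mode == "orientation":
        return ("angular_velocity", rel)
    return ("no_movement", (0.0, 0.0, 0.0))  # body free to move, robot idle
```

Using the difference between the two IMUs, rather than the head IMU alone, is what makes the interface robust to motion of the robot’s reference frame (e.g. a wheelchair-mounted arm).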

Control participant

There were 4 lab visits made by the control participant over a two-week period (2 visits per week). Each session was no longer than 60 minutes to match the main participant’s sessions. A total of 11 drinking task trials and 8 stacking task trials were performed over the course of all the sessions. As with the main participant’s sessions, the number of trials per session varied depending on the tasks performed.

The control participant sat at a table with the height adjusted such that the participant’s mouth was at the same height relative to the table’s surface as the main participant’s. This was to ensure the movement domain of the robotic arm would be similar between participants. This participant was instructed not to translate his head significantly during the tasks, as this could provide an unfair advantage relative to the main participant.

It is important to note that the control participant was an adult controlling a manual joystick; therefore, these data are not intended to be a direct comparison with the child using the head-joystick. Rather, given that the tasks we used were complex tasks for which benchmarks are not already available, the data from the control participant provide reference values that help in interpretation of the magnitudes of the change in performance with learning and the final performance level achieved by the child.

Procedure

At the beginning of each session, participants were allowed to explore the robotic arm’s range of movement for as long as they wanted; this generally lasted between 3 and 5 minutes. The goal of this free exploration was to ensure that the interface was working as intended and that the participant was ready to start controlling the robot arm.

The order of tasks was decided based on discussion with the participant. The robotic arm was put in a home position before every trial. Breaks were taken between trials if the participant wanted to.

The experimenters also occasionally provided ‘coaching’ in the form of suggested movements and grasping strategies during both trials and breaks. The type and amount of coaching was not predetermined, as the goal of this study was to determine the best level of performance achievable with the interface. The amount of coaching decreased over time as control skill and strategies improved.

Data analysis

Task completion

Task completion was measured by the number of successful trials at the task. For the stacking task, trials were considered incomplete if the participant was not able to finish stacking all five blocks. However, we still report the characteristics of these incomplete trials in the data analysis as they potentially reflect exploration and learning strategies.

Movement time

Movement times for the tasks were computed from video recordings of the sessions. For the drinking task, the movement time began on the frame when the robot was first moved from its initial home position by the participant and ended when the participant’s mouth made contact with the straw. For the stacking task, the movement times were split into movement times for each block. The first block’s time began on the frame when the robot first moved and ended when the robot was no longer touching the correctly placed block. The next block’s time began when the previous block’s time ended. Together these were combined into a total completion time for the block stacking task.

Dimensionless jerk

To quantify the smoothness of the movement, we computed the dimensionless jerk values for the tasks from the end-effector position data on each trial. The jerk was normalized by the movement time and peak velocity to yield a dimensionless measure, which has been shown to be a more appropriate measure [22]. The position values were first low-pass filtered using a 2nd-order Butterworth filter with a cutoff frequency of 6 Hz. The jerk values for each trial were then calculated from subsequent derivatives of the filtered position data and integrated over the duration of the trial. The dimensionless jerk value was then computed as follows

\[
\text{Dimensionless Jerk} = \left( \int_{t_1}^{t_2} \dddot{x}^{\,2} \, dt \right) \frac{MT^3}{v_{peak}^2}
\]

where \(\dddot{x}\) indicates the instantaneous jerk, \(MT = (t_2 - t_1)\) is the movement time of the trial, and \(v_{peak}\) indicates the magnitude of the peak velocity during the trial.
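The computation above can be sketched for a single position axis. This is an illustration, not the authors’ code: it assumes the 125 Hz sampling rate reported in the Methods, uses SciPy’s Butterworth filter, and uses numerical derivatives via `np.gradient` in place of whatever differentiation scheme the authors used:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dimensionless_jerk(position, fs=125.0, cutoff=6.0):
    """Dimensionless jerk of one position axis over a trial.

    Steps, following the text: low-pass filter the position with a
    2nd-order Butterworth at 6 Hz (zero-phase), differentiate three
    times to get jerk, integrate the squared jerk over the trial, and
    normalize by movement time cubed over peak velocity squared.
    """
    b, a = butter(2, cutoff / (fs / 2.0))
    x = filtfilt(b, a, np.asarray(position, dtype=float))
    dt = 1.0 / fs
    v = np.gradient(x, dt)
    jerk = np.gradient(np.gradient(v, dt), dt)
    mt = (len(x) - 1) * dt
    v_peak = np.max(np.abs(v))
    return np.sum(jerk**2) * dt * mt**3 / v_peak**2
```

Smaller values indicate smoother movements; because the measure is dimensionless, trials of different durations and speeds can be compared directly.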

Results

Task completion

For the drinking task, both participants completed all trials (main participant = 19/19 trials; control participant = 11/11 trials).

For the stacking task, the main participant completed 8/11 trials. Three stacking trials were considered incomplete because the final stacking block (red block in Fig 1d) was dropped to a position outside of the reach of the robotic arm during the course of the trial. The control participant completed 8/8 stacking trials. Examples of the main participant performing the two tasks are shown in S1 and S2 Videos.

Movement time

Drinking task

The main participant’s slowest movement time was 99 s, his fastest time was 30 s, and the average of his movement times was 55.2 s (SD = 16.3 s)—see Fig 5. However, two trials on visit #5 (trials marked with an “x” in Fig 5) involved substantial talking during the trial and a poorly aligned IMU. Excluding these two trials, the main participant improved by 43% from his first trial. The control participant’s slowest movement time was 75.1 s, his fastest time was 16.9 s, and the average of his movement times was 36.0 s (SD = 15.7 s). Using the head-joystick, the main participant’s average movement time was around 40% slower than the control participant’s average movement time using the manual joystick.

Fig 5. Movement times for drinking task trials.

Fig 5

The movement times for the drinking task trials for the main participant across practice. The range and mean of the control participant’s times are included for comparison. Movement times denoted with an “x” were those in which the main participant had significant distractions or poor IMU alignment. They are not included in the mean session movement times but are reported since the main participant was still able to complete the task.

Stacking task

For completed trials, the main participant’s slowest movement time was 1020 s, his fastest time was 446 s, and the average of his movement times was 608 s (SD = 175 s)—see Fig 6. The control participant’s slowest movement time was 797 s, his fastest time was 376 s, and the average of his movement times was 549 s (SD = 128 s). Overall, the reductions in completion time on their last days’ trials relative to their respective first trials were very comparable (main participant = 50%; control participant = 39%), and the main participant’s best trial was only 19% slower than the control participant’s best trial. For the main participant, the average times to successfully place each block were, in order, 74 s (SD = 47 s), 123 s (SD = 28 s), 123 s (SD = 30 s), 156 s (SD = 98 s), and 168 s (SD = 52 s). Average improvements in movement time for each block on the last day relative to trial 1 were 76%, 40%, 39%, 33%, and 51%.

Fig 6. Movement times for stacking task trials.

Fig 6

The movement times for the stacking task for the main participant across practice. The range and mean of the control participant’s complete stacking times are included for comparison. Trials 3, 5, and 8 were incomplete: the main participant was unable to successfully place the last block after dropping it outside the reach of the robotic arm.

Dimensionless jerk

Drinking task

The dimensionless jerk of the main participant closely followed the pattern of the movement times—see Fig 7. Two trials on visit #5 (trials marked with an “x” in Fig 5) involved substantial talking during the trial and a poorly aligned IMU. Additionally, the data for trial #15, which occurred during visit 6, were corrupted and therefore omitted from the jerk analysis. Comparing the dimensionless jerk values from the first four trials (Trials 1–4) to the last four trials (Trials 16–19) shows a 45% decrease in dimensionless jerk value.

Fig 7. Dimensionless jerk values for drinking task trials.

Fig 7

Dimensionless jerk values for drinking task trials for the main participant across practice (smaller jerk values indicate smoother movements). Jerk values denoted with an “x” were those in which the main participant had significant distractions or poor IMU alignment, which led to large jerk values. Additionally, trial #15, which occurred during visit 6, is omitted due to data corruption.

Stacking task

The dimensionless jerk of the main participant for the stacking trials also closely followed the pattern of the movement times—see Fig 8. Comparing the jerk values from the first three completed trials (trial 3 (visit 4), trial 5 (visit 5), and trial 8 (visit 8) were incomplete) to the last three completed trials shows a 57% decrease in dimensionless jerk value.

Fig 8. Dimensionless jerk values for stacking task trials.

Fig 8

Dimensionless jerk values for stacking task trials for the main participant across practice (smaller jerk values indicate smoother movements). Trial 3 (visit 4), trial 5 (visit 5), and trial 8 (visit 8) were incomplete: the main participant was unable to successfully place the last block after dropping it outside the reach of the robotic arm.

Discussion

The goal of this study was to examine the use of an IMU-based head-joystick for controlling a robotic arm to perform high-DOF functional tasks. We showed that a child with congenital limb absence was able to successfully use the head-joystick to perform two complex functional tasks. Moreover, the child was able to improve his performance over time to a level comparable to that of an unimpaired individual using a manual joystick.

Across a fairly limited practice time (~6–8 sessions) for both the drinking and stacking tasks, the child achieved the task goal almost twice as fast as on his first attempt. These times were, as expected, somewhat higher relative to the performance of the control participant with the joystick, but here we found an effect of task complexity: in the simpler drinking task, the performance of the child was about 40% slower than the control, whereas in the more complex stacking task, this difference shrank to about 20%. A likely explanation is that even though the control algorithms were identical in both cases, in the simpler drinking task, where the robot could travel at higher velocity, the manual joystick had an advantage because the user could simply push the joystick instantaneously to the end of its range of motion and hold it there without discomfort. In contrast, such rapid movements would have been difficult using the head. One alternative could have been to increase the gain on the head-joystick, but this would likely have compromised the fine control required in more complex tasks. However, in the more complex stacking task, where movement speed was not the limiting factor in performance, the head-joystick was almost on par with the manual joystick. Moreover, although we had no direct measures of user satisfaction, the fact that the participant continued this task for over 3 months and was enthusiastic about returning for future visits is a potential indicator that he was satisfied with the interface.

Controlling high DOFs through a body-machine interface in individuals with a limited movement repertoire has always posed a significant challenge. One popular approach is to use dimensionality reduction techniques such as principal component analysis (PCA) to extract the most relevant movement directions for control. While such techniques can accommodate different movement repertoires, they have only been implemented for controlling one or two degrees of freedom [7, 9]; furthermore, the mapping between the motion of the body and that of the assistive device can often be non-intuitive [21]. A more recent approach, the Virtual Body Model (VBM) [23], is more intuitive for control of high DOFs because of the pre-defined mapping between the body and device DOFs, but it relies on a nearly full range of movement in the torso. Because our primary participant had no lower limbs, seatbelts were used to hold his body upright in the wheelchair; this limited range of torso movement made the VBM approach unsuitable in our case. These constraints required the design of a custom interface that relied primarily on head movements. In this design, the head was used as a joystick to control up to three DOFs of the end-effector at a time; toggling between different sets of DOFs was achieved by activating switches with the body.
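
As an illustration of this switching scheme, the sketch below cycles three head angles through groups of robot DOFs. The group names, their ordering, and the number of modes are our assumptions for exposition, not the study's actual configuration:

```python
# Hypothetical mode-switched head-joystick: three head angles
# (roll, pitch, yaw) drive up to three robot DOFs at a time, and a
# body-activated switch cycles between DOF groups.
DOF_GROUPS = [
    ("translation", ["x", "y", "z"]),
    ("orientation", ["wrist_roll", "wrist_pitch", "wrist_yaw"]),
    ("gripper",     ["grip"]),
]

class ModeSwitchedJoystick:
    def __init__(self):
        self.mode = 0  # index of the currently active DOF group

    def toggle(self):
        """Advance to the next DOF group (bound to a physical switch)."""
        self.mode = (self.mode + 1) % len(DOF_GROUPS)

    def command(self, head_angles):
        """Map (roll, pitch, yaw) inputs onto the active DOF group."""
        _, dofs = DOF_GROUPS[self.mode]
        return {dof: head_angles[i] for i, dof in enumerate(dofs)}
```

The appeal of this design is that each mode preserves a fixed, spatially intuitive mapping between head motion and device motion, at the cost of explicit switching to reach all seven DOFs.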

One of our primary goals was to provide greater independence for the child in activities of daily living. To this end, we designed the two tasks not only to involve control of high DOFs, but also to resemble activities frequently needed in both home and school environments. The drinking task required the child to position the robot end-effector near the cup, grasp the cup, and position and orient the cup near his mouth so that he could drink comfortably through the straw. The stacking task was more complex: it not only involved properly positioning and orienting the end-effector to grasp and place individual blocks, but also required sequence planning and on-the-fly adaptations to accommodate variations in prior movement outcomes. For example, the final block of the task required a two-step strategy: given its distant location and orientation, the block could not be placed on the stack with a single grasp. Instead, it had to be repositioned and released before being regrasped in a way that allowed it to be placed on the stack. The use of tasks with several levels of complexity may be especially critical when designing interfaces for children, as performance may be determined not only by the intuitiveness of the control but also by the cognitive planning of the task. It is also worth noting that, despite having to use head movements to precisely control the end-effector, the child was able to successfully perform these tasks, indicating that the small head movements did not interfere greatly with the use of visual feedback.

Our work also extends prior work on using head gestures to control a robot. In one study [18], adult participants (both able-bodied and with tetraplegia) used a similar head-mounted IMU to control a 7-DOF robotic arm in pick-and-place tasks, but the evaluation was limited to a single session of practice. Similarly, a second study [19] evaluated the use of IMUs for the control of a robot arm and showed similar performance in a single session in able-bodied adults. Our results add to these findings by demonstrating that (i) such interfaces are well-suited for children, and (ii) the improvement in performance over multiple practice sessions is substantial (up to a 40–50% reduction in movement times). The child's performance on the tasks was comparable to that of an adult control participant using a manual joystick. However, given that we only had data from a single child and a single adult, additional studies are needed to assess the generality of these findings.

In addition to the head-control methods discussed here, several alternatives to manual joysticks have been developed for individuals with severe movement impairments. These include sip-and-puff systems, voice control [24], gaze control [25], and tongue control [26]. These interfaces typically involve tradeoffs among (i) the number of control dimensions (e.g., a device that allows control of only 1 or 2 dimensions requires frequent 'switching' to control a high-DOF robotic arm), (ii) the type of control (e.g., discrete commands are possible using voice, but they are less intuitive and precise than continuous control with a joystick), and (iii) the 'invasiveness' of the device, both in terms of its physical attributes (e.g., whether it is easily wearable, wireless, etc.) and in how it affects other activities such as communication (e.g., gaze- or voice-based controls may interfere with natural day-to-day behavior). Ultimately, the choice of interface will depend both on the individual's existing movement abilities and on the number of degrees of freedom to be controlled.

In terms of further improvements to our design, we wish to highlight two issues. First, a limitation of our approach is that the 'burden of learning' falls entirely on the user. This may be especially challenging for children, who show deficits relative to adults in learning such interfaces [21, 27]. One way to address this is to use an adaptive interface that adjusts to the user [28, 29], or a shared-control framework in which control authority is shared between the human and the machine [30]. Second, for the sake of simplicity, we relied only on head movements (i.e., kinematics) to control the device. However, in the control of other neuroprosthetics, electromyographic signals from different muscles are often used to augment the movement repertoire by providing distinct control signals for the external device [31–33]. Therefore, a hybrid combination of IMU signals and electromyography may further facilitate efficient control of high DOFs [34]. Addressing these limitations could increase the potential of this approach in real-life situations that require both speed and accuracy.
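
In its simplest form, sharing control authority reduces to a linear arbitration between the user's command and an autonomous command. The sketch below is a generic illustration of that idea, not the specific method of the cited shared-autonomy work:

```python
def blend(u_human, u_auto, alpha):
    """Blend a human command with an autonomous command, per DOF.

    alpha = 1.0 gives the user full authority; alpha = 0.0 gives the
    machine full authority. In an adaptive scheme, alpha could be
    scaled with the machine's confidence in its goal prediction.
    """
    return [alpha * h + (1.0 - alpha) * a for h, a in zip(u_human, u_auto)]
```

Under such a scheme, the machine could, for instance, take over fine alignment near a grasp target while leaving gross motion to the head-joystick, reducing the learning burden on the user.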

In conclusion, we showed that for a child with congenital limb absence, a head-joystick is a viable means for controlling a robotic arm to perform complex tasks of daily living. Developing efficient, non-invasive techniques with intuitive control of high DOFs, and quantifying their performance in a larger sample is a key challenge that needs to be addressed in future studies.

Supporting information

S1 Video. Example drinking task for main participant.

In this trial, the main participant conducts a complete drinking task. The robotic arm is initialized to the starting position after which the participant commands it to move towards the cup, grasp the cup, and then move and orient the cup such that he is able to drink from the straw.

(MP4)

S2 Video. Example stacking task for main participant.

In this trial, the main participant conducts a complete stacking task. The robotic arm is initialized to the starting position after which the participant commands it to approach, grasp, orient, and place each block on the preceding block or on the table in the case of the first block.

(MP4)

S1 Data. Data from participant trials as used for analysis.

Included are the movement times and dimensionless jerk values for each of the main and control participants’ drinking and stacking trials.

(XLSX)
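
The dimensionless jerk values in this file follow the duration- and amplitude-normalized integrated squared jerk of Hogan and Sternad [22]. A minimal sketch of one such computation is below; the discretization details and function names are ours and not necessarily those used to produce the data:

```python
import numpy as np

def dimensionless_jerk(position, dt):
    """Dimensionless squared jerk of a uniformly sampled 1-D trajectory.

    Integrated squared jerk, scaled by duration**5 / amplitude**2 so the
    result has no units (smaller values indicate smoother movement).
    """
    jerk = np.diff(position, n=3) / dt**3          # third finite difference
    duration = dt * (len(position) - 1)
    amplitude = np.ptp(position)                   # peak-to-peak excursion
    return duration**5 / amplitude**2 * np.sum(jerk**2) * dt
```

A useful sanity check for this normalization is that an ideal minimum-jerk profile evaluates to 720 regardless of its duration or amplitude.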

Data Availability

All data are contained in the supporting information: Base Data.xlsx.

Funding Statement

This work was supported by grants from the National Science Foundation (https://www.nsf.gov/): NSF 1703735 awarded to RM, ML, FK, and RR; NSF 1654929 awarded to ML; and NSF 1823889 awarded to RR. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Brault MW. Americans with disabilities: 2010. Curr Popul Rep. 2012; 70–131.
2. Isabelle S, Bessey SF, Dragas KL, Blease P, Shepherd JT, Lane SJ. Assistive Technology for Children with Disabilities. Occup Ther Health Care. 2003;16: 29–51. doi:10.1080/J003v16n04_03
3. Fehr L, Langbein WE, Skaar SB. Adequacy of power wheelchair control interfaces for persons with severe disabilities: a clinical survey. J Rehabil Res Dev. 2000;37: 353–360.
4. Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, et al. High-performance neuroprosthetic control by an individual with tetraplegia. Lancet. 2013;381: 557–64. doi:10.1016/S0140-6736(12)61816-9
5. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442: 164–171. doi:10.1038/nature04970
6. Hochberg LR, Bacher D, Jarosiewicz B, Masse NY, Simeral JD, Vogel J, et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 2012;485: 372–5. doi:10.1038/nature11076
7. Abdollahi F, Farshchiansadegh A, Pierella C, Seáñez-González I, Thorp E, Lee M-H, et al. Body-Machine Interface Enables People With Cervical Spinal Cord Injury to Control Devices With Available Body Movements: Proof of Concept. Neurorehabil Neural Repair. 2017;31: 487–493. doi:10.1177/1545968317693111
8. Casadio M, Pressman A, Fishbach A, Danziger Z, Acosta S, Chen D, et al. Functional reorganization of upper-body movement after spinal cord injury. Exp Brain Res. 2010;207: 233–47. doi:10.1007/s00221-010-2427-8
9. Thorp EB, Abdollahi F, Chen D, Farshchiansadegh A, Lee MH, Pedersen JP, et al. Upper Body-Based Power Wheelchair Control Interface for Individuals With Tetraplegia. IEEE Trans Neural Syst Rehabil Eng. 2016;24: 249–60. doi:10.1109/TNSRE.2015.2439240
10. Ryu SI, Shenoy KV. Human cortical prostheses: lost in translation? Neurosurg Focus. 2009;27: E5. doi:10.3171/2009.4.FOCUS0987
11. Lu CW, Patil PG, Chestek CA. Current Challenges to the Clinical Translation of Brain Machine Interface Technology. In: Hamani C, Moro E, editors. International Review of Neurobiology. Academic Press; 2012. pp. 137–160. doi:10.1016/B978-0-12-404706-8.00008-5
12. Huang IC, Sugden D, Beveridge S. Children's perceptions of their use of assistive devices in home and school settings. Disabil Rehabil Assist Technol. 2009;4: 95–105. doi:10.1080/17483100802613701
13. Dymond E, Potter R. Controlling assistive technology with head movements—a review. Clin Rehabil. 1996;10: 93–103. doi:10.1177/026921559601000202
14. Mandel C, Rofer T, Frese U. Applying a 3DOF Orientation Tracker as a Human-Robot Interface for Autonomous Wheelchairs. 2007 IEEE 10th International Conference on Rehabilitation Robotics. 2007. pp. 52–59.
15. Rudigkeit N, Gebhard M, Gräser A. Evaluation of control modes for head motion-based control with motion sensors. 2015 IEEE International Symposium on Medical Measurements and Applications (MeMeA) Proceedings. 2015. pp. 135–140.
16. Gomes D, Fernandes F, Castro E, Pires G. Head-movement interface for wheelchair driving based on inertial sensors. 2019 IEEE 6th Portuguese Meeting on Bioengineering (ENBENG). 2019. pp. 1–4.
17. Chen Y-L, Chen S-C, Chen W-L, Lin J-F. A head orientated wheelchair for people with disabilities. Disabil Rehabil. 2003;25: 249–253. doi:10.1080/0963828021000024979
18. Jackowski A, Gebhard M, Thietje R. Head Motion and Head Gesture-Based Robot Control: A Usability Study. IEEE Trans Neural Syst Rehabil Eng. 2018;26: 161–170. doi:10.1109/TNSRE.2017.2765362
19. Fall CL, Turgeon P, Campeau-Lecours A, Maheu V, Boukadoum M, Roy S, et al. Intuitive wireless control of a robotic arm for people living with an upper body disability. 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2015. pp. 4399–4402.
20. Lee M-H, Ranganathan R, Kagerer FA, Mukherjee R. Body-machine interface for control of a screen cursor for a child with congenital absence of upper and lower limbs: a case report. J Neuroeng Rehabil. 2016;13: 34. doi:10.1186/s12984-016-0139-4
21. Ranganathan R, Lee M-H, Padmanabhan MR, Aspelund S, Kagerer FA, Mukherjee R. Age-dependent differences in learning to control a robot arm using a body-machine interface. Sci Rep. 2019;9: 1960. doi:10.1038/s41598-018-38092-3
22. Hogan N, Sternad D. Sensitivity of smoothness measures to movement duration, amplitude, and arrests. J Mot Behav. 2009;41: 529–534. doi:10.3200/35-09-004-RC
23. Chau S, Aspelund S, Mukherjee R, Lee MH, Ranganathan R, Kagerer F. A five degree-of-freedom body-machine interface for children with severe motor impairments. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2017. pp. 3877–3882.
24. Nishimori M, Saitoh T, Konishi R. Voice Controlled Intelligent Wheelchair. SICE Annual Conference 2007. 2007. pp. 336–340.
25. Dziemian S, Abbott WW, Faisal AA. Gaze-based teleprosthetic enables intuitive continuous control of complex robot arm use: Writing & drawing. 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob). 2016. pp. 1277–1282.
26. Kim J, Park H, Bruce J, Sutton E, Rowles D, Pucci D, et al. The Tongue Enables Computer and Wheelchair Control for People with Spinal Cord Injury. Sci Transl Med. 2013;5: 213ra166. doi:10.1126/scitranslmed.3006296
27. Lee M-H, Farshchiansadegh A, Ranganathan R. Children show limited movement repertoire when learning a novel motor skill. Dev Sci. 2017; e12614. doi:10.1111/desc.12614
28. Danziger Z, Fishbach A, Mussa-Ivaldi FA. Learning Algorithms for Human–Machine Interfaces. IEEE Trans Biomed Eng. 2009;56: 1502–1511. doi:10.1109/TBME.2009.2013822
29. De Santis D, Dzialecka P, Mussa-Ivaldi FA. Unsupervised Coadaptation of an Assistive Interface to Facilitate Sensorimotor Learning of Redundant Control. 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob). 2018. pp. 801–806.
30. Jain S, Farshchiansadegh A, Broad A, Abdollahi F, Mussa-Ivaldi F, Argall B. Assistive Robotic Manipulation through Shared Autonomy and a Body-Machine Interface. IEEE Int Conf Rehabil Robot Proc. 2015;2015: 526–531. doi:10.1109/ICORR.2015.7281253
31. Roche AD, Rehbaum H, Farina D, Aszmann OC. Prosthetic Myoelectric Control Strategies: A Clinical Perspective. Curr Surg Rep. 2014;2: 44. doi:10.1007/s40137-013-0044-8
32. Kuiken TA, Li G, Lock BA, Lipschutz RD, Miller LA, Stubblefield KA, et al. Targeted Muscle Reinnervation for Real-time Myoelectric Control of Multifunction Artificial Arms. JAMA. 2009;301: 619–628. doi:10.1001/jama.2009.116
33. Englehart K, Hudgins B. A robust, real-time control scheme for multifunction myoelectric control. IEEE Trans Biomed Eng. 2003;50: 848–854. doi:10.1109/TBME.2003.813539
34. Bennett DA, Goldfarb M. IMU-Based Wrist Rotation Control of a Transradial Myoelectric Prosthesis. IEEE Trans Neural Syst Rehabil Eng. 2018;26: 419–427. doi:10.1109/TNSRE.2017.2682642

Decision Letter 0

Imre Cikajlo

30 Jan 2020

PONE-D-19-31845

Controlling a robotic arm for functional tasks using a wireless head-joystick: A case study of a child with congenital absence of upper and lower limbs

PLOS ONE

Dear Mr. Aspelund,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Mar 14 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Imre Cikajlo, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2.  Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

3. We note that Figure 1 and your videos includes an image of a [patient / participant / in the study]. 

As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: “The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details”.

If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual.

4. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 1 in your text; if accepted, production will need this reference to link the reader to the Table.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

Reviewer #3: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper presents an evaluation of a "head joystick" (IMU-based head motion tracker) used to control a robotic arm by a child with a congenital absence of all four limbs as well as a control participant. They demonstrate that the participant was able to use the head joystick to effectively control the arm and accomplish two different tasks. Overall, I believe that the paper is well-written and represents an appropriate first evaluation of the system. I have no major concerns with it, and my suggestions mostly have to do with improving clarity and informativeness.

1. Please clarify whether the participant's prior two studies with your group also involved the head joystick.

2. Please describe the "some unstructured sessions" in more detail. How many sessions? Approximate time per session? Approximate activities performed?

3. Please consistently put spaces between the number and unit (e.g., 0.05 m/s, not 0.05m/s).

4. While photos of the two tasks are provided, I would have appreciated some more information about the parameters of the task - exact distances to be covered, precision required etc.

5. I feel like more results could have been provided. In the current form, the results are limited to task completion times and percentage of successfully completed tasks. Would have been useful to see more information about any false positives/negatives during switching, task completion strategies, motion smoothness, etc. Perhaps even subjective information like user satisfaction.

Reviewer #2: The authors present a study conducted with a child with congenital limb absence controlling a 7-DOF robotic arm (Jaco, Kinova) using a head-based IMU system. The goal was to demonstrate that the IMU-based head-joystick is well suited for the control of the robotic arm and that the system allowed the child to reach a level of handiness comparable to that of an unimpaired individual controlling the same robotic arm with a manual joystick.

The manuscript is well organized and well written so that non-specialists can also understand the work. The state of the art is clear and sound. Details of the methodology are sufficient to allow the experiments to be reproduced, and the original data are accessible.

My main concern is about the comparison between the main subject (the child with congenital limb absence) and the control subject. If the goal is to examine the use of an IMU-based control system for a robotic arm, I think it would be more fair to compare the performance of the same subject using two different control systems. In this case I would have preferred to see the main subject practicing with the IMU-based joystick and with the commercially available head-controlled joystick, or to have the control subject practicing and performing the tasks with the joystick of the Jaco and with the IMU-based joystick. My suggestion is to add a group of healthy subjects, not only one, controlling the robot with both modalities, and then also present the case study with the congenital lack of limbs.

Minor concerns:

- It would be nice to have an idea (on average) of how long the free exploration during each session was.

- There is no reference in the manuscript to Table 1.

- When the authors report the average movement time, I suggest reporting also the standard deviation.

Reviewer #3: The paper described a case study where a child with congenital absence of all four limbs controlled a robotic arm using custom head movement control. However, the paper had a number of issues.

• The authors did not provide a comprehensive review of the existing work on control interfaces for assistive robotic manipulators. Only brain control was mentioned. JACO arms could be controlled with wheelchair joystick, head control, sip-and-puff, and head array system. In addition, there are many types of custom control interfaces (e.g., voice control and eye gaze control) in existing literature. It was unclear how the proposed approach is more advantageous than existing work.

• The novelty or focus of the study is not clearly stated. From the technical perspective, as mentioned earlier, JACO arm could be controlled with wheelchair joystick, head control, sip-and-puff, and head array system, so it was not clear how the head control system used in this study differed from the JACO’s existing capability. From the clinical perspective, only one subject participated in the study and the training/evaluation protocol was somewhat unstructured, and thus cannot be generalized. In addition, only task completion time was reported, and user perceived usability was not mentioned. Given the nature of such intervention, it would be helpful to know the user feedback on the control interface and training procedure.

• The performance contrast between the case subject and control subject is not well justified, as the performance could be affected by not only the input devices (head vs hand), but also their personal characteristics including but not limited to physical limitations. Thus, it is unclear how such information would be clinically or practically meaningful.

• The limitation of the control interface was not stated. How does head movement control affect visual feedback a user would need for accurately controlling the arm motion? How generalizable the approach is when used in real-life situations (e.g., wheelchair-mounted arm)?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 5;15(8):e0226052. doi: 10.1371/journal.pone.0226052.r002

Author response to Decision Letter 0


14 Mar 2020

Response to Editors and Reviewers

We thank the Editor and all 3 reviewers for their insightful comments. We have addressed these concerns with significant changes in the manuscript as seen below. We think these changes have greatly improved the manuscript and hope that the revised version is suitable for publication.

We have provided a point-by-point rebuttal to each comment below. For the sake of clarity, we have color coded the text as follows:

Editors and Reviewer comments in BLACK

Authors’ response in BLUE

Corresponding changes in manuscript in GREY

-------

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

We have made edits throughout our manuscript to ensure that our manuscript meets PLOS ONE’s style requirements

2. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

We have included captions for supporting information files at the end of the manuscript,

Supporting information

Video S1. Example drinking task for main participant. In this trial, the main participant conducts a complete drinking task. The robotic arm is initialized to the starting position after which the participant commands it to move towards the cup, grasp the cup, and then move and orient the cup such that he is able to drink from the straw.

Video S2. Example stacking task for main participant. In this trial, the main participant conducts a complete stacking task. The robotic arm is initialized to the starting position after which the participant commands it to approach, grasp, orient, and place each block on the preceding block or on the table in the case of the first block.

Data File S3. Data from participant trials as used for analysis. Included are the movement times and dimensionless jerk values for each of the main and control participants’ drinking and stacking trials.

3. We note that Figure 1 and your videos includes an image of a [patient / participant / in the study].

As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: “The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details”.

If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual.

We have acquired from the participant’s guardian a signed Consent Form for Publication in a PLOS Journal and it has been mentioned in the manuscript.

The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details.

4. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 1 in your text; if accepted, production will need this reference to link the reader to the Table.

We have referred to all tables present in our manuscript within the text.

Reviewer's Responses to Questions

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper presents an evaluation of a "head joystick" (IMU-based head motion tracker) used to control a robotic arm by a child with a congenital absence of all four limbs as well as a control participant. They demonstrate that the participant was able to use the head joystick to effectively control the arm and accomplish two different tasks. Overall, I believe that the paper is well-written and represents an appropriate first evaluation of the system. I have no major concerns with it, and my suggestions mostly have to do with improving clarity and informativeness.

We thank the reviewer for the positive comments

1. Please clarify whether the participant's prior two studies with your group also involved the head joystick.

The prior studies involved control with the shoulder and torso movements, not the head joystick. This has now been clarified in the manuscript.

These prior studies involved the control of these devices using shoulder and torso movements.

2. Please describe the "some unstructured sessions" in more detail. How many sessions? Approximate time per session? Approximate activities performed?

The following text has been included to describe the unstructured sessions:

Initially, there were 4 unstructured sessions, each lasting about 30-45 minutes. We used these sessions to calibrate the interface to make sure that the head movements performed were in a comfortable range when controlling the robot. During each session, the participant was asked to perform some exploratory movements of the head to understand how movements of each DOF controlled the robot, and to learn the operation of the switches (which were used to toggle between the translation/orientation modes). In addition, the participant was also free to perform any tasks of his liking with the robot arm, such as trying to pick up an object from the table.

3. Please consistently put spaces between the number and unit (e.g., 0.05 m/s, not 0.05m/s).

Thank you. This has been corrected.

4. While photos of the two tasks are provided, I would have appreciated some more information about the parameters of the task - exact distances to be covered, precision required etc.

We thank the reviewer for this suggestion. We have now included these details in the methods section. Table 1 has the dimensions of the stacking blocks.

The robotic arm was always initialized in the same starting position (X: 0.40 m, Y: 0.30 m, Z: 0.30 m from the base of the robot with the gripper open enough to grasp the cup and facing towards the right) while the cup was in front of and to the right of the participant on the table (mean X: 0.58 m, mean Y: 0.25 m, Z: 0.00 m) - see Fig 1c. [...] The final position of the grasper holding the cup when the main participant took a drink was (mean X: 0.35 m, mean Y: -0.04 m, Z: 0.10 m) with the grasper facing towards the participant.

The location of the tower was chosen by the participant (mean X: 0.39 m, mean Y: 0.28 m) to allow them to have a clear vision of the remaining blocks for the remainder of the tasks.

Block   Outer dimension of closed face (mm)   Inner dimension of open face (mm)
  1                    44                                    54
  2                    46                                    61
  3                    53                                    68
  4                    60                                    77
  5                    67                                    84

Table 1. Dimensions of stacking blocks. The difference between the inner dimension of the upper block and the outer dimension of the lower block defined the precision to which the block had to be placed to be secure.

The blocks were placed directly across the table from the participant in five orientations not matching the target orientation - see Fig 1d. The first block had its opening facing upwards; the second block had its opening facing away from the participant; the third block had its opening facing towards the participant; the fourth block had its opening facing to the right of the participant; the fifth block had its opening facing to the left of the participant. These starting positions were standardized throughout trials (position of first block with respect to the base of the robot: mean X: 0.31 m, mean Y: 0.49 m).

5. I feel like more results could have been provided. In the current form, the results are limited to task completion times and percentage of successfully completed tasks. Would have been useful to see more information about any false positives/negatives during switching, task completion strategies, motion smoothness, etc. Perhaps even subjective information like user satisfaction.

We thank the reviewer for this suggestion. We have now included figures for the jerk calculations (quantifying smoothness) for both drinking and stacking tasks. The results are similar to those seen in the movement times (smoothness increases overall with learning, as evidenced by decreased jerk).

Because the control was continuous, the only ‘discrete’ errors we could see were during grasping (which were controlled by switches). The number of grasping errors was quite low and we did not observe any trends with practice even as movement times decreased. We also did not observe any major qualitative changes in task completion strategies.

We do not have any standard measures of perceived usability or satisfaction. However, the fact that the participant continued this task for over 3 months and was enthusiastic about returning for future visits is a potential indicator that he was satisfied with the interface. We have added this in the Discussion.

Moreover, although we had no direct measures of user satisfaction, the fact that the participant continued this task for over 3 months and was enthusiastic about returning for future visits is a potential indicator that he was satisfied with the interface.

These changes related to the dimensionless jerk have been incorporated in the Data analysis and Results of the manuscript as follows:

In the Data analysis:

Dimensionless jerk. To quantify the smoothness of the movement, we computed dimensionless jerk values for the tasks from the end-effector position data on each trial. The jerk was normalized by the movement time and peak velocity to yield a dimensionless measure, which has been shown to be a more appropriate measure of smoothness. The position values were first low-pass filtered using a 2nd order Butterworth filter with a cutoff frequency of 6 Hz. The jerk values for each trial were then calculated from successive derivatives of the filtered position data and integrated over the duration of the trial. The dimensionless jerk value was then computed as follows:

\[
\text{Dimensionless jerk} = \sqrt{\left(\int_{t_1}^{t_2} \lVert \dddot{x} \rVert^{2}\, dt\right)\frac{MT^{3}}{v_{peak}^{2}}}
\]

where \(\dddot{x}\) indicates the instantaneous jerk, \(MT = t_2 - t_1\) is the movement time of the trial, and \(v_{peak}\) indicates the magnitude of the peak velocity during the trial.
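The computation described above can be sketched in a few lines (a minimal numpy sketch under stated assumptions: the trajectory is uniformly sampled, and the 6 Hz Butterworth pre-filtering step is omitted for brevity; the function name `dimensionless_jerk` is our own):

```python
import numpy as np

def dimensionless_jerk(position, dt):
    """Dimensionless jerk of a uniformly sampled end-effector trajectory.

    position : (N, 3) array of x/y/z positions
    dt       : sampling interval in seconds
    """
    # Successive numerical derivatives: velocity, acceleration, jerk
    vel = np.gradient(position, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)

    mt = dt * (len(position) - 1)                 # movement time MT = t2 - t1
    v_peak = np.max(np.linalg.norm(vel, axis=1))  # magnitude of peak velocity
    jerk_sq = np.sum(jerk ** 2, axis=1)           # squared jerk magnitude
    # Trapezoidal integration of squared jerk over the trial
    integral = dt * (jerk_sq.sum() - 0.5 * (jerk_sq[0] + jerk_sq[-1]))
    return np.sqrt(integral * mt ** 3 / v_peak ** 2)
```

Because the jerk integral is scaled by \(MT^3 / v_{peak}^2\), the measure is invariant to both the amplitude and the duration of the movement, so values from trials of different lengths can be compared directly.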

In the Results:

Dimensionless jerk

Drinking task

The dimensionless jerk of the main participant closely followed the pattern of the movement times – see Fig 7. Two trials on visit #5 (marked with an “x” in Fig 7) involved considerable talking during the trial and a poorly aligned IMU. Additionally, the data for trial #15 were corrupted and therefore omitted from the jerk analysis. Comparing the dimensionless jerk values from the first four trials to the last four trials shows a 45% decrease in dimensionless jerk value.

Stacking task

The dimensionless jerk of the main participant for the stacking trials also closely followed the pattern of the movement times – see Fig 8. Comparing the jerk values from the first three completed trials (trials 2, 5, and 8 were incomplete) to the last three completed trials shows a 57% decrease in dimensionless jerk value.

Figure 7. Dimensionless jerk values for drinking task trials for the main participant across practice. Trials denoted with an “x” were those in which the main participant had significant distractions or poor IMU alignment, which led to large jerk values. Additionally, trial #15 is omitted due to data corruption.

Figure 8. Dimensionless jerk values for stacking task trials for the main participant across practice. Trials 3, 5, and 8 were incomplete in that the main participant was unable to successfully place the last block due to dropping it outside the reach of the robotic arm.

Reviewer #2: The authors present a study conducted with a child with congenital limbs absence controlling a 7dof robotic arm (Jaco, Kinova) using a head-based IMU system. The goal was to demonstrate that the IMU-based head joystick is well suited for the control of the robotic arm and that the system allowed the child to reach a level of handiness comparable to the one of an unimpaired individual controlling the same robotic arm with a manual joystick.

The manuscript is well organized and well written so that also non-specialists can understand the work. The state of the art is clear and sound. Details of the methodology are sufficient to allow the experiments to be reproduced and the original data are accessible.

We thank the reviewer for the positive comments

My main concern is about the comparison between the main subject (child with congenital limb absence) and the control subject. If the goal is to examine the use of an IMU-based controlling system for a robotic arm, I think it would be fairer to compare the performance of the same subject using two different control systems. So in this case I would have preferred to see the main subject practicing with the IMU-based and with the commercially available head-controlled joystick. Or have the control subject practicing and performing the tasks with the joystick of the Jaco and with the IMU-based joystick. My suggestion is to add a group of healthy subjects, not only one, controlling the robot with both modalities. And then also present the case study with the congenital lack of limbs.

We thank the reviewer for the comment. We want to emphasize that the purpose of including the control participant was not to compare the IMU interface directly with a head joystick (there were no statistical comparisons being made), but to only provide a baseline reference for the movement times (otherwise the magnitude of the movement times would not be directly interpretable).

An important novel contribution of the current study was to examine the feasibility of the interface in children with movement impairment and show that they can accomplish complex tasks in a reasonable period of time. Therefore, we think that adding a group of healthy adult participants doing both interfaces (which could provide information about the relative usability of the interfaces) is not directly related to the current focus of the manuscript. The reviewer’s point about directly comparing with a commercially available head joystick is well taken and while this is not currently feasible, this line of investigation certainly lies within the scope of our future work.

We have now added this line to clarify the use of the control participant.

It is important to note that the control participant was an adult controlling a manual joystick; therefore, these data are not intended to be a direct comparison with the child using the head-joystick. Rather, given that the tasks we used were complex tasks for which benchmarks are not already available, the data from the control participant provide reference values that help in interpretation of the magnitudes of the change in performance with learning and the final performance level achieved by the child.

Minor concerns:

- It would be nice to have an idea (on average) of how long the free exploration during each

session was.

The following text has been included:

The free exploration lasted between 3 and 5 minutes. The goal of the free exploration was simply to ensure that the interface was working as intended and that the participant was ready to start controlling the robot arm.

- There is no reference in the manuscript to Table 1.

We thank the reviewer for pointing this out. This is now Table 2 and is referenced in the manuscript.

The amount and distribution of practice for both participants is shown in Table 2.

- When the authors report the average movement time, I suggest reporting also the standard

Deviation

We have included standard deviations next to their corresponding means in the text. We have also included the individual data points in the figures.

Reviewer #3: The paper described a case study where a child with congenital absence of all four limbs controlled a robotic arm using custom head movement control. However, the paper had a number of issues.

• The authors did not provide a comprehensive review of the existing work on control interfaces for assistive robotic manipulators. Only brain control was mentioned. JACO arms could be controlled with wheelchair joystick, head control, sip-and-puff, and head array system. In addition, there are many types of custom control interfaces (e.g., voice control and eye gaze control) in existing literature. It was unclear how the proposed approach is more advantageous than existing work.

• The novelty or focus of the study is not clearly stated. From the technical perspective, as mentioned earlier, JACO arm could be controlled with wheelchair joystick, head control, sip-and-puff, and head array system, so it was not clear how the head control system used in this study differed from the JACO’s existing capability.

We thank the reviewer for raising this point. In the introduction, we compare our method in the context of existing ‘head control’ methods (head arrays and head joysticks) as the focus is on developing interfaces with individuals with severe impairments (where the assumption is that that they cannot use a regular manual joystick). Here, one main advantage of our method is to create a wireless ‘continuous’ control interface like the joystick for high DOF tasks (which is more intuitive compared to head arrays which rely on switches).

In the discussion, we have now talked about other interfaces (such as the sip and puff system, eye gaze control, voice commands). One primary advantage of our interface is that it is designed to be flexible and used for high-DOF control with precision. The relative advantages of each system will depend to a great extent on the abilities of the individual (e.g. a more severely impaired individual with limited head movements may benefit from a voice controlled or sip-and-puff system). We have included this text in the Discussion

In addition to the head control methods discussed here, several alternate control interfaces to manual joysticks have been developed for individuals with severe movement impairments. These include sip-and-puff systems, voice control [23], gaze control [24] and tongue control [25]. These interfaces typically involve some tradeoff between (i) the number of control dimensions (e.g., a device that only allows control of 1 or 2 dimensions would require frequent ‘switching’ to control a high DOF robotic arm), (ii) the type of control (e.g., discrete controls are possible using voice commands but are less intuitive and precise relative to continuous control like a joystick), and (iii) the ‘invasiveness’ of the device, both in terms of its physical attributes (e.g., whether it is easily wearable, wireless, etc.) and in terms of how it affects other activities such as communication (e.g., gaze- or voice-based controls may interfere with natural day-to-day behavior). Ultimately, the choice of the interface will depend both on the existing movement abilities of the individual and the number of degrees of freedom to be controlled.

From the clinical perspective, only one subject participated in the study and the training/evaluation protocol was somewhat unstructured, and thus cannot be generalized.

This is a highly unique population (child with no movement of the limbs)- so the goal was to provide proof of concept that the head IMU interfaces are feasible for complex movements in children. We have added this as a limitation of the study.

Although conducted as a case study, which places limits on generalizability, our results add to these prior findings by demonstrating that (i) these interfaces are well-suited for children, and (ii) the improvement in performance over multiple practice sessions is substantial (up to 40-50% reduction in movement times) and comparable to manual joystick performance.

In addition, only task completion time was reported, and user perceived usability was not mentioned. Given the nature of such intervention, it would be helpful to know the user feedback on the control interface and training procedure.

The reviewer makes a good point but we do not have any standard measures of perceived usability or satisfaction. However, the fact that the participant continued this task enthusiastically for over 3 months is an indicator that he was satisfied with the interface.

• The performance contrast between the case subject and control subject is not well justified, as the performance could be affected by not only the input devices (head vs hand), but also their personal characteristics including but not limited to physical limitations. Thus, it is unclear how such information would be clinically or practically meaningful.

As in our response to reviewer 2, the performance of the control subject was only to provide a baseline reference for the movement times (and not to directly compare them). We have now added this line to clarify the use of the control participant.

Given that the control participant was an adult controlling a manual joystick, these values are not intended to be a direct comparison with the child using the head-joystick. Rather, given that the tasks we used were complex tasks for which benchmarks are not already available, the data from the control participant provide reference values that help interpretation of the magnitudes of the change in performance over learning and the final performance level achieved by the child.

• The limitation of the control interface was not stated. How does head movement control affect visual feedback a user would need for accurately controlling the arm motion? How generalizable the approach is when used in real-life situations (e.g., wheelchair-mounted arm)?

Both tasks required visual feedback since they required precision to hold the cup or stack the blocks. The head movements performed were quite small; because head movements directly controlled the velocity of the robot, big movements of the head were not needed. Additionally, the phases that required large movements of the robot were those where visual feedback is not as critical. So, at least to a first approximation, these movements did not seem to affect the use of visual feedback to control the movement of the robot arm. Even when the arm is mounted on a wheelchair, it is unlikely that the user would perform wheelchair navigation and robot arm control simultaneously; therefore, we do not think that the head movements will pose a problem in real-world situations.

We have referred to this in the Discussion

It is also worth noting that, despite the need for head movements to precisely control the end effector, the child was able to successfully perform these tasks, indicating that the small head movements performed did not interfere greatly with the use of visual feedback.

We refer to the limitations of the current approach in the penultimate paragraph in the Discussion. (Burden of learning being completely on the user, and the exclusive use of head signals)

In terms of further improvements to our design, we wish to highlight two issues. First, a limitation of our approach is that the ‘burden of learning’ is all on the user. This may be especially challenging for children, who show deficits relative to adults in learning such interfaces [21], [26]. One way to improve this is to use either an adaptive interface that adjusts to the user [27], [28], or a shared control framework so that the autonomy of control can be shared between the human and the machine [29]. Second, for the sake of simplicity, we relied only on head movements (i.e. kinematics) to control the device. However, in the control of other neuroprosthetics, electromyographic signals from different muscles are often used to augment the movement repertoire by providing distinct control signals for the control of the external device [30]–[32]. Therefore, a hybrid combination of IMU signals along with electromyography may further facilitate efficient control of high-DOF devices [33].

We have also added this line to highlight the goal of responding in real-life situations

Addressing these limitations could increase the potential of this approach to deal with real-life situations which require both speed and accuracy.

Attachment

Submitted filename: PLos ONE Review Response.docx

Decision Letter 1

Imre Cikajlo

29 Apr 2020

PONE-D-19-31845R1

Controlling a robotic arm for functional tasks using a wireless head-joystick: A case study of a child with congenital absence of upper and lower limbs

PLOS ONE

Dear Mr. Aspelund,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Jun 13 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Imre Cikajlo, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (if provided):

Please carefully examine the manuscript with reviewers' comments in mind. Thank you.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: I want to thank the authors for the detailed replies to all my concerns. Everything is good with the exception of the main concern. I am still not fully convinced about the lack of a control group. I understand that comparing the IMU-based interface with the commercially available interface is out of the scope of this specific paper, and I am okay with that. As the authors said themselves, “the purpose of including the control participant was to only provide a baseline reference for the movement times”, but with n=1 the baseline values might not be true but instead a “false positive”. My suggestion is to include data of a few healthy subjects.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 5;15(8):e0226052. doi: 10.1371/journal.pone.0226052.r004

Author response to Decision Letter 1


2 Jun 2020

Response to Editors and Reviewers

We thank the Editor and the reviewers for their insightful comments. We have addressed these concerns with changes in the manuscript as seen below. We think these changes have improved the manuscript and hope that the revised version is suitable for publication.

We have provided a point-by-point rebuttal to each comment below. For the sake of clarity, we have color coded the text as follows:

Editors and Reviewer comments in BLACK

Authors’ response in BLUE

Corresponding changes in manuscript in GREY

-------

Journal Requirements:

N/A

Reviewer's Responses to Questions

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: I want to thank the authors for the detailed replies to all my concerns. Everything is good with the exception of the main concern. I am still not fully convinced about the lack of a control group. I understand that comparing the IMU-based interface with the commercially available interface is out of the scope of this specific paper, and I am okay with that. As the authors said themselves, “the purpose of including the control participant was to only provide a baseline reference for the movement times”, but with n=1 the baseline values might not be true but instead a “false positive”. My suggestion is to include data of a few healthy subjects.

We thank the reviewer for the comment. We understand the reviewer’s concerns regarding the fact that we had only a single individual as control (although we had multiple measurements over multiple visits on this individual, and we provide an indication of not only the mean but the full range of values seen during practice). Also, anecdotally, we wish to note that the best times for the stacking task of the control participant and the main participant were within 25% of the best time of what can be considered an expert user’s time (the expert user was the researcher who developed the system and had practiced on it extensively during testing) – main best: 446 s, control best: 376 s, expert best: ~300 s. Even though these data are anecdotal, we think they give us confidence that the data we report are not far off the mark.

Ultimately, given that this is a case study, we think this result can only be treated as a proof of concept and is not intended to be generalizable – so issues such as false positives or negatives are not applicable. Moreover, given the current situation with COVID-19, we believe that we will not be able to collect these data for several months, and they would not fundamentally alter any of the conclusions in the paper.

We have now explicitly addressed the limitation that our results require further study before they can be applied generally. We hope that this narrowing of the scope of our claims is sufficient.

"Our results add to these prior findings by demonstrating that (i) these interfaces are well-suited for children, and (ii) the improvement in performance over multiple practice sessions is substantial (up to 40-50% reduction in movement times). The child’s performance for the tasks was found to be comparable to that of an adult control participant using a manual joystick. However, given that we only had data from a single child and a single adult, additional studies are needed for assessing the generality of these findings."

Attachment

Submitted filename: PLos ONE Response to Reviewers 2.docx

Decision Letter 2

Imre Cikajlo

30 Jun 2020

Controlling a robotic arm for functional tasks using a wireless head-joystick: A case study of a child with congenital absence of upper and lower limbs

PONE-D-19-31845R2

Dear Dr. Aspelund,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Imre Cikajlo, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Imre Cikajlo

13 Jul 2020

PONE-D-19-31845R2

Controlling a robotic arm for functional tasks using a wireless head-joystick: A case study of a child with congenital absence of upper and lower limbs

Dear Dr. Aspelund:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Imre Cikajlo

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Video. Example drinking task for main participant.

    In this trial, the main participant conducts a complete drinking task. The robotic arm is initialized to the starting position after which the participant commands it to move towards the cup, grasp the cup, and then move and orient the cup such that he is able to drink from the straw.

    (MP4)

    S2 Video. Example stacking task for main participant.

    In this trial, the main participant conducts a complete stacking task. The robotic arm is initialized to the starting position after which the participant commands it to approach, grasp, orient, and place each block on the preceding block or on the table in the case of the first block.

    (MP4)

    S1 Data. Data from participant trials as used for analysis.

    Included are the movement times and dimensionless jerk values for each of the main and control participants’ drinking and stacking trials.

    (XLSX)

    Attachment

    Submitted filename: PLos ONE Review Response.docx

    Attachment

    Submitted filename: PLos ONE Response to Reviewers 2.docx

    Data Availability Statement

    All data are contained in the supporting information: Base Data.xlsx.

