iScience. 2020 Nov 10;23(12):101732. doi: 10.1016/j.isci.2020.101732

Individuals Prioritize the Reach Straightness and Hand Jerk of a Shared Avatar over Their Own

Takayoshi Hagiwara, Gowrishankar Ganesh, Maki Sugimoto, Masahiko Inami, Michiteru Kitazaki
PMCID: PMC7756142  PMID: 33376966

Summary

Cyber space enables us to “share” bodies whose movements are a consequence of movements by several individuals. But whether and how our motor behavior is affected during body sharing remains unclear. Here we examined this issue in arm reaching performed by a shared avatar, whose movement was generated by averaging the movements of two participants. We observed that participants exhibited shorter reaction times with a shared avatar than alone. Moreover, the reach trajectory of the shared avatar was straighter than that of either participant and correlated with their subjective embodiment of the avatar. Finally, the jerk of the avatar's hand was lower than that of either participant's own hand, both when they reached alone and when they reached in the shared body. Movement straightness and hand jerk are well-known characteristics of human reach behavior, and our results suggest that during body sharing, humans prioritize these movement characteristics of the shared body over their own.

Subject Areas: Behavioral Neuroscience, Cognitive Neuroscience

Graphical Abstract


Highlights

  • We embodied two human participants within a shared avatar in virtual reality

  • Movements of the shared avatar were the average of the participants' movements

  • Avatar's hand movements were straighter and less jerky than the participants'

  • Humans prioritize movement of a shared body over their own



Introduction

The use of cyberspace has expanded substantially through the COVID-19 crisis. Cyberspace enables us to work together from distant places and can even enable us to interact with remote environments and individuals by “embodying” virtual avatars, as well as physical avatars such as robots. Such interactions using avatars are seen as a major future mode of communication between people. But for these to be possible, we need to understand the limits of avatar embodiment and how these embodiments affect human users in terms of behaviors and emotions.

A plethora of studies have shown that, given the right sensory and/or motor stimulation, humans can experience embodiment of virtual and physical objects other than their own body. Embodiment has been shown not only toward virtual bodies that are visually similar to one's own body (Gonzalez-Franco et al., 2010) but also toward bodies or objects that differ from one's own (Aymerich-Franch and Ganesh, 2016) in terms of size (Banakou et al., 2013; Tajadura-Jiménez et al., 2017; Falconer et al., 2014; Falconer et al., 2016), skin color (Peck et al., 2013), and arm length (Kilteni et al., 2012), as well as when the body is that of a robot (Aymerich-Franch et al., 2015; Aymerich-Franch, Petit, Ganesh and Kheddar, 2016a, 2016b; Aymerich-Franch et al., 2017). Furthermore, ownership can also be induced when different body parts are re-associated (Kondo et al., 2020) or even when the body is partially invisible (Kondo et al., 2018). Crucially, while the appearances of these illusory bodies differed from the participants' own bodies, only a few studies (e.g., Burin et al., 2019; Kokkinara et al., 2016) have investigated scenarios in which the movements of the illusory body did not correspond to the movement of the actual body. Even in these studies, the control of the body was still determined by a sole participant, and in each of them participants perceived sole ownership of the body.

On the other hand, few studies have examined the case in which several people share a body (or avatar), or a limb, where the movement of an artificial body (or limb) is determined by more than one individual. Sharing thus enables multiple individuals to collaborate and contribute to the task performed by an avatar, potentially decreasing the workload for each user and improving task performance. Sharing of body parts has been shown to lead to changes in the perceived ownership and agency (Fribourg et al., 2020; Hagiwara et al., 2019). However, it remains unclear whether and how shared embodiment affects an individual's motor behavior. Here, we examined this issue using an arm reaching task in virtual reality, performed by a shared avatar whose arm movement was the average of the arm movements of the two participants who shared it.

Motor neuroscience studies have shown that individual human movement trajectories are a result of sensorimotor planning and control (Wolpert et al., 2011; Franklin and Wolpert, 2011; Ganesh and Burdet, 2013) as well as memory-related processes (Diedrichsen et al., 2010; Ganesh and Burdet, 2013; Ganesh et al., 2010). Considering these results, in this study we investigated what individuals prioritize when they share a virtual body: specifically, whether they still optimize their own movements or whether they optimize the movements of the avatar, and, consequently, how the sharing affects the avatar control. We chose to investigate this issue during arm reaching, which is a well-established empirical task for investigating motor behavior. Previous studies have shown that point-to-point arm reaching is roughly straight and tends to minimize the jerk measured at the hand (Flash and Hogan, 1985). Hence, here we compared the reach movement straightness and hand jerk exhibited by individuals when they were embodied in individual avatars (Solo-body condition) with their hand movements when they were embodied in the same avatar (Shared-body condition), with the avatar movements representing their average hand movement (Figure 1, please also see Video S1). To anticipate our results, we observed that individuals' movements change in the Shared-body condition (relative to the Solo-body condition) to minimize the trajectory length and jerk of the avatar hand, rather than those of their own hands.

Figure 1. Shared Avatar Setup

(A) The avatar's arm movement was generated by averaging the arm movements performed by the two participants in the Shared-body condition. Both participants observed the same shared avatar through their HMDs.

(B) The two participants wore motion capture suits and HMDs and performed the experiment.

(C) Cube reaching tasks were performed for 5 min, followed by ratings of the sense of agency and body ownership. The Shared-body and Solo-body conditions were repeated four times each in a counterbalanced order in a within-participant design.

Document S1. Transparent Methods and Figures S1–S3

Results

Perceived Sense of Agency and Ownership to the Shared Body

Our experiment was performed in a virtual reality (VR) environment. Participants performed the experiment in dyads, under two conditions. In each condition they were presented with a virtual avatar that replaced their own body in VR and were required to make reaching movements with their right hand toward target cubes presented at various locations in front of both of them (see Transparent Methods for details). Both participants were presented with the same target at any time. In the Solo-body condition, the participant's arm movement was replicated on the avatar, whereas in the Shared-body condition, the avatar's arm movement was the average of the arm movements performed by the two participants in the dyad (see Transparent Methods for details).
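The shared-avatar control described above can be sketched as a per-sample average of the two participants' time-aligned hand positions. This is only a minimal illustration: the actual system averaged full arm movements (see Transparent Methods), and the function name and data layout here are hypothetical.

```python
import numpy as np

def shared_avatar_trajectory(hand_a, hand_b):
    """Per-sample average of two hand trajectories (hypothetical sketch).

    hand_a, hand_b: (T, 3) arrays of x, y, z positions sampled at the
    same times. Returns the (T, 3) trajectory shown on the shared avatar.
    """
    hand_a = np.asarray(hand_a, dtype=float)
    hand_b = np.asarray(hand_b, dtype=float)
    return 0.5 * (hand_a + hand_b)
```

For example, if one participant's reach curves left of the target while the other's curves right, the averaged avatar hand follows the intermediate path between them.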

After each condition, the participants were asked to rate the sense of agency (0%–100%: 0 is “I did not control the avatar hand at all”; 100 is “I fully controlled the avatar hand”) and the sense of body ownership (questionnaire: “I felt as if the avatar's body I saw was my body”; −3 to +3, 7-point Likert scale; −3 is “I did not feel it at all”; +3 is “I felt it extremely strongly”).

A Wilcoxon signed-rank test was conducted on the rated sense of agency to compare the Solo-body and Shared-body conditions because the data significantly deviated from normality (Shapiro-Wilk test, W = 0.828, p = .002). We found that the sense of agency was significantly higher in the Solo-body condition than in the Shared-body condition (W(19) = 190.000, p < .001, d = 1.000, Figure 2A). We also conducted Wilcoxon signed-rank tests to examine whether the rated sense of agency differed from the actual contribution weights (100% in the Solo-body condition, 50% in the Shared-body condition). In the Solo-body condition, the sense of agency was significantly lower than 100% (W(19) = 0.000, p < .001, d = −1.000). Interestingly, the sense of agency was significantly higher than 50% in the Shared-body condition (W(19) = 153.000, p = .021, d = 0.457), even though each participant contributed equally to the avatar movements.

Figure 2. Sense of Agency and Body Ownership

The perceived sense of agency was higher than 50%, and ratings were higher in the Solo-body condition than in the Shared-body condition for both the sense of agency and the sense of body ownership.

(A) Rating of the sense of agency. A Wilcoxon signed-rank test showed that the sense of agency was significantly higher in the Solo-body condition than in the Shared-body condition (∗∗∗p < .001). In the Solo-body condition, the sense of agency was significantly lower than 100%, and in the Shared-body condition, it was significantly higher than 50%.

(B) Likert scale data of the sense of body ownership. A paired t test showed that the sense of body ownership was significantly higher in the Solo-body condition than in the Shared-body condition (∗∗∗p < .001). Bar plots in the figure show means ± SE.

Because the rated sense of body ownership did not deviate from normality (Shapiro-Wilk test, W = 0.916, p = .083), a paired t test was conducted between the Solo-body and Shared-body conditions. As would be expected, it indicated that the sense of body ownership was significantly higher in the Solo-body condition than in the Shared-body condition (t(19) = 5.643, p < .001, d = 1.45, Figure 2B).

Avatar Hand Movements Were Straighter Than That of Participants

We evaluated the straightness of the hand reach by measuring the hand reach deviation (D), which was defined as the difference between the length of the hand movement trajectory and the straight line joining the start and end points of the hand during a reach (see Transparent Methods and Figure S1 for details). We evaluated the hand reach deviation by the participants in the Solo-body condition (Dsolohuman), the actual hand reach deviation by the same participants in the Shared-body condition (Dsharedhuman), and the hand reach deviation by their shared avatar in the Shared-body condition (Dsharedavatar).
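Following this definition, D can be computed from a sampled trajectory as the arc length minus the start-to-end straight-line distance. The sketch below is a plausible implementation of the stated definition, not the authors' actual analysis code; the function name is hypothetical.

```python
import numpy as np

def reach_deviation(traj):
    """Hand reach deviation D: trajectory arc length minus the length
    of the straight line joining the start and end points.

    traj: (T, 3) array of hand positions. Returns D >= 0 in the same
    units as the positions (zero for a perfectly straight reach).
    """
    traj = np.asarray(traj, dtype=float)
    # Sum of distances between consecutive samples = arc length.
    segment_lengths = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    path_length = segment_lengths.sum()
    # Length of the direct start-to-end line.
    direct = np.linalg.norm(traj[-1] - traj[0])
    return path_length - direct
```

A perfectly straight reach yields D = 0; any curvature inflates the arc length and hence D.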

A one-way repeated measures ANOVA across the three reach deviations (Dsolohuman, Dsharedhuman, Dsharedavatar) was conducted because the data did not deviate from normality (Shapiro-Wilk test, Dsolohuman: W = 0.932, p = .468; Dsharedhuman: W = 0.928, p = .433; Dsharedavatar: W = 0.849, p = .056) or sphericity (Mauchly's sphericity test, p = .319). We found a significant main effect (F(2, 18) = 19.278, p < .001, ηp2 = 0.682, Figure 3A). Holm's post hoc test indicated that Dsolohuman was significantly larger than Dsharedhuman (t(9) = 4.150, adj.p = .005, d = 1.312), indicating that, in the Shared-body condition, the participants adopted a straighter path than when they performed the reach alone. Interestingly, Dsharedavatar was smaller than both Dsolohuman (t(9) = 5.054, adj.p = .002, d = 1.598) and Dsharedhuman (t(9) = 2.998, adj.p = .015, d = 0.948), indicating that the avatar attained hand trajectories that were straighter than those of the individual participants when they reached alone, as well as when they reached in the Shared-body condition.

Figure 3. Avatar Hand Movements Were Straighter and Exhibited Less Jerk

(A) Deviation of the participants' hand paths (Solo-body and Shared-body conditions) and the shared avatar's hand path from the direct path. Holm's post hoc tests after a one-way repeated measures ANOVA were conducted (∗p < .05, ∗∗p < .01). Dsolohuman was significantly larger than Dsharedhuman, indicating that in the Shared-body condition the participants adopted a straighter path than when they performed the reach alone. Dsharedavatar was significantly smaller than both Dsolohuman and Dsharedhuman, indicating that the avatar attained hand trajectories that were straighter than those of the individual participants when they reached alone, as well as when they reached in the Shared-body condition.

(B) Data are represented as each participant's Dsharedavatar (horizontal axis) and the rated sense of body ownership in the Shared-body condition (vertical axis). The sense of body ownership in the Shared-body condition positively correlated with the hand reach deviation D in the Shared-body condition (please also see the rank order plot in Figure S2).

(C) Average jerk in reaching. Tukey's post hoc tests with Kenward-Roger degrees of freedom approximation after a one-way repeated measures ANOVA with ART were conducted (∗∗p < .01, ∗∗∗p < .001). Jsharedavatar was smaller than both Jsolohuman and Jsharedhuman. Also, Jsharedhuman was significantly smaller than Jsolohuman. Thus, the shared avatar's movements were smoother than the participants' solo movements.

(D) Data are represented as each participant's difference between Jsharedhuman and Jsolohuman (horizontal axis) and the difference of the rated sense of agency between the Shared-body condition and the Solo-body condition (vertical axis). The change in the sense of agency between the Shared-body and Solo-body conditions inversely correlated with the change in hand jerk between the conditions (please also see the rank order plot in Figure S3). Bar plots in (A) and (C) show means ± SE.

Avatar Hand Jerk Was Less Than That of Individual Participants

Next, we calculated the average hand jerk in reaching in the same three cases (Jsolohuman, Jsharedhuman, Jsharedavatar; Figure 3C). A one-way repeated measures ANOVA with the ART (aligned rank transform) procedure (Wobbrock et al., 2011) was conducted because the data significantly deviated from normality (Jsharedavatar: W = 0.823, p = .028). We found a significant main effect (F(2, 18) = 37.238, p < .001, ηp2 = 0.805) across the three measures. Tukey's post hoc tests with Kenward-Roger degrees of freedom approximation (Kenward and Roger, 1997) indicated that Jsharedavatar was significantly smaller than both Jsolohuman (t(18) = 8.629, adj.p < .001, d = −3.86) and Jsharedhuman (t(18) = 4.438, adj.p = .001, d = −1.98). Also, Jsharedhuman was smaller than Jsolohuman (t(18) = 4.191, adj.p = .002, d = −1.87). These results show that the shared avatar's movements were smoother than when the participants reached solo. Furthermore, in the Shared-body condition, the participants may have achieved smoother movements with the avatar than with their own hands.
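Jerk is the third time derivative of hand position. A simple finite-difference estimate of the mean jerk magnitude from sampled positions can be sketched as follows; the authors' actual pipeline (filtering, normalization) may differ, and the function name is hypothetical.

```python
import numpy as np

def mean_hand_jerk(traj, dt):
    """Mean magnitude of hand jerk (third time derivative of position),
    estimated with finite differences from positions sampled every dt.

    traj: (T, 3) array of hand positions with T >= 4.
    Returns the mean |jerk| in position-units per second cubed.
    """
    traj = np.asarray(traj, dtype=float)
    vel = np.diff(traj, axis=0) / dt    # velocity between samples
    acc = np.diff(vel, axis=0) / dt     # acceleration
    jerk = np.diff(acc, axis=0) / dt    # jerk
    return np.linalg.norm(jerk, axis=1).mean()
```

A constant-acceleration trajectory has zero jerk by this measure, while a cubic position profile x(t) = t^3 yields a constant jerk of 6.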

Task Performance: Reaction Time of Avatar Was Better, Target Error Was Similar

We quantified the task performance in our experiment by evaluating the task time, reaction time, and target error. We compared the participants' task time in the Solo-body condition (TTsolohuman) and the shared avatars' task time in the Shared-body condition (TTsharedavatar, Figure 4A). The task time was defined as the time between the appearance of the target and the hand's touching the target. A paired t test was conducted on the task time because the data did not deviate from normality (Shapiro-Wilk test, W = 0.950, p = .667). There was no significant difference (t(9) = 0.174, p = .866, d = 0.055). We similarly compared the participants' reaction time in the Solo-body condition (RTsolohuman) and the shared avatars' reaction time in the Shared-body condition (RTsharedavatar, Figure 4B). The reaction time was defined as the time between the appearance of the target and the time at which the hand velocity first exceeded 10% of its maximum. A paired t test was conducted on the reaction time because the data did not deviate from normality (Shapiro-Wilk test, W = 0.942, p = .574). The shared avatars' reaction time was shorter than the participants' reaction time (t(9) = 2.357, p = .043, d = 0.745).
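The reaction-time definition above (time from target onset until hand speed first exceeds 10% of its maximum) can be sketched directly from sampled positions. This is a plausible reading of the stated definition, not the authors' code; the function name is hypothetical.

```python
import numpy as np

def reaction_time(traj, dt):
    """Reaction time: time from target onset (t = 0, the first sample)
    until hand speed first exceeds 10% of its maximum during the reach.

    traj: (T, 3) hand positions sampled every dt seconds.
    """
    traj = np.asarray(traj, dtype=float)
    # Speed between consecutive samples.
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1) / dt
    threshold = 0.1 * speed.max()
    # Index of the first sample where speed crosses the threshold.
    onset_index = int(np.argmax(speed > threshold))
    return onset_index * dt
```

For instance, a hand that stays still for two samples and then moves yields a reaction time of two sample intervals.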

Figure 4. Performance by Avatar Changed Compared with Individual Participants

(A) Participants' task time in the Solo-body condition (TTsolohuman) and the shared avatars' task time in the Shared-body condition (TTsharedavatar). There was no significant difference.

(B) A paired t test showed that the shared avatar's reaction time in the Shared-body condition (RTsharedavatar) was significantly faster than the participants' reaction time in the Solo-body condition (RTsolohuman) (∗p < .05).

(C) Tukey's post hoc tests with Kenward-Roger degrees of freedom approximation after a one-way repeated measures ANOVA with ART showed that the target errors of hand reaching TEsharedavatar and TEsolohuman were significantly smaller than TEsharedhuman (∗∗∗p < .001). TEsharedavatar was slightly smaller than TEsolohuman, but the difference did not reach significance. Bar plots in the figure show means ± SE.

Target error was defined as the difference between the endpoint of the participant's reach and the center of the target object and was again calculated for the three cases as before (TEsolohuman, TEsharedhuman, and TEsharedavatar; Figure 4C). A one-way repeated measures ANOVA with the ART procedure was conducted because the data significantly deviated from normality (TEsharedhuman: W = 0.774, p = .007). We found a significant main effect (F(2, 18) = 117.839, p < .001, ηp2 = 0.929) in the target error across the cases. Tukey's post hoc tests with Kenward-Roger degrees of freedom approximation indicated that TEsharedavatar was significantly smaller than TEsharedhuman (t(18) = 14.017, adj.p < .001, d = −6.27). TEsolohuman was significantly smaller than TEsharedhuman (t(18) = 12.430, adj.p < .001, d = 5.56). TEsharedavatar was slightly smaller than TEsolohuman, but the difference did not reach significance (t(18) = 1.587, adj.p = .277, d = −0.71). These results show that the participants' reach performance improved when sharing the avatar body in terms of the reaction time, while remaining the same in terms of the target error. In the Shared-body condition, the avatar task performance improved even when individual participant errors increased.
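Per the definition above, the target error reduces to the Euclidean distance between the final sampled hand position and the target center. A minimal sketch (hypothetical function name, not the authors' code):

```python
import numpy as np

def target_error(traj, target_center):
    """Target error: distance between the reach endpoint (last sampled
    hand position) and the center of the target cube."""
    traj = np.asarray(traj, dtype=float)
    target_center = np.asarray(target_center, dtype=float)
    return float(np.linalg.norm(traj[-1] - target_center))
```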

Interpersonal Distance Changes: An Argument against Averaging Effects

In our experiment we observed that the participants attained straighter (Figure 3A) and smoother (Figure 3C) hand trajectories with the avatar than they could with their own hands under the Solo-body condition. They could also achieve target errors with the avatar similar to those in the Solo-body condition (Figure 4C). It is, however, important to note that participant movements suffer from sensory as well as motor noise (Harris and Wolpert, 1998), and the avatar movement is generated by averaging the hand movements of the participants. Therefore, there is the possibility that, in fact, the participants behave the same in the Solo-body and Shared-body conditions and that the improvements in straightness, jerk, and target errors we observed are just a consequence of the averaging of the individual participant movements.

To ensure that the participant behaviors did change between the Shared-body and Solo-body conditions, we evaluated the inter-participant distance (IP) between the participants' hand trajectories during reaching in the Shared-body and Solo-body conditions (Figure 5A). A Wilcoxon signed-rank test was conducted because the data significantly deviated from normality (Shapiro-Wilk test, W = 0.826, p = .030). We found that IPsolohuman was significantly lower than IPsharedhuman (W(9) = 0.000, p = .002, d = 1.000). This result indicates that, although participants took similar hand trajectories in the Solo-body condition (even though they embodied separate avatars), their hand trajectories deviated away from one another in the Shared-body condition such that the inter-participant distance was higher. We also compared the evolution of IPsolohuman and IPsharedhuman over three phases of the reach (Figure 5A). These three phases were obtained by dividing the reach, between the appearance of the target and the hand's touching the target, into three equal parts in time. Because the data did not deviate from normality (Shapiro-Wilk normality test, phase 1: W = 0.990, p = .996; phase 2: W = 0.921, p = .365; phase 3: W = 0.851, p = .060), we conducted paired t tests. IPsolohuman and IPsharedhuman were similar during the first phase of the reach (t(9) = 1.064, p = .315, d = 0.336) but showed considerable differences in the second and third phases of the movement, near the targets (t(9) = 3.155, p = .012, d = 0.998; t(9) = 7.037, p < .001, d = 2.225).
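The inter-participant distance and its phase breakdown can be sketched as a sample-wise distance between the two time-aligned hand trajectories, split into equal thirds in time. This is a plausible implementation of the stated analysis, with a hypothetical function name.

```python
import numpy as np

def interparticipant_distance_by_phase(hand_a, hand_b, n_phases=3):
    """Mean distance between two participants' hands, overall and per
    phase, with the reach split into n_phases equal parts in time.

    hand_a, hand_b: (T, 3) time-aligned hand trajectories.
    Returns (overall_mean, list of per-phase means).
    """
    hand_a = np.asarray(hand_a, dtype=float)
    hand_b = np.asarray(hand_b, dtype=float)
    # Sample-wise Euclidean distance between the two hands.
    dist = np.linalg.norm(hand_a - hand_b, axis=1)
    # Split the reach into equal-duration phases and average each.
    phases = [chunk.mean() for chunk in np.array_split(dist, n_phases)]
    return dist.mean(), phases
```

On such data, an increasing per-phase profile would reproduce the reported pattern: similar hand positions early in the reach and growing divergence near the target.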

Figure 5. Interpersonal Distance Was Larger in the Shared-Body Condition Than in the Solo-Body Condition

(A) Distance between the hands of the two participants (top row: full phase; bottom row: first, second, and third phases). IPsolohuman was significantly lower than IPsharedhuman, indicating that, although participants took similar hand trajectories in the Solo-body condition, their hand trajectories deviated away from one another in the Shared-body condition such that the inter-participant distance was higher. (Full phase) A Wilcoxon signed-rank test showed that IPsharedhuman was significantly larger than IPsolohuman (∗∗p < .01). (First phase) A paired t test did not show a significant difference. (Second and third phases) Paired t tests showed that IPsharedhuman was significantly larger than IPsolohuman (∗p < .05, ∗∗∗p < .001).

(B) There was a significant main effect between the IPs in the Solo-body condition and the Shared-body condition, but no main effect of sessions (hence trials) and no interaction (two-way repeated measures ANOVA with the ART procedure). Bar plots in the figure show means ± SE.

We also compared the modifications in IP over trials (Figure 5B). Participants worked in the Solo-body and Shared-body conditions across eight sessions (each condition presented four times). We performed a two-way repeated measures ANOVA across these sessions with the ART procedure because the data significantly deviated from normality (Shapiro-Wilk test, session 1 of IPsharedhuman: W = 0.805, p = .016; session 2 of IPsharedhuman: W = 0.833, p = .037). A significant main effect (F(1, 9) = 55.216, p < .001, ηp2 = 0.772) between the IPs in the Solo-body condition and the Shared-body condition was found. There was no main effect of sessions (hence trials) (F(3, 27) = 2.088, p = .111, ηp2 = 0.179) and no interaction (F(3, 27) = 0.578, p = .632, ηp2 = 0.095).

These results clearly show that the participant behaviors changed significantly between the Solo-body and Shared-body conditions.

Overall, the results in Figures 3 and 4 suggest that the increase in straightness (Figure 3A), decrease in jerk (Figure 3C), and similar target error (Figure 4C) of the avatar compared with the individuals in the Solo-body condition cannot be explained by the averaging of trajectories alone. Rather, these results suggest that the participants changed their behavior in the Shared-body condition to optimize these movement variables of the avatar, while ignoring the same variables for their own hands.

Discussion

To evaluate how the control of a shared avatar affects the motor behavior of the controlling participants, we developed an avatar reaching task in virtual reality. We compared the participant hand trajectories in the Solo-body condition, in which each individual controlled an avatar alone, with the Shared-body condition, in which the avatar movement was the average of the movements made by two participants. Movement straightness and jerk are known to be optimized by individuals for their hand movements (Flash and Hogan, 1985). However, interestingly, here we observed that the shared avatar's hand trajectories were straighter (Figure 3A) and displayed less jerk (Figure 3C) than the hand movements of the participants, both alone and while controlling the avatar. The shared avatar also displayed faster reaction times than participants alone (Figure 4B). These results indicate that participants prioritize the optimization of the hand trajectories of the shared avatar over their own individual arm trajectories.

A previous study that examined reaching with a dynamic tool showed that humans optimize the movement of the endpoint of the tool rather than their hand (Dingwell et al., 2002). Although our results look seemingly similar, there are two fundamental differences. Primarily, the previous task involved a single person controlling and working with the tool. In contrast, the current task examined shared control, where two individuals cooperated to achieve a common task with the avatar they controlled. Second, we observed that the reach task performance was significantly improved while using the shared avatar (Figure 4B); this behavior was not observed during the tool learning task. These differences in fact suggest that the behavior in the Shared-body condition is similar to that observed during human interpersonal interactions, where low-impedance interactions have been shown to benefit task performance (Ganesh et al., 2014; Takagi et al., 2017; Takagi et al., 2018). Interpersonal benefits are, however, believed to be enabled by the prediction of the partner's behavior utilizing haptic cues (Takagi et al., 2017). Although in our task there obviously was no haptic interaction, we believe partner prediction to be the reason for the straighter path of the avatar. The avatar movement in our setup was the consequence of movements made by the two dyad participants. This forced the dyad participants to cooperate and develop an avatar trajectory together in order to perform reaches successfully. However, this demands that the individuals predict each other's preferred avatar trajectory. We believe that a straight avatar trajectory was chosen because it maximizes the predictability of one's partner's trajectory choice. Subsequently, the lower jerk and endpoint error of the avatar reaches are probably consequences of the straighter trajectory. Further studies are, however, required to prove this hypothesis.

Body ownership decreases when there is a mismatch between visual and motor information (Sanchez-Vives et al., 2010). This explains why, in the Shared-body condition, where the avatar movement is a consequence of movements by two partners, the sense of body ownership is lower than in the Solo-body condition. Interestingly, we found that the reported level of ownership in the Shared-body condition positively correlated with the hand reach deviation D in the Shared-body condition (Spearman's rank-order correlation ρ = 0.4795, p = .032; Figure 3B), and the change in the reported level of agency between the Shared-body and Solo-body conditions inversely correlated with the change in hand jerk between the conditions (Spearman's rank-order correlation ρ = −0.5109, p = .023; Figure 3D). For these results, we adopted Spearman's rank-order correlation because the data significantly deviated from normality (Shapiro-Wilk test, Dsharedavatar: W = 0.828, p = .002; sense of agency (Shared − Solo): W = 0.828, p = .002). Although a previous study has shown that the observation of an embodied limb can implicitly disturb the movements of a person (Burin et al., 2019), our results are probably the first to suggest that participants optimize the movements of the embodied limb rather than their own, which is probably also a cause of the observations in Burin et al. (2019). Pyasik, Furlanetto, and Pia (2019) propose that merely seeing one's own body moving is enough to activate the neurocognitive processes subserving action preparation, based on findings that body ownership affects the sense of agency (e.g., Banakou and Slater, 2014; Kokkinara et al., 2016; Burin et al., 2019). Our results show that low-level movement control can be systematically affected by the level of embodiment felt toward a limb. Accordingly, it might be supposed that seeing the shared body (or the sense of body ownership toward the shared body) can directly affect the motor control process as well as the sense of agency. A better understanding of the relation between embodiment and motor control promises new applications of the shared-body concept, as a virtual avatar like in our study, or as a collaborating robot.

Limitations of the Study

Although we find that the hand trajectories of the shared avatar are both straighter and less jerky than the solo hand trajectories, our results cannot clarify the causality: that is, whether the participants' goal is a straighter avatar trajectory, which subsequently results in a less jerky hand trajectory, or, conversely, whether they aim for less jerk, which results in a straighter trajectory. Further studies are required to clarify this issue. In either case, however, the key goal of this work was to show that the participants make the shared avatar's trajectories straighter, and its hand jerk lower, than their own.

Resource Availability

Lead Contact

Further information and requests for resources should be directed to and will be fulfilled by the Lead Contact, Takayoshi Hagiwara (takayoshi.hagiwara.18@gmail.com).

Materials Availability

This study did not generate new unique reagents.

Data and Code Availability

Original data for the figures in the paper are available at Mendeley Data: https://doi.org/10.17632/w9n6s2kcmc.1.

Methods

All methods can be found in the accompanying Transparent Methods supplemental file.

Acknowledgments

This research was supported by JST ERATO Grant Number JPMJER1701 (Inami JIZAI Body Project), and JSPS KAKENHI Grant Number JP20H04489.

Author Contributions

T.H. and M.K. contributed to the experimental design. T.H. performed the experiment and collected the data. T.H. analyzed the data with G.G. and M.K. T.H., G.G., and M.K. wrote the manuscript. All authors read and approved the final manuscript.

Declaration of Interests

The authors declare no competing interests.

Published: November 10, 2020

Footnotes

Supplemental Information can be found online at https://doi.org/10.1016/j.isci.2020.101732.

Contributor Information

Takayoshi Hagiwara, Email: takayoshi.hagiwara.18@gmail.com.

Gowrishankar Ganesh, Email: Ganesh.Gowrishankar@lirmm.fr.

Michiteru Kitazaki, Email: mich@tut.jp.

Supplemental Information

Video S1. Related to Figure 1, Methods of the Experiment

The video explains the methods including apparatus, stimuli, and procedure.


References

  1. Aymerich-Franch L., Ganesh G. The role of functionality in the body model for self-attribution. Neurosci. Res. 2016;104:31–37. doi: 10.1016/j.neures.2015.11.001.
  2. Aymerich-Franch L., Petit D., Ganesh G., Kheddar A. Embodiment of a humanoid robot is preserved during partial and delayed control. IEEE International Workshop on Advanced Robotics and its Social Impacts (ARSO); 2015; pp. 1–5.
  3. Aymerich-Franch L., Petit D., Ganesh G., Kheddar A. The second me: seeing the real body during humanoid robot embodiment produces an illusion of bi-location. Conscious. Cogn. 2016;46:99–109. doi: 10.1016/j.concog.2016.09.017.
  4. Aymerich-Franch L., Petit D., Kheddar A., Ganesh G. Forward modelling the rubber hand: illusion of ownership modifies motor-sensory predictions by the brain. R. Soc. Open Sci. 2016;3:160407. doi: 10.1098/rsos.160407.
  5. Aymerich-Franch L., Petit D., Ganesh G., Kheddar A. Object touch by a humanoid robot avatar induces haptic sensation in the real hand. J. Comput. Mediat. Commun. 2017;22:215–230.
  6. Banakou D., Groten R., Slater M. Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes. Proc. Natl. Acad. Sci. U S A. 2013;110:12846–12851. doi: 10.1073/pnas.1306779110.
  7. Banakou D., Slater M. Body ownership causes illusory self-attribution of speaking and influences subsequent real speaking. Proc. Natl. Acad. Sci. U S A. 2014;111:17678–17683. doi: 10.1073/pnas.1414936111.
  8. Burin D., Kilteni K., Rabuffetti M., Slater M., Pia L. Body ownership increases the interference between observed and executed movements. PLoS One. 2019;14:e0209899. doi: 10.1371/journal.pone.0209899.
  9. Diedrichsen J., White O., Newman D., Lally N. Use-dependent and error-based learning of motor behaviors. J. Neurosci. 2010;30:5159–5166. doi: 10.1523/JNEUROSCI.5406-09.2010.
  10. Dingwell J.B., Mah C.D., Mussa-Ivaldi F.A. Manipulating objects with internal degrees of freedom: evidence for model-based control. J. Neurophysiol. 2002;88:222–235. doi: 10.1152/jn.2002.88.1.222.
  11. Falconer C.J., Slater M., Rovira A., King J.A., Gilbert P., Antley A., Brewin C.R. Embodying compassion: a virtual reality paradigm for overcoming excessive self-criticism. PLoS One. 2014;9:e111933. doi: 10.1371/journal.pone.0111933.
  12. Falconer C.J., Rovira A., King J.A., Gilbert P., Antley A., Fearon P., Ralph N., Slater M., Brewin C.R. Embodying self-compassion within virtual reality and its effects on patients with depression. BJPsych Open. 2016;2:74–80. doi: 10.1192/bjpo.bp.115.002147.
  13. Flash T., Hogan N. The coordination of arm movements: an experimentally confirmed mathematical model. J. Neurosci. 1985;5:1688–1703. doi: 10.1523/JNEUROSCI.05-07-01688.1985.
  14. Franklin D.W., Wolpert D.M. Computational mechanisms of sensorimotor control. Neuron. 2011;72:425–442. doi: 10.1016/j.neuron.2011.10.006.
  15. Fribourg R., Ogawa N., Hoyet L., Argelaguet F., Narumi T., Hirose M., Lécuyer A. Virtual co-embodiment: evaluation of the sense of agency while sharing the control of a virtual body among two individuals. IEEE Trans. Vis. Comput. Graph. 2020. doi: 10.1109/TVCG.2020.2999197.
  16. Ganesh G., Burdet E. Motor planning explains human behavior in tasks with multiple solutions. Rob. Auton. Syst. 2013;61:362–368.
  17. Ganesh G., Haruno M., Kawato M., Burdet E. Motor memory and local minimization of error and effort, not global optimization, determine motor behavior. J. Neurophysiol. 2010;104:382–390. doi: 10.1152/jn.01058.2009.
  18. Ganesh G., Takagi A., Osu R., Yoshioka T., Kawato M., Burdet E. Two is better than one: physical interactions improve motor performance in humans. Sci. Rep. 2014;4:3824. doi: 10.1038/srep03824.
  19. Gonzalez-Franco M., Perez-Marcos D., Spanlang B., Slater M. The contribution of real-time mirror reflections of motor actions on virtual body ownership in an immersive virtual environment. Proc. IEEE Virtual Reality. 2010;2010:111–114.
  20. Hagiwara T., Sugimoto M., Inami M., Kitazaki M. Shared body by action integration of two persons: body ownership, sense of agency and task performance. IEEE Conference on Virtual Reality and 3D User Interfaces (VR); 2019; pp. 954–955.
  21. Harris C.M., Wolpert D.M. Signal-dependent noise determines motor planning. Nature. 1998;394:780–784. doi: 10.1038/29528.
  22. Kenward M.G., Roger J.H. Small sample inference for fixed effects from restricted maximum likelihood. Biometrics. 1997;53:983–997.
  23. Kilteni K., Normand J.-M., Sanchez-Vives M.V., Slater M. Extending body space in immersive virtual reality: a very long arm illusion. PLoS One. 2012;7:e40867. doi: 10.1371/journal.pone.0040867.
  24. Kokkinara E., Kilteni K., Blom K., Slater M. First person perspective of seated participants over a walking virtual body leads to illusory agency over the walking. Sci. Rep. 2016;6:28879. doi: 10.1038/srep28879.
  25. Kondo R., Sugimoto M., Minamizawa K., Hoshi T., Inami M., Kitazaki M. Illusory body ownership of an invisible body interpolated between virtual hands and feet via visual-motor synchronicity. Sci. Rep. 2018;8:7541. doi: 10.1038/s41598-018-25951-2.
  26. Kondo R., Tani Y., Sugimoto M., Minamizawa K., Inami M., Kitazaki M. Re-association of body parts: illusory ownership of a virtual arm associated with the contralateral real finger by visuo-motor synchrony. Front. Robotics AI. 2020;7:26. doi: 10.3389/frobt.2020.00026.
  27. Peck T.C., Seinfeld S., Aglioti S.M., Slater M. Putting yourself in the skin of a black avatar reduces implicit racial bias. Conscious. Cogn. 2013;22:779–787. doi: 10.1016/j.concog.2013.04.016.
  28. Pyasik M., Furlanetto T., Pia L. The role of body-related afferent signals in human sense of agency. J. Exp. Neurosci. 2019;13. doi: 10.1177/1179069519849907.
  29. Sanchez-Vives M.V., Spanlang B., Frisoli A., Bergamasco M., Slater M. Virtual hand illusion induced by visuomotor correlations. PLoS One. 2010;5:e10381. doi: 10.1371/journal.pone.0010381.
  30. Tajadura-Jiménez A., Banakou D., Bianchi-Berthouze N., Slater M. Embodiment in a child-like talking virtual body influences object size perception, self-identification, and subsequent real speaking. Sci. Rep. 2017;7:9637. doi: 10.1038/s41598-017-09497-3.
  31. Takagi A., Ganesh G., Yoshioka T., Kawato M., Burdet E. Physically interacting individuals estimate the partner's goal to enhance their movements. Nat. Hum. Behav. 2017;1:0054.
  32. Takagi A., Usai F., Ganesh G., Sanguineti V., Burdet E. Haptic communication between humans is tuned by the hard or soft mechanics of interaction. PLoS Comput. Biol. 2018;14:e1005971. doi: 10.1371/journal.pcbi.1005971.
  33. Wobbrock J.O., Findlater L., Gergle D., Higgins J.J. The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 2011; pp. 143–146.
  34. Wolpert D.M., Diedrichsen J., Flanagan J.R. Principles of sensorimotor learning. Nat. Rev. Neurosci. 2011;12:739–751. doi: 10.1038/nrn3112.


Supplementary Materials

Document S1. Transparent Methods and Figures S1–S3
mmc1.pdf (204.8KB, pdf)

