PLOS One. 2021 Aug 19;16(8):e0255898. doi: 10.1371/journal.pone.0255898

Motion sickness and sense of presence in a virtual reality environment developed for manual wheelchair users, with three different approaches

Zohreh Salimi 1,¤,*, Martin William Ferguson-Pell 1,*
Editor: Thomas A Stoffregen2
PMCID: PMC8375983  PMID: 34411151

Abstract

Visually Induced Motion Sickness (VIMS) is a bothersome and sometimes unsafe experience, frequently encountered in Virtual Reality (VR) environments. In this study, the effect of up to four training sessions in reducing VIMS in the VR environment to a minimal level was tested and verified through the explicit declarations of all 14 healthy participants recruited for this study. Additionally, the Motion Sickness Assessment Questionnaire (MSAQ) was administered at the end of each training session to measure responses to different aspects of VIMS. Total, gastrointestinal, and central motion sickness decreased significantly by the last training session compared to the first. After acclimatizing to motion sickness, participants' sense of presence and level of motion sickness in the VR environment were assessed while they actuated three novel and sophisticated VR systems. They performed up to four trials of the Illinois agility test in the VR systems and the real world, then completed the MSAQ and the Igroup Presence Questionnaire (IPQ) at the end of each session. Following acclimatization, the three VR systems generated relatively little motion sickness and high virtual presence scores, with no statistically meaningful difference among them for either the MSAQ or the IPQ. Presence was also shown to have a significant negative correlation with VIMS.

Introduction

The Virtual Reality (VR) industry is growing rapidly and finding applications in widely different areas. Many believe that VR will have a prominent role in many aspects of life, including business, education, entertainment, medicine, and research, although others are skeptical. One of the main reasons for skepticism is the Visually Induced Motion Sickness (VIMS) associated with VR [1-3], particularly immersive VR systems such as head-mounted displays [4]. VIMS is an unpleasant and nauseating experience that, if experienced, can strongly disrupt VR users' sense of presence and subsequently, adversely affect how they behave and perform in VR [4]; so much so that users may be reluctant to use VR again. VIMS accumulates [5] during a VR session; once triggered it can escalate rapidly, and consequences such as disorientation and vertigo may persist for hours [6] or even days [7]. In addition to potentially shrinking the pool of potential VR users, this can be unsafe, e.g. when users return to normal activities, as reported in the literature [8].

To determine what factors influence motion sickness in VR, Chattha et al. [9] conducted an experiment with 46 participants, using the Oculus Rift DK2. The factors studied were gender, genre (horror/pleasant), prior VR experience, and prior motion sickness experience. All participants first tried the pleasant genre and then the horror genre; their heart rate, blood sugar level, and blood pressure were recorded before the experiment, between the two genres, and after the experiment. Participants also completed a motion sickness questionnaire after finishing each genre. The authors reported that the horror genre made all 46 participants sick, more severely so for women, compared to the pleasant genre. This severity was reflected both in the questionnaire results and in elevated heart rate and blood pressure, as well as a decreased blood sugar level. They concluded that these results indicate that fear is related to motion sickness. They also reported that prior experience with VR and 3D games, or prior motion sickness experience, did not affect motion sickness.

Some factors are understood to play a role in triggering VIMS, such as eye separation, field of view, frame rate, latency [1], interactivity [10], quality of the images and projection, and calibration of devices. However, even after optimizing these factors, VIMS still occurs as a consequence of a mismatch between vestibular and visual stimuli [11]. Ways to deal with this issue have been identified, but clearly, in many VR applications there will always be some inconsistency between the two. Fortunately, VIMS, and motion sickness in general, tend to diminish when subjects are exposed to them over multiple sessions repeated across days [3, 11-14]. Other techniques, such as Puma exercises [15] and vestibular training [14], can also help reduce motion sickness.

As mentioned, VIMS can be triggered when there is a mismatch between vestibular and visual stimuli; this occurs roughly whenever the user moves around, or looks around, in the virtual world. We also mentioned that VR has the potential to assist in many different areas, including training and related research for wheelchair users, an important part of society who face various difficulties on a daily basis: difficulties ranging from trouble moving around and navigating inside and outside the home and accessing buildings, to the secondary injuries they often sustain as a consequence of the above-normal, repetitive load on the upper body, which they must use for ambulation. We designed this study to simulate wheelchair navigation in VR, and since the combination of navigation and VR has a high probability of inducing VIMS, we concentrated on mitigating VIMS while trying to attain a higher sense of presence.

This study

We have recently developed an immersive VR environment for manual wheelchair users that is comprised of a wheelchair ergometer in a VR cube (Fig 1). The virtual world is projected on the screens (three walls and the ground) and is controlled by the wheelchair user and not from outside: the interaction with the VR is through the natural and realistic means of pushing the wheels. Being able to see oneself when propelling the wheelchair adds to the immersion and is believed to intensify the feeling of presence. Presence is the perception of transportation to the virtual scene and the feeling of being there [16]. Presence is subjective and may vary from one user to another in a given VR environment, while immersion is objective and deals with the level of sensory fidelity that is provided in a VR environment [16].

Fig 1. Wheelchair ergometer inside EON IcubeTM Mobile.


Three different approaches were taken to simulate wheelchair maneuvers in the VR environment, using three VR systems. For brevity, we call the VR environment actuated using each approach a "VR_sys". VR_sysI mechanically replicates only linear inertia (details in [17]), assuming rotational inertia can be neglected. VR_sysII and VR_sysIII additionally simulate rotational inertia, taking a mechanical approach (pneumatic brakes that slow down rotations to induce the feeling of inertial resistance) and a perceptual approach (software that slows down rotations in the display), respectively, on top of mechanically compensating linear inertia [17, 18]. Technical details on the development of the VR systems, as well as a literature review of existing wheelchair simulators, are provided in our former publication [17] and are not repeated here.

A wide field of view greatly helps the feeling of presence [2] and immersion, which in turn helps VR users perform better in VR [4]; however, a wide field of view has been shown to cause higher VIMS [2], because peripheral vision, which is responsible for detecting movement, is also more sensitive to spurious movements (movements that are inaccurate, flickering, lagging, etc.) and can trigger motion sickness [19]. Immersion affects presence [4], and a VR system with a holistic design would provide a higher sense of presence [20]; but could this potentially increase the risk of VIMS?

The objective of this study was to assess the motion sickness and virtual presence of participants performing wheelchair maneuvers in each of these three systems, in addition to assessing the effect of up to four training sessions in acclimatizing to motion sickness. We hypothesized that up to four preconditioning sessions would suppress VIMS to an easily tolerable level, and that the VR systems tested in this study would receive relatively high presence scores.

This protocol was designed based on research studies reporting that having participants try VR over four [13], five [14, 21], or six [3] sessions helped them acclimatize to motion sickness. These training sessions should be held on different days [8], as sleeping between sessions helps promote neuroplasticity (repairing and forming new connections in the nervous system).

This study took an iterative approach to developing a VIMS-free VR with natural interaction for wheelchair users. We measured participants' VIMS and sense of presence in the simulator (VR systems) over 4-7 sessions. It should be said in advance that a single group of the same participants did not try the different systems at equal time intervals. Since we improved the VR environment based on participant feedback as the experiments progressed, the data obtained for each participant and each session-system combination differed, and we completed additional statistical analyses to account for the reduced sample sizes. Statistics were not the main outcome or focus of this work; rather, this study concerns the development of three system iterations over time, and how people responded to training on one or more of them. Nevertheless, we made every attempt to draw statistically sound conclusions using standard procedures: for each outcome in the six analyses, we carefully assessed whether a parametric or nonparametric method suited the conditions of that dataset and chose the statistical approach accordingly. Hence, although the results are sometimes not statistically significant, we believe they are valuable and informative.

Methods

Participants

Fourteen healthy, able-bodied (not dependent on a wheelchair) subjects participated in this experiment. Sample sizes of around 10 are quite respectable when the research involves expensive, complex, and time-consuming study protocols. The experimental protocol was approved by the Human Research Ethics Board of the University of Alberta. Subjects (eight females and six males) signed a consent form and a PAR-Q & You physical readiness form prior to participating. They were 27.9 ± 4.74 years old on average, with no significant prior VR experience. At their first visit, participants were asked to hold a ruler on their forehead and look straight at a camera about 1 meter in front of them, to record their inter-ocular distance. Participants tried different systems and had different numbers of training and main sessions, as illustrated in Fig 2.

Fig 2. Flowchart of the experiments.


This graph shows which systems each participant was exposed to, and in which sessions, as well as how many training sessions each participant needed. The blue rectangle marks the main sessions.

Experimental procedure

On their first visit, participants were trained to propel a wheelchair in both VR and the Real World (RW). Based on how they reacted to motion sickness in that session, some were asked to complete up to three more training sessions in preparation for the main sessions. During the training sessions, participants were simply exposed to different VR scenes and asked to "move around" freely for as long as they liked; a training session ended whenever the participant asked to stop. These sessions lasted between 5 and 30 minutes, and all were held on different days. The number of additional training sessions was determined from the motion sickness score at the end of each training session and by asking participants directly whether they felt ready for the main experiments. The length of each session was up to the participants: they could stop as soon as they felt sick or bothered by the VR, or once they felt they no longer had any problems being in the VR.

At the end of each training session, participants completed the Motion Sickness Assessment Questionnaire (MSAQ), which has been shown to be valid and reliable [22] in assessing different aspects of motion sickness. Subjects' readiness to participate in the main tests was determined from their MSAQ score in the last training session and their written declaration that they felt ready and confident. Most subjects then participated in two main sessions, although some completed a third main session later on, as the development of the VR systems demanded. Each main session consisted of performing the Illinois agility test, a standard test shown to be reliable [23] and valid [24], with a wheelchair, both in RW and in the VR environment; participants completed four trials of this test in VR and four trials in RW. At the end of each main session, participants completed the MSAQ and the Igroup Presence Questionnaire (IPQ), which assesses different aspects of feeling present in VR [25]. Most studies have measured the sense of presence in VR using post-test questionnaires [26]. In this study, therefore, we used the IPQ, whose validity and reliability have been shown with a large sample size (n = 296) [25].

Materials

The VR environment consisted of a sophisticated wheelchair ergometer placed inside an EON IcubeTM Mobile (Fig 1); it allows a real-life-like experience of ambulation in VR by using the wheelchair as the interface for moving around. The wheelchair ergometer is equipped with an inertia system that replicates the biomechanics of straight-line wheelchair propulsion [17, 27]. The first VR system (VR_sysI) replicates straight-line propulsion biomechanically but does not provide inertial compensation for simulating turns. We hypothesize that when linear inertia is compensated, rotational inertia can be neglected and the participant's perception of turning can be induced using visual cues. In other words, the participant uses the wheels to turn as they do in RW, and the VR scene follows the turn, but there is no inertia resisting the turn. Subject recruitment started with this system. It is worth noting that since not all participants tried the same VR systems, each participant's data were included in the analysis of the system(s) they had tried.

After recruiting a few subjects (Figs 2 and 3), screening their comments led us to believe that the lack of inertial compensation for wheelchair turning triggers motion sickness, is hard to tolerate and get used to, and did not "feel like the real world". Therefore, VR_sysII was designed and built: a system that uses a pneumatic braking system to simulate rotational inertia by adding friction to one of the wheels when turning, as needed. However, because the system is pneumatic, alternating between setpoint braking pressures builds up hysteresis, making it impossible to formulate and control the set pressures. To solve this problem, the pneumatic pressure was reset to 0 between every two setpoints. Although this was an effective solution for setting the exact pressure needed to replicate rotational inertia, it inevitably limited the system's response rate to ~10 Hz. When a participant wanted to turn fast and pushed harder on one of the wheels, they sometimes could not see the corresponding turn in the VR display; as a result, they pushed harder still and tended to overshoot. This made VR_sysII rather hard to control.
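The pressure-reset workaround can be sketched in code. This is an illustrative sketch only: the function name and the 50 ms vent/apply durations are our assumptions, chosen to reproduce the ~10 Hz limit described, not measurements from the actual system.

```python
def brake_update_rate(vent_time_s=0.05, apply_time_s=0.05):
    """Effective update rate (Hz) when every new brake setpoint must
    first vent the line to 0 (to clear hysteresis) and then re-apply
    the target pressure. Both durations are illustrative assumptions."""
    cycle_time_s = vent_time_s + apply_time_s  # one vent + one apply per setpoint
    return 1.0 / cycle_time_s
```

With roughly 50 ms each to vent and re-pressurize, the loop tops out around 10 Hz, consistent with the response-rate limitation noted above.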

Fig 3. Summary of the experiments.


Experiments started on the left side and proceeded to the right side.

Participant recruitment resumed using VR_sysII. After recruiting a few more participants, their feedback showed that although VR_sysII was helpful at the beginning, participants grew dissatisfied with it after acclimatizing to VR, because it was harder to control due to the slow response time of the pistons. Thus, to compare the two systems, one VR trial and one RW trial of each session were completed using VR_sysI and the remaining trials of that session using VR_sysII (see Figs 2 and 3).

A third system, VR_sysIII, was developed to deal with the limitations of VR_sysII: Instead of mechanically compensating the rotational inertia, this system induced the perception of rotational inertia by slowing down the rotation in the visual feedback of the VR display, thereby increasing the effort needed to rotate the scene. Details of the VR systems and their validity and reliability can be found in our former publications [17, 18]. When this system was ready, it was tested by two participants in their main sessions. Their feedback showed greatly improved satisfaction. Therefore, former participants were called back to try VR_sysIII in an additional main session. Figs 2 and 3 show a summary of the experiments. In Fig 2, the first column shows the session numbers (-3 to 0 indicate preconditioning sessions and 1 to 3 indicate the main sessions), and the other columns are indicative of the VR systems tried by the participant. In addition to VR, participants also performed real-world tests in all main sessions.
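The perceptual approach of VR_sysIII can be illustrated with a minimal sketch: the scene rotation rendered on the display is a scaled-down version of the rotation commanded by the wheels, so the user must push longer to complete a turn, which is perceived as rotational inertia. The function name and the gain value below are hypothetical; the actual mapping used in VR_sysIII is described in [18].

```python
def displayed_turn(commanded_turn_deg, inertia_gain=0.6):
    """Render only a fraction of the commanded rotation each frame.
    A gain below 1 makes the scene lag behind the wheels, inducing the
    perception of inertial resistance to turning. The gain of 0.6 is an
    illustrative assumption, not the value used in VR_sysIII."""
    return inertia_gain * commanded_turn_deg
```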

The sessions are named -3 to 3 for the following reason: participants took one to four training sessions based on their needs. Since the number of training sessions differed between subjects and the training sessions preceded the main sessions, the last training session was named 0 and the earlier training sessions were numbered backward from it. This way, the first main session, which came after the last training session, is named session 1 for all participants, while the chronological sequence of all sessions is preserved for everyone.

Table 1 shows the technical descriptions of the VR systems that relate to VIMS.

Table 1. Technical descriptions of the VR systems.

Factor | Our VR systems | Known thresholds
Time lag | <10 ms | <10 ms [6]
Frame rate | 60 Hz | >10 Hz [10]
Haptics | Inertia is felt immediately by hand when pushing the wheels, as the systems are not motorized | -
Response time or interactivity (for VR_sysII) | Up to 0.1 s | <0.1 s* [10]
Control system | Proportional | -
Inter-ocular distance | Measured for each participant and adjusted accordingly in the simulation | -

*- May not be enough if there are fast-moving objects in the scene that are not controlled by the user [10].

The data were then classified into seven sessions, from session -3 to session 3: sessions -3 to 0 being training sessions, and sessions 1 to 3 the main sessions. Because participants had between one and four training sessions, sessions were numbered by their distance from the first main session; this way all main sessions start at 1 while the sequential order of sessions is preserved. This is also a rational choice, as the people who needed fewer training sessions were those less susceptible to motion sickness, who generally scored lower on the MSAQ in their first training session.
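The session-numbering scheme can be expressed as a small helper (a sketch; the function name is ours): training sessions count backward from 0 and main sessions count forward from 1, so chronology is preserved while every participant's first main session is labeled 1.

```python
def label_sessions(n_training, n_main):
    """Return chronological session labels: the last training session is 0,
    earlier training sessions are negative, and main sessions start at 1."""
    training = list(range(-(n_training - 1), 1))  # e.g. 4 training sessions -> -3..0
    main = list(range(1, n_main + 1))
    return training + main
```

For a participant with four training sessions and two main sessions this yields [-3, -2, -1, 0, 1, 2].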

The MSAQ [22] consists of 16 questions in four subcategories, yielding five scores: total (T), gastrointestinal (GI), central (C), peripheral (P), and sopite-related (S). In the original MSAQ each question is scored from 1 to 9, so the total score, the sum of all item scores, is a number between 16 and 144; the four subcategories follow a similar rule. This scale was not intuitive in the context of this study, so we slightly modified the MSAQ to make the scores more understandable: we changed the scale of each question to 0-9 and then rescaled the total and all subcategory scores to a maximum of 100. This way, all scores range from 0 to 100, which makes judgments and comparisons much easier. For the training sessions, one question was added at the end of the MSAQ asking directly whether the participant felt ready to take part in the main experiments.
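The modified scoring can be sketched as follows (the function name is ours): each item is scored 0-9, and a scale score is the item sum rescaled so that its maximum is 100.

```python
def msaq_scale_score(item_scores):
    """item_scores: per-question scores on the modified 0-9 scale for one
    scale (all 16 items for the total score, or a subcategory's items).
    Returns the score rescaled so the maximum possible value is 100."""
    max_possible = 9 * len(item_scores)
    return 100.0 * sum(item_scores) / max_possible
```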

The IPQ comprises four subcategories distributed across 14 questions: INVolvement (INV), Experienced Realism (ER), Spatial Presence (SP), and General (G). In addition to the original 14 questions, we added one more question to the IPQ specifically asking participants about the similarity/difference between RW and VR, as shown in Fig 4.

Fig 4. The questions added to the IPQ.


Statistical procedure

Small sample sizes reduce the power of studies. On the other hand, parametric methods are more powerful than non-parametric methods. Since the power of this study for our research questions (which involve high between-subject variation) is fragile, we made every effort to increase the power of the analyses. Thus, where possible, parametric methods were used; when the statistical assumptions were not met, non-parametric methods were employed instead. SPSS (IBM® SPSS® Statistics Premium GradPack 24 for Windows) was used for the statistical analyses.

For each research question, data were first checked for normality using the Shapiro-Wilk test, which is highly recommended for testing normal distribution in SPSS [28]. If the data were not normal, an attempt was made to make them follow a normal distribution using the two-step method [29] (or other transformation functions). Then, for any subcategories that followed a normal distribution, MANOVA was used to test the null hypothesis. For non-normally distributed subcategories, non-parametric methods were used: the data for the groups involved were checked first for having similar distributions, and second for satisfying the Assumption of Homogeneity of Variances (AHV). If these assumptions were met, the Kruskal-Wallis method was used to test for statistically significant differences.
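As a rough illustration of the transformation step, the "two-step" method [29] replaces each value with its fractional rank and maps that rank through the inverse normal CDF. The sketch below is a simplified version (ties are not handled) using only the Python standard library; it is not the SPSS implementation used in the study.

```python
from statistics import NormalDist

def two_step_transform(values):
    """Simplified two-step normalisation: (1) fractional rank in (0, 1),
    (2) inverse normal CDF of that rank. Ties are ignored for brevity."""
    n = len(values)
    inv_cdf = NormalDist().inv_cdf
    rank = {v: r for r, v in enumerate(sorted(values), start=1)}
    return [inv_cdf(rank[v] / (n + 1)) for v in values]
```

The transform is monotone, so it preserves the ordering of the data while pulling the distribution toward normality.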

Repeated measures MANOVA was not used here because the data could not be paired between groups; e.g., a single group of the same people did not try the different systems at equal time intervals. Since we improved the VR environment based on participant feedback as the experiments progressed, the data obtained for each participant and each session-system combination differed. Tables 2 and 3 show the data available for each participant.

Table 2. IPQ data available for each participant.

Subject 1 2 3 4 5 6 7 8 9 10 11 12 13 14
VR_sysI 1 1 1 2 2 2 1,2 1,2 1,2
VR_sysII 1,2 1,2 1 1,2 1,2 1,2 1,2 1,2 1
VR_sysIII 3 3 3 3 3 3 3 3 1,2 1,2

Numbers before each VR system indicate the session numbers that participants filled in the IPQ.

Table 3. MSAQ data available for each participant.

Subject 1 2 3 4 5 6 7 8 9 10 11 12 13 14
VR_sysI -1,0,1 -1,0,1 0,1 2 -3,2 1,2 1,2 1,2 -1 0
VR_sysII -2,-1,0, 1,2 -3,-2,-1,0, 1,2 -1,0, 1,2 0,1,2 -2,-1,0, 1,2 -2,-1,0, 1,2 -1,0, 1,2 -1,0, 1,2 -3,-2,-1,0, 1,2 -2,0 -2,-1
VR_sysIII 3 3 3 3 3 3 3 3 1,2 1,2

Numbers before each VR system indicate the session numbers that participants filled in the MSAQ.

Results

Our research questions are addressed here by analyzing the data obtained from this study. Tables 2 and 3 were used to extract the statistically valid data for each analysis. Details of, and the grounds for choosing, the statistical procedures behind the reported results are shown in Tables 4-6.

Table 4. Details of statistical procedures for the analyses 1 to 5.

Analysis Data MSAQ or IPQ? Categories not initially having normal distribution Normal after transformation? Which transformation method? AHV is met? Pillai’s Trace significance of MANOVA F value
1: The influence of training sessions in decreasing motion sickness First training sessions’ MSAQ_sysII (Sample Size (SS) = 9) versus last training sessions’ MSAQ_sysII (SS = 10). MSAQ GI and P Yes 2-step method Yes (Box’s M test significance of 0.035 [30]) 0.008 5.573
2: VR_sysII vs. VR_sysI during the training stage. MSAQ scores of the training sessions: VR_sysI (SS = 6) versus VR_sysII (SS = 11). Data were averaged where there was more than one data per system per participant MSAQ None _ _ Yes (Box’s M test significance of 0.994) 0.285 1.438
3: Comparing the VR systems during the main sessions, for MSAQ scores. MSAQ scores of the main sessions: VR_sysI (SS = 8) versus VR_sysII (SS = 9) versus VR_sysIII (SS = 10). Data were averaged where there was more than one data per system per participant. MSAQ G, C, and P (at least for one system) Only for C- See Table 11 for non-normal variables (G and P) 2-step method Yes (Box’s M test significance of 0.076) 0.414 1.038
4: Comparing VR systems regarding the IPQ scores IPQ scores of the main sessions: VR_sysI (SS = 9) versus VR_sysII (SS = 9) versus VR_sysIII (SS = 10). Data were averaged where there was more than one data per system per participant. IPQ G, SP No-See Table 11 for these variables (G and SP) _ Yes (Box’s M test significance of 0.734) 0.128 1.886
5: Comparing the IPQ scores among sessions IPQ scores of the main sessions wherever IPQ of at least 2 sessions are available: Session 1 (SS = 11) versus session 2 (SS = 9) versus session 3 (SS = 8). Data were averaged where there was more than one system tried per session per participant. IPQ None _ _ Yes (Box’s M test significance of 0.828) 0.066 2.014

Table 6. Details of statistical procedures for the non-parametric tests.

Analysis Non-normally distributed categories Statistical test AHV met? *
3 G, P Kruskal-Wallis Yes- Significance = 0.998
4 G, SP Kruskal-Wallis Yes- Significance = 0.43

*- To test AHV, the underlying assumption of Kruskal-Wallis, we ranked all data of the three systems, found the average of the ranked data for each system, took the absolute difference of each ranked datum from the mean of its group, and finally tested whether there were real differences between those absolute-difference data, using MANOVA. This is the non-parametric equivalent of Levene's test.
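The rank-based homogeneity check described in this footnote can be sketched as follows (the helper name is ours; the authors fed the resulting deviations into MANOVA in SPSS):

```python
def rank_abs_deviations(groups):
    """Pool and rank all observations (1..N), then return, per group, the
    absolute deviation of each rank from that group's mean rank. Testing
    these deviations for group differences is the rank-based analogue of
    Levene's test for homogeneity of variances. Ties are not handled."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: r for r, v in enumerate(pooled, start=1)}
    result = []
    for g in groups:
        ranks = [rank[v] for v in g]
        mean_rank = sum(ranks) / len(ranks)
        result.append([abs(r - mean_rank) for r in ranks])
    return result
```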

Table 5. Details of statistical procedures for the analysis 6.

Analysis Data Assumptions
6: Correlation of IPQ and MSAQ scores All IPQ and MSAQ scores (SS = 14) of all sessions, averaged for each participant, so we are left with only one column of IPQ and one column of MSAQ (one correlation is assessed). According to the former analyses, it is permissible to average for each participant. - Normal distribution: P-value of the Shapiro-Wilk test of normality for total motion sickness = 0.279 and for general presence = 0.066 => assumption met. - Linearity: P-value of deviation from linearity = 0.93 => assumption met

MSAQ

Training sessions

Participants took one to four training sessions based on their needs until they were ready to start the main sessions. The average time gap between the training sessions was 8.1 days (SD = 7.78 days). Table 7 shows the numbers of participants needing different numbers of training sessions. The mean number of sessions needed to mitigate VIMS was 2.8, with a standard deviation of 1.1.

Table 7. Number of participants (#females, #males) taking each number of training sessions.
# Training sessions 1 2 3 4
How many participants? 2 (2f) 3 (2f, 1m) 4 (2f, 2m) 5 (2f, 3m)

Analysis 1: The influence of training sessions in decreasing motion sickness. MANOVA was significant, with a total observed power of 0.91 and a total effect size of 0.72. A post hoc analysis revealed significant results for the subcategories T, GI, and C; therefore, the training sessions significantly helped participants by decreasing those three factors of motion sickness (total, gastrointestinal, and central). Fig 5 shows how the average MSAQ score of all data (14 subjects) diminished from the first training session to the last, showing that the training sessions indeed work. Also, after a maximum of four sessions, all participants replied "yes" to the direct question of whether they felt ready to take part in the relatively long sessions of a complex maneuvering task.

Fig 5. Average MSAQ score of all data for each subcategory.


Fig 6 depicts the mean and SD of all subcategories of MSAQ for the first and the last training sessions. MSAQ scores are from 0 to 100.

Fig 6. Mean and SD of subcategories of MSAQ for the first and the last training sessions.


The prefix INV- indicates outcomes that needed transformation to comply with the normal distribution condition.

Analysis 2: VR_sysII vs. VR_sysI during the training stage. All subcategories (T, GI, C, P, and S) had a normal distribution for each group (mean and standard deviations are presented in Table 8). MANOVA results were not significant and the mean data from Table 8 also do not show any clinical impact. Therefore, there was no statistically meaningful difference between the VR systems I and II during the training sessions in terms of the measured motion sickness.

Table 8. Mean and SD of MSAQ subcategories for the training sessions.
MSAQ subcategory Total Gastrointestinal Central Peripheral Sopite-related
VR system I II I II I II I II I II
Mean 24.9 23.4 31.0 32.6 32.6 26.5 19.8 16.2 13.0 15.9
Standard Deviation 18.3 16.0 28.9 24.1 26.4 19.1 23.0 14.0 10.5 16.2

Main sessions

Analysis 3: Comparing the VR systems during the main sessions, for MSAQ scores. The Kruskal-Wallis test was used for G and P (see Tables 4 and 6) and returned an exact significance of 0.388. Therefore, there is no meaningful difference between the VR systems during the main experiments (stabilized motion sickness).

For the subcategories total, central, and sopite-related, the MANOVA test was not significant, with an observed power of only 36.5%; the mean and SD of these subcategories are depicted in Fig 7.

Fig 7. Mean and SD of MSAQ subcategories for the main sessions.


The prefix INV- indicates that transformation was needed to comply with the normal distribution condition.

IPQ

Analysis 4: Comparing VR systems regarding the IPQ scores. We used a non-parametric method (Kruskal-Wallis) for analyzing the results of G and SP, and for the others (INV and ER) we used MANOVA, which is more powerful (details in Tables 4 and 6).

Kruskal-Wallis. P-values of 0.19 and 0.23 for the Kruskal-Wallis tests on G and SP suggested no meaningful difference between the VR systems with regard to G and SP. The effect sizes of 0.13 and 0.11 showed that about 10 percent of the variability in the rank scores of G and SP is accounted for by the VR systems.

The median of G and SP scores for each VR system and all VR systems as a whole are presented in Table 9.

Table 9. Median of general and spatial presence (Scores are out of 6).

VR system Median
I II III Total
Spatial Presence 4.4 4.2 5.1 4.4
General 4.3 4 5 4.5

MANOVA. MANOVA showed that the three systems were not statistically different. Table 10 presents the mean and standard deviation of normally distributed presence factors for each system and all data.

Table 10. Mean and SD of presence factors (out of 6).

                       VR system
                       I           II          III         Total
Involvement            4.2 (0.9)   3.7 (0.8)   4.5 (0.6)   4.1 (0.8)
Experienced Realism    3.5 (1.2)   2.7 (0.9)   3.9 (0.7)   3.4 (1.0)

Although no meaningful differences were detected between the VR systems regarding the presence factors, the means (Fig 8) show an almost linear trend from VR_sysII, which received the lowest scores, to VR_sysIII, which received the highest scores for involvement and experienced realism. Therefore, linear contrast analysis was performed on INV and ER, revealing a significant linear trend from VR_sysII to VR_sysI to VR_sysIII (sig = 0.033 and 0.012, respectively) with good observed powers (0.58 and 0.74) and considerable effect sizes (0.17 and 0.23). In other words, about 20% of the variability in the INV and ER scores is accounted for by the VR system.
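A linear contrast of this kind can be sketched with NumPy/SciPy. The group scores below are hypothetical, and the contrast weights (−1, 0, +1) encode the expected ordering VR_sysII < VR_sysI < VR_sysIII:

```python
import numpy as np
from scipy import stats

# Hypothetical involvement scores, ordered by expected trend -- illustrative only
groups = [np.array([3.5, 3.9, 3.6, 3.8]),   # VR_sysII (expected lowest)
          np.array([4.0, 4.3, 4.1, 4.4]),   # VR_sysI
          np.array([4.4, 4.6, 4.5, 4.7])]   # VR_sysIII (expected highest)
weights = np.array([-1.0, 0.0, 1.0])        # linear contrast coefficients

means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])

# Pooled within-group variance (MS error) and its degrees of freedom
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_error = ns.sum() - len(groups)
ms_error = ss_within / df_error

# Contrast value, its standard error, and the two-sided t test
contrast = weights @ means
se = np.sqrt(ms_error * (weights ** 2 / ns).sum())
t = contrast / se
p = 2 * stats.t.sf(abs(t), df_error)
print(f"linear contrast t({df_error}) = {t:.2f}, p = {p:.4f}")
```

A significant positive contrast indicates that the group means rise in the hypothesized order, which is the pattern reported above for INV and ER.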

Fig 8. The linear trend for INV and ER among the three systems.


The system with the lowest scores (VR_sysII) was placed far left and the system with the highest scores (VR_sysIII) was placed far right on the horizontal axis.

Analysis 5: Comparing the IPQ scores among sessions. MANOVA results showed no significant difference among sessions for IPQ, with a significance of 0.066 and observed power of 75.4%.

Analysis 6: Correlation of IPQ and MSAQ scores. The Pearson correlation coefficient was -0.533, which is significant for a sample size of 14. This is an interesting result, since it shows that the more a person suffers from motion sickness, the lower they grade their presence in VR, confirming an interaction between VIMS and “presence”.
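The correlation itself is a one-liner with SciPy; the paired scores below are hypothetical stand-ins for the 14 participants' IPQ and MSAQ totals, constructed only to illustrate a negative relationship:

```python
from scipy import stats

# Hypothetical paired scores for 14 participants -- illustrative only
ipq  = [4.5, 3.8, 5.1, 4.2, 3.5, 4.9, 4.0, 3.2, 4.7, 4.4, 3.9, 5.0, 3.6, 4.1]
msaq = [15, 28, 10, 22, 35, 12, 25, 40, 14, 18, 30, 11, 33, 24]

# Pearson's r and its two-sided significance
r, p = stats.pearsonr(ipq, msaq)
print(f"r = {r:.3f}, p = {p:.4f}")
```

A negative r here means participants reporting more sickness tended to report less presence, mirroring the sign of the study's reported coefficient.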

Direct question

Question number 15 (Q15) had a normal distribution. ANOVA was used to test for statistical differences between the scores given to each system for Q15; no significant difference was found, albeit with an observed power of only 23.5%. Table 11 shows the descriptive statistics for each system, and Fig 9 illustrates how the mean scores of the VR systems differ. According to these results, with system III, participants generated forces that were the closest to the RW conditions.

Table 11. Descriptive statistics for Q15.

Mean SD
VR_sysI 3.6 2.6
VR_sysII 6.5 4.4
VR_sysIII 5.4 1.3

Fig 9. Distances each system score has from the ideal situation (absolute similarity).


Free text

At the end of each session, participants were asked to write down any comments they had about their experience. Word clouds, a common way to represent qualitative data [31–33], were then made from the comments gathered for each system and are shown in Figs 10–12. In these figures, the size of each word corresponds to the rank of its frequency in the participants’ comments; thus, larger words indicate topics of greater concern to the participants. Note that for each word cloud, frequent but non-informative words were excluded, namely: VR, RW, real, virtual, world, some, more, and time.
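The word-frequency ranking underlying such a cloud can be sketched in a few lines of Python. The comments below are invented examples, and the stopword set mirrors the excluded words named above:

```python
from collections import Counter
import re

# Non-informative words excluded from the clouds (as listed in the text)
STOPWORDS = {"vr", "rw", "real", "virtual", "world", "some", "more", "time"}

# Hypothetical participant comments -- illustrative only
comments = [
    "The world was spinning, too sensitive when turning",
    "Turning felt sensitive and made me a bit sick",
    "Smoother this time, more realistic turning",
]

# Tokenize, lowercase, and drop stopwords
words = []
for comment in comments:
    words.extend(w for w in re.findall(r"[a-z]+", comment.lower())
                 if w not in STOPWORDS)

# Frequency rank determines each word's size in the cloud
for word, count in Counter(words).most_common(5):
    print(word, count)
```

A cloud generator such as the online tool cited in the figure captions [34] then maps these ranked counts to font sizes.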

Fig 10. Word cloud for VR_sysI (made using [34]).

Fig 11. Word cloud for VR_sysII (made using [34]).

Fig 12. Word cloud for VR_sysIII (made using [34]).

As the word clouds clearly show, participants were mainly displeased with their experience of VR_sysI, complaining that it was too sensitive to turning, so much so that it would make them sick and feel like “the world is spinning”. For VR_sysII, they no longer mentioned sensitivity or spinning, but they were still not satisfied: the scenes were sometimes slow and did not respond to turning as they expected. For VR_sysIII, they generally wrote about how it was better, smoother, the easiest on their motion sickness, and more realistic. These comments show how the VR systems gradually improved based on the participants’ feedback.

Discussion

In this study, motion sickness and sense of presence in three VR systems developed for wheelchair maneuvering were compared. Also, the effect of providing up to four training sessions in reducing motion sickness in VR to a tolerable level was confirmed. Since the discussions of the IPQ and MSAQ results are intertwined, we first discuss the MSAQ scores alone and then discuss the general results of this study, considering both IPQ and MSAQ results.

MSAQ

Training sessions

According to the results of this study, the training sessions significantly reduced gastrointestinal and central motion sickness, as well as the total motion sickness level, with a very high effect size (0.72). The most conclusive evidence for the effect of the training sessions in reducing VIMS to a minimal/tolerable level is that, when asked whether they felt ready to take part in the relatively long sessions of a complex maneuvering task, all participants replied “yes” after a maximum of four sessions. This clearly shows that a maximum of four training sessions does indeed work in extinguishing motion sickness.

This is a very positive and encouraging result, as one of the main usability issues of VR is still the motion sickness it causes in many of its users [1, 19, 35]. Motion sickness, in addition to all the negative consequences and unfavorable feelings accompanying it, can be unsafe, as one of the participants of this study experienced. When he tried VR for the first time, he suddenly became motion sick. He was helped out of the VR and recovered. However, he reported later that when he rode his bike home, he was still disoriented, had vertigo, and did not feel safe riding. This clearly illustrates the importance of monitoring, predicting, and controlling VIMS for the safety of VR users.

Although the peripheral and sopite-related categories of MSAQ did not show statistically significant results, the downward trend in these motion sickness categories, as depicted in Fig 6, suggests potential clinical relevance. The observed power for T, GI, C, P, and S was, respectively, 0.98, 0.64, 0.95, 0.25, and 0.16, which shows good to excellent power for the first three categories but a high probability of type II error (false negative) for P and S (β = 75% and 84%). This is a consequence of the high standard deviation of the data, especially for P and S, which results from the large between-subject variation in motion sickness (see the overlapping standard deviations).

Main sessions

For the total, central, and sopite-related subcategories, the difference between the systems during the main sessions was not statistically meaningful. However, looking at the mean data (Fig 7), we see a considerable difference between the MSAQ scores of VR_sysII and the other systems, which is indicative of potential clinical impact, although more research is needed to confirm this. Here again, the large between-subject variability in the propensity for motion sickness caused high standard deviations and a high chance of type II error.

General discussion on IPQ and MSAQ results

Using questionnaires is a good way to measure presence, as it is cheap and easy, does not interrupt the experiment, and has high face validity [26]. However, the recency effect is one of the main disadvantages of questionnaires [26]: the scores participants give to questions about presence usually reflect how they were feeling in the final parts of the test. On the other hand, motion sickness is a cumulative construct that builds up as the experiment progresses [5] and gradually tends to disrupt the participant’s concentration [8, 36]. This means that participants who, after receiving some training, still had some susceptibility to motion sickness probably gave scores indicating more sickness than they felt on average across the whole experiment.

Despite all that, in general, the three VR systems studied here received relatively low motion sickness and high virtual presence scores during the main sessions, which is indicative of their good general quality.

It was shown in this paper that the direct questions added to the MSAQ and IPQ questionnaires provided stronger evidence about user preference regarding sense of presence and the overall feeling of motion sickness. Although great care was taken to select validated questionnaires that characterize motion sickness and virtual presence well, these seemed inadequate to capture users’ views of our VR systems. Therefore, it was felt that there was a need for some direct questions. This indicates a serious shortcoming in evaluating VR using currently available validated tools.

With regard to the differences among the VR systems, no statistically meaningful difference was detected for either MSAQ or IPQ. It should be said that the focus of this research was on iterative wheelchair virtual reality development, with users’ comments being considered during this continuous development. This approach allows fast consideration of users’ perceptions and faster iterations between VR systems; however, it introduces complexity in analyzing data post-experiment. As a result of the high between-subject variances, which are due to natural differences between people rather than a consequence of the study methods, much bigger sample sizes are needed for significant results. However, future work should focus on creating a VR user experience that is a significant improvement on the designs used in this study, rather than recruiting a larger sample of participants simply to demonstrate the differences between these systems.

Based on the participants’ comments (word clouds), VR_sysIII was the easiest in terms of motion sickness and the most realistic. The questionnaire results, although not statistically significant, also show higher IPQ scores for VR_sysIII and lower IPQ scores for VR_sysII. This indicates that users were most satisfied with VR_sysIII and least satisfied with VR_sysII.

Interactivity constraint is an important factor that influences the sense of presence in VR [10]. It is the time from when the participant provides an input to the VR system to when he/she observes its effect; in our case, from when the participant applies a force to the wheels to when he/she observes the corresponding displacement/rotation in the visual feedback. Interactivity constraint was a considerable issue in VR_sysII, as it was based on a mechanical system that introduces some delay.

The statistics are not the main outcome of this work; rather, this study is about the development of three system iterations over time and how people reacted to training on one or more of these systems. As already mentioned, the statistical analyses failed to detect any differences among the systems, either for IPQ or MSAQ, although participants’ comments led us to anticipate some. Also, as mentioned earlier, two of the five subcategories of motion sickness (SR and P) were not shown to significantly decrease from the first training session to the last, despite the large decreases in their scores (the percentage reduction for each subcategory was: T: 70.6, G: 63.1, C: 71.8, P: 44.5, and SR: 40.2). The large standard deviations in most of the data led to relatively low study power and therefore a high chance of type II error; thus, some of the differences observed in this study could not be confirmed statistically. Despite this, we believe that the results of this study have important clinical and practical value.

The sample size of 10 that was used for testing most of the hypotheses here is quite respectable when the research involves expensive, complex, and time-consuming study protocols. Nevertheless, large between-subject variation and possibly small effect sizes indicate that larger sample sizes are needed to distinguish meaningful effects from natural variation. A retrospective power analysis based on the effect sizes obtained in this study shows, for example, that to test the effects of training sessions in eliminating peripheral and sopite-related motion sickness (Analysis 5), we would need to recruit 628 and 1751 subjects, respectively [37] (power = 0.8 and α = 0.05). A similar observation was made by another study on simulator sickness, which was unable to find significant results and stated that a much bigger sample size was needed due to the great inter-individual differences among the study participants [8].
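The study's power analysis used G*Power [37]; as a rough illustration of why small effects demand such large samples, the normal-approximation formula below (an approximation, not G*Power's exact noncentral-t computation) shows the inverse-square dependence of required n on the standardized effect size d:

```python
import math
from scipy import stats

def paired_sample_size(d, power=0.8, alpha=0.05):
    """Approximate n for a two-sided paired comparison with standardized
    effect size d, via n ~= ((z_{1-alpha/2} + z_{power}) / d) ** 2."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)   # critical value, two-sided
    z_power = stats.norm.ppf(power)           # quantile for desired power
    return math.ceil(((z_alpha + z_power) / d) ** 2)

# Required n grows with the inverse square of the effect size:
print(paired_sample_size(0.5))   # moderate effect
print(paired_sample_size(0.1))   # small effect
```

Halving the effect size roughly quadruples the required sample, which is why the small residual effects for peripheral and sopite-related sickness translate into recruitment targets in the hundreds or thousands.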

The third session took place about four to five months after the second session, and we believe this biased the IPQ scores of the third session, as the VR was no longer a new and exciting experience and thus received lower scores.

According to the results of this study, there is a meaningful inverse relationship between the level of motion sickness in VR and the level of presence VR users experience, which is consistent with the results of other studies [35, 38]. In other words, for VR users to have a realistic experience, it is important to ensure that the VR is carefully designed and calibrated to minimize issues that disorient users and trigger motion sickness. Additionally, it is necessary to ensure that users take enough training sessions to eliminate/minimize nausea and thereby increase the usability of the VR by enhancing the realism of the experience.

Conclusion

The motion sickness and sense of presence of the participants in three VR systems were assessed in this study. The effect of providing up to four training sessions to precondition participants to VR was also assessed. We found that the training sessions significantly reduced gastrointestinal and central motion sickness, as well as the total motion sickness level, with a very high effect size (0.72). This is a very positive and encouraging result, as one of the main usability issues of VR is that motion sickness causes many users [15, 35, 39] to abandon VR, or at least feel uncomfortable using it. In general, the three VR systems studied here resulted in relatively low motion sickness and high virtual presence scores during the main sessions, which is indicative of their good general technical and experimental quality. With regard to the differences among the VR systems, no statistically meaningful difference was detected for either MSAQ or IPQ. Nevertheless, based on the participants’ comments, VR_sysIII was the most comfortably tolerated and the most realistic. The questionnaire results, although not statistically significant, showed higher IPQ scores for VR_sysIII and lower IPQ scores for VR_sysII. Thus, we can conclude that, based on the surveys and qualitative data, VR_sysIII and VR_sysII gained the most and the least user preference, respectively.

One key observation in this study was that, when simulating wheelchair manoeuvres, the technical and perceptual challenges of simulating turning are the main issue, especially when rotational inertia is totally absent (VR_sysI). To achieve both a biomechanically sound wheelchair simulation and user satisfaction, it is important to compensate for both linear and rotational inertia while providing smooth visual feedback that complies with user expectations.

Acknowledgments

The authors would like to thank the participants of this study for their time and great comments that helped to refine our VR systems.

Data Availability

All datasets are available from the Figshare repository database at: https://doi.org/10.6084/m9.figshare.13669862.v1.

Funding Statement

The authors received no specific funding for this work. Canadian Foundation for Innovation (CFI 30852 Leading Edge Fund 2012) infrastructure grant provided the capital equipment for the project (received by MFP). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Chardonnet J.-R., Mirzaei M. A., and Mérienne F., “Features of the postural sway signal as indicators to estimate and predict visually induced motion sickness in virtual reality,” Int. J. Human–Computer Interact., 2017. [Google Scholar]
  • 2.Lin J. J.-W., Duh H. B. L., Parker D. E., Abi-Rached H., and Furness T. A., “Effects of field of view on presence, enjoyment, memory, and simulator sickness in a virtual environment,” Proc. IEEE Virtual Real. 2002, no. February, pp. 164–171, 2002. [Google Scholar]
  • 3.Sparto P. J., Whitney S. L., Hodges L. F., Furman J. M., and Redfern M. S., “Simulator sickness when performing gaze shifts within a wide field of view optic flow environment: preliminary evidence for using virtual reality in vestibular rehabilitation.,” J. Neuroeng. Rehabil., vol. 1, no. 14, 2004. doi: 10.1186/1743-0003-1-14 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Alshaer A., Regenbrecht H., and Hare D. O., “Immersion factors affecting perception and behaviour in a virtual reality power wheelchair simulator,” Appl. Ergon., vol. 58, pp. 1–12, 2017. [DOI] [PubMed] [Google Scholar]
  • 5.Kiryu T. and So R. H., “Sensation of presence and cybersickness in applications of virtual reality for advanced rehabilitation,” J. Neuroeng. Rehabil., vol. 4, no. 1, p. 34, 2007. doi: 10.1186/1743-0003-4-34 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Golding J. F., “Motion sickness susceptibility,” Auton. Neurosci. Basic Clin., vol. 129, no. 1–2, pp. 67–76, 2006. [DOI] [PubMed] [Google Scholar]
  • 7.“Motion sickness,” University of Maryland Medical Center. [Online]. http://www.umm.edu/health/medical/altmed/condition/motion-sickness. [Accessed: 01-Jun-2017].
  • 8.D. M. Johnson, “Simulator Sickness Research Summary,” 2005.
  • 9.Chattha U. A., Janjua U. I., Anwar F., Madni T. M., Cheema M. F., and Janjua S. I., “Motion Sickness in Virtual Reality: An Empirical Evaluation,” IEEE Access, vol. 8, pp. 130486–130499, 2020. [Google Scholar]
  • 10.S. Bryson, “Approaches to the successful design and implementation of VR applications,” in SIGGRAPH’94, 1994, pp. 1–11.
  • 11.Reason J. T., “Motion sickness adaptation: A neural mismatch model,” J. R. Soc. Med., vol. 71, no. February, pp. 819–829, 1978. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Wright R. H., “Helicopter simulator sickness: A state-of-the-art review of its incidence, causes, and treatment,” Alexandria, VA: U.S. Army, 1995. [Google Scholar]
  • 13.McGuinness J., Bouwman J. H., and Forbes J., “Simulator sickness occurrences in the 2E6 Air Combat Maneuvering Simulator (ACMS),” Orlando, FL, 1981.
  • 14.Clement G., Deguine O., Bourg M., and Pavy-LeTraon A., “Effects of vestibular training on motion sickness, nystagmus, and subjective vertical,” J Vestib Res, vol. 17, no. 5–6, pp. 227–237, 2007. [PubMed] [Google Scholar]
  • 15.S. C. Puma and S. W. Puma, “United States Patent,” US 10/885,853, 2007.
  • 16.Hale K. S. and Stanney K. M., Eds., “Beyond presence: How holistic experience drives training and education,” in Handbook of virtual environments: Design, implementation and applications, 2nd ed., New York, NY, USA: CRC Press, 2015. [Google Scholar]
  • 17.Salimi Z. and Ferguson-Pell M., “Development of three versions of a wheelchair ergometer for curvilinear manual wheelchair propulsion using virtual reality,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 6, pp. 1215–1222, 2017. [DOI] [PubMed] [Google Scholar]
  • 18.Salimi Z. and Ferguson-pell M., “Investigating the Reliability and Validity of Three Novel Virtual Reality Environments with Different Approaches to Simulate Wheelchair Maneuvers,” IEEE Trans. neural Syst. Rehabil. Eng., vol. 27, no. 3, pp. 514–522, 2019. doi: 10.1109/TNSRE.2019.2896904 [DOI] [PubMed] [Google Scholar]
  • 19.S. Zhang, “The neuroscience of why virtual reality still sucks.” http://gizmodo.com.
  • 20.D. B. Chertoff, B. Goldiez, and J. J. LaViola, “Virtual experience test: A virtual environment evaluation questionnaire,” 2010 IEEE Virtual Real. Conf., pp. 103–110, Mar. 2010.
  • 21.Smither J. A.-A., Mouloua M., and Kennedy R., “Reducing Symptoms of Visually Induced Motion Sickness Through Perceptual Training,” Int. J. Aviat. Psychol., vol. 18, no. 4, pp. 326–339, Oct. 2008. [Google Scholar]
  • 22.Gianaros P. J. et al., “A questionnaire for the assessment of the multiple dimensions of motion sickness,” Aviat Sp. Env. Med, vol. 72, no. 2, pp. 115–119, 2001. [PMC free article] [PubMed] [Google Scholar]
  • 23.Salimi Z. and Ferguson-Pell M. W., “Investigating the test-retest reliability of Illinois Agility Test for wheelchair users,” PLoS One, 2020. doi: 10.1371/journal.pone.0241412 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Williams E. J., “Wheelchair basketball and agility,” University of Alabama, 2014. [Google Scholar]
  • 25.Schubert T., Friedmann F., and Regenbrecht H., “The experience of presence: factor analytic insights,” Presence Teleoperators Virtual Environ., vol. 10, no. 3, pp. 266–281, 2001. [Google Scholar]
  • 26.Ijsselsteijn W. and Van Baren J., “Measuring presence: A guide to current measurement approaches,” Madison, Wisconsin, USA, 2004. [Google Scholar]
  • 27.Z. Salimi and M. W. Ferguson-Pell, “Ergometers can now biomechanically replicate straight-line wheelchair propulsion: three models presented,” in Proceedings of the ASME 2013 International Mechanical Engineering Congress & Exposition IMECE2013, 2013.
  • 28.Ghasemi A. and Zahediasl S., “Normality tests for statistical analysis: A guide for non-statisticians,” Int. J. Endocrinol. Metab., vol. 10, no. 2, pp. 486–489, 2012. doi: 10.5812/ijem.3505 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Templeton G. F., “A two-step approach for transforming continuous variables to normal: Implications and recommendations for IS research,” Commun. Assoc. Inf. Syst., vol. 28, no. 1, pp. 41–58, 2011. [Google Scholar]
  • 30.Huberty C. J. and Petoskey M. D., “Multivariate analysis of variance and covariance,” in Handbook of Applied Multivariate Statistics and Mathematical Modeling, New York: Academic Press, 2000, pp. 183–211. [Google Scholar]
  • 31.Henderson S. and Segal E. H., “Visualizing Qualitative Data in Evaluation Research,” New Dir. Eval., vol. 2013, no. 139, pp. 53–71, Sep. 2013. [Google Scholar]
  • 32.DePaolo C. A. and Wilkinson K., “Get Your Head into the Clouds: Using Word Clouds for Analyzing Qualitative Assessment Data,” TechTrends, vol. 58, no. 3, pp. 38–44, May 2014. [Google Scholar]
  • 33.Diakopoulos N., Elgesem D., Salway A., Zhang A., and Hofland K., “Compare Clouds: Visualizing Text Corpora to Compare Media Frames,” Proc. IUI Work. Vis. Text Anal., pp. 193–202, 2015. [Google Scholar]
  • 34.“Word it out.” [Online]. https://worditout.com/.
  • 35.Schuemie M. J., van der Straaten P., Krijn M., and van der Mast C. A. P. G., “Research on presence in virtual reality: A survey,” CyberPsychology Behav., vol. 4, no. 2, pp. 183–201, 2001. doi: 10.1089/109493101300117884 [DOI] [PubMed] [Google Scholar]
  • 36.Dahlman J., Sjors A., Lindstrom J., Ledin T., and Falkmer T., “Performance and Autonomic Responses During Motion Sickness,” Hum. Factors J. Hum. Factors Ergon. Soc., vol. 51, no. 1, pp. 56–66, 2009. doi: 10.1177/0018720809332848 [DOI] [PubMed] [Google Scholar]
  • 37.Faul F., Erdfelder E., Lang A.-G., and Buchner A., “G*Power: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences.,” Behav. Res. Methods, vol. 39, no. 2, pp. 175–191, 2007. [DOI] [PubMed] [Google Scholar]
  • 38.Weech S., Kenny S., and Barnett-Cowan M., “Presence and cybersickness in virtual reality are negatively related: A review,” Front. Psychol., vol. 10, no. FEB, pp. 1–19, 2019. doi: 10.3389/fpsyg.2019.00158 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.“Preservation of upper limb function following spinal cord injury: A clinical practice guideline for health-care professionals,” Journal of Spinal Cord Medicine, vol. 28, no. 5. Consortium Member Organizations and Steering Committee Representatives, pp. 434–470, 2005. doi: 10.1080/10790268.2005.11753844 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Thomas A Stoffregen

26 Feb 2021

PONE-D-21-03454

Motion sickness and sense of presence in three virtual reality environments developed for manual wheelchair users

PLOS ONE

Dear Dr. Salimi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I have secured reviews from three subject-matter experts. As you will see, each identifies significant problems with the submitted manuscript. Please carefully revise, based on these detailed comments.

Please submit your revised manuscript by Apr 12 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Thomas A Stoffregen, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Line 34: I'm confused. VR system or environments? System reads as the physical hardware, the actual system that the VR environment is created by.

Line 18: The users' sense of presence is a major focus of this paper. Defining exactly what that is would strengthen your paper immensely. Is this the sense of immersion? (I see in line 80 that these are listed as two separate things.) Please define this term.

Line 54: Missing period after [8].

Line 58: It is "stimuli."

Line 65: VR environment again, verses system. This is later changed in the Materials section (line 134). If these two are the same thing, it would be best to chose a single term. If there is some major difference between "environment" vs "system," that terminology should be established in the Introduction section.

Line 89: Why four? I would greatly appreciate an explanation in this paragraph about why this number was chosen, as well as any citations that may support this.

Line 89: Numbers under 10 (that are not reference numbers) are spelled out.

Line 102: What were the exclusion criteria? Motion sickness can also be affected by a history of concussion, TBI, etc. What does "able-bodied" actually mean?

Line 109: Again, with spelling out numbers under 10 that are not references. Make sure this is consistent throughout the manuscript.

Line 110: I did not initially realize that the sessions were held on different days. This should be clarified sooner, as the past tense used in line 117 confused me to if the sessions were in fact on different days. Additionally, was the amount of time between sessions controlled? What was the average time between sessions? More information needs to be recorded concerning this portion of the Methods section.

Line 139: The sentence starting on this line is winding and confusing. Consider:

"This first VR system (VR_sysI) replicates straight-line biomechanically but does not provide inertial compensation for simulating turns. We hypothesize that when linear inertia is compensated, rotational inertia can be neglected and the participant’s perception of turning can be induced using visual cues."

Line 145: "It is worth stating that all the participants recruited were included in data analyses of the VR systems they had tried during the main sessions." Does this indicate that some participants were eventually excluded? Reference Table 5 for clarity.

Line 147: What is considered "a few" subjects? Was this a statistically motivated decision?

Line 159: "harder"

Line 162: Participant recruitment was restarted? Add reference to Table 5 for clarity.

Line 175: Former participants across all previous trials? Add reference to Table 5 for clarity.

Line 179: Formatting issues.

Line 182: Figure 2 should be placed further up to provide some clarification.

Line 184: Remove "Some." Additionally, the note of * does not provide enough information. What does "may not be enough" mean? Additionally, intra-ocular distance being recorded should be mentioned further up in the Methods section under "Participants."

Line 212: I am delighted to see that you completed additional statistical analyses concerning participant size.

Line 229: This line answers some of the questions I have listed above. However, these answers should be presented sooner to prevent undue confusion.

Table 5: I would reference this table in the Participant section, as well as other lines noted above.

Table 7: I feel like a better way to lay out this table is to include the demographic data of the participants, and indicate # of training sessions by participant. Additionally, training sessions by environment/system would also be enlightening. Maybe Fig. 2 can be modified in some way to include this?

Line 277: A visible pattern that is not significant, is not significant. This should not be presented as such. This issue is repeated at line 312.

Line 399: I am not convinced that motion sickness tends to " throw the participant's concentration away." There is no citation for this, and your study does not investigate concentration as a DV.

Line 450: Here is the line I have been waiting for. This would be best being stated in the "Participant" section, and then reiterated here.

Line 461: I appreciate this acknowledgment that the "wow" factor may have worn off by this point.

Fig 2: Why are the participants not labeled as "Subject 1, 2" etc.? This graph makes it very difficult to quickly refer to individual participants. Additionally, your caption for this figure only partially explains what I am looking at. Having more labels would benefit this graph immensely. Is this not the same thing as Table 5?

Overall, I feel like this study would benefit from more revision. Your introduction was succinct, and would be strengthened by expanding on your literature stepping stones. Further information in the Methods section would also benefit this manuscript.

I understand this consists of three major parts and therefore will be more winding than a short experiment, but I found myself constantly backtracking to ensure I was properly digesting the content. I still do not fully understand how training sessions were organized. Did the participants receive training sessions for each environment? What was the average time gap between training sessions? There is a possibility that time may impact the outcome as well. Additionally, the three VR environments/systems blended together heavily, to the point of confusion. You might fare better by clearly separating these three environments into their own sections, with an emphasis on clarifying participation/recruitment and methods. Improving Fig. 2 to aid with this should also be a priority.

Reviewer #2: The manuscript is not technically sound. The results of this study rely on there being no difference between MSAQ and IPQ results between the VR systems used. If there were significant differences between these systems, then the data would mandate an entirely different set of analyses than what was used in this study. However, the authors state their comparison across VR training systems is underpowered which makes a comparison moot. The authors need to increase the sample size of this study so that their comparison across VR systems is not underpowered and they can have statistical confidence in their results. As it stands, the underpowered result cannot be trusted as indicative one way or another that VR systems were no different in MSAQ and IPQ. Failing to reject the null hypothesis should not be taken as evidence of the null hypothesis. Without this additional data, the findings of this study are equivocal.

Additionally, the authors claim that training sessions reduced MSAQ scores. However, training sessions were not the only independent variable in the study. Exposure to VR system type is a second independent variable which was not intended by the authors. This study would benefit from a different methodology that evaluated only one type of VR system. As the study stands, it’s impossible to say if training sessions alone led to reduced MSAQ scores. Perhaps it was changing VR systems across participants which led to reduced MSAQ scores. Perhaps it was both VR system type exposure, and repeated exposures (training sessions), which led to the reduction in MSAQ scores. The methodology and underpowered comparison of the VR systems' MSAQ and IPQ scores make this impossible to determine.

The statistical analyses have not been performed appropriately and rigorously. The samples are undersized, as stated by the authors, and the decision against using multiple comparisons is not compelling or rigorous.

Reviewer #3: This project examines the role of repeated exposure to a virtual environment in order to reduce occurrences of VIMS without sacrificing immersion or presence. The researchers exposed participants to a large immersive virtual environment that they navigated through using a wheelchair – the relation between wheelchair control and perceptual information was altered across three conditions. Participants were exposed to up to four pre-experimental training sessions to mitigate VIMS. Once VIMS was reduced to ‘tolerable’ levels, the researchers were interested in whether the different control/information coupling paradigms influenced presence in the virtual environment.

This was a timely and interesting project that would be of interest to the VR community and would make a contribution to the literature. However, in trying to be concise, the structure and process of the studies can be hard to follow. Authors may want to reorganize to first address the aspects of the study that are designed to mitigate VIMS, then separately discuss the aspects of the study that are designed to address the differences in the VR system mechanics. It seems there are two goals – reduce sickness and maintain level of presence – for clarity, may need to explicitly organize the introduction, methods, analysis, and discussion in this manner. While VIMS and presence are related, discussing them separately may make it easier for readers to follow.

In terms of the analysis, I indicated ‘no’ because it is not clear why non-parametric statistics are not used across the board as your research questions/goals seem to suggest “yes/no” rather than inferential answers. Nonparametric analyses are perfectly appropriate for the questions you were addressing (particularly since the MANOVA did not reveal significant difference – which was the hope of the researchers?).

Minor things:

Just check for typos and tense/number agreement in sentences

For figure 6 – is there a reason why the three systems are presented out of order? If so, it should be noted in the caption.

For the word cloud figures – in the captions need to indicate what the reader should take note of – does the relative ‘strength’ of particular terms support your hypotheses?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Justin Munafo

Reviewer #3: Yes: L. James Smart Jr.

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: Review Feedback_ Motion sickness and sense of presence in three virtual reality environments developed for manual wheelchair users.pdf

PLoS One. 2021 Aug 19;16(8):e0255898. doi: 10.1371/journal.pone.0255898.r002

Author response to Decision Letter 0


17 Apr 2021

Comments Responses

Reviewer #1:

Overall: I feel like this study would benefit from more revision. Your introduction was succinct, and would be strengthened by expanding on your literature stepping stones. Further information in the Methods section would also benefit this manuscript.

I understand this consists of three major parts and therefore will be more winding than a short experiment, but I found myself constantly backtracking to ensure I was properly digesting the content. I still do not fully understand how training sessions were organized. Did the participants receive training sessions for each environment? What was the average time gap between training sessions? There is a possibility that time may impact the outcome as well. Additionally, the three VR environments/systems blended together heavily, to the point of confusion. You might fare better by clearly separating these three environments into their own sections, with an emphasis on clarifying participation/recruitment and methods. Improving Fig. 2 to aid with this should also be a priority.

A. We tried to apply all of these comments as addressed below. Thank you for the constructive comments.

Literature review is expanded.

More information is added to the Methods section.

Fig.2 depicts which systems were tried during the training and the main sessions.

An explanation is now added to Methods/Experimental procedure to provide more details on training sessions, as:

“During the training sessions, participants were simply exposed to different VR scenes and asked to freely “move around” for as long as they felt like it. The training sessions ended, however, whenever the participant asked to stop. The duration of these sessions was between 5 and 30 minutes”.

The average time gap was 8.1 days. This is now added to the manuscript.

This manuscript is a story of continuous wheelchair virtual reality development, with the users' comments being considered throughout. This kind of development allows fast consideration of users' perceptions and faster iteration between VR systems. Separating every section into two or three parts could help with understanding each part better, but would harm the flow of the “story”, making the logic of transitions between the sections harder to follow. Also, the discussions of IPQ and MSAQ results are intertwined, and there are several parts of the Discussion that apply to the experiment as a whole and not just to MSAQ or IPQ, such as the discussion of sample size. Hence, with all due respect, we decided to keep the Methods as is. However, we did add subtitles of IPQ and MSAQ (next heading: training sessions/main sessions) to the Results and Discussion sections to organize the information.

Fig. 2 is improved now. Also, another figure (Fig 3) is added to the paper to help elaborate the ambiguities of the experimental procedure.

1- Line 34: I'm confused. VR system or environments? System reads as the physical hardware, the actual system that the VR environment is created by.

A. Thank you for pointing out the lack of clarity. We changed the wording and added an explanation early in the manuscript to clarify that there was a VR environment with three approaches to replicating rotational inertia (using three VR systems).

“Three different approaches were taken to simulate wheelchair maneuvers in the VR environment, using three VR systems. To be brief, here we call the VR environment actuated using each approach a “VR_sys”.

2- Line 18: The users' sense of presence is a major focus of this paper. Defining exactly what that is would strengthen your paper immensely. Is this the sense of immersion? (I see in line 80 that these are listed as two separate things.) Please define this term.

A. Immersion affects presence but is a separate thing. Immersion is objective and deals with the level of sensory fidelity provided by a VR system, but presence is subjective and may vary from one user to the other in a given VR environment. The definition of presence is added to the introduction as:

“Presence is the perception of transportation to the virtual scene and the feeling of being there”.

3- Line 54: Missing period after [8].

A. Corrected.

4- Line 58: It is "stimuli."

A. Corrected.

5- Line 65: VR environment again, verses system. This is later changed in the Materials section (line 134). If these two are the same thing, it would be best to choose a single term. If there is some major difference between "environment" vs "system," that terminology should be established in the Introduction section.

A. We changed the wording and added an explanation early in the manuscript to clarify that there was a VR environment with three approaches to replicating rotational inertia (using three VR systems).

6- Line 89: Why four? I would greatly appreciate an explanation in this paragraph about why this number was chosen, as well as any citations that may support this.

A. The explanation you are asking for was in the method section, which is now displaced to Introduction/This study. That is:

“This protocol was designed based on research studies reporting that having participants try VR in four [12], five [13, 19], and six [3] sessions had helped them acclimatize to motion sickness. These training sessions should be held on different days [8], as sleeping between the sessions helps to promote neuro-plasticity (repairing and forming new connections in the nervous system).”

7- Line 89: Numbers under 10 (that are not reference numbers) are spelled out.

A. Corrected here and elsewhere in the manuscript.

8- Line 102: What was the exclusion criteria? Motion sickness can also be affected by a history of concussion, TBI, etc. What does "able-bodied" actually mean?

A. Being able-bodied is used in contrast to having disabilities. Specifically, here it means not depending on a wheelchair for mobility. Able-bodied is now explained in the manuscript.

Participants were excluded if they had a musculoskeletal injury affecting normal wheelchair use, exercise-induced asthma, or heart disease. Their physical readiness was assessed before the experiments using the PAR-Q & You physical readiness questionnaire. Other conditions not covered by our exclusion criteria were accepted as affecting the initial state of the participants with regard to their susceptibility to motion sickness. Our objective was to show that, no matter what the initial state was, the participant would be ready to undertake the experiments after a maximum of four conditioning sessions, based on their own judgment.

Other exclusion criteria were:

- Neuromuscular condition, e.g. multiple sclerosis, motor neuron disease

- Pre-existing injury or pain during exertion in upper extremities, identified using the PAR-Q questionnaire

- Prescribed drugs for neuro-musculoskeletal pain or drugs with related side-effects

9- Line 109: Again, with spelling out numbers under 10 that are not references. Make sure this is consistent throughout the manuscript.

A. Corrected. We made sure it is consistent throughout the manuscript.

10- Line 110: I did not initially realize that the sessions were held on different days. This should be clarified sooner, as the past tense used in line 117 confused me as to whether the sessions were in fact on different days. Additionally, was the amount of time between sessions controlled? What was the average time between sessions? More information needs to be recorded concerning this portion of the Methods section.

A. This paragraph is now placed further up in the manuscript, where we explain our study in the Introduction.

The conditioning sessions were only required to be held on different days, so the exact time between these sessions was not recorded. However, the time between those sessions was usually about 1 day to 1 week.

11- Line 139: The sentence starting on this line is winding and confusing. Consider: "This first VR system (VR_sysI) replicates straight-line biomechanically but does not provide inertial compensation for simulating turns. We hypothesize that when linear inertia is compensated, rotational inertia can be neglected and the participant’s perception of turning can be induced using visual cues."

A. Applied as suggested. Thank you for the help.

12- Line 145: "It is worth stating that all the participants recruited were included in data analyses of the VR systems they had tried during the main sessions." Does this indicate that some participants were eventually excluded? Reference Table 5 for clarity.

A. No. No data were excluded from the analyses. For resolving the ambiguity, the sentence was re-written as:

“It is worth stating that since not all the participants tried the same VR systems in their experiments, the data of each participant were included in the analysis of the system(s) that they had tried”.

13- Line147: What is considered "a few" subjects? Was this a statistically motivated decision?

A. Three subjects, to be exact. This is depicted in Fig. 2 (reference added to the text).

No. The statistical analyses were completed after the completion of the experiments. As stated in the manuscript, this study had an iterative approach, seeking to reach a motion-sickness-free and representative VR system for wheelchair users. Thus, we made changes based on the participants’ comments and thereby created VR_sysII and VR_sysIII.

14- Line 159: "harder"

A. Corrected.

15- Line 162: Participant recruitment was restarted? Add reference to Table 5 for clarity.

A. “restarted” was substituted by “resumed”. Thank you for pointing it out.

16- Line 175: Former participants across all previous trials? Add reference to Table 5 for clarity.

A. Every participant that had completed two main sessions. Also, one participant declined to complete the third session. There was a reference to Fig 2 right after that sentence, which shows which participants completed the third main session, so a reference to that table was not added to this sentence.

17- Line 179: Formatting issues.

A. This sentence was re-written to address your comment.

18- Line 182: Figure 2 should be placed further up to provide some clarification.

A. Applied.

19- Line 184: Remove "Some." Additionally, the note of * does not provide enough information. What does "may not be enough" mean? Additionally, intra-ocular distance being recorded should be mentioned further up in the Methods section under "Participants."

A. “Some” is removed.

The method of recording intra-ocular distance is now added to the Participants section.

The note of * is also updated as: “May not be enough if there are fast-moving objects in the scene that are not controlled by the user”.

20- Line 212: I am delighted to see that you completed additional statistical analyses concerning participant size.

A. We are happy to hear that.

21- Line 229: This line answers some of the questions I have listed above. However, these answers should be presented sooner to prevent undue confusion.

A. The message of this paragraph is now added to the Introduction section.

22- Table 5: I would reference this table in the Participant section, as well as other lines noted above.

A. We are afraid the reference to that table in the Participants section is not practical, as it needs the IPQ and MSAQ to be defined and discussed first. However, reference to Fig 2 is now added to Participants section, as it represents similar information. Other notes above are also applied.

23- Table 7: I feel like a better way to lay out this table is to include the demographic data of the participants, and indicate # of training sessions by participant. Additionally, training sessions by environment/system would also be enlightening. Maybe Fig. 2 can be modified in some way to include this?

A. Table 7 is now modified to include demographic data next to # of training sessions.

Training sessions by environment/system are already included in Fig 2.

24- Line 277: A visible pattern that is not significant, is not significant. This should not be presented as such. This issue is repeated at line 312.

A. The sentences are re-written as below to address this problem:

- Although P and S did not show statistically significant results, the downward trend in these motion sickness categories, as it is depicted in Fig 4, suggests potential clinical relevance.

- … we see a considerable difference between the MSAQ scores of VR_sysII and the other systems, which is indicative of potential clinical impact, but more research is needed to confirm this.

25- Line 399: I am not convinced that motion sickness tends to " throw the participant's concentration away." There is no citation for this, and your study does not investigate concentration as a DV.

A. With all due respect, there are references in the literature for this. Two references are added to the text now.

26- Line 450: Here is the line I have been waiting for. This would be best being stated in the "Participant" section, and then reiterated here.

A. This sentence is stated in the Participants section too, now. Thank you for the comment.

27- Line 461: I appreciate this acknowledgment that the "wow" factor may have worn off by this point.

A. Happy to hear that.

28- Fig 2: Why are the participants not labeled as "Subject 1, 2" etc.? This graph makes it very difficult to quickly refer to individual participants. Additionally, your caption for this figure only partially explains what I am looking at. Having more labels would benefit this graph immensely. Is this not the same thing as Table 5?

A. Labels of the participants are now added to Fig 2. The caption is also expanded.

This graph has an overlap with what that table (now Table 3) and Table 4 represent, as those tables show what data is available for each questionnaire, while this graph shows what systems each participant was exposed to, and in which sessions. Also, it shows how many training sessions each participant needed.

Reviewer #2:

Overall: The manuscript is not technically sound.…

The statistical analyses have not been performed appropriately and rigorously. The samples are undersized, as stated by the authors, and the decision against using multiple comparisons is not compelling or rigorous.

A. We are sorry you felt that there were technical problems with this manuscript, and we hope that with the explanations provided, you feel more confident about the quality of this manuscript.

With all due respect, we disagree that the statistical analyses were not performed appropriately and rigorously. Although the samples are undersized, every effort was made to increase the power of the analyses; that is why, instead of using non-parametric methods that perfectly fitted our research questions, we used parametric methods wherever possible so that the power of the analyses would be boosted. The statistical grounds for every decision about which analyses to use are presented in Tables 4 to 6.
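The trade-off the authors invoke here can be made concrete with a quick simulation (a minimal sketch on made-up data; the effect size and test choices below are illustrative assumptions, not taken from the study — only the sample size of 14 matches): with small samples, a parametric test typically rejects a false null more often than a simple non-parametric alternative such as the sign test.

```python
import math
import random

def z_test_p(diffs):
    """Two-sided one-sample z-test on paired differences (a normal
    approximation to the paired t-test, for illustration only)."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    z = mean / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def sign_test_p(diffs):
    """Two-sided exact sign test: a simple non-parametric alternative
    that uses only the signs of the differences."""
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    tail = sum(math.comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def power(test, effect, n=14, reps=2000, alpha=0.05, seed=1):
    """Estimate power by simulation: the fraction of simulated samples
    (true mean difference = `effect`) in which the test rejects."""
    rng = random.Random(seed)
    hits = sum(test([rng.gauss(effect, 1.0) for _ in range(n)]) < alpha
               for _ in range(reps))
    return hits / reps

# n=14 matches the study's sample size; the effect size of 0.5 is made up.
p_param = power(z_test_p, effect=0.5)
p_sign = power(sign_test_p, effect=0.5)
print(f"parametric power ~ {p_param:.2f}, sign-test power ~ {p_sign:.2f}")
```

The simulation shows a substantially higher rejection rate for the parametric test, which is the sense in which the authors' parametric-where-possible choice "boosts" power; it does not, of course, address the reviewer's separate point about underpowered between-system comparisons.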

29- The results of this study rely on there being no difference between MSAQ and IPQ results between the VR systems used. If there were significant differences between these systems, then the data would mandate an entirely different set of analyses than what was used in this study.

A. We are afraid we do not follow your logic here. We were interested to see, among other things, whether there were any differences among the VR systems for IPQ and MSAQ, and we ran MANOVA hypothesis testing for it. Every hypothesis test will eventually have a “maintain” or “reject” result for the null hypothesis. Why would a “maintain” result harm our analysis?

30- However, the authors state their comparison across VR training systems is underpowered which makes a comparison moot. The authors need to increase the sample size of this study so that their comparison across VR systems is not underpowered and they can have statistical confidence in their results. As it stands, the underpowered result cannot be trusted as indicative one way or another that VR systems were no different in MSAQ and IPQ.

A. We were not trying to show that there was no difference between the systems regarding MSAQ or IPQ, or the systems were “similar”. In contrast, we were interested to see which system is the best among the three systems, regarding motion sickness or presence. This analysis, however, did not show any difference among the systems, but this was not the only outcome of this study, nor the most important one. In this study we could show, with statistical significance, that:

- A maximum of four training sessions indeed decreases motion sickness to a tolerable level, with a total observed power of 0.91 and a total effect size of 0.72 (for three of five subcategories, plus the explicit declaration of the participants),

- About 20% of the variability in involvement and experienced realism scores of presence is accounted for by the VR system.

- Inverse relation between MSAQ and IPQ: The more the person suffers from motion sickness, the less they grade their presence in VR.

Despite that, as emphasized in the manuscript, “The statistics were used with great care, but they were not the main outcome or focus of this work. Rather, this study was about the development of three system iterations in time, and how people reacted to training on one or many of these systems”.

May we also add that we believe that research studies should not be published only on the grounds of having statistically significant results, as this will bias the published literature toward research findings that have statistical significance.

31- Failing to reject the null hypothesis should not be taken as evidence of the null hypothesis. Without this additional data, the findings of this study are equivocal.

A. That is true and we did not do it. As mentioned in the previous comment: We were not trying to show that there was no difference between the systems regarding MSAQ or IPQ, or the systems were “similar”. In contrast, we were interested to see which system is the best among the three systems, regarding motion sickness or presence.

32- Additionally, the authors claim that training sessions reduced MSAQ scores. However, training sessions were not the only independent variable in the study. Exposure to VR system type is a second independent variable which was not intended by the authors. This study would benefit from a different methodology that evaluated only one type of VR system. As the study stands, it’s impossible to say if training sessions alone led to reduced MSAQ scores. Perhaps it was changing VR systems across participants which led to reduced MSAQ scores. Perhaps it was both VR system type exposure, and repeated exposures (training sessions), which led to the reduction in MSAQ scores. The methodology and underpowered comparison of the VR systems' MSAQ and IPQ scores make this impossible to determine.

A. It is correct that participants were exposed to different VR systems in the training sessions, but the conclusion remains sound, as we were concerned with exposure to VR as a whole and not with VR constraints or conditions: “exposure to VR for a maximum of four training sessions diminishes the motion sickness to a tolerable level, as indicated by the user, regardless of the type of the VR, provided that the sessions are held on different days”.

Furthermore, Fig 2 shows that changing the VR system type was not common among all of the subjects and cannot be the cause of the reduction in motion sickness, as suggested by the reviewer.

33- In line 83 the author refers to fake movements in a user’s peripheral vision. This is unclear.

The author should provide an explanation of what a fake movement is or means.

A. This explanation is now added to the text: “movements that are inaccurate, flickering, encompass lags, etc.”.

34- In lines 159 the authors have a typo, “arder”

A. Corrected to “harder”

35- Unable to understand what the preconditions -3 to 0 trials are. The authors should provide more information about what users did during these training sessions.

A. This explanation is now added to the text for elaboration:

“Participants took one to four training sessions based on their needs. Since the number of training sessions differed between subjects and the training sessions preceded the main sessions, the last training session was named 0 and the earlier training sessions were numbered retrospectively. This way, the first main session, which comes after the last training session, is named session 1 for all participants, while keeping the chronological sequence of all sessions for everyone”.

An explanation is now added to Methods/Experimental procedure to provide more details on training sessions, as:

“During the training sessions, participants were simply exposed to different VR scenes and asked to freely “move around” for as long as they felt like it. The training sessions ended, however, whenever the participant asked to stop. The duration of these sessions was between 5 and 30 minutes”.

36- Lines 214-215 the authors say, “the statistical procedures are selected in a way to maximize the power. Therefore, wherever possible, the parametrics methods used that have higher statistical power than non-parametric methods. However if the assumptions were not met, the non-parametrics methods have been utilized.”

- This is confusing. The authors should be using analyses that meet the constraints of the statistical assumptions. If this is what the authors did, I suggest rewriting this passage to simplify it. The authors can simply say they used parametric and nonparametric tests when appropriate.

A. This paragraph is re-written to address this comment, by including your suggestion.

37- The justification for not using a Bonferroni correction, or any other statistical correction for multiple comparisons, is untenable. The decision to correct for multiple comparisons cannot be made on the basis of what is convenient for the study or results. The authors state that because they’re looking for similarities as opposed to differences, corrections for multiple comparisons aren’t necessary. The opposite is true. The authors should use conventional statistical methods (i.e. multiple comparisons corrections) which adhere to widely accepted and established statistical practices to demonstrate any claims they make based on their data.

A. We apologize that this paragraph is mistakenly put in this paper and actually belongs to (and is an excerpt of) another paper we have already published from the same project (reference 22 of this manuscript). The argument made was for that publication. This paragraph is deleted altogether. Thank you for pointing it out.
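For readers following this exchange, the correction at issue can be sketched as follows (a generic illustration, not code or data from the manuscript; the p-values are made up): the Bonferroni procedure tests each of m p-values against alpha/m, while the Holm step-down procedure controls the same family-wise error rate with somewhat more power.

```python
def bonferroni(pvals, alpha=0.05):
    """Bonferroni: test each p-value against alpha / m."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Holm step-down: compare the k-th smallest p-value against
    alpha / (m - k), stopping at the first failure. Controls the
    family-wise error rate like Bonferroni, with more power."""
    m = len(pvals)
    reject = [False] * m
    for rank, i in enumerate(sorted(range(m), key=lambda i: pvals[i])):
        if pvals[i] > alpha / (m - rank):
            break  # all larger p-values fail too
        reject[i] = True
    return reject

# Made-up p-values, purely for illustration.
ps = [0.004, 0.015, 0.030, 0.600]
print(bonferroni(ps))  # [True, False, False, False]
print(holm(ps))        # [True, True, False, False]
```

On these illustrative numbers Holm rejects one more hypothesis than Bonferroni at the same family-wise error rate, which is why it is often preferred when a correction is applied at all.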

38- The authors should report the mean and standard deviation for MSAQ scores, and should report the sample size, the p value, and F value for the MANOVA.

A. Means and standard deviations for MSAQ scores were already included in Fig 5 and Table 8. Also, sample size and P-values were already reported in Tables 4 to 6. F values are now added to Table 4.

39- Lines 288-294 editorialize in the results section. The authors should plainly state their result, and expand upon the importance of the result in the discussion section.

A. All editorial content, including the section you indicated, has now been moved to the Discussion.

40- Lines 445-447 The authors state that some results of the study could not be confirmed statistically. However, a result cannot exist without empirical evidence. The authors claim here that a result exists for which they have no evidence. The authors should remove this sentence.

A. That sentence has been rewritten as: “some of the differences detected in this study could not be confirmed statistically”.

41- Lines 414-425: The authors state that there was no significant difference in MSAQ or IPQ for any VR system, but then state that a larger sample size would be needed to identify significant results. The authors then claim that future work doesn’t need a larger sample size. This is problematic for two reasons. The authors specifically identify that their null result in MSAQ and IPQ across systems is likely due to too small a sample size. Because this is the case, the authors should not claim there is or is not a difference between VR training systems. A second concern is that the authors say future work should not focus on getting a large enough sample size to identify statistical differences between the systems. However, this would leave future researchers in the same bind the current authors are in; their analyses are underpowered and cannot be used to validate their claims one way or another.

A. As you said, we have recognized in our manuscript that a larger sample size, if implemented, could probably have shown some differences between the systems, which, incidentally, is not the main outcome of this study. Also, we have said: “future work should focus on creating a VR user experience that is a significant improvement on the designs used in this study, rather than recruiting a larger sample of participants, simply to demonstrate the differences between these systems”. There is no contradiction between these two sentences, and they are not problematic: in the latter we are saying, in other words, that although a larger sample size could have “saved” our data, our main objective was a fast iteration between the systems to arrive at a better VR system. We have then indicated the future avenue by identifying that none of these VR systems is “perfect”, and future studies should not simply repeat these tests with (one of) these VR systems using larger sample sizes merely to show a “significant difference” between them. Rather, they need to come up with an even better VR system and validate it using an appropriate statistical procedure.

With regard to recognizing differences among systems, we would like to emphasize again that we never claimed differences between the systems beyond what the participants' comments, shown in the form of word clouds, tell us (with no statistical judgment on that). The take-home messages of this paper are:

- The three VR systems showed relatively high presence and low motion sickness.

- Up to four training sessions are effective in mitigating motion sickness.

- Presence and motion sickness are inversely related.

- About 20% of the variability in involvement and experienced realism scores of presence is accounted for by the VR system (statistically significant).

Reviewer #3:

Overall: This project examines the role of repeated exposure to a virtual environment in order to reduce occurrences of VIMS without sacrificing immersion or presence. The researchers exposed participants to a large immersive virtual environment that they navigated through using a wheelchair; the relation between wheelchair control and perceptual information was altered across three conditions. Participants were exposed to up to four pre-experimental training sessions to mitigate VIMS. Once VIMS was reduced to ‘tolerable’ levels, the researchers were interested in whether the different control/information coupling paradigms influenced presence in the virtual environment.

This was a timely and interesting project that would be of interest to the VR community and would make a contribution to the literature. However, in trying to be concise, the structure and process of the studies can be hard to follow. The authors may want to reorganize to first address the aspects of the study designed to mitigate VIMS, then separately discuss the aspects designed to address the differences in the VR system mechanics. It seems there are two goals, reducing sickness and maintaining the level of presence; for clarity, the introduction, methods, analysis, and discussion may need to be explicitly organized in this manner. While VIMS and presence are related, discussing them separately may make it easier for readers to follow.

A. Thank you for the positive view on our manuscript.

This manuscript tells the story of continuous wheelchair virtual reality development, with the users' comments being considered throughout this development. This kind of development allows fast consideration of users' perceptions and faster iterations between VR systems. Separating every section into two or three parts could help with understanding each part better, but it would harm the flow of the “story”, making the logic of the transitions between sections harder to follow. Also, the discussions of the IPQ and MSAQ results are intertwined, and there are several parts of the Discussion that apply to the experiment as a whole, not just to the MSAQ or IPQ, such as the discussion of the sample size. Hence, with all due respect, we decided to keep the Methods as is. However, we did add subtitles of IPQ and MSAQ (next heading: training sessions/main sessions) to the Results and Discussion sections to organize the information.

42- In terms of the analysis, I indicated ‘no’ because it is not clear why non-parametric statistics are not used across the board as your research questions/goals seem to suggest “yes/no” rather than inferential answers. Nonparametric analyses are perfectly appropriate for the questions you were addressing (particularly since the MANOVA did not reveal significant difference – which was the hope of the researchers?).

A. Parametric methods are more powerful than non-parametric methods. On the other hand, small sample sizes reduce the power of a study. Since the power of this study for our research questions (which involve high between-subject variation) is fragile, we made every effort to add to the power of the analyses. Thus, where possible, parametric methods were used.

This explanation is now added to the Methods/Statistical procedure.

43- Just check for typos and tense/number agreement in sentences

A. The whole manuscript was checked for grammatical errors and typos.

44- For Figure 6: is there a reason why the three systems are presented out of order? If so, it should be noted in the caption.

A. Yes. The VR systems were named chronologically, but the experiments revealed that VR_sysII and VR_sysIII received the lowest and the highest presence scores, respectively. To assess what portion of this difference was accounted for by the VR system, a regression analysis was performed with VR_sysII at the far left and VR_sysIII at the far right of the axis, leaving VR_sysI in the middle.

An explanation was added to the caption.
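The regression setup described in this answer can be sketched as a simple least-squares fit over ordinally coded systems. Note the coding (VR_sysII = 0, VR_sysI = 1, VR_sysIII = 2) follows the answer above, but the presence scores below are purely illustrative, not the study's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for paired data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical per-participant presence scores, with the systems
# reordered on the x-axis: VR_sysII = 0, VR_sysI = 1, VR_sysIII = 2
x = [0, 0, 0, 1, 1, 1, 2, 2, 2]
y = [3.1, 3.4, 2.9, 3.6, 3.8, 3.5, 4.2, 4.0, 4.4]

slope, intercept = linear_fit(x, y)
```

With this ordering, a positive slope summarizes the lowest-to-highest presence trend across the three systems, which is the reason given above for placing VR_sysII and VR_sysIII at the two ends of the axis.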

45- For the word cloud figures: the captions need to indicate what the reader should take note of. Does the relative ‘strength’ of particular terms support your hypotheses?

A. To avoid repeating the same relatively long sentence in the captions of the three word clouds, this sentence was added to the text referring to those figures:

“In these figures, the size of each word is related to the rank of its repetition in the participants’ comments. Thus, bigger words indirectly show that those words are of concern to the participants”.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Thomas A Stoffregen

17 May 2021

PONE-D-21-03454R1

Motion sickness and sense of presence in a virtual reality environment developed for manual wheelchair users, with three different approaches

PLOS ONE

Dear Dr. Salimi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Reviewer 2 is satisfied with your changes. However, you will see that Reviewer 3 feels that considerable additional revision is needed. I agree with Reviewer 3; the requested changes seem reasonable, and will significantly enhance the value of your contribution.

Please submit your revised manuscript by Jul 01 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Thomas A Stoffregen, PhD

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #3: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

Reviewer #3: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: I want to thank the author for responding to/incorporating my feedback. I still think the underpowered analysis is problematic, but not fatal to the paper.

Reviewer #3: Thank you for the revisions you made and for providing a rationale for the items that you left from the original submission. Unfortunately, there are still some significant items that need to be addressed in the introduction and in your results/analysis.

Your introduction is a general statement of the problem of VIMS in VR followed by a research example prior to discussing your work in the manuscript. The issue is that the discussion of Chattha et al. does not motivate or justify your manipulation/method; it is a jarring switch between the problem set-up and the overview of your project. You would be better served to address why movement/experience using the wheelchair is important (in general, the connection between perception and action). In addition, your stated goals for the project (the process of developing a more viable VR interaction) really don't map onto the analyses you propose. Again, you state (as in the original) that the statistical outcomes were not the goal, rather the development of a compelling and sickness-"free" virtual experience. Given this, it is hard to understand the choice of analyses (more power does not help when the choice of analysis itself is not appropriate for the stated goal). It appears that you are asking for a perceptually ranked ordering of the VR systems; again, non-parametrics are tailored for this type of analysis. MANOVA is not designed to tell 'optimum' or 'best', just differences, and it is not a 'solution' for dealing with small sample sizes (in fact, it works much better with larger samples). Adjusting the frame of your introduction and analysis may solve these issues.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: Yes: L. James Smart Jr.

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Aug 19;16(8):e0255898. doi: 10.1371/journal.pone.0255898.r004

Author response to Decision Letter 1


17 Jun 2021

Reviewer #2

Comment: I want to thank the author for responding to/incorporating my feedback. I still think the underpowered analysis is problematic, but not fatal to the paper.

Answer: We are happy to read that you feel more confident about the paper.

Reviewer #3

Overall: Thank you for the revisions you made and for providing a rationale for the items that you left from the original submission. Unfortunately, there are still some significant items that need to be addressed in the introduction and in your results/analysis.

Answer: We are happy to address them. Thank you for your constructive approach.

1: Your introduction is a general statement of the problem of VIMS in VR followed by a research example prior to discussing your work in the manuscript. The issue is that the discussion of Chattha et al. does not motivate or justify your manipulation/method; it is a jarring switch between the problem set-up and the overview of your project. You would be better served to address why movement/experience using the wheelchair is important (in general, the connection between perception and action).

Answer: The Chattha paragraph was relocated to a more appropriate place (where the factors triggering motion sickness are discussed). In addition, a paragraph was added, as suggested by the reviewer, before switching to discussing our study, to facilitate a better transition:

“As mentioned, when there is a mismatch between vestibular and visual stimuli, VIMS can be triggered. This happens roughly whenever the user moves around, or looks around, in the virtual world. We also mentioned that VR has the potential to assist in many different areas, including training and related research studies for wheelchair users, an important part of society who face various difficulties on a daily basis; difficulties ranging from trouble moving around, navigating inside and outside the home, and accessing buildings, to the secondary injuries they often sustain as a consequence of the above-normal and repetitive load experienced by the upper body, which they have to use for ambulation. We developed this study to simulate navigation using a wheelchair in VR, and since the combination of navigation and VR has a high probability of inducing VIMS, we concentrated on mitigating VIMS while trying to attain a higher sense of presence.”

2: In addition, your stated goals for the project (the process of developing a more viable VR interaction) really don't map onto the analyses you propose. Again, you state (as in the original) that the statistical outcomes were not the goal, rather the development of a compelling and sickness-"free" virtual experience. Given this, it is hard to understand the choice of analyses (more power does not help when the choice of analysis itself is not appropriate for the stated goal). It appears that you are asking for a perceptually ranked ordering of the VR systems; again, non-parametrics are tailored for this type of analysis. MANOVA is not designed to tell 'optimum' or 'best', just differences, and it is not a 'solution' for dealing with small sample sizes (in fact, it works much better with larger samples). Adjusting the frame of your introduction and analysis may solve these issues.

Answer: We understand that the wording in the manuscript, “statistics are not the main focus of the study”, may have made you uncertain about why we then took so many careful statistical steps to deal with the data. Thus, we have revised that paragraph and tried to explain our approach better in the introduction, as below:

“We also wish to note that although statistics was not the focus of this research study, the statistical approaches were selected with care rather than to simplify the analyses. In order to get the most rigorous results, we carefully assessed each outcome for the six analyses to determine which of the parametric or nonparametric methods would suit the conditions of that dataset and chose the appropriate statistical approach accordingly”.

In case you are still not entirely sure about the rationale behind the choice of statistical approaches in this manuscript, we have provided more explanation in the following:

It seems that the point you are raising is more about “types of statistical analysis”, e.g., difference-finding, similarity-finding, or optimum-finding, than about using parametric or non-parametric approaches, because all of those analyses can be performed using both parametric and non-parametric methods, where conditions (e.g., normal distribution) are met. In this manuscript, we have used both methods, where appropriate, to see if the systems are different, and if so, which of the three systems is better than the others; a normal and very frequent use of MANOVA in the literature.

As reported frequently in the literature, e.g., Hopkins (2018), parametric methods should be used when we have continuous data that are normally distributed, and non-parametric methods should be used when we have categorical data or the distribution is not normal (exactly what we did in this manuscript). Despite what you are suggesting, it is never advised in the literature to use nonparametric methods when the distribution is normal (mainly our case).

We do not get different types of results from the two methods. Even with the non-parametric method, Kruskal-Wallis, we report the significance level and, based on that, judge whether there was a meaningful difference, just as we did with the MANOVA.

Both parametric and non-parametric methods work better with larger sample sizes; that is true. But for a given fragile sample size, and when we get the same information from running MANOVA or its non-parametric counterpart, Kruskal-Wallis, we prefer to use the parametric method rather than a less powerful method that does a similar job.
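The test-selection logic described in this response can be sketched as follows. This is a minimal illustration, assuming a normality check drives the choice between a parametric omnibus test and Kruskal-Wallis; the helper name and the scores are hypothetical, not the study's data or code:

```python
# Choose a parametric test when every group passes a normality check,
# otherwise fall back to the non-parametric Kruskal-Wallis test.
from scipy import stats

def choose_omnibus_test(*groups, alpha=0.05):
    """Return (test_name, p_value) for comparing independent groups."""
    # Shapiro-Wilk normality check on each group
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if normal:
        _, p = stats.f_oneway(*groups)   # parametric one-way ANOVA
        return "ANOVA", p
    _, p = stats.kruskal(*groups)        # non-parametric counterpart
    return "Kruskal-Wallis", p

# Hypothetical MSAQ-like scores for three VR systems
sys1 = [21.4, 18.9, 25.1, 19.7, 22.3]
sys2 = [20.8, 23.5, 19.2, 24.0, 21.1]
sys3 = [18.5, 22.7, 20.3, 23.8, 19.9]

name, p = choose_omnibus_test(sys1, sys2, sys3)
```

Either branch yields the same kind of decision, a significance level judged against a threshold, which is the point made above about the two methods giving the same type of result.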

Finally, mixing parametric and non-parametric methods is not unprecedented in the literature. For example, the following study used both parametric and nonparametric tests in the same study, as assumptions were satisfied, just as we did in our study:

“Niksirat, K.S., Silpasuwanchai, C., Ahmed, M.M.H., Cheng, P., Ren, X., 2017. A framework for interactive mindfulness meditation using attention-regulation process. Conf. Hum. Factors Comput. Syst. - Proc. 2017-May, 2672–2684. https://doi.org/10.1145/3025453.3025914"

With all due respect, the authors of this manuscript understand that while using the non-parametric method for all the analyses in this manuscript would keep things neat, the more rigorous way is to use parametric methods wherever the conditions are met. This strengthens the integrity of the analysis and reduces the chance of Type II errors.

Footnote: Hopkins S, Dettori JR, Chapman JR. Parametric and Nonparametric Tests in Spine Research: Why Do They Matter? Global Spine Journal. 2018;8(6):652-654. doi:10.1177/2192568218782679

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Thomas A Stoffregen

27 Jul 2021

Motion sickness and sense of presence in a virtual reality environment developed for manual wheelchair users, with three different approaches

PONE-D-21-03454R2

Dear Dr. Salimi,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Reviewer 2 was willing to accept your first revision, and Reviewer 3 recommends acceptance of the current manuscript; hence my decision to accept.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Thomas A Stoffregen, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: Thank you for making the adjustments to the introduction as requested. While I am still not 100% sure that your analysis matches your research goal, I appreciate that you checked the data using both parametric and nonparametric methods to ensure that your outcomes were consistent. So I am ok with your explanation.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: Yes: L. James Smart Jr.

Acceptance letter

Thomas A Stoffregen

2 Aug 2021

PONE-D-21-03454R2

Motion sickness and sense of presence in a virtual reality environment developed for manual wheelchair users, with three different approaches

Dear Dr. Salimi:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Thomas A Stoffregen

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: Review Feedback_ Motion sickness and sense of presence in three virtual reality environments developed for manual wheelchair users.pdf

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    All datasets are available from the Figshare repository database at: https://doi.org/10.6084/m9.figshare.13669862.v1.


    Articles from PLoS ONE are provided here courtesy of PLOS
