Abstract
Virtual reality (VR) is a promising tool for expanding the possibilities of psychological experimentation and implementing immersive training applications. Despite a recent surge in interest, there remains an inadequate understanding of how VR impacts basic cognitive processes. Due to the artificial presentation of egocentric distance cues in virtual environments, a number of cues to depth in the optic array are impaired or placed in conflict with each other. Moreover, realistic haptic information is all but absent from current VR systems. The resulting conflicts could not only impact the execution of motor skills in VR but also raise deeper concerns about basic visual processing, and about the extent to which virtual objects elicit neural and behavioural responses representative of real objects. In this brief review, we outline how the novel perceptual environment of VR may affect vision for action by shifting users away from a dorsal mode of control. Fewer binocular cues to depth, conflicting depth information and limited haptic feedback may all impair the specialised, efficient, online control of action characteristic of the dorsal stream. A shift from dorsal to ventral control of action may create a fundamental disparity between virtual and real-world skills that has important consequences for how we understand perception and action in the virtual world.
Keywords: VR, Haptic, Visually-guided, Dorsal, Ventral
Introduction
Despite the increasing popularity of virtual reality (VR) as a training tool in a range of industries, including sport, aviation and medicine, we know very little about the low-level perceptual effects of acting in a virtual world. Virtual reality is a collection of technologies that allow the user to interact with a simulation of some environment, in real time, using their own senses and motor skills (Burdea and Coiffet 2003). Since the 1990s, VR has been adopted by psychological laboratories because it permits precise environmental control untethered from the constraints of the physical world. This method has opened extensive experimental possibilities for the exploration of phenomena as diverse as the size-weight illusion (Buckingham 2019), allocentric memory (Serino et al. 2015) and movement-evoked pain (Harvie et al. 2015). In recent years, interest in the use of VR for a range of training purposes, including visually guided motor skills, has also grown. Particular areas of application include surgery (Gurusamy et al. 2008), motor rehabilitation (Adamovich et al. 2009) and sport (Gray 2019). Visually guided skills such as these must be performed in a three-dimensional (3D) world, but the stereoscopic presentation of two-dimensional (2D) images in current head-mounted VR provides visual cues that have subtle but important differences from the real world. It is not well understood how this unique perceptual environment influences how visually guided skills are performed and learned. In this short review, we highlight a number of findings which suggest that visually guided action in the virtual world might differ substantively from action in the real world. We propose that if fundamentally different modes of action control are activated in VR, skills performed in the virtual world will be unrepresentative of the real world, and transfer of training will be compromised.
Vision for action
Visual information for guiding real-time action is thought to be processed separately from more abstract perception in the mammalian brain, reflecting an evolutionary specialisation for the control of movement (Goodale 2017). The prevailing characterisation of visual processing identifies a ventral pathway (projecting from primary visual cortex, V1, to the inferior temporal lobe) that is primarily concerned with the perception and identification of visual inputs, and a dorsal pathway (projecting from V1 to the posterior parietal lobe) which provides visual information for guiding real-time action (Goodale 2017; Goodale and Milner 1992; Milner and Goodale 1993). The vision-for-action and vision-for-perception pathways are separately susceptible to disruption from brain damage, indicating that they are functionally segregated in the normal brain. Naturally, the two pathways interact on some level (Goodale and Cant 2007), but the dorsal pathway maintains a specialisation for the visual control of skilled movement. There is reason to question, however, whether this normal functional separation is maintained in the virtual world.
Cues to depth in the virtual world
The primary reason vision for action may be disrupted in VR is the artificial presentation of depth information (Wann et al. 1995). Several findings have illustrated impaired distance estimation and a general perception of the virtual world as ‘flatter’, although these effects seem to attenuate in higher-fidelity systems (Interrante et al. 2004, 2006). The dorsal stream relies primarily on binocular information (Mon-Williams et al. 2001), whereas monocular cues (such as texture and perspective) tend to inform perceived distance through the ventral stream. Restricted binocular cues to depth do not preclude the execution of visually guided tasks (Carey et al. 1998), but reliance on monocular cues does lead to increased use of the ventral stream for guiding action (Marotta et al. 1998) and, as a result, movement inefficiency (Loftus et al. 2004). The ventral stream is required for pre-planned or delayed movements but utilises different information to guide action. If binocular cues are impaired in VR, as the general perception of ‘flatness’ suggests they might be, actions in the virtual world may be achieved using much greater ventral input than their real-world counterparts.
The primary binocular cues to depth are binocular disparity and vergence. Vergence (the simultaneous horizontal rotation of the eyes to maintain binocular fixation) is an important cue to depth for the dorsal stream (Mon-Williams and Tresilian 1999; Mon-Williams et al. 2001). Perceived depth is constructed using a range of available cues, but Tresilian et al. (1999) propose that the weight afforded to vergence information decreases when there is a conflict between vergence and other depth cues, exactly as is the case in a virtual environment (VE). In the physical world, accommodation (the focusing of the lenses to maintain a clear image over distance) varies synchronously with vergence, but in head-mounted displays (HMDs) this normal coupling is broken, because objects at varying virtual depths are presented on a screen at a fixed focal depth (~5 cm from the eyes) (Eadie et al. 2000). This conflict may reduce the weight afforded to vergence as a cue to depth (Tresilian et al. 1999), leading to less reliable binocular information and a greater reliance on ventral processing (Marotta et al. 1998). Retinal image size also provides an effective cue to depth when object size is known, but a lack of prior experience with, and uncertainty about, virtual objects may make this cue uninformative as well. Consequently, general uncertainty about depth information may lead to a greater reliance on ventral mode control in VR.
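This weighting logic can be made concrete with a standard reliability-weighted (maximum-likelihood) cue-combination scheme of the kind often used to formalise such proposals; the notation below is our illustrative sketch rather than the specific model of Tresilian et al. (1999). Each cue i supplies a distance estimate \hat{d}_i with noise variance \sigma_i^2, and the combined estimate weights each cue by its relative reliability:

\hat{D} = \sum_i w_i \hat{d}_i, \qquad w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2}

The vergence signal itself follows from simple binocular geometry: for interpupillary distance I and fixation distance D, the vergence angle is

\gamma = 2\arctan\!\left(\frac{I}{2D}\right)

so, for I ≈ 6.4 cm, fixation at 60 cm corresponds to γ ≈ 6.1° while fixation at 150 cm corresponds to only γ ≈ 2.4°. On this scheme, anything that inflates the effective noise on vergence, such as its conflict with accommodation in an HMD, automatically shifts weight towards the remaining, largely monocular, cues.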
Initial brain imaging findings have suggested that the normal pattern of dorsal and ventral activation may indeed be disrupted in VR. In the real world, visual information about objects within arm’s reach (peripersonal space) tends to be encoded in the dorsal stream, while far-away objects (extrapersonal space) are processed using the ventral stream (Weiss et al. 2003). This reflects the archetypal dorsal/ventral distinction; nearby objects are potential targets for action, whereas far-away objects merely need to be recognised. To investigate this functional separation, Beck et al. (2010) asked participants to make spatial judgements about objects presented at near (60 cm) and far (150 cm) locations in virtual space. In contrast to the expected dissociation, fMRI indicated a disordered picture of dorsal and ventral activation, with near objects eliciting a high degree of ventral processing and far objects eliciting some dorsal activation. As discussed, visually guided motor skills can still be performed adequately with ventral mode control (Loftus et al. 2004), but this finding raises concerns that visually guided actions in VR may operate through fundamentally different mechanisms from those used in the real world.
Haptic information in the virtual world
An additional concern for the execution of visually guided motor skills in VR is the dearth of haptic information, which may also have negative effects on the user experience (Berger et al. 2018). Haptic feedback is derived from the active experience of touch, but hand-held controllers in common VR systems do not change their tactile properties, other than providing vibrations to signal contact between virtual hands (or tools) and other surfaces. This kind of haptic information remains unlike real-world feedback for most movements. Specialised feedback devices, such as haptic gloves and full-body haptic suits like the Teslasuit, are currently being developed, but extensive haptic feedback from exoskeleton-based systems remains expensive and impractical. There is reason to believe this general lack of haptic information may further push users into a ventral mode of processing, as has been observed for basic reaching and grasping movements (Goodale et al. 1994).
Terminal tactile feedback from target objects, which is absent in VR, is necessary for normal, real-time reaching and grasping. Reaching to a virtual target (e.g. a mirror reflection or an imagined object) with no end-point tactile feedback disrupts grasp kinematics (e.g. the normally tight scaling of in-flight grip aperture to object size) in a manner indicative of a switch from real-time visual control (dorsal mode) to control dependent on cognitive supervision (ventral mode) (Goodale et al. 1994; Whitwell et al. 2015). A recent investigation by Wijeyaratnam et al. (2019) showed that when reaching to a target in a virtual environment (where the hand was represented by a cursor and no end-point feedback was present), movement kinematics were indicative of offline (i.e. ventral) control and impaired online corrective processes, even though visual feedback was available.
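One common way of quantifying this offline/online distinction is to relate movement endpoints to limb position at an early kinematic marker (e.g. peak velocity) across trials. The Python sketch below illustrates that logic under our own assumptions; the function name and the simulated data are ours, not the analysis code of Wijeyaratnam et al. (2019):

    import numpy as np

    def offline_control_index(pos_at_peak_velocity, endpoints):
        # Squared correlation (R^2) between limb position at an early
        # kinematic marker and the final endpoint, across trials. High
        # R^2: the endpoint was largely set by the initial impulse
        # (offline, ventrally supervised control). Low R^2: online
        # corrections later decoupled the endpoint from the early
        # trajectory (dorsal, real-time control).
        r = np.corrcoef(pos_at_peak_velocity, endpoints)[0, 1]
        return r ** 2

    # Illustrative use with simulated trials (positions in mm)
    rng = np.random.default_rng(1)
    early = rng.normal(150, 10, size=100)          # position at peak velocity
    corrected = rng.normal(250, 5, size=100)       # endpoints after strong online correction
    uncorrected = early + rng.normal(100, 5, 100)  # endpoints with little correction
    print(offline_control_index(early, corrected))    # low R^2
    print(offline_control_index(early, uncorrected))  # high R^2

On such an index, reaching to a cursor target with no endpoint feedback would be expected to yield an unusually high R^2 relative to reaching with veridical feedback.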
Such pantomimed reaching movements (those made to imagined, remembered or virtual targets which provide no endpoint feedback) are informative for understanding how the lack of haptic information may impact actions in VR. Pantomimed reaches are made more slowly, reach a lower peak velocity and have a lower movement amplitude, reflecting inefficient ventral mode control (Goodale et al. 1994; Whitwell et al. 2015). Movements in VR are effectively pantomimed, as they provide no endpoint feedback, and accordingly they too are slower and more exaggerated (Whitwell and Buckingham 2013). Taken together, the artificial presentation of visual depth cues, the peculiarities of haptic feedback and the general uncertainty created by impoverished sensory information seem likely to elicit a more ventral mode of control in VR than in the real world. If visually guided skills in VR do indeed rely on ventral mode control, even in part, skills learned or performed using these altered perceptual inputs may not be representative of their real-world counterparts.
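The grip-scaling signature described above also suggests a simple check that could, in principle, be applied to VR grasping data: compare the slope relating peak grip aperture to object size across natural and pantomimed (or virtual) grasps. The values below are simulated purely to illustrate the qualitative pattern reported by Goodale et al. (1994); they are not real data:

    import numpy as np

    sizes = np.array([20.0, 40.0, 60.0])      # object widths (mm)
    natural = np.array([46.0, 64.0, 81.0])    # simulated peak grip apertures (mm)
    pantomime = np.array([56.0, 63.0, 71.0])  # shallower scaling, larger safety margin

    for label, mga in (("natural", natural), ("pantomime", pantomime)):
        slope, _ = np.polyfit(sizes, mga, 1)    # least-squares slope of aperture vs size
        print(f"{label}: slope = {slope:.2f}")  # ~0.88 (natural) vs ~0.38 (pantomime)

A markedly flatter aperture-size slope for virtual targets would be one concrete indication that a more ventral mode of control is being engaged.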
Other concerns
Accommodating to accommodation
The accommodation-vergence conflict in VR also raises questions about how visual performance could be impaired following VR use, and how cues to depth might be unlearned. Initial findings have shown that immediately following VR use there may be a greater tolerance for accommodative and vergence error, leading to faster accommodation and vergence (Hackney et al. 2018), but an impaired ability to maintain focus on a target (Mosher et al. 2018). Transient reductions in visual acuity have also been observed following just 10 min in a head-mounted VR system (Mon-Williams et al. 1993). Beyond these immediate perceptual effects, it is feasible that when learning skills which rely heavily on accommodation/vergence changes, such as targeting and aiming tasks that require shifting gaze between the target and the projectile, the redundancy of cues such as accommodation could lead to a degree of unlearning. Analogous maladaptive aftereffects have been observed following conflict between optical flow and bodily inertia in VEs (see Wright 2014). If visually guided actions are learned in VR, where cues to depth differ from those in the outside world, alternative weightings of depth information could be acquired (e.g. Tresilian et al. 1999), leading to impaired transfer of training.
Virtual bodies
A related issue that may disrupt the normal control of action is disembodiment in VR. Not only does the addition of a virtual body induce a greater sense of presence, but it also influences distance estimation, a foundational input for action planning (Mohler et al. 2010). Gonzalez-Franco et al. (2019) found that in a blind walking task, where people typically underestimate distances by approximately 10% in virtual environments, the addition of a virtual body reduced the error, but only when users felt embodied. Furthermore, a virtual body influences action control directly, improving stepping accuracy and lower-limb coordination during obstacle avoidance (Kim et al. 2018). As such, inadequate representation of the physical body may be another barrier to realistic action control in virtual scenes.
How real are virtual objects?
Finally, there may also be more fundamental concerns about how we interpret virtual objects as targets for action. For example, Snow and colleagues have illustrated important differences in brain and behavioural responses when viewing real objects, which afford the ability to act, and pictures of those same objects, which do not (Gomez and Snow 2017; Holler et al. 2019). Object images do not appear to activate action responses in dorsal stream motor networks in the same way as graspable real objects (Squires et al. 2016). What is currently unknown, however, is the extent to which objects in the virtual world provide affordances for action. For real-world objects, 3D volumetric characteristics and stereo cues inform the viewer of how they can be grasped, but the unusual way in which objects are interacted with in VR (i.e. using handheld controllers) may disrupt this normal mode of interaction. This was recently demonstrated by Linkenauger et al. (2015), who found that an embodied cognition effect, in which reaching capability influences perceived distance, only emerged after participants became familiar with their reaching ability in VR. Indeed, changes in virtual arm size had no effect on perceived distance until participants had gained some experience reaching for their target. Consequently, it is unknown whether a virtual tool might elicit responses that are more akin to those for a picture of a tool than for a real one, especially when participants do not have direct prior experience with the virtual objects.
Conclusions
In this brief review, we have raised a number of questions about how the novel perceptual environment and multisensory conflict experienced in VEs might substantively impact visually guided action. Unfortunately, it seems likely that many of these issues will remain despite the rapid advancement of VR technology. One problem that may be addressed in the near future is the vergence-accommodation conflict. Multifocal HMDs, in which multiple image planes span the viewer’s accommodation range, are a potential solution, but they currently require significant computing power (Mercier et al. 2017). Alternatively, advancements in augmented reality may soon be able to provide monocular focus cues that induce accommodation in line with eye vergence (Jang et al. 2017).
Nonetheless, the lack of realistic haptic information seems sure to be an ongoing issue. Devices such as haptic gloves and exoskeleton suits are able to provide rudimentary feedback, but they are unlikely to be sufficient for developing fine motor skills. More fundamental is the question of whether virtual entities are treated as real objects to act upon or more like pictorial stimuli; advancing technologies are unlikely to address this issue. Additionally, some degree of sensory impairment, or at least uncertainty, seems likely to remain, and this may contribute to fundamentally different modes of action control. It should be noted, however, that these issues only pertain to finely tuned perceptual-motor abilities. As described by Slater (2009), virtual environments are able to elicit a range of realistic behavioural responses, such as actively avoiding illusory pits (Meehan et al. 2002) and maintaining social norms with virtual avatars (Sanz et al. 2015). The perceptual issues identified here do not pose a problem for behavioural outcomes such as these.
In light of the questions we have raised about the effect of impaired binocular cues on dorsal and ventral modes of processing, it may be informative for future work to investigate whether well-established signatures of dorsal/ventral control, measured through reaching and grasping kinematics, hold in VR (Ganel and Goodale 2003). Manipulating cues to depth in VEs may also prove instructive for understanding vision for action in virtual worlds, as well as for addressing predictions of the perception–action model. As grasping kinematics for virtual or imagined targets appear to be qualitatively different (Goodale et al. 1994), it seems likely that other, more complex actions might also diverge from the real skill. Overall, if different modes of visual processing are being engaged, or different cues to depth are being relied upon, actions in VR may be more detached from real-world ones than we realise. Even if visually guided skills are performed adequately in VR, if a more ventral mode is being relied upon, the skill is qualitatively different, which may have implications for transfer to real-world performance. These are important questions for the field of VR training, and answering them may help to explain when and why VR is an effective learning tool, and when it may be ineffective or even counterproductive.
Funding
This work was supported by a Royal Academy of Engineering UKIC Fellowship awarded to D Harris.
Contributor Information
David J. Harris, Email: D.J.Harris@exeter.ac.uk
Gavin Buckingham, Email: G.Buckingham@exeter.ac.uk
Mark R. Wilson, Email: Mark.Wilson@exeter.ac.uk
Samuel J. Vine, Email: S.J.Vine@exeter.ac.uk
References
- Adamovich SV, Fluet GG, Tunik E, Merians AS. Sensorimotor training in virtual reality: a review. NeuroRehabilitation. 2009;25:29–44. doi: 10.3233/NRE-2009-0497.
- Beck L, Wolter M, Mungard NF, Vohn R, Staedtgen M, Kuhlen T, Sturm W. Evaluation of spatial processing in virtual reality using functional magnetic resonance imaging (fMRI). Cyberpsychol Behav Soc Netw. 2010;13:211–215. doi: 10.1089/cyber.2008.0343.
- Berger CC, Gonzalez-Franco M, Ofek E, Hinckley K. The uncanny valley of haptics. Sci Robot. 2018;3:eaar7010. doi: 10.1126/scirobotics.aar7010.
- Buckingham G. Examining the size-weight illusion with visuo-haptic conflict in immersive virtual reality. Q J Exp Psychol. 2019. doi: 10.17605/osf.io/2x3ju.
- Burdea GC, Coiffet P. Virtual reality technology. New York: Wiley; 2003.
- Carey DP, Dijkerman HC, Milner AD. Perception and action in depth. Conscious Cogn. 1998;7:438–453. doi: 10.1006/ccog.1998.0366.
- Eadie AS, Gray LS, Carlin P, Mon-Williams M. Modelling adaptation effects in vergence and accommodation after exposure to a simulated virtual reality stimulus. Ophthalmic Physiol Opt. 2000;20:242–251. doi: 10.1046/j.1475-1313.2000.00499.x.
- Ganel T, Goodale MA. Visual control of action but not perception requires analytical processing of object shape. Nature. 2003;426:664–667. doi: 10.1038/nature02156.
- Gomez MA, Snow JC. Action properties of object images facilitate visual search. J Exp Psychol Hum Percept Perform. 2017;43:1115–1124. doi: 10.1037/xhp0000390.
- Gonzalez-Franco M, Abtahi P, Steed A (2019) Individual differences in embodied distance estimation in virtual reality. In: IEEE VR.
- Goodale MA (2017) Duplex vision. In: The Blackwell companion to consciousness. Wiley, New York, pp 648–661. doi: 10.1002/9781119132363.ch46.
- Goodale MA, Cant JS. Coming to grips with vision and touch. Behav Brain Sci. 2007;30:209–210. doi: 10.1017/S0140525X07001483.
- Goodale MA, Milner AD. Separate visual pathways for perception and action. Trends Neurosci. 1992;15:20–25. doi: 10.1016/0166-2236(92)90344-8.
- Goodale MA, Jakobson LS, Keillor JM. Differences in the visual control of pantomimed and natural grasping movements. Neuropsychologia. 1994;32:1159–1178. doi: 10.1016/0028-3932(94)90100-7.
- Gray R. Virtual environments and their role in developing perceptual-cognitive skills in sports. In: Jackson RC, Williams AM, editors. Anticipation and decision making in sport. Abingdon: Taylor & Francis, Routledge; 2019.
- Gurusamy K, Aggarwal R, Palanivelu L, Davidson BR. Systematic review of randomized controlled trials on the effectiveness of virtual reality training for laparoscopic surgery. Br J Surg. 2008;95:1088–1097. doi: 10.1002/bjs.6344.
- Hackney BC, Awaad MF, Del Cid DA, Mosher RL, Kangavary A, Drew SA (2018) Impact of virtual reality headset use on ocular function and subjective discomfort. In: Society for Neuroscience conference, San Diego.
- Harvie DS, Broecker M, Smith RT, Meulders A, Madden VJ, Moseley GL. Bogus visual feedback alters onset of movement-evoked pain in people with neck pain. Psychol Sci. 2015;26:385–392. doi: 10.1177/0956797614563339.
- Holler DE, Behrmann M, Snow JC. Real-world size coding of solid objects, but not 2-D or 3-D images, in visual agnosia patients with bilateral ventral lesions. Cortex. 2019. doi: 10.1016/j.cortex.2019.02.030.
- Interrante V, Anderson L, Ries B (2004) An experimental investigation of distance perception in real vs. immersive virtual environments via direct blind walking in a high-fidelity model of the same room. In: Proceedings of the 1st symposium on applied perception in graphics and visualization. ACM, New York, pp 162–162. doi: 10.1145/1012551.1012584.
- Interrante V, Ries B, Anderson L (2006) Distance perception in immersive virtual environments, revisited. In: IEEE virtual reality conference (VR 2006), pp 3–10. doi: 10.1109/VR.2006.52.
- Jang C, Bang K, Moon S, Kim J, Lee S, Lee B. Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina. ACM Trans Graph. 2017;36:190:1–190:13. doi: 10.1145/3130800.3130889.
- Kim A, Kretch KS, Zhou Z, Finley JM. The quality of visual information about the lower extremities influences visuomotor coordination during virtual obstacle negotiation. J Neurophysiol. 2018;120:839–847. doi: 10.1152/jn.00931.2017.
- Linkenauger SA, Bülthoff HH, Mohler BJ. Virtual arm's reach influences perceived distances but only after experience reaching. Neuropsychologia. 2015;70:393–401. doi: 10.1016/j.neuropsychologia.2014.10.034.
- Loftus A, Servos P, Goodale MA, Mendarozqueta N, Mon-Williams M. When two eyes are better than one in prehension: monocular viewing and end-point variance. Exp Brain Res. 2004;158:317–327. doi: 10.1007/s00221-004-1905-2.
- Marotta JJ, DeSouza JFX, Haffenden AM, Goodale MA. Does a monocularly presented size-contrast illusion influence grip aperture? Neuropsychologia. 1998;36:491–497. doi: 10.1016/S0028-3932(97)00154-1.
- Meehan M, Insko B, Whitton M, Brooks FP Jr (2002) Physiological measures of presence in stressful virtual environments. In: Proceedings of the 29th annual conference on computer graphics and interactive techniques. ACM, New York, pp 645–652. doi: 10.1145/566570.566630.
- Mercier O, Sulai Y, Mackenzie K, Zannoli M, Hillis J, Nowrouzezahrai D, Lanman D. Fast gaze-contingent optimal decompositions for multifocal displays. ACM Trans Graph. 2017;36:237:1–237:15. doi: 10.1145/3130800.3130846.
- Milner AD, Goodale MA. Visual pathways to perception and action. In: Hicks TP, Molotchnikoff S, Ono T, editors. Progress in brain research. Amsterdam: Elsevier; 1993. pp 317–337.
- Mohler BJ, Creem-Regehr SH, Thompson WB, Bülthoff HH. The effect of viewing a self-avatar on distance judgments in an HMD-based virtual environment. Presence Teleoper Virtual Environ. 2010;19:230–242. doi: 10.1162/pres.19.3.230.
- Mon-Williams M, Tresilian JR. Some recent studies on the extraretinal contribution to distance perception. Perception. 1999;28:167–181. doi: 10.1068/p2737.
- Mon-Williams M, Wann JP, Rushton S. Binocular vision in a virtual world: visual deficits following the wearing of a head-mounted display. Ophthalmic Physiol Opt. 1993;13:387–391. doi: 10.1111/j.1475-1313.1993.tb00496.x.
- Mon-Williams M, Tresilian JR, McIntosh RD, Milner AD. Monocular and binocular distance cues: insights from visual form agnosia I (of III). Exp Brain Res. 2001;139:127–136. doi: 10.1007/s002210000657.
- Mosher RL, Lundqvist S, Morales R, Armendariz J, Hackney BC, Drew SA (2018) Examining changes in oculomotor function after immersive virtual reality use. In: Society for Neuroscience conference, San Diego.
- Sanz FA, Olivier A, Bruder G, Pettré J, Lécuyer A (2015) Virtual proxemics: locomotion in the presence of obstacles in large immersive projection environments. In: 2015 IEEE virtual reality (VR), pp 75–80. doi: 10.1109/VR.2015.7223327.
- Serino S, Pedroli E, Keizer A, Triberti S, Dakanalis A, Pallavicini F, Chirico A, Riva G. Virtual reality body swapping: a tool for modifying the allocentric memory of the body. Cyberpsychol Behav Soc Netw. 2015;19:127–133. doi: 10.1089/cyber.2015.0229.
- Slater M. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos Trans R Soc B Biol Sci. 2009;364:3549–3557. doi: 10.1098/rstb.2009.0138.
- Squires SD, Macdonald SN, Culham JC, Snow JC. Priming tool actions: are real objects more effective primes than pictures? Exp Brain Res. 2016;234:963–976. doi: 10.1007/s00221-015-4518-z.
- Tresilian J, Mon-Williams M, Kelly BM. Increasing confidence in vergence as a cue to distance. Proc R Soc Lond B Biol Sci. 1999;266:39–44. doi: 10.1098/rspb.1999.0601.
- Wann JP, Rushton S, Mon-Williams M. Natural problems for stereoscopic depth perception in virtual environments. Vis Res. 1995;35:2731–2736. doi: 10.1016/0042-6989(95)00018-U.
- Weiss PH, Marshall JC, Zilles K, Fink GR. Are action and perception in near and far space additive or interactive factors? NeuroImage. 2003;18:837–846. doi: 10.1016/S1053-8119(03)00018-1.
- Whitwell RL, Buckingham G. Reframing the action and perception dissociation in DF: haptics matters, but how? J Neurophysiol. 2013;109:621–624. doi: 10.1152/jn.00396.2012.
- Whitwell RL, Ganel T, Byrne CM, Goodale MA. Real-time vision, tactile cues, and visual form agnosia: removing haptic feedback from a “natural” grasping task induces pantomime-like grasps. Front Hum Neurosci. 2015. doi: 10.3389/fnhum.2015.00216.
- Wijeyaratnam DO, Chua R, Cressman EK. Going offline: differences in the contributions of movement control processes when reaching in a typical versus novel environment. Exp Brain Res. 2019. doi: 10.1007/s00221-019-05515-0.
- Wright WG. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds. Front Syst Neurosci. 2014. doi: 10.3389/fnsys.2014.00056.