Author manuscript; available in PMC 2017 Feb 1.
Published in final edited form as: Neuropsychologia. 2016 Jan 6;82:84–90. doi: 10.1016/j.neuropsychologia.2016.01.005

Relative Contributions of Visual and Auditory Spatial Representations to Tactile Localization

Jean-Paul Noel 1,2, Mark Wallace 2,3,4,5
PMCID: PMC4752883  NIHMSID: NIHMS753681  PMID: 26768124

Abstract

Spatial localization of touch is critically dependent upon coordinate transformation between different reference frames, which must ultimately allow for alignment between somatotopic and external representations of space. Although prior work has shown an important role for cues such as body posture in influencing the spatial localization of touch, the relative contributions of the different sensory systems to this process are unknown. In the current study, we had participants perform a tactile temporal order judgment (TOJ) under different body postures and conditions of sensory deprivation. Specifically, participants performed non-speeded judgments about the order of two tactile stimuli presented in rapid succession on their ankles during conditions in which their legs were either uncrossed or crossed (thus bringing somatotopic and external reference frames into conflict). These judgments were made in the absence of 1) visual, 2) auditory, or 3) combined audio-visual spatial information by blindfolding and/or placing participants in an anechoic chamber. As expected, results revealed that tactile temporal acuity was poorer under crossed than uncrossed leg postures. Intriguingly, results also revealed that auditory and audio-visual deprivation exacerbated the difference in tactile temporal acuity between uncrossed and crossed leg postures, an effect not seen for visual-only deprivation. Furthermore, the effects under combined audio-visual deprivation were greater than those seen for auditory deprivation. Collectively, these results indicate that mechanisms governing the alignment between somatotopic and external reference frames extend beyond those imposed by body posture to include spatial features conveyed by the auditory and visual modalities – with a heavier weighting of auditory than visual spatial information. Thus, sensory modalities conveying exteroceptive spatial information contribute to judgments regarding the localization of touch.

Keywords: Tactile Localization, Auditory, Visual, Deprivation, Space, TOJ

INTRODUCTION

The multisensory nature of our external world poses a number of challenges for the central nervous system in decoding the incoming information, including the fact that this information is initially encoded in a variety of reference frames. Visual information is first encoded in a frame of reference tied to the retina (i.e., retinotopic), auditory information in a frame based on the position of the head (i.e., craniotopic), and somatosensory information in a somatotopic reference frame. How these coordinate frameworks interact in order to solve the problem of accurately locating (and acting upon) a stimulus in space is a question of intensive inquiry.

Tactile localization, in particular as probed with a tactile temporal order judgment (TOJ) task applied to stimuli delivered on the hands, has received much attention within this framework. The observation that participants’ performance in establishing the order in which tactile stimulation is administered – a task seemingly achievable without taking body posture into account – is heavily influenced by proprioceptive information (Yamamoto & Kitazawa, 2001a; Shore, Spry, & Spence, 2002) has given rise to a number of interpretations regarding the solution to the reference frame problem.

Kitazawa and colleagues (Yamamoto & Kitazawa, 2001a; Kitazawa, 2002; Kitazawa et al., 2008) suggest that tactile stimulation is first processed in space under the assumption of a ‘standard’ (i.e., aligned) posture and then projected back onto the skin location, taking body posture into account. An interesting line of evidence for this space-to-body directionality comes from studies describing the path of saccades to single tactile stimuli. When a tactile stimulus is administered to crossed hands, saccades occasionally initiate toward the opposite hand – namely, toward the external space routinely occupied by the stimulated hand – and correct toward the appropriate hand mid-action (Overvliet et al., 2011).

Similarly, Shore and colleagues (Shore et al., 2002; Cadieux et al., 2010) postulate that a tactile stimulus is initially represented according to its somatotopic location on the skin and only afterwards remapped onto external coordinates. In a crossed-hands condition, the decreased performance in tactile TOJ is presumed to result from a misalignment between the somatotopic and external spatiotopic coordinate frames. Lastly, Heed, Badde, and colleagues (Heed et al., 2015; Badde et al., 2014) propose that somatotopic and external spatial reference frames are concurrently active, and that the precise localization of touch in space is determined by integrating across sources of information and according to task demands. Their account posits that information is pooled across different reference frames (with varying weights) and that TOJ crossing effects reflect this (uneven) integration of spatial information once the remapping between frames of reference is complete. It is noteworthy, hence, that although somewhat different in their implementations, all of these theoretical accounts posit a transformation process from one reference frame to another (Heed & Azañón, 2014), implying a space-to/from-body remapping process (e.g., Yamamoto & Kitazawa, 2001a; Shore et al., 2002). Thus, in addition to scrutinizing the bodily factors that govern tactile localization (i.e., posture, features of the particular body part being stimulated), a fundamental question that remains to be answered is which features of a particular spatial representation – or indeed which spatial representations at all (i.e., auditory, visual, or both) – are implicated in the process of localizing touch in space.

Whereas the impact of body position on tactile localization has been well studied, less work has focused on the exteroceptive senses and the role that the spatial representation(s) constructed from these senses play in tactile localization. Intriguingly, whereas the crossed-hands effect occurs in sighted individuals in the absence of visual information, congenitally blind participants are unaffected by crossing the hands (Röder et al., 2004). Additionally, individuals who become blind later in life perform just as sighted individuals do. Further, crossing effects are weaker, albeit still present, when the hands are crossed behind the back in sighted individuals (Kóbor et al., 2006). These results suggest that whereas visual experience drives the establishment of a crossing effect, the sustained presence of vision is not required for it to be demonstrated. Virtually unknown is the role of the auditory system, as well as how multisensory audiovisual spatial representations mediate this effect. In the current study we sought to examine the role of auditory, visual, and combined audiovisual spatial representations on tactile TOJ performance for both uncrossed (i.e., aligned somatotopic and external reference frames) and crossed (i.e., misaligned somatotopic and external reference frames) leg conditions, by examining performance in the absence of auditory, visual, and combined audiovisual spatial information. This was achieved by placing participants in an anechoic chamber (auditory absent), blindfolding them (vision absent), or placing them in an anechoic chamber while blindfolded (audiovisual absent). Prior studies (e.g., Yamamoto & Kitazawa, 2001a) have masked task-relevant sounds arising from tactile stimulation using white noise. We consider our use of the anechoic environment to be comparable, but not identical, to the use of white noise in that both mask spatially informative sounds. These two conditions do differ in that white noise masks all sounds, including those that are self-generated, whereas the anechoic chamber maintains self-generated sounds – sounds that have been shown to be important in the maintenance of an implicit body representation and tactile perception (Tajadura-Jiménez et al., 2012).

MATERIALS AND METHODS

Participants

A total of forty-eight participants took part in this study (20 females; mean age = 19.43 ± 1.1 years; n_auditory = 17, n_visual = 15, n_audiovisual = 16). All participants reported normal touch and hearing, and had normal or corrected-to-normal visual acuity. No participant had a history of psychiatric or neurological conditions. All participants gave their informed consent to take part in this study. The protocols were approved by Vanderbilt University Medical Center’s Institutional Review Board.

Materials and Apparatus

Tactile stimulation consisted of 10 ms of vibrotactile stimulation delivered to the ankles (medial malleolus) by means of a Pico Vibe vibration motor (9 mm diameter, 25 mm length, 230 Hz, 4 g amplitude) driven at 3.3 V by an Arduino™ microcontroller (http://arduino.cc; Arduino Mega 2560, 16 MHz). Experimental protocols were carried out by in-house software (ExpyVR, http://lnco.epfl.ch/expyvr) running at 100 Hz. Tactile stimulation was applied to the ankles in order to minimize putative auditory cues arising as a consequence of mechanical tactile stimulation (see Schicke & Röder, 2006, for a previous account of tactile TOJ on the feet). Further, in order to rule out the possibility that vibrotactile stimulation produced an auditory signal that participants could exploit during tactile temporal order judgments, we ran a control experiment in which an independent group of 14 participants judged the temporal order of tactile stimulation that was delivered not to themselves but to the experimenter. We opted for this control experiment – as opposed to, say, performing tactile TOJ with electrical stimulation or simply detaching the vibrotactile stimulators from the participant (but not placing them on the experimenter) – because this condition most closely matched the putative auditory signal reaching participants during vibrotactile stimulation in the experimental conditions (see below). The experimenter placed his legs so as to mimic the relative ear-to-leg geometry of the participant’s legs in the main experiment, and participants were asked to make their temporal order judgments based not on vibrotactile stimulation, but on the auditory stimulation associated with it. Only the legs-uncrossed condition, and not the crossed condition, was tested, as these conditions are identical from the participants’ perspective when vibrotactile stimulation is delivered to the experimenter rather than to the participant. All other procedures followed those of the main experiment (see below). Results indicated no effect of SOA (p = 0.388) – that is, performance did not improve with longer auditory SOAs, as would be expected if participants made use of this information in determining temporal order – thus ruling out the possibility that auditory information from vibrotactile stimulation played a considerable role in tactile TOJ. A sketch of the pulse-timing logic is given below.
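
To make the stimulation timing concrete, the following is a minimal illustrative sketch of how one trial’s pair of 10 ms pulses could be triggered from MATLAB using the MATLAB Support Package for Arduino Hardware. The serial port, pin assignments, and indeed the use of MATLAB at all are assumptions for illustration – the study itself drove the motors from the ExpyVR software running at 100 Hz.

```matlab
% Illustrative sketch only: two 10 ms vibrotactile pulses at a given SOA.
% Port and pin names are hypothetical; the actual experiment used ExpyVR.
a = arduino('COM3', 'Mega2560');   % Arduino Mega 2560, as in the study

leftPin  = 'D9';                   % hypothetical pin driving the left motor
rightPin = 'D10';                  % hypothetical pin driving the right motor
pulse    = 0.010;                  % 10 ms vibrotactile pulse
soa      = 0.050;                  % onset asynchrony, e.g. 50 ms, left leading

writeDigitalPin(a, leftPin, 1);    % first pulse on
pause(pulse);
writeDigitalPin(a, leftPin, 0);    % first pulse off after 10 ms
pause(soa - pulse);                % complete the onset-to-onset interval
writeDigitalPin(a, rightPin, 1);   % second pulse on, one SOA after the first
pause(pulse);
writeDigitalPin(a, rightPin, 0);
```

Note that pause-based timing on the host is only approximate; millisecond-accurate delivery would in practice be handled on the microcontroller itself.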

Deprivation of visual spatial information was accomplished by blindfolding participants for 15 min prior to the tactile temporal order judgment (TOJ) task, as well as during the protocol itself. Similarly, far auditory spatial information was reduced by placing participants in an anechoic chamber (ambient noise = 15 dB(A)) for 15 min prior to and during the experiment. Far environmental audiovisual spatial information was reduced by combining the two aforementioned approaches.

Procedure

A tactile TOJ task was performed in which participants were successively given vibrotactile stimulation on their two ankles and reported (in a non-speeded fashion) which one (left or right) was stimulated first. The task was conducted both under conditions of reduced environmental sensory information (audio, visual, or audio-visual – a between-subjects factor) and under standard environmental sensory conditions (no blindfolding and ambient noise = 42 dB(A)). Acclimatization to sensory environments prior to testing was achieved by placing the participant in the given environment 15 min prior to initiation of the protocol. Additionally, the TOJ task was performed with legs uncrossed and crossed (the latter condition misaligning the external and somatotopic reference frames). Under both crossed and uncrossed leg conditions, the instructions were always to report which ankle (and not which side of space) was stimulated first. All subjects carried out the task in both postures and under both standard and reduced sensory environments, while the nature of the sensory information removed (auditory, visual, or audio-visual) was a between-subjects factor.

Stimuli were presented at stimulus onset asynchronies (SOAs) of 20, 30, 50, 100, 150, 200, 300, 500, and 1500 ms, each with both the left (arbitrarily denoted by negative values) and the right (arbitrarily denoted by positive values) leg stimulus leading. Participants stood for the duration of the experiment, with their feet separated by 40 cm (in both crossed and uncrossed conditions). Stimuli were presented in a randomized order, and 20 repetitions of each condition were presented, for a total of 1,440 trials (20 repetitions X 18 SOAs X 2 postures X 2 environmental conditions) per participant. The inter-trial interval was randomized between 1 and 2 s (uniform distribution; mean 1.5 s). The order in which conditions were presented was pseudo-randomized, as was the manner in which participants crossed their legs (left over right, or right over left) and responded to tactile stimulation – response keys were stacked vertically and their mapping (i.e., left leg assigned to either the top or bottom button) was pseudo-randomized. A sketch of the resulting trial structure is given below.
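
Assuming trials were randomized within each posture-by-environment block (block order itself being pseudo-randomized, as stated above), the trial list for one block might be built along these lines; variable names are illustrative:

```matlab
% Sketch: SOA sequence for one posture x environment block.
% 9 base SOAs x 2 leading sides = 18 signed SOAs; 20 repetitions each
% gives 360 trials per block, and 4 blocks give the 1,440 total trials.
baseSOAs = [20 30 50 100 150 200 300 500 1500];      % ms
soas     = [-fliplr(baseSOAs), baseSOAs];            % negative = left leading

nReps     = 20;
blockSOAs = repmat(soas(:), nReps, 1);               % 360 x 1 list of trials
blockSOAs = blockSOAs(randperm(numel(blockSOAs)));   % randomized order

% Inter-trial intervals: uniform on [1, 2] s (mean 1.5 s)
itis = 1 + rand(numel(blockSOAs), 1);
```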

Analysis

Individual participants’ reports (proportion of ‘right first’) were fitted by means of MATLAB’s glmfit routine (link function: ‘probit’), and goodness of fit was quantified by a chi-square test. If this test was significant for a particular individual – and thus their data were not successfully fitted by the glmfit routine – their data were excluded from further statistical analysis. This procedure resulted in the exclusion of 1 participant in the audio-deprivation condition and 2 participants in the visual-deprivation condition. After these exclusions, overall fitting was successful (R2 = 0.83). The standard deviation of this fit was taken to be the participant’s just noticeable difference (JND) – a measure of sensitivity. After examining JNDs within sensory deprivation conditions – as a means of providing a descriptive account of the raw data and in order to allow for comparison with the existing literature – we examined the relative contribution of a particular sensory environment to the localization of touch. For this latter analysis, for each participant’s JND measure under a particular posture (uncrossed or crossed legs), we subtracted the JND value obtained under the standard sensory environment from that obtained under the sensory-deprived condition. Further, in order to specifically address the contribution of a particular sensory environment to the localization of touch in space (which entails re-aligning the external spatiotopic and somatotopic reference frames), we subtracted the aforementioned contribution of the sensory environment under legs uncrossed from that under legs crossed. In short:

$\Delta\mathrm{JND} = \left(\mathrm{JND}_{\mathrm{crossed}}^{\mathrm{deprived}} - \mathrm{JND}_{\mathrm{crossed}}^{\mathrm{standard}}\right) - \left(\mathrm{JND}_{\mathrm{uncrossed}}^{\mathrm{deprived}} - \mathrm{JND}_{\mathrm{uncrossed}}^{\mathrm{standard}}\right) \quad \text{(eq. 1)}$
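
A minimal sketch of this fitting pipeline for a single participant and posture/environment cell is shown below. It assumes responses have been aggregated per signed SOA into counts (variable names hypothetical), and reads “a chi-square test” as a deviance-based goodness-of-fit test – one standard choice, though the authors’ exact test is not specified.

```matlab
% Sketch: probit GLM fit of 'right first' proportions and JND extraction.
% soa: 18 x 1 signed SOAs (ms); nRight, nTotal: per-SOA response counts.
[b, dev] = glmfit(soa, [nRight nTotal], 'binomial', 'link', 'probit');

% Deviance-based chi-square goodness of fit; participants whose data the
% model failed to fit (significant test) were excluded from analysis.
pFit    = 1 - chi2cdf(dev, numel(soa) - 2);   % 2 fitted parameters
exclude = pFit < 0.05;

% With a probit link, p = normcdf(b(1) + b(2)*SOA), i.e.
% normcdf((SOA - PSE)/sigma), so the SD of the fitted cumulative
% Gaussian (the JND) is 1/slope.
jnd = 1 / b(2);
pse = -b(1) / b(2);

% Eq. 1, given the four JNDs of one participant (names hypothetical):
deltaJND = (jndCrossedDeprived   - jndCrossedStandard) ...
         - (jndUncrossedDeprived - jndUncrossedStandard);
```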

Lastly, as TOJ data do not always take the standard S-shaped psychometric form (Yamamoto & Kitazawa, 2001a), we probit-transformed reports of ‘right first’ between SOAs of −100 and 100 ms and fitted these values with a linear slope (Shore et al., 2002). Analyses undertaken on the linear slope values indicated the same pattern of results as the analysis on the standard deviation of the glmfit. As these two measures are reciprocally related, but the regression of probit-transformed values takes into account only a portion of the data (i.e., small SOAs), we focus our report on the JND values obtained from the GLM fitting procedure.
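
A sketch of this complementary slope analysis follows, under the assumption that proportions are nudged away from 0 and 1 so that the probit transform stays finite (how boundary proportions were handled is not stated in the text):

```matlab
% Sketch: probit-transformed linear-slope analysis restricted to small SOAs.
small = abs(soa) <= 100;                 % SOAs between -100 and 100 ms
p     = nRight(small) ./ nTotal(small);  % proportion 'right first'
p     = min(max(p, 0.01), 0.99);         % keep norminv finite (assumption)
z     = norminv(p);                      % probit transform

coeffs     = polyfit(soa(small), z, 1);  % linear fit: z = slope*SOA + intercept
slopeIndex = coeffs(1);                  % steeper slope = better sensitivity
```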

RESULTS

As revealed by a 2 (Sensory Environment; Deprived vs. Standard) X 2 (Posture; Uncrossed vs. Crossed) X 3 (Sensory Modality; Audio, Visual, Audio-Visual) mixed ANOVA, misaligning the external and somatotopic reference frames significantly decreased participants’ sensitivity (i.e., increased JND; main effect of Posture, F(1, 44) = 29.499, p < 0.001, partial η2 = .401) in judging the order of tactile stimulation administered on their ankles. Importantly, these changes in JND were strongly dependent upon the sensory environment in which participants completed the task, as demonstrated by the significant main effect of Sensory Environment (F(1, 44) = 11.194, p = 0.002, partial η2 = .203) and, perhaps most importantly, significant interactions between Sensory Environment and Posture (F(1, 44) = 7.006, p = 0.011, partial η2 = .237), as well as between Sensory Environment, Sensory Modality, and Posture (F(2, 44) = 3.352, p = 0.045, partial η2 = 0.183) (see Figure 1; means are represented by solid dark lines and individual subject data by solid semi-transparent lines). In order to further elucidate the root of these interactions, we performed a 2 (Sensory Environment; Deprived vs. Standard) X 2 (Posture; Uncrossed vs. Crossed) within-subject ANOVA on each of the sensory modalities (audio, visual, audio-visual) separately; a sketch of the omnibus model setup is given below.
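
For concreteness, one way the 2 X 2 X 3 mixed design could be specified in MATLAB’s repeated-measures framework is sketched below; the data table and its column names (SU, SC, DU, DC for the four within-subject JND cells) are hypothetical, and this is one of several equivalent specifications rather than the authors’ actual script.

```matlab
% Sketch: 2 (Environment) x 2 (Posture) within x 3 (Modality) between
% mixed ANOVA on JNDs. T is a table with one row per participant, a
% categorical between-subjects column Modality, and four contiguous JND
% columns (hypothetical names): SU/SC = standard uncrossed/crossed,
% DU/DC = deprived uncrossed/crossed.
within = table( ...
    categorical({'Std'; 'Std'; 'Dep'; 'Dep'}), ...
    categorical({'Unc'; 'Cro'; 'Unc'; 'Cro'}), ...
    'VariableNames', {'Environment', 'Posture'});

rm       = fitrm(T, 'SU-DC ~ Modality', 'WithinDesign', within);
anovaTbl = ranova(rm, 'WithinModel', 'Environment*Posture');
```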

Figure 1.

Tactile temporal order judgment (TOJ) performance as a function of posture (legs uncrossed – left panels; legs crossed – right panels) and sensory environment (standard sensory environment – in black; deprived environment with absence of spatial information in the auditory (red), visual (blue), or audiovisual (green) domains). The proportion of ‘right first’ responses (y-axis) is plotted as a function of SOA (negative values indicating left-leading and positive values indicating right-leading tactile stimulation). Solid lines represent the sample mean, while dashed lines represent individual participants’ data.

For those individuals for whom auditory spatial information was absent during task performance, the 2 (Sensory Environment; Deprived vs. Standard) X 2 (Posture; Uncrossed vs. Crossed) within-subject ANOVA on JNDs revealed a significant main effect of Sensory Environment (F(1, 16) = 13.95, p = 0.002, partial η2 = 0.46; Deprived, M = 442.59ms, S.E. = 80.90ms; Standard, M = 215.77ms, S.E. = 38.66ms) and a main effect of Posture (F(1, 16) = 18.21, p = 0.002, η2 = 0.64; Crossed, M = 452.13ms, S.E. = 71.60ms; Uncrossed, M = 206.22ms, S.E. = 40.14ms). In addition, the results showed a significant Sensory Environment X Posture interaction (F(1, 16) = 5.19, p = 0.037, partial η2 = 0.24). For participants deprived of auditory spatial information, sensitivity was better under the standard than under the deprived sensory environment in both the Uncrossed and Crossed postures (p = 0.031 and p = 0.009, respectively). Thus, although performance in both the Uncrossed and Crossed conditions was affected by sensory deprivation, the significant Sensory Environment X Posture interaction seemingly reflects a detriment in tactile localization under a deprived audio-spatial sensory environment that was particularly pronounced when the limbs were crossed.

With regard to the deprivation of visual information, the within-subjects ANOVA demonstrated a significant main effect of Posture (F(1, 13) = 11.48, p = 0.005, partial η2 = 0.46; Crossed M = 514.85ms, S.E. = 79.48ms; Uncrossed M = 233.10ms, S.E. = 40.90ms), but no main effect of Sensory Environment (F(1, 13) = 0.061, p = 0.801; Deprived M = 361.46ms, S.E. = 69.74ms; Standard M = 386.50ms, S.E. = 69.09ms), nor an interaction between these variables (F(1, 13) = 0.192, p = 0.669).

In the case of audiovisual multisensory deprivation, the within-subjects ANOVA demonstrated a significant main effect of Posture (F(1, 15) = 21.57, p < 0.001, partial η2 = 0.59; Crossed M = 603.90ms, S.E. = 92.26ms; Uncrossed M = 224.18ms, S.E. = 35.44ms), as well as a main effect of Sensory Environment (F(1, 15) = 10.68, p = 0.005, partial η2 = 0.41; Deprived M = 548.68ms, S.E. = 93.41ms; Standard M = 279.40ms, S.E. = 33.08ms), and an interaction between these conditions (F(1, 15) = 6.93, p = 0.019, partial η2 = 0.69). Findings demonstrated that the interaction between Sensory Environment and Posture for these participants was driven by a significant decline in performance when limbs were crossed (p = 0.006). In contrast, there was no effect when the limbs were uncrossed (p = 0.896).

The relative contributions of the different sensory manipulations to changes in the JND during the tactile localization task when external spatial and bodily-centered reference frames are misaligned were calculated as stated in eq. 1 (see above) and compared across conditions by means of a between-subjects one-way ANOVA (sketched below). As illustrated in Figure 2b, this analysis revealed a significant difference between Sensory Environment groups (F(2, 44) = 3.352, p = 0.045, partial η2 = 0.183), which was driven by significant differences between the auditory (red) and audiovisual (green) conditions (t(28) = 2.078, p = 0.047, partial η2 = 0.177 – Bonferroni-corrected), as well as between the visual (blue) and audiovisual conditions (t(26) = 2.146, p = 0.041, partial η2 = 0.235 – Bonferroni-corrected). One-sample t-tests revealed that the JND changes were significantly different from zero when the external spatiotopic and somatotopic reference frames were misaligned in the audiovisual and auditory deprived conditions, but not in the visual deprived condition (audiovisual t = 3.244, p = 0.008; auditory t = 1.983, p = 0.048; visual t = 0.963, p = 0.862).
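
Under the assumption that each participant’s eq. 1 difference score is stored in one of three group vectors (names hypothetical), the group comparison and one-sample tests reported above could be reproduced along the following lines:

```matlab
% Sketch: compare eq. 1 difference scores across the three deprivation
% groups and test each group mean against zero (vector names hypothetical).
deltas = [dAud; dVis; dAV];
labels = [repmat({'auditory'},    numel(dAud), 1); ...
          repmat({'visual'},      numel(dVis), 1); ...
          repmat({'audiovisual'}, numel(dAV),  1)];

pGroup = anova1(deltas, labels, 'off');   % between-subjects one-way ANOVA

[~, pAud] = ttest(dAud);                  % one-sample t-tests against zero
[~, pVis] = ttest(dVis);
[~, pAV]  = ttest(dAV);
```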

Figure 2.

Change in JND when spatiotopic and somatotopic reference frames are misaligned, as a function of the sensory modality/modalities (auditory (red), visual (blue), or audiovisual (green)) in which spatial information is reduced. Error bars represent S.E.M.; * indicates p < 0.05 and ** indicates p < 0.01.

DISCUSSION

The present study investigated how non-informative auditory and visual information contributes to the spatial localization of touch. The results shed important new light on how external spatial representations (i.e., those built by vision and audition) can influence judgments about a touch to the body, with a specific emphasis on changes when the somatotopic and external coordinate frames are placed in conflict (in this case, by crossing the legs). Interestingly, the results reveal a dramatic difference between the removal of visual (via blindfolding) and auditory (via placing subjects in an anechoic environment) spatial information. In addition, the results reveal that the most impactful situation (i.e., the most detrimental to tactile localization) is one in which combined audiovisual information is removed. Thus, whereas the absence of visual spatial information on its own had little impact on tactile localization, when combined with the absence of auditory spatial information, blindfolding accentuated the effects relative to the auditory-alone manipulation. Hence, in answer to the overarching question as to which type of spatial information (auditory, visual, audio-visual) is remapped/integrated onto/with somatosensory anatomical locations in the context of tactile localization, the answer appears to be both auditory and audio-visual. The directionality of the effect (i.e., a detriment rather than a benefit under exteroceptive sensory deprivation) may seem surprising, given that each of the sensory systems is taken to be referenced to a distinct frame; it would thus seem that removal of a sensory modality with a conflicting reference frame should facilitate rather than hinder anatomical tactile localization. On the other hand, it is well known that in “near” space (that is, space near the body or a particular body part), both visual and auditory information may be mapped in a body-centered fashion (Graziano et al., 1997, 1999), and thus may provide redundant rather than conflicting information.

One of the most robust effects observed here was that the removal of auditory spatial information decreased the sensitivity with which participants were able to carry out the tactile temporal order judgment (TOJ) task, as evidenced by changes in JND. From a control standpoint, it must be pointed out that this was true both when participants crossed and when they did not cross their legs, further excluding the possibility that sound stimuli (generated as a consequence of vibrotactile stimulation in the anechoic chamber) reached threshold – and thereby aided tactile TOJ inside the anechoic room – only when paired with tactile stimuli (the possibility of auditory stimuli contributing to tactile TOJ even in the absence of tactile stimulation having been excluded by the control experiment mentioned above). From an experimental point of view, although not directly a spatial judgment, the TOJ task can be, and has been, extensively employed as a proxy for spatial processing, as participants need to decide whether the left or right ankle was stimulated first (Schicke & Röder, 2006; Heed, Backhaus, & Röder, 2012). We believe that the finding that tactile TOJ is impaired under a condition in which there are no informative auditory spatial signals is in keeping with evidence that audition is the most temporally reliable modality (Repp & Penel, 2002; Bertelson & Aschersleben, 2003; Guttman et al., 2005 – perhaps particularly when paired with the somatosensory system; see Noel et al., 2015), and that this high degree of temporal acuity mediates the greater influence of audition demonstrated in this task. Future work should seek to verify and extend these findings in the context of a more “pure” spatial task. In addition, it remains unclear from a tactile temporal order judgment standpoint whether placing participants in an acoustically deprived and spatially uninformative environment such as an anechoic chamber is equivalent to simply administering white noise in order to mask sounds. Placing participants in an anechoic chamber selectively eliminates exteroceptive sounds, while internally generated sounds (e.g., gastrointestinal) remain. Thus, given that self-generated sounds contribute to building a body representation (Tajadura-Jiménez et al., 2012), it is conceivable that the crossed-legs condition is further impaired in an anechoic room than it would be under white noise administration: the stronger the body representation, the stronger the conflict between a reference frame grounded on the body (somatotopic) and a reference frame with external coordinates (spatiotopic) when limbs are crossed over the midline. This hypothesis, nonetheless, remains to be tested.

The results seen in the absence of spatially informative visual signals are in keeping with prior work showing the presence of a crossing effect even in the absence of visual information (Röder et al., 2004), and reinforce that these crossing effects appear to be similar with and without vision. Additionally, they extend the existing literature (on hands and feet) to the case of the ankles, and illustrate that although on its own the visual spatial representation does not significantly contribute to tactile spatial localization (at least in the context of a tactile TOJ task), it does play an important role when working in concert with the auditory spatial representation.

Finally, the current results show that the largest effects on tactile TOJ sensitivity (JND) between uncrossed and crossed conditions are seen for combined audiovisual deprivation. These results are intriguing in that the effects exceed those seen for auditory deprivation alone, suggesting that an integrated audiovisual spatial representation plays the strongest role in shaping the tactile judgments measured here. Qualitatively, the effect on tactile TOJ sensitivity (JND) between crossed and uncrossed postures was greater under audiovisual deprivation than would have been predicted by adding the effects of the auditory and visual deprivations alone. Unfortunately, a limitation of the current study is that auditory, visual, and audiovisual deprivation were carried out as a between-subjects variable, and thus quantitative analysis based on independent/dependent channel models (e.g., additivity, supra-additivity) is not feasible.

Vision has been suggested to predominate over the other sensory modalities with regard to spatial processing, an idea captured well within the multisensory field by the “modality appropriateness hypothesis,” which argues that vision outperforms audition on spatial tasks and that audition outperforms vision on temporal tasks (Welch & Warren, 1980; Alais & Burr, 2004). Furthermore, vision is well established to play a central role in sensorimotor coordination, as the location of reaching targets within peripersonal space is often encoded in eye-centered coordinates regardless of the target modality and the effectors to be used (Cohen & Andersen, 2000; but see Bernier & Grafton, 2010, for fMRI data evidencing flexibility in reference frame representation for motor action according to sensory context). Accordingly, some have suggested that the external reference frame utilized by the tactile system may be visual in nature (Batista et al., 1999). However, it is entirely possible that the nature of the external coordinates used to encode tactile stimuli in space is task-dependent (see Badde et al., 2014, for top-down modulation of TOJ). In the current case of a tactile temporal order judgment task in which the somatotopic reference frame is either aligned or misaligned with the external reference frame, both spatial and temporal factors are of vital significance. In such a scenario, the integration of the visual spatial representation onto the body representation (i.e., the skin) would be important in solving the spatial demands of the task (i.e., left or right), whereas the integration of the auditory spatial representation would contribute to the temporal component of the task (i.e., which came first). This account is well in line with recent evidence that TOJ limb-crossing effects index the weighted integration of information across distinct reference frames (Heed et al., 2015). That is, it is conceivable that audiovisual deprivation was the most detrimental because it encompassed both the primary frame of reference for spatial tasks (the visual frame) and the primary frame of reference for temporal tasks (the auditory frame). The conjecture that deprivation induced a change in the weighting of anatomical and external coordinates (namely, an increased weighting of the anatomical coordinates) illuminates why a deprivation effect was selectively revealed when limbs were crossed – because crossing puts the external and anatomical reference frames in conflict.

The existence in non-human primates of not only visuo-tactile (Graziano, Cooke, & Taylor, 2000) but also audio-tactile (Graziano, Reiss, & Gross, 1999; Serino et al., in press) receptive fields encoding both the body and the space immediately surrounding it, as well as the demonstration of speeded tactile processing in humans when auditory stimuli are close to the body (Noel et al., 2014; Noel et al., 2015; Galli et al., 2015), supports the possibility of a network projecting and/or integrating both auditory and visual spatial representations onto the body.

Indeed, in the context of the TOJ task, studies have demonstrated that the crossing effect is apparent not only when crossing body parts, but also when crossing tools while keeping the body parts uncrossed (Yamamoto & Kitazawa, 2001b). This suggests that spatial features beyond posture are remapped/integrated and weighted when localizing touch, and implies that crossing effects are at least partially reliant on the neural networks instantiating the enlargement of multisensory receptive fields during tool use. Evidence for such representational changes appears to center on regions within the intraparietal cortex (Maravita & Iriki, 2004). The current findings extend those suggesting that spatial features beyond the body are important during tactile localization by specifically demonstrating that tactile TOJ is most altered under conditions in which combined audiovisual – as opposed to its constituent unisensory auditory and visual – spatial information is temporarily absent.

This final observation is important with regard to the timescales of the observed effects. As previously described, early visual experience is necessary for the crossing effect during tactile TOJ (Röder et al., 2004). However, as shown here and in other studies (Kóbor et al., 2006; Ley et al., 2013), tactile TOJ crossing effects do not necessitate visual experience beyond that received early in development. In contrast, the current results suggest that recent auditory and audiovisual experience is important for the tactile TOJ judgment, indicating that whereas early visual experience is necessary for the instantiation of the crossing effect, the active maintenance and/or fine-tuning of the effect is dependent on access to immediate auditory and audiovisual spatial representations. The importance of current sensory representations for tactile localization is in keeping with recent evidence that such performance is rapidly and dynamically tuned according to recent sensory experience (Azañón et al., 2015).

HIGHLIGHTS

  • Impact of audio, visual, audiovisual environments on touch localization was tested.

  • Deprivation of audio but not visual environment deteriorated touch localization.

  • Deprivation of audiovisual environment was the most detrimental.

Acknowledgments

The project was supported by the Simons Foundation for Autism Research, the Wallace Foundation and NIH T32MH064913-12. The authors are grateful to Dr. Daniel Ashmead for technical and logistical assistance.

Footnotes


Conflict of Interest: The authors declare no competing financial interests.

REFERENCES

  1. Alais D, Burr D. The ventriloquist effect results from near-optimal bimodal integration. Curr. Biol. 2004;14:257–262. doi: 10.1016/j.cub.2004.01.029.
  2. Azañón E, et al. Dynamic tuning of tactile localization to body posture. Curr. Biol. 2015. doi: 10.1016/j.cub.2014.12.038.
  3. Azañón E, Soto-Faraco S. Alleviating the “crossed-hands” deficit by seeing uncrossed rubber hands. Exp. Brain Res. 2007;182:537–548. doi: 10.1007/s00221-007-1011-3.
  4. Badde S, Heed T, Röder B. Processing load impairs coordinate integration for the localization of touch. Atten. Percept. Psychophys. 2014;76(4):1136–1150. doi: 10.3758/s13414-013-0590-2.
  5. Badde S, et al. Multiple spatial representations determine touch localization on the fingers. J. Exp. Psychol. 2014;40:784–801. doi: 10.1037/a0034690.
  6. Batista AP, Buneo CA, Snyder LH, Andersen RA. Reach plans in eye-centered coordinates. Science. 1999;285:257–260. doi: 10.1126/science.285.5425.257.
  7. Bertelson P, Aschersleben G. Temporal ventriloquism: crossmodal interaction on the time dimension: 1. Evidence from auditory–visual temporal order judgment. Int. J. Psychophysiol. 2003;50:147–155. doi: 10.1016/s0167-8760(03)00130-2.
  8. Cadieux ML, Barnett-Cowan M, Shore DI. Crossing the hands is more confusing for females than males. Exp. Brain Res. 2010;204:431–446. doi: 10.1007/s00221-010-2268-5.
  9. Cohen YE, Andersen RA. Reaches to sounds encoded in an eye-centered reference frame. Neuron. 2000;27:647–652. doi: 10.1016/s0896-6273(00)00073-8.
  10. Galli G, Noel JP, Canzoneri E, Blanke O, Serino A. The wheelchair as a full-body tool extending the peripersonal space. Front. Psychol. 2015;6:639. doi: 10.3389/fpsyg.2015.00639.
  11. Graziano MSA, Cooke DF, Taylor CSR. Coding the location of the arm by sight. Science. 2000;290:1782–1786. doi: 10.1126/science.290.5497.1782.
  12. Graziano MSA, Reiss LA, Gross CG. A neuronal representation of the location of nearby sounds. Nature. 1999;397:428–430. doi: 10.1038/17115.
  13. Graziano MS, Hu XT, Gross CG. Visuospatial properties of ventral premotor cortex. J. Neurophysiol. 1997;77:2268–2292. doi: 10.1152/jn.1997.77.5.2268.
  14. Groh JM, Sparks DL. Saccades to somatosensory targets. I. Behavioral characteristics. J. Neurophysiol. 1996;75:412–427. doi: 10.1152/jn.1996.75.1.412.
  15. Guttman SE, Gilroy LA, Blake R. Hearing what the eyes see: auditory encoding of visual temporal sequences. Psychol. Sci. 2005;16:228–235. doi: 10.1111/j.0956-7976.2005.00808.x.
  16. Heed T, Azañón E. Using time to investigate space: a review of tactile temporal order judgments as a window onto spatial processing in touch. Front. Psychol. 2014;5:76. doi: 10.3389/fpsyg.2014.00076.
  17. Heed T, Backhaus J, Röder B. Integration of hand and finger location in external spatial coordinates for tactile localization. J. Exp. Psychol. Hum. Percept. Perform. 2012;38:386–401. doi: 10.1037/a0024059.
  18. Heed T, Buchholz V, Engel AK, Röder B. Tactile remapping: from coordinate transformation to integration in sensorimotor processing. Trends Cogn. Sci. 2015;19(5). doi: 10.1016/j.tics.2015.03.001.
  19. Kitazawa S. Where conscious sensation takes place. Conscious. Cogn. 2002;11:475–477. doi: 10.1016/s1053-8100(02)00031-4.
  20. Kitazawa S, Moizumi S, Okuzumi A, Saito F, Shibuya S, Takahashi T, et al. Reversal of subjective temporal order due to sensory and motor integrations. In: Haggard P, Rossetti Y, Kawato M, editors. Sensorimotor Foundations of Higher Cognition: Attention and Performance. Oxford: Oxford University Press; 2008. pp. 73–97.
  21. Kóbor I, Füredi L, Kovács G, Spence C, Vidnyánszky Z. Back-to-front: improved tactile discrimination performance in the space you cannot see. Neurosci. Lett. 2006;400:163–167. doi: 10.1016/j.neulet.2006.02.037.
  22. Ley P, Bottari D, Shenoy BH, Kekunnaya R, Röder B. Partial recovery of visual–spatial remapping of touch after restoring vision in a congenitally blind man. Neuropsychologia. 2013;51:1119–1123. doi: 10.1016/j.neuropsychologia.2013.03.004.
  23. Maravita A, Iriki A. Tools for the body (schema). Trends Cogn. Sci. 2004;8:79–86. doi: 10.1016/j.tics.2003.12.008.
  24. Noel JP, Grivaz P, Marmaroli P, Lissek H, Blanke O, Serino A. Full body action remapping of peripersonal space: the case of walking. Neuropsychologia. 2014;70:375–384. doi: 10.1016/j.neuropsychologia.2014.08.030.
  25. Noel JP, Pfeiffer C, Blanke O, Serino A. Peripersonal space as the space of the bodily self. Cognition. 2015;144:49–57. doi: 10.1016/j.cognition.2015.07.012.
  26. Noel JP, Wallace MT, Orchard-Mills E, Alais D, Van der Burg E. True and perceived synchrony are preferentially associated with particular sensory pairings. Sci. Rep. 2015. doi: 10.1038/srep17467.
  27. Repp BH, Penel A. Auditory dominance in temporal processing: new evidence from synchronization with simultaneous visual and auditory sequences. J. Exp. Psychol. Hum. Percept. Perform. 2002;28:1085–1099.
  28. Röder B, Rösler F, Spence C. Early vision impairs tactile perception in the blind. Curr. Biol. 2004;14:121–124.
  29. Schicke T, Röder B. Spatial remapping of touch: confusion of perceived stimulus order across hand and foot. Proc. Natl. Acad. Sci. U.S.A. 2006;103:11808–11813. doi: 10.1073/pnas.0601486103.
  30. Serino A, Noel JP, Galli G, Canzoneri E, Marmaroli P, Lissek H, Blanke O. Body part-centered and full body-centered peripersonal space representations. Sci. Rep. (in press). doi: 10.1038/srep18603.
  31. Shore DI, Spry E, Spence C. Confusing the mind by crossing the hands. Brain Res. Cogn. Brain Res. 2002;14:153–163. doi: 10.1016/s0926-6410(02)00070-8.
  32. Shore DI, Gray K, Spry E, Spence C. Spatial modulation of tactile temporal-order judgments. Perception. 2005;34:1251–1262. doi: 10.1068/p3313.
  33. Tajadura-Jiménez A, Väljamäe A, Toshima I, Kimura T, Tsakiris M, Kitagawa N. Action sounds recalibrate perceived tactile distance. Curr. Biol. 2012;22(13):R516–R517. doi: 10.1016/j.cub.2012.04.028.
  34. Welch RB, Warren DH. Immediate perceptual response to intersensory discrepancy. Psychol. Bull. 1980;88(3):638–667.
  35. Yamamoto S, Kitazawa S. Reversal of subjective temporal order due to arm crossing. Nat. Neurosci. 2001a;4:759–765. doi: 10.1038/89559.
  36. Yamamoto S, Kitazawa S. Sensation at the tips of invisible tools. Nat. Neurosci. 2001b;4:979–980. doi: 10.1038/nn721.
  37. Overvliet KE, Azañón E, Soto-Faraco S. Somatosensory saccades reveal the timing of tactile spatial remapping. Neuropsychologia. 2011;49:3046–3052. doi: 10.1016/j.neuropsychologia.2011.07.005.
