The Journal of Neuroscience. 2024 Jan 24;44(13):e2457202023. doi: 10.1523/JNEUROSCI.2457-20.2023

Modality-Independent Effect of Gravity in Shaping the Internal Representation of 3D Space for Visual and Haptic Object Perception

Theo Morfoisse 1,*, Gabriela Herrera Altamira 1,*, Leonardo Angelini 2,3, Gilles Clément 4, Mathieu Beraneck 1, Joseph McIntyre 5,6, Michele Tagliabue 1
PMCID: PMC10977025  PMID: 38267257

Abstract

Visual and haptic perceptions of 3D shape are plagued by distortions, which are influenced by nonvisual factors, such as gravitational vestibular signals. Whether gravity acts directly on the visual or haptic systems or at a higher, modality-independent level of information processing remains unknown. To test these hypotheses, we examined visual and haptic 3D shape perception by asking male and female human subjects to perform a “squaring” task in upright and supine postures and in microgravity. Subjects adjusted one edge of a 3D object to match the length of another in each of the three canonical reference planes, and we recorded the matching errors to obtain a characterization of the perceived 3D shape. The results show opposing, body-centered patterns of errors for visual and haptic modalities, whose amplitudes are negatively correlated, suggesting that they arise in distinct, modality-specific representations that are nevertheless linked at some level. On the other hand, weightlessness significantly modulated both visual and haptic perceptual distortions in the same way, indicating a common, modality-independent origin for gravity’s effects. Overall, our findings show a link between modality-specific visual and haptic perceptual distortions and demonstrate a role of gravity-related signals on a modality-independent internal representation of the body and peripersonal 3D space used to interpret incoming sensory inputs.

Keywords: 3D object perception, distortions, haptic, microgravity, multisensory integration, vision

Significance Statement

Both visual and haptic 3D-object perception are plagued by anisotropic patterns of errors, as shown in a task of “squaring” the faces of an adjustable cube. We report opposing and negatively correlated patterns of errors for visual and haptic perception, suggesting a strong interaction between the two sensory modalities, even though the task was fundamentally unimodal. In addition, the similar effect of microgravity observed on both visual and haptic perception indicates that gravity acts on a modality-independent representation of 3D space used to process these sensory inputs. These findings foster awareness that even simple, unimodal, egocentric tasks are likely to involve complex, cross-modal signal processing.

Introduction

Perception of three-dimensional (3D) objects includes the ability to determine an item’s location in space, as well as its geometrical properties, such as the relative size along each of three dimensions and the relative orientation of its edges. Given its importance for interacting with the physical world, 3D object perception has been deeply investigated. Visual perception has received the most attention, showing how various features of the stimuli, such as disparities, size, occlusions, perspective, motion, shadows, shading, texture, and blur, all influence 3D visual perception (Welchman, 2016) and how internal models shape the interpretation of the sensory signals (Curry, 1972; Kersten and Yuille, 2003; Kersten et al., 2004; Lee, 2015).

Despite its critical importance to perception and action, visual perception suffers from measurable distortions: for example, height underestimation with respect to width, also known as the horizontal–vertical, or “L”, illusion (Avery and Day, 1969), and a systematic underestimation of depth (Loomis and Philbeck, 1999; Todd and Norman, 2003). Nonvisual factors, such as gravity, also appear to affect visual perception. For example, tilting the body with respect to gravity affects object recognition (Leone, 1998; Barnett-Cowan et al., 2015), orientation and distance perception (Marendaz et al., 1993; Harris and Mander, 2014), and other phenomena such as the tilted frame illusion (Goodenough et al., 1981; Howard, 1982), the oblique effect (Lipshits and McIntyre, 1999; Luyat and Gentaz, 2002; McIntyre and Lipshits, 2008), and some geometric illusions (Prinzmetal and Beck, 2001; Clément and Eckardt, 2005). Furthermore, weightlessness significantly alters the perception of stimulus size and shape, especially in tasks involving depth, during both short-term (Villard et al., 2005; Clément and Bukley, 2008; Clément et al., 2008, 2016; Harris et al., 2010; Clément and Demel, 2012; Bourrelly et al., 2016) and long-term (Clément et al., 2012, 2013; De Saedeleer et al., 2013; Bourrelly et al., 2015) exposure.

One hypothesis to explain gravity-related changes in visual perception is that gravity affects both the eye movements underlying visual exploration (Clément et al., 1986; Reschke et al., 2017, 2018) and eye positioning that contributes to the estimation of the visual eye-height, a key reference within the visual scene (Goltz et al., 1997; Bourrelly et al., 2016). Gravity’s influence on oculomotor control should specifically affect visual perception, although weightlessness might also induce distinct distortions in other sensory modalities. An alternative hypothesis is that gravity does not affect visual signals per se, but rather affects an internal representation of space (Clément et al., 2009, 2012), based on prior knowledge, that serves to interpret those signals, independent of the sensory system from which they come (Wolbers et al., 2011; Loomis et al., 2013). An example, among many, of the use of an internal model of space for perception is the famous “Ames room” illusion, where a person’s size is misperceived due to the use of the inappropriate prior that the room is rectangular (O’Reilly et al., 2012). A direct implication of this second hypothesis is that microgravity should distort all spatial perceptions in the same way, regardless of the sensory modality. Because previous studies in microgravity were focused on visual tasks only, however, these proposed hypotheses have never been tested.

To investigate these two assumptions, we first compared distortions of visual versus haptic perception of 3D shape in a normal, upright posture on Earth. Next, we studied the effect of changing the subject’s orientation with respect to gravity to assess whether any visual or haptic distortions are egocentric or gravity-centric. Third, we tested the consequences of removing the effects of gravity by performing both haptic and visual experiments in weightlessness during parabolic flight.

Materials and Methods

By analogy with previous experiments on visual perception (Clément et al., 2008, 2013), our paradigm was conceptually designed to detect distortions in the perception of three-dimensional shape, i.e., the relative lengths of the sides of a 3D cube. The sequential nature of haptic perception led us, however, to focus each trial on the comparison of the relative size between two out of three possible dimensions. In both the visual and the haptic cases, the task consisted of adjusting one side of a rectangle to match the other, to form a square. The adjustments were performed using a trackball held in the left hand. In the haptic task the right hand was used to explore the rectangle. Subjects pressed a button on the trackball when they perceived the object to be perfectly square.

For the haptic tasks, subjects were asked to close their eyes and to feel, through haptic sense only, a rectangular cutout in a rigid, virtual plank generated by a Force Dimension Omega.3 haptic robot (Fig. 1A). This manipulandum was able to simulate the presence of a 3D object by applying the appropriate contact forces on the right hand of the subject when he/she performed exploration movements aimed at perceiving its shape and size. During each trial the robot constrained the subject’s hand movement to lie within the plane of the virtual plank and to remain inside the rectangle prescribed by the virtual cutout. To allow direct comparisons between the experimental results from haptic and visual tests, an analogous bi-dimensional task was also used for visual perception. Subjects were shown planar rectangles with different orientations in 3D space, without being able to explore them manually. For trials involving visual perception, an Oculus Rift virtual reality headset was used to provide a stereoscopic view of the virtual object. The visual environment was dark and the shapes were represented by light-gray frames. For both sensory conditions, the virtual object was located approximately 40 cm in front of the subject’s right shoulder.

Figure 1.

A, Haptic device and virtual reality headset used for the haptic and visual experiments, respectively. Panels B and C report the names of the orthogonal directions defined in the egocentric, body-centered (longitudinal, LO; lateral, LA; anterior–posterior, AP) and external, gravity-centered (up–down, UD; east–west, EW; north–south, NS) reference frames, respectively. The bottom part of the figure represents the planes in which the task is performed, expressed in the body-centered (transversal, sagittal, and frontal) and gravity-centered (horizontal, meridian, and latitudinal) reference frames.

Although there were no instructions to work quickly, subjects were asked to attempt to perform each trial within a fixed time window (20 s for all experiments except those performed on board the parabolic flight plane, for which a 10 s time window was used). An audible cue indicated to the subject when the end of the allotted time was approaching. The apparatus recorded the subject’s final responses (dimensions of each rectangle judged to be square), which constitute the main output of the tests. For the haptic tasks, the movements of the subject’s hand and the contact forces applied against the virtual constraints were also recorded via the haptic device.

The use of two-dimensional tasks allowed the estimation of the perceptive error in one plane at a time. Subjects in our experiments judged the squareness of rectangles lying in each of three anatomical planes: frontal, sagittal, or transversal (see bottom part of Fig. 1B). The combination of the three possible planes and the two rectangle dimensions resulted in six different geometric configurations, represented in the upper part of Figure 2. At the beginning of each trial, an audio command told the subject in which anatomical plane the rectangle was lying and which of the two dimensions of the rectangle had to be adjusted. In our paradigm, the reference dimension was always 40 mm, but subjects were not informed of this fact. The initial length of the adjustable side was randomly selected from 15, 25, 35, 45, 55, and 65 mm. Subjects performed five series of trials in all, each series composed of a random permutation of the six geometric configurations (total number of trials per condition: 30). In all three experiments described below, each subject was tested in two different conditions, so that in total each subject performed 60 trials. The two conditions, which depended on the experiment, were tested successively and their order was counterbalanced (half of the subjects started with condition 1 and the other half with condition 2).
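As a minimal illustration of this trial structure, the following Python sketch generates the 30-trial sequence of one condition (configuration labels and function names are ours, for illustration only; the paper does not describe its stimulus-control software):

```python
import random

# Hypothetical labels for the six geometric configurations of Figure 2:
# (task plane, adjustable dimension); the other dimension is the reference.
CONFIGURATIONS = [
    ("frontal", "LA"), ("frontal", "LO"),
    ("transversal", "LA"), ("transversal", "AP"),
    ("sagittal", "LO"), ("sagittal", "AP"),
]
INITIAL_LENGTHS_MM = [15, 25, 35, 45, 55, 65]  # possible starting lengths
REFERENCE_MM = 40                              # fixed reference side (unknown to subjects)

def build_condition_trials(n_series=5, seed=0):
    """Five series, each a random permutation of the six configurations,
    with a randomly chosen initial length for the adjustable side."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_series):
        series = CONFIGURATIONS[:]
        rng.shuffle(series)                    # random permutation within the series
        for plane, adjustable in series:
            trials.append({"plane": plane,
                           "adjustable": adjustable,
                           "initial_mm": rng.choice(INITIAL_LENGTHS_MM),
                           "reference_mm": REFERENCE_MM})
    return trials  # 5 series x 6 configurations = 30 trials per condition
```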

Figure 2.

Geometrical configurations of the task. The first row represents the six geometric configurations, which correspond to the combination of the three planes in which the rectangle could lie and the two different dimensions of the rectangle that the subject had to adjust. For each combination of geometric configuration and postural condition (upright and supine), the table reports in black bold text the anatomical (egocentric) plane in which the task was performed, as well as the anatomical directions of the adjustable (Adj.) and reference (Ref.) dimensions of the rectangles. The gray text in the lower part of the table gives the corresponding definitions, in a gravity-centered reference frame arbitrarily looking north, of the task planes and of the adjustable and reference dimensions of each rectangle. These allocentric definitions are independent of the postural condition. These terms are useful for referring to the various planes when testing the hypotheses of egocentric versus allocentric distortions.

Experiment 1: effect of sensory modality

To study the differences and similarities between haptic and visual perception of 3D shapes in normo-gravity, 18 seated subjects (8 males, 10 females, aged 29 ± 9 years) performed the task for all six geometrical configurations in each of the two sensory conditions: haptic and visual. The order of the two sensory conditions was randomized across subjects.

Experiment 2: effect of body orientation

To study the perceptive distortions of both the haptic and visual senses and whether the information is encoded in an egocentric (body-centered) or allocentric (gravity-centered) reference frame, a group of 18 subjects (9 males and 9 females, aged 25.5 ± 5 years) performed the haptic task while seated (upright) and while lying on their backs (supine), while a second group of 18 subjects (11 males and 7 females, aged 24 ± 4 years) performed the visual task in the same two postures (upright and supine). For the supine posture, subjects lay on a medical bed. The two postures are represented in Figure 2 together with the respective correspondence between egocentric and allocentric references. The virtual object was always placed at the same distance from the subject’s shoulder, independent of the posture. In order to compensate for possible learning effects, the order of the postural conditions was randomized in both sensory conditions.

Experiment 3: effect of weightlessness

To study the role of gravitational cues in the encoding of haptic and visual signals, we performed the haptic (18 subjects: 10 males, 8 females, aged 38 ± 11 years) and visual (18 subjects: 9 males, 9 females, aged 41 ± 11 years) paradigms in normal gravity (1G) and during the weightlessness phases of parabolic flight (0G). For the haptic experiment, a third condition was added: the subjects were also tested in normal gravity, but with the arm supported by a strap (Supp.), to differentiate the biomechanical effect of gravity on the arm from the gravitational stimulation of graviceptors, such as the otoliths.

Parabolic flight provides short intervals (∼20 s) of weightlessness within a stable visual environment inside the airplane, bracketed by periods of hyper-gravity (1.6–1.8 G) just before and just after each period of weightlessness. Given the short duration of the 0G phases during parabolic flight, the subjects were trained to perform the task in about 10 s (two trials per parabola). Since each subject performed the experiment during 15 consecutive parabolas, he or she could complete all 30 trials of the condition.

All experimental conditions were performed in flight on board the Novespace Zero-G airplane in order to minimize possible undesired changes in uncontrolled factors. The 1G and Support conditions were tested during the level-flight phase just preceding the first parabola or just following the last parabola of the session, depending on the subject. The subjects were very firmly restrained with belts so that their position with respect to the apparatus and the virtual rectangles did not vary between gravitational conditions.

Ethical approval

The experimental protocols of experiments 1 and 2 performed at Université Paris Cité were approved by the university review board “Comité Éthique de la Recherche” (CER; approval #2016/33). The experiments performed on board the Zero-G airplane were approved by the French national ethics committee “Comité de Protection des Personnes” (CPP; approval #2014-A01949-38).

Data analysis

For each trial, t, the error, ε, between the lengths, l, of the adjustable and reference sides of the rectangle was computed. Using the egocentered definitions of the three dimensions (lateral, LA; longitudinal, LO; anterior–posterior, AP) of Figure 1B, the errors of the six geometric configurations are denoted ε_LA-LO, ε_LO-LA, ε_LA-AP, ε_AP-LA, ε_LO-AP, and ε_AP-LO, where the minuend and subtrahend are the adjustable and reference dimensions, respectively.

Table 1 shows how the perceptive distortion associated with each of the three dimensions contributes to the error made on the six geometric configurations. Positive errors correspond to underestimations of the adjustable dimension and/or to overestimations of the reference dimension. Thus, the present experimental paradigm, similar to the one previously used by Clément et al. (2008, 2013), allows the quantification of the perceptive errors of one dimension relative to another, but cannot lead to a measure of the absolute perceptive errors for each dimension separately.

Table 1.

Definition of the squaring errors for all six geometrical configurations of the task

Plane | Adjustable dimension | Reference dimension | Task error
Frontal | LA | LO | ε_LA-LO = l_LA − l_LO
Frontal | LO | LA | ε_LO-LA = l_LO − l_LA
Transversal | LA | AP | ε_LA-AP = l_LA − l_AP
Transversal | AP | LA | ε_AP-LA = l_AP − l_LA
Sagittal | LO | AP | ε_LO-AP = l_LO − l_AP
Sagittal | AP | LO | ε_AP-LO = l_AP − l_LO

Estimation of 3 orthogonal perceptual errors

Table 1 shows that the error in estimating one dimension has opposite effects for the two tasks performed within a given plane. For instance, an overestimation of the AP dimension should result in negative and positive errors in the AP-LA and LA-AP tasks, respectively. These relationships appear to be confirmed by the experimental results (Fig. 4A), because this hypothesis accounts for 96% of the data variance. It follows that the theoretical relationships below are valid:

$$\varepsilon_{LA\text{-}AP} = -\varepsilon_{AP\text{-}LA}, \qquad \varepsilon_{LA\text{-}LO} = -\varepsilon_{LO\text{-}LA}, \qquad \varepsilon_{LO\text{-}AP} = -\varepsilon_{AP\text{-}LO}. \tag{1}$$

Exploiting this property, it was possible to combine the five errors obtained for one geometric condition, with the additive inverse of the five errors obtained for the other geometric condition performed in the same plane. This allowed computing the combined mean and the variance of the errors for each of the three planes (transversal, Tra; frontal, Fro; sagittal, Sag), instead of individually for each of the 6 geometrical configurations of the task. This technique has the considerable advantage of being more robust, because it is based on 10 samples instead of only 5:

$$\bar{\varepsilon}_{Tra} = \frac{1}{10}\sum_{t=1}^{5}\left(\varepsilon_{LA\text{-}AP,t} - \varepsilon_{AP\text{-}LA,t}\right), \qquad \sigma^{2}_{Tra} = \frac{1}{10}\sum_{t=1}^{5}\left[\left(\varepsilon_{LA\text{-}AP,t} - \bar{\varepsilon}_{Tra}\right)^{2} + \left(-\varepsilon_{AP\text{-}LA,t} - \bar{\varepsilon}_{Tra}\right)^{2}\right]$$
$$\bar{\varepsilon}_{Fro} = \frac{1}{10}\sum_{t=1}^{5}\left(\varepsilon_{LA\text{-}LO,t} - \varepsilon_{LO\text{-}LA,t}\right), \qquad \sigma^{2}_{Fro} = \frac{1}{10}\sum_{t=1}^{5}\left[\left(\varepsilon_{LA\text{-}LO,t} - \bar{\varepsilon}_{Fro}\right)^{2} + \left(-\varepsilon_{LO\text{-}LA,t} - \bar{\varepsilon}_{Fro}\right)^{2}\right]$$
$$\bar{\varepsilon}_{Sag} = \frac{1}{10}\sum_{t=1}^{5}\left(\varepsilon_{AP\text{-}LO,t} - \varepsilon_{LO\text{-}AP,t}\right), \qquad \sigma^{2}_{Sag} = \frac{1}{10}\sum_{t=1}^{5}\left[\left(\varepsilon_{AP\text{-}LO,t} - \bar{\varepsilon}_{Sag}\right)^{2} + \left(-\varepsilon_{LO\text{-}AP,t} - \bar{\varepsilon}_{Sag}\right)^{2}\right] \tag{2}$$

With the above formulas, one can characterize perceptual distortions in each of the three different planes as illustrated in Figure 3. By our convention, a rectangle lying in one of the two vertical planes (sagittal or frontal) is associated with a positive error (stubby rectangle) if the longitudinal dimension is smaller than the other dimension. In the transversal plane, a positive error (stubby rectangle) corresponds to the AP dimension being smaller than the LA dimension. It is worth noting that if the subject produced a “stubby” rectangle (positive errors) this means that he/she perceived a square to be “slender”, and vice versa. The global variance was computed as the average of the three planar variances.
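The following Python sketch implements Equations 1 and 2 for one plane (function and variable names are ours; numpy is assumed):

```python
import numpy as np

def planar_error_stats(errors_cfg1, errors_cfg2):
    """Combine the five errors of one configuration with the sign-flipped
    (additive inverse) five errors of the complementary configuration in
    the same plane (Eq. 1), yielding the 10-sample mean and variance of
    Eq. 2."""
    samples = np.concatenate([np.asarray(errors_cfg1, dtype=float),
                              -np.asarray(errors_cfg2, dtype=float)])
    mean = samples.sum() / 10.0                    # 5 + 5 = 10 samples
    variance = ((samples - mean) ** 2).sum() / 10.0
    return mean, variance

# Example for the transversal plane: five LA-AP errors and five AP-LA errors (mm)
eps_tra, var_tra = planar_error_stats([2.0, 3.1, 1.5, 2.4, 2.8],
                                      [-1.9, -2.5, -3.0, -2.2, -2.6])
```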

Figure 4.

Method used for data filtering and for their vectorial representation. A, Fictitious individual errors recorded for the squaring task in the three anatomical planes (sagittal, transversal, and frontal) with the corresponding filtered values (see following panel). B, Each triplet of measured errors is represented as a point in a 3D space. The errors in the three anatomical planes should theoretically fulfill the constraint described by Equation 3, corresponding to the solution plane represented in gray. The 3D point (black dot) is hence projected onto the solution plane (blue dot), removing the inconsistent components of the recorded errors. The three components of the projection (blue dot) are then used for the representation of the data in terms of the three planar errors (filtered errors in the first panel) and for the polar plot representation reported in the third panel. C, To improve readability, the data projected on the solution plane are reported as a 2D polar plot, where the error triplets are represented as 2D vectors. In panels B and C the discontinuous lines represent the locations of triplets of errors lying in the solution plane and characterized by the following additional relationships: ε̄_Fro = 0 and hence ε̄_Sag = −ε̄_Tra (dot-dashed line); ε̄_Tra = 0 and thus ε̄_Sag = ε̄_Fro (dotted line); ε̄_Sag = 0 and ε̄_Tra = ε̄_Fro (dashed line). The center of the polar plot corresponds to null errors in all three planes. D, Graphical representation of the “Mis” parameter used to quantify the misalignment between two individual vectors, corresponding to the gray area of the parallelogram having the two vectors as sides.

Figure 3.

Sign conventions for the errors in the transversal, frontal, and sagittal planes. The gray squares represent the correct answer (i.e., a square). The black lines represent the distorted answers. Positive planar error values correspond to “stubby” rectangles. Negative values correspond to “slender” rectangles. The same conventions are used for the errors expressed in the allocentered planes. In this case, north–south (NS), east–west (EW) and up–down (UD) directions replace anterior–posterior (AP), lateral (LA), and longitudinal (LO), respectively. Horizontal, latitudinal and meridian replace transversal, frontal, and sagittal planes, respectively.

The estimation of the three planar errors is then improved by considering that, if the (distorted) metrics used to compare distances in 3D space are locally smooth and consistent across the different dimensions of space, the three planar errors are not independent and, given the sign conventions of Figure 3, should fulfill the following relationship:

$$\bar{\varepsilon}_{Sag} + \bar{\varepsilon}_{Tra} = \bar{\varepsilon}_{Fro}. \tag{3}$$

Note that Equation 3 is a particular case of the formula describing a plane, ax + by + cz = d, where a = b = 1, c = −1 and d = 0. Thus, if the metrics in each plane are consistent with each other, the vectors of measured planar errors ε̄ = [ε̄_Sag, ε̄_Tra, ε̄_Fro] should fall on that plane, and points outside the plane can be considered to be noise. By projecting the individual vectors ε̄ onto the plane corresponding to Equation 3, as shown in Figure 4A,B, this noise is effectively filtered out. Using the resulting 2D representation of the distortion (Fig. 4C) is a conservative choice, especially when comparing orientations across conditions, because the 3D representation may lead one to consider distortion directions and components of data variability that have no functional meaning. On average, the data projected onto the plane of Equation 3 account for 98% of the variance of the original data, suggesting that the recorded responses tend to fulfill this constraint well.
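Concretely, the plane of Equation 3 has unit normal (1, 1, −1)/√3 in (ε̄_Sag, ε̄_Tra, ε̄_Fro) coordinates, and the filtering amounts to subtracting the normal component of each triplet. A minimal sketch under that formulation:

```python
import numpy as np

# Unit normal of the solution plane eps_sag + eps_tra - eps_fro = 0 (Eq. 3)
N = np.array([1.0, 1.0, -1.0]) / np.sqrt(3.0)

def project_on_solution_plane(triplet):
    """Project an (eps_sag, eps_tra, eps_fro) error triplet onto the plane
    of Eq. 3, removing the component treated as measurement noise."""
    v = np.asarray(triplet, dtype=float)
    return v - np.dot(v, N) * N

filtered = project_on_solution_plane([-4.0, 5.5, 2.0])
# filtered[0] + filtered[1] now equals filtered[2] up to rounding error
```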

We used the same Equations (1)–(3) to compute the analogous parameters in the allocentric reference frame after replacing the egocentrically defined planes and directions with the world-centered planes (horizontal, Hor; latitudinal, Lat; meridian, Mer) and directions (east–west, north–south, and up–down), as shown in Figure 2. Table 2 shows the relationships between the planar distortions defined in the body-centered and gravity-centered reference frames for the upright and supine postures.

Table 2.

Relationship between ego- and allocentrically defined distortions for the upright and supine condition

Upright | ε̄_Mer = ε̄_Sag | ε̄_Lat = ε̄_Fro | ε̄_Hor = ε̄_Tra
Supine | ε̄_Mer = −ε̄_Sag | ε̄_Lat = ε̄_Tra | ε̄_Hor = ε̄_Fro

Perceptive cuboids

Although, as stated before, the present experimental paradigm does not allow a measure of the absolute perceptive errors for each dimension separately, we have devised a methodology that allows one to visualize the 3D pattern of distortion as a “perceptive cuboid”, that is, a box elongated or flattened compared to an ideal undistorted cube. To compute the dimensional errors, we first solved the system of equations of Table 1, reported below in matrix form:

$$\begin{bmatrix} \varepsilon_{LA\text{-}LO}\\ \varepsilon_{LO\text{-}LA}\\ \varepsilon_{LA\text{-}AP}\\ \varepsilon_{AP\text{-}LA}\\ \varepsilon_{LO\text{-}AP}\\ \varepsilon_{AP\text{-}LO} \end{bmatrix} = A \begin{bmatrix} \varepsilon_{LA}\\ \varepsilon_{AP}\\ \varepsilon_{LO} \end{bmatrix} = \begin{bmatrix} 1 & 0 & -1\\ -1 & 0 & 1\\ 1 & -1 & 0\\ -1 & 1 & 0\\ 0 & -1 & 1\\ 0 & 1 & -1 \end{bmatrix} \begin{bmatrix} \varepsilon_{LA}\\ \varepsilon_{AP}\\ \varepsilon_{LO} \end{bmatrix}.$$

If we call A the matrix of linear coefficients, then the solutions of this underdetermined problem are as follows:

$$\begin{bmatrix} \varepsilon_{LA}\\ \varepsilon_{AP}\\ \varepsilon_{LO} \end{bmatrix} = A^{+} \begin{bmatrix} \varepsilon_{LA\text{-}LO}\\ \varepsilon_{LO\text{-}LA}\\ \varepsilon_{LA\text{-}AP}\\ \varepsilon_{AP\text{-}LA}\\ \varepsilon_{LO\text{-}AP}\\ \varepsilon_{AP\text{-}LO} \end{bmatrix} + \left(I - A^{+}A\right)\mathbf{v} = A^{+} \begin{bmatrix} \varepsilon_{LA\text{-}LO}\\ \varepsilon_{LO\text{-}LA}\\ \varepsilon_{LA\text{-}AP}\\ \varepsilon_{AP\text{-}LA}\\ \varepsilon_{LO\text{-}AP}\\ \varepsilon_{AP\text{-}LO} \end{bmatrix} + \begin{bmatrix} w\\ w\\ w \end{bmatrix},$$

where A⁺ is the pseudo-inverse of A, v is an arbitrary vector, and w is a free scalar parameter that reflects the fact that the observed results can be explained by an infinity of triplets of dimensional distortions differing only by an isotropic component, w (underdetermination of the problem).

To define a set of dimensional errors, (ε_LA, ε_AP, ε_LO), to be used for a graphical representation, we arbitrarily decided to select the solution that minimizes the Euclidean norm of the error vector.

Although the w parameter cannot be univocally defined, the differences between the errors along the three dimensions are correctly quantified and can therefore be used to test the anisotropy of the perceptive errors. The dimensional errors, however, cannot be rigorously compared between postures or gravitational conditions, because differences between experimental conditions could be due to differences in the w parameter of each condition.
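A sketch of the minimum-norm solution is given below; the matrix A encodes the sign conventions of Table 1 as reconstructed above, and np.linalg.pinv returns the Moore–Penrose pseudo-inverse, whose particular solution is exactly the one of minimal Euclidean norm:

```python
import numpy as np

# Rows map (eps_LA, eps_AP, eps_LO) to the six task errors of Table 1,
# in the order LA-LO, LO-LA, LA-AP, AP-LA, LO-AP, AP-LO.
A = np.array([[ 1,  0, -1],
              [-1,  0,  1],
              [ 1, -1,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  1, -1]], dtype=float)

def cuboid_errors(task_errors):
    """Minimum-norm dimensional errors (eps_LA, eps_AP, eps_LO); adding any
    isotropic offset (w, w, w) explains the six task errors equally well."""
    return np.linalg.pinv(A) @ np.asarray(task_errors, dtype=float)
```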

Polar representation of errors

The 2D vector resulting from the projection of ε̄ onto the plane of Equation 3 was computed for each subject (Fig. 4C) and represented in a polar plot. The vector length corresponds to the Euclidean norm of the filtered error triplet, and its direction provides information about the “shape” of the pattern of errors, meaning the relative magnitude and sign of the errors in the three anatomical planes: a pattern of errors restricted to an expansion or contraction along the anterior–posterior axis, with no errors in the fronto-parallel plane, gives a vector pointing along the 0° or 180° axis, respectively; a pattern of errors restricted to a contraction or expansion along the lateral axis, with no errors in the sagittal plane, corresponds to a vector with a 60° or 240° orientation, respectively; a pattern of errors restricted to an expansion or contraction in the longitudinal direction, with no distortion between the axes in the transversal plane, gives a vector that points along the 120° or 300° axis in the polar plot, respectively. Vectors that point along intermediate angles indicate more complex patterns wherein an overestimation along one anatomical axis and an underestimation along another are combined (e.g., the 30° orientation corresponds to AP and LA dimensions that are, respectively, overestimated and underestimated compared to LO).

The strength of the misalignment, Mis, between the individual 2D vectors representing the two conditions tested in an experiment was computed as the cross product of the two individual vectors. The value of Mis, which, as illustrated in Figure 4D, corresponds to the area of the parallelogram having the two vectors as adjacent sides, is zero when the two vectors are in the same, or opposite, direction and maximal when they are orthogonal. Importantly, the Mis amplitude depends also on the vectors’ lengths, so that the Mis value associated with long vectors is larger than that for short vectors for the same amount of misalignment. This gives the desired feature that large vectors, which have a well-defined direction, are given greater weight in statistical analyses than small vectors, whose direction can be significantly deviated by experimental noise.
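Because the vectors are two-dimensional, Mis reduces to the z-component of their cross product; a minimal sketch (extraction of the vectors from the polar representation is assumed):

```python
def misalignment(v1, v2):
    """Signed area (mm^2) of the parallelogram spanned by two 2D error
    vectors: zero for parallel or antiparallel vectors, maximal when
    orthogonal, and scaled by the lengths of both vectors."""
    return v1[0] * v2[1] - v1[1] * v2[0]

mis = misalignment((5.0, 1.0), (4.5, 2.0))  # small-misalignment example
```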

In each experimental condition, the vectorial mean of the 2D individual vectors was computed to represent the average perceptive error.

Reaction forces during haptic task

To estimate changes of the contact forces between gravitational conditions in the haptic tasks, we computed the average of the reaction forces generated by the haptic device when the subject’s hand was in contact with the edges of the virtual cutout or when the hand tried to move out of the task plane.

Microgravity effect and theoretical prediction

To quantify the effect of microgravity on the perceptive errors, for each subject, s, the mean planar error in 1G was subtracted from the corresponding error in 0G:

$$\Delta\bar{\varepsilon}_{s} = \bar{\varepsilon}_{s,0G} - \bar{\varepsilon}_{s,1G}.$$

To predict the perceptive distortion expected in microgravity under the hypothesis that the 0G effect was identical for the haptic and visual modalities, we averaged all error triplets Δε¯s representing the measured individual microgravity effects from both the haptic and visual experiments (18 haptic subjects, 18 visual subjects):

$$\Delta\bar{\varepsilon} = \frac{1}{36}\sum_{s=1}^{36}\Delta\bar{\varepsilon}_{s}.$$

The obtained average triplet was then added to the individual visual and haptic errors measured in normo-gravity conditions to compute the predicted error in microgravity, ε̂_s,0G:

$$\hat{\varepsilon}_{s,0G} = \bar{\varepsilon}_{s,1G} + \Delta\bar{\varepsilon}.$$

We then compared these individual predictions to the errors measured in 0G for both visual and haptic modalities, to see to what extent a common mechanism for vision and haptics captures the data.
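Under this hypothesis the prediction reduces to adding a single grand-average difference vector to each subject's 1G errors; a sketch (array shapes are assumptions):

```python
import numpy as np

def predict_0g(errors_1g, delta_haptic, delta_visual):
    """errors_1g: (n_subjects, 3) planar error triplets measured in 1G for
    one modality. delta_haptic, delta_visual: (18, 3) individual 0G - 1G
    differences. Returns the 0G errors predicted under a single,
    modality-independent gravity effect (grand average over 36 subjects)."""
    grand_delta = np.vstack([delta_haptic, delta_visual]).mean(axis=0)
    return np.asarray(errors_1g, dtype=float) + grand_delta
```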

Statistical analysis

For each experiment, we first tested the significance of the squaring errors by testing for each plane whether the constant errors were on average different from zero (two-sided Student’s t test). Then, we performed repeated-measures ANOVA on the constant and variable errors. The sign conventions (Fig. 3) being arbitrary, they allow a rigorous comparison of the errors within a given plane, but they do not allow the comparison between different planes. For this reason, in the statistical analyses, the results on each plane were tested with independent ANOVAs for repeated measures.

Experiment 1: For each of the three task planes we tested for an effect of Sensory Modality on the perceptive error as a single within-subject independent factor with two levels (Haptic, Visual).

Experiment 2: We tested for an effect of body posture as a within-subject independent factor with two levels (Upright, Supine) in separate ANOVAs for each group/sensory modality (Visual and Haptic). Note that this separation is justified by the hypotheses being tested, for which cross effects between posture and modality would have little meaning. To test whether errors are tied to a body-centric or gravity-centric reference frame, we defined the task planes both in terms of anatomical axes and world axes. Invariance of the patterns of error (lack of a statistical difference) for the anatomically defined planes, but not the world-defined frames, would indicate that the errors are primarily egocentrically, rather than allocentrically, aligned.

Experiment 3: For each of the three task planes we tested for an effect of Gravity on the squaring error as a single within-subject independent factor with three (1G, 0G, Supported) and two (1G, 0G) levels for the haptic and visual experiment, respectively.

Before performing each ANOVA, we tested for normality and homogeneity of the distributions using the Kolmogorov–Smirnov and Levene’s tests, respectively. To achieve a normal distribution for the response variability, the standard deviations were transformed by the log(σ + 1) function (Tagliabue and McIntyre, 2011). For the errors expressed in both allocentric and egocentric reference frames the data were distributed normally (all p > 0.20), and the data variability was similar among all conditions (all p > 0.50).
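The transform itself is a one-liner (natural logarithm assumed, as the base is not specified in the text; example values are ours):

```python
import numpy as np

sigma = np.array([6.1, 4.2, 5.0])   # per-subject response SDs (mm), example values
sigma_norm = np.log(sigma + 1.0)    # log(sigma + 1) variance-stabilizing transform
```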

To test whether the individual squaring errors in the haptic modality can explain the errors in the visual modality (and vice versa), their correlation coefficient, R, with the corresponding p value, was computed.

Because the Mis parameter did not always show a normal distribution, it is presented in terms of median ± interquartile range, and a nonparametric sign test was used to test whether its distribution is significantly different from zero.

To test whether the pattern of errors (2D vectors) differs between two conditions (experiment 1, Visual vs Haptic; experiment 2, Upright vs Supine; experiment 3, 1G vs 0G), a bootstrap technique was used. This technique, which allows one to correctly take into account both the direction and the amplitude of the individual vectors, consisted of using 10,000 re-samplings with replacement of the 18 subjects to estimate the statistical distribution of the difference in amplitude, ΔAmp, and the angle, θ, between the vectorial averages of two conditions, and to compute the probability of error in rejecting the null hypothesis, H0, that θ = 0. Following a Bayesian approach, taking into account a prior uniform distribution of all possible angles (θ range ±180°), we evaluated the ratio, R0/1, between the probability of obtaining the observed data under the null hypothesis, H0, and the probability under the alternative hypothesis, H1, that θ ≠ 0 (Wagenmakers et al., 2018).
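A sketch of this bootstrap for a within-subject comparison (names and array shapes are ours; the Bayesian ratio R0/1 additionally requires the uniform prior over θ described above):

```python
import numpy as np

def bootstrap_vector_test(vecs_a, vecs_b, n_boot=10_000, seed=0):
    """Resample subjects with replacement and return bootstrap distributions
    of the amplitude difference (delta_amp) and of the angle theta (degrees)
    between the vectorial averages of two conditions."""
    rng = np.random.default_rng(seed)
    vecs_a = np.asarray(vecs_a, dtype=float)  # (n_subjects, 2), condition 1
    vecs_b = np.asarray(vecs_b, dtype=float)  # (n_subjects, 2), condition 2
    n = len(vecs_a)
    delta_amp = np.empty(n_boot)
    theta = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)      # paired resampling of subjects
        ma, mb = vecs_a[idx].mean(axis=0), vecs_b[idx].mean(axis=0)
        delta_amp[i] = np.linalg.norm(mb) - np.linalg.norm(ma)
        ang = np.arctan2(mb[1], mb[0]) - np.arctan2(ma[1], ma[0])
        theta[i] = np.degrees((ang + np.pi) % (2 * np.pi) - np.pi)  # wrap to [-180, 180)
    return delta_amp, theta

# Two-sided bootstrap p-value for H0: theta = 0 can be read from the
# resampled distribution: p = 2 * min((theta > 0).mean(), (theta < 0).mean())
```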

In experiment 3, to test whether the effect of microgravity has the same direction for visual and haptic modalities the bootstrap re-sampling was performed independently for the two sensory conditions, because different groups of subjects were tested for each of the two modalities.

Results

Experiment 1: haptic and visual perception

Figure 5A shows that for the six geometric configurations of the squaring task (see Materials and Methods) the subjects made systematic errors in both visual and haptic conditions. The comparison of the errors made using haptic information alone versus visual information alone shows consistent, opposing results for the two sensory modalities. Hence, in each task, when subjects made on average significant positive errors in the haptic condition, they made negative errors in the visual condition, and vice versa. Figure 5B represents the more robust evaluation of the errors obtained by considering the constraints existing between the errors performed in the six squaring tasks (see Materials and Methods, Eqs. 1–3). The amplitude of the error was significantly different from zero for both visual and haptic perception in the sagittal (visual, t(17) = 5.86, p < 10−4; haptic, t(17) = −8.10, p < 10−6) and transversal planes (visual, t(17) = −7.22, p < 10−5; haptic, t(17) = 9.22, p < 10−6), but in the frontal plane neither modality was significantly different from zero (visual, t(17) = −1.26, p = 0.22; haptic, t(17) = −0.57, p = 0.58). Sensory modality had a significant effect in the sagittal (F(1,17) = 60.8, p < 10−5) and transversal (F(1,17) = 94.96, p < 10−6) planes but not in the frontal plane (F(1,17) = 0.14, p = 0.71). Remarkably, the significant perceptive errors in the sagittal and transversal planes had opposite signs in the two sensory conditions: when using the haptic sense, subjects produced rectangles with the anterior–posterior dimension smaller than the longitudinal and lateral dimensions, while, when using vision, they made rectangles with the anterior–posterior dimension larger than the longitudinal and lateral dimensions. Moreover, when looking at the individual errors in Figure 5C, a strong negative correlation can be observed between visual and haptic errors (R = −0.79, p < 10−12), meaning that subjects who showed a stronger distortion in the visual domain also showed a stronger distortion, but in the opposite direction, in the haptic domain. The correlation remained significant when the average error in each plane was subtracted from the corresponding individual values (inset of Fig. 5C, R = −0.28, p < 0.05).

Figure 5.

A, Errors for the task performed in each of the six geometrical conditions using haptic information only (light blue bars) or visual information only (red bars). Each geometrical condition is characterized by the plane in which the rectangle lies (sagittal, transversal, frontal), and by which direction within the plane was adjustable or held constant: longitudinal (Lo), anterior–posterior (AP), and lateral (La). Positive errors correspond to the final size of the adjustable dimension being greater than the reference dimension. Vertical whiskers represent 95% confidence intervals. A significant difference between the two tasks performed in the same plane is indicative of an important perceptive distortion in that specific plane. B, Perceptive errors in the three task planes for haptic and visual conditions. ***p < 10−3 in the ANOVA testing the modality effect; ‡p < 10−3 for the t test ascertaining differences from zero. C, Individual planar errors in the visual tasks as a function of the errors in the haptic tasks. Each marker type corresponds to a specific subject. The level of gray represents the plane of the task (black = sagittal, light-gray = frontal, dark-gray = transversal). The dashed line represents the linear regression of the data. The top-right inset represents the same data after subtracting from each point the mean error of the corresponding task plane. D, Vectorial representation of participant errors. Thicker vectors correspond to the vectorial average of the individual responses (thinner vectors). For details about the meaning of the polar plot representation see Figure 4C. E, Perceptive cuboids illustrating how a cube (gray shape) would be perceived by the subjects when using haptic or visual information alone, respectively. For illustration purposes, the distortions in this panel are scaled up by a factor of 5. Data reported in all panels are based on the performances of 18 subjects.

The vectorial representations of the individual errors for the two sensory modalities in Figure 5D fall along the same axis, but in opposite directions, meaning that the perceptual errors were in both cases restricted to an expansion (haptic) or contraction (visual) along the anterior–posterior axis with little or no distortion in the fronto-parallel plane. The patterns of errors for the two modalities therefore appear complementary, in that they would tend to mutually cancel out when combined. Consistently, the analysis of the cross product between the haptic and visual individual vectors does not reveal any significant misalignment (Mis = −52 ± 55 mm2, sign test: p = 0.48). The angle θ between the average visual and haptic vectors is 172 ± 6° and not significantly different from 180° (bootstrap p = 0.07). Taking into account all possible orientations for the two groups of vectors, the observed results are 9 times more likely under the hypothesis that the patterns of errors of the two senses are complementary (H0: θ = 180°) than under the alternative hypothesis (H1: θ ≠ 180°). The average visual and haptic vectors show, on the other hand, amplitudes that are significantly different (bootstrap: ΔAmp = 5.8 ± 2 mm, p = 0.003), meaning that, although the patterns of errors for the two modalities are complementary, they would not exactly cancel each other out, although the difference would be small. The illustration of the “perceptive cuboids” corresponding to the two sensory modalities reported in Figure 5E confirms that the haptic and visual perceptive errors mainly consist of a depth overestimation and underestimation for the haptic and visual senses, respectively.

Even though the amplitude of the perceptive biases (constant components of the errors reported in Fig. 5) appears smaller for the haptic than for the visual modality, the latter is characterized by a clearly smaller intra-personal variability of the responses (σhapt = 6.1 ± 2.6 mm, σvis = 4.2 ± 2.2 mm, sensory modality effect: F(1,17) = 12.02, p < 10−2), corresponding to a higher precision for the visual than for the haptic task.

In summary, Experiment 1 shows clear differences in the patterns of visual and haptic distortions. For both modalities the errors appeared primarily in the sagittal and transversal planes, and the amplitude and sign of the errors in one modality depended on the amplitude and sign of the errors in the other modality. More precisely, the patterns of errors were opposite (contraction and expansion of perceived depth for visual and haptic, respectively).

Experiment 2: effect of body orientation

The responses of the upright subjects were characterized by constant errors similar to those observed in Experiment 1 (Experiment effect, Wilks’ λ = 0.85, F(4,32) = 1.35, p = 0.27). The left columns of Table 3 and left panels of Figure 6 show that for both haptic and visual experiments the squaring errors appear consistent between postures when expressed egocentrically: we observed no statistically significant effects of posture on the errors for any of the three planes when expressed in the body-centered reference frame. The misalignment, Mis, between the individual vectors corresponding to the upright and supine conditions (lower-left part of Fig. 6A,B) is not significantly different from zero (haptic: Mis = 20 ± 47 mm2, sign test: p = 0.81; vision: Mis = 2 ± 12 mm2, sign test: p = 1). For both sensory modalities, the differences in amplitude and direction between the average vectors representing the patterns of errors in the upright and supine positions do not differ significantly from zero (bootstrap for haptics: ΔAmp = 0.1 ± 1.1 mm, p = 0.56; θ = 6 ± 14°, p = 0.33; R0/1 = 9.3; bootstrap for vision: ΔAmp = −2 ± 1.5 mm, p = 0.09; θ = 2 ± 3°, p = 0.25; R0/1 = 38).

Table 3.

Results of ANOVA for the posture effect on the planar perceptive distortion

Modality | Sagittal | Transversal | Frontal | Meridian | Horizontal | Latitudinal
Haptic | F(1,17) = 0.40, p = 0.53 | F(1,17) = 0.58, p = 0.46 | F(1,17) = 0.001, p = 0.97 | F(1,17) = 52.28, p < 10−5 | F(1,17) = 13.01, p = 0.002 | F(1,17) = 12.18, p = 0.003
Visual | F(1,17) = 2.00, p = 0.18 | F(1,17) = 1.32, p = 0.27 | F(1,17) = 0.15, p = 0.70 | F(1,17) = 25.46, p < 10−3 | F(1,17) = 19.92, p < 10−3 | F(1,17) = 22.87, p < 10−3

Figure 6.

Errors within each plane when the subjects are seated normally (upright) or lying supine. The upper (A) and lower (B) panels represent the results for the haptic and visual modalities, respectively. The left panels represent the errors per anatomical, egocentric plane. The right panels represent the data per allocentric (fixed with respect to gravity) plane. **p < 10−2 and ***p < 10−3 in the ANOVA; †p < 10−2 and ‡p < 10−3 for the t test ascertaining differences from zero. Vertical whiskers represent 95% confidence intervals. In each barplot the inset reports the perceptive cuboid corresponding to the 3D perceptive distortion (amplified ×5) of a cube. The polar plots report the vectorial representation of the individual errors. Thicker vectors represent the average vectorial response. For details about the meaning of the polar plot representation see Figure 4C. Data reported in this figure are based on the performances of 36 subjects (18 for the haptic and 18 for the visual experiment).

On the other hand, if the errors are represented in terms of allocentrically defined planes, i.e., fixed with respect to gravity (last three columns of Table 3 and right panels of Fig. 6), a clear effect of posture on the orientation of the pattern of errors can be observed in all planes for both sensory modalities, with significant misalignments (haptic: Mis = 38 ± 19 mm2, sign test: p = 0.007; vision: Mis = 109 ± 55 mm2, sign test: p = 0.001). Consistently, the angle between the average vectors representing the errors in the allocentric space for the two postures is significantly different for both modalities (bootstrap p < 10−4 for haptics and vision).

The intra-personal variability of the responses was not affected by the posture for the haptic modality (σupright = 6.2 ± 6.1 mm, σsupine = 6.6 ± 6.0 mm, posture effect: F(1,17) = 0.12, p = 0.73), but significantly increased in the supine position for the visual experiment (σupright = 3.5 ± 3.2 mm, σsupine = 4.8 ± 4.7 mm, posture effect: F(1,17) = 6.81, p = 0.02).

In conclusion, in this experiment we found that patterns of errors of both visual and haptic perception were invariant when expressed in an egocentric reference frame, but not when expressed in an allocentric one.

Experiment 3: gravity’s effect on visual and haptic perception

While the visual inputs are not different on the ground and in weightlessness, the forces exerted against the virtual constraints during haptic exploration might differ in 0G due to biomechanical and neurophysiological effects. We therefore first analyzed the changes in the contact forces between the subject’s hand and the virtual object, and then the pattern of squaring errors (Fig. 7A–C). The left plot of Figure 7A shows that the vertical forces applied by the subjects on the upper and lower edges of the sensed object were modulated (F(2,34) = 3.9, p = 0.02) by the experimental conditions (1G, 0G, Supported). As expected, upward and downward forces increased and decreased, respectively, in microgravity (post hoc 1G vs 0G, p = 0.02), consistent with a reduction of the weight of the upper limb. When the weight of the arm was supported (see Materials and Methods), the vertical forces also tended to differ from the 1G condition (post hoc Supp vs 1G, p = 0.09) and were modulated in the same direction as in 0G (post hoc Supp vs 0G, p = 0.29). Horizontal forces were also significantly affected by the experimental condition (F(2,34) = 6.32, p < 0.01; Fig. 7A, right plot), with a significant increase of the contact forces in microgravity with respect to the 1G and Support conditions.

Figure 7.

Results of the microgravity experiments for the haptic (A–C and F panels) and visual (D, E, and G panels) tasks. A, Contact forces in the three experimental conditions: normo-gravity (1G), microgravity (0G) and with a mechanical support of the arm (Supp). Left, Vertical forces generated against the upper and lower edges of the rectangle. Right, Horizontal forces generated against all other edges of the rectangle. B, D, Errors observed in the three task planes for each experimental condition, together with the error predicted in microgravity assuming the same effect of gravity on both haptic and visual tasks. C, E, Polar plots representing individual errors. Thicker vectors represent the average vectorial response. For details about the meaning of the polar plot representation see Figure 4C. F, G, Illustration of the perceptive cuboids (experimental results scaled up by 5) in normal gravity and in microgravity together with the reference cube (gray). *p < 0.05, **p < 10−2, and ***p < 10−3 in the ANOVA; ∤, †, and ‡: p < 0.05, p < 10−2, and p < 10−3, respectively, for the t test ascertaining differences from zero. Data reported are based on the performances of 36 subjects: 18 for the haptic and 18 for the visual experiment.

This increase of the contact force in 0G, similar to what was previously observed in haptic tests during parabolic flights (Mierau et al., 2008), could be the result of a specific strategy aimed at keeping muscular tension, and hence muscle spindle sensitivity, similar to normal gravity conditions. This strategy would avoid the decrease in proprioceptive precision previously observed in weightlessness for “open-chain” motor tasks, for which the same strategy could not be adopted, resulting in a decrease in muscle tension (Clément and Reschke, 2010). This hypothesis is consistent with the fact that the precision of the haptic responses was not significantly affected by the experimental condition (response variability: 1G 6.8 ± 2.6 mm, 0G 7.1 ± 3.1 mm, Supp 6.4 ± 2.9 mm; F(2,34) = 1.75, p = 0.19), suggesting that neither microgravity nor the arm support significantly interfered with the subjects’ ability to perform the task. This lack of a microgravity effect on haptic precision appears in line with the results of previous orbital experiments (McIntyre and Lipshits, 2008).

Importantly, the results about the vertical contact forces and response variability suggest that the “arm support” condition successfully mimicked the expected lightening of the arm observed in microgravity. Therefore, if haptic perceptive distortions (constant errors) are affected by microgravity, but not by the arm support, they would not be directly ascribable to the biomechanical action of microgravity on the arm.

The comparison of the constant errors in the three experimental conditions, reported in Figure 7B, clearly shows that the perceptive distortion characterizing haptic perception in the sagittal plane was significantly amplified (became more negative) by microgravity, but was not affected by the arm support (condition effect F(2,34) = 12.49, p < 10−4), suggesting a perceptive rather than biomechanical effect. Similarly, the haptic distortion in the transversal plane was amplified (became more positive) in 0G and was not affected by the support either (condition effect F(2,34) = 11.13, p < 10−3). Finally, the lack of distortion in the frontal plane persisted independent of the gravitational and support conditions (F(2,34) = 0.33, p = 0.71). Figure 7C shows a clear increase of the amplitude of the average error vector in 0G (bootstrap: ΔAmp = 5 ± 1 mm, p < 10−4). A nonsignificant misalignment between the individual haptic errors in the two gravitational conditions is reported (Mis = 2 ± 33 mm2, sign test: p = 1) and, consistently, the angle θ between the two average vectors is not significantly different from 0 (bootstrap: θ = −5 ± 16°, p = 0.62; R0/1 = 8.4).

For the visual tasks, Figure 7D shows that, as for the haptic sense, microgravity significantly modulated the perceptive distortions. More precisely, the large errors characterizing both the sagittal and transversal planes in 1G were significantly reduced in weightlessness (F(1,17) = 15.41, p = 0.0011 and F(1,17) = 7.87, p = 0.012, respectively). In the frontal plane, a small but significant height underestimation appeared in 0G (F(1,17) = 9.531, p = 0.007). The polar plot of Figure 7E shows that the amplitude of the average error vector decreases in microgravity (bootstrap: ΔAmp = −2.8 ± 0.8 mm, p < 10−4). Note that there is a small but significant misalignment between the 1G and 0G vectors (Mis = 16 ± 12 mm2, sign test: p = 0.007; bootstrap: θ = 7 ± 3°, p < 10−4). The analysis of the variable component of the errors shows that microgravity did not significantly affect the subjects’ visual precision (F(1,17) = 4.3, p = 0.054), although the response variability tended to increase from 4.4 ± 2.5 to 5.2 ± 2.4 mm.

The qualitative comparison of Figure 7F and G illustrates that the effect of weightlessness on both sensory modalities mainly consists of a stretching of depth perception with respect to normo-gravity conditions (an increase in slenderness for the haptic modality; a decrease in stubbiness for the visual one).

In neither the haptic nor the visual 0G tasks did the amplitude of the errors appear to change over the parabolas (trial number effect on haptic errors: sagittal F(4,60) = 0.79, p = 0.54; transversal F(4,60) = 0.23, p = 0.92; frontal F(4,60) = 0.49, p = 0.74; and on visual errors: sagittal F(4,68) = 1.23, p = 0.30; transversal F(4,68) = 0.60, p = 0.67; frontal F(4,68) = 0.63, p = 0.64), suggesting a lack of significant adaptation to microgravity over the duration of the experiment.

The direct quantitative comparison of the effect of microgravity, Δε̄_s, between the two groups of subjects of the visual and haptic experiments (Fig. 8A) shows similar modulations of the perceptual distortion for both senses (Wilks’ λ = 0.91, F(3,32) = 0.96, p = 0.42). Although the amplitude of the microgravity effect tends to be larger for haptic than for visual perception (bootstrap, p = 0.06), the average directions of the microgravity effect on the visual and haptic senses appear very similar (Fig. 8B): the angle θ between the two vectors representing the average effect of gravity on the two modalities is only 15.6 ± 15.6° and not significantly larger than zero (bootstrap, p = 0.14). When considering the range of all possible θ (±180°), Bayesian statistics suggest that the observed data are 5.2 times more likely under the hypothesis that θ = 0° (H0) than under the hypothesis that θ ≠ 0° (H1). As shown in Figure 7B,D, the perceptive errors predicted in 0G, ε̂_s,0G, by assuming that the gravity effect is identical for the haptic and visual modalities (both in terms of direction and amplitude) are indeed indistinguishable from the observed results (Wilks’ λ = 0.73, F(6,12) = 0.73, p = 0.63), despite the small difference in orientation between Δε̄_visual and Δε̄_haptic and despite the slight change in orientation of the visual vector when passing from 1G to 0G (see above).

Figure 8.

Comparison of the effect of microgravity on the Haptic and Visual senses. A, Difference between the constant errors made by the subjects in the 0G and 1G conditions for the tasks in the three anatomical planes. Vertical whiskers represent 95% confidence interval. B, Vectorial representation of the gravity effect. Thicker vectors represent the average response. For details about the meaning of the polar plot representation see Figure 4C. Data reported are based on the performances of 36 subjects (18 for the visual and 18 for the haptic experiment).

To summarize, the parabolic flight experiments show that, although opposite perceptive errors characterize vision and haptic sense in normal gravity conditions, the effects of microgravity on each of those patterns of errors are in the same direction for the two sensory modalities.

Results summary

Experiment 1 revealed strong, complementary distortions between haptic and visual perception of 3D geometry. Subjects visually underestimated an object’s depth with respect to both height and width, whilst overestimating depth when exploring the object haptically. In Experiment 2, the comparison of seated versus supine body orientation clearly showed that both visual and haptic distortions align with the subject’s body rather than with gravity. Experiment 3, conducted during parabolic flight, showed a clear effect of microgravity on both haptic and visual distortion. Importantly, despite the fact that the perceptive errors in normo-gravity were in the opposite directions for visual and haptic tasks, the changes induced by microgravity were in the same direction along the anterior–posterior axis: weightlessness increases the haptic over-estimation of depth with respect to width and height and decreases the visual under-estimation of depth with respect to width and height.

Discussion

The experiments presented here aimed to understand how gravity affects the perception of 3D shapes. We extend previous studies restricted to vision to include haptic sensation by using the same experimental paradigm for the two modalities. In the following we argue for a modality-independent role of gravity in interpreting incoming sensory signals.

Haptic and visual perception in normo-gravity conditions

Individually, the visual and haptic distortions observed here are consistent with previous findings obtained without using head-mounted displays or haptic devices, supporting the validity of the present experimental paradigms. Our haptic results concur with the overestimation of the radial dimension observed for haptic tasks (Lipshits et al., 1994; Armstrong and Marks, 1999; Fasse et al., 2000; Henriques and Soechting, 2003). Similarly, visual underestimation of depth has been previously reported in the horizontal plane (Wagner, 1985; Loomis and Philbeck, 1999). Surprisingly, we observed no significant “horizontal–vertical illusion”, which has previously been observed in the frontal plane (Avery and Day, 1969). The placement of the stimulus in front of the right shoulder in our experiment, rather than straight ahead, may have impeded the interpretation of vertical and horizontal lines as depth cues, which is purported to be the source of this illusion (Girgus and Coren, 1975).

Our experiments with supine subjects also show that the patterns of visual and haptic errors are tied to the axes of the body, not to gravity. Although in apparent contradiction with the effects of body tilt on visual tasks (Marendaz et al., 1993; Leone, 1998; Barnett-Cowan et al., 2015), or external forces on haptic perception (Wydoodt et al., 2006), our observed posture-invariant error pattern concurs with previously reported body-centered and eye-centered encoding of haptic (Gurfinkel et al., 1993; Dupin et al., 2018) and visual information (Avery and Day, 1969; Howard et al., 1990; McIntyre et al., 1997; Henriques et al., 1998; Vetter et al., 1999) and with the lack of body-tilt effect in unimodal, but not cross-modal, tasks (Bernard-Espina et al., 2022).

Although perceptual biases are already known to differ between visually and haptically guided pointing (van Beers et al., 1999; Liu et al., 2018), we show for the first time a complementarity and a negative correlation between the two. Although we cannot fully discard the hypothesis of a fortuitous correspondence between modality-specific mechanisms, such as the integration of eye vergence signals for vision (Murdison et al., 2019) or exploratory movement kinematics for haptics (Armstrong and Marks, 1999), our findings suggest some level of shared neural processing. In previous studies, the sequential nature of haptic shape exploration, requiring information storage in working memory, was shown to contribute to perceptive distortions (McFarland and Soechting, 2007). Similarly, both pointing to memorized targets (McIntyre et al., 1998) and haptic-visual comparisons (McIntyre and Lipshits, 2008) showed distortions related to memory storage and coordinate transformations. The sequential nature of the haptic explorations in our experiments, and the likely need for sequential visual scanning, plus the need to compare lengths along different directions, would require similar central processing of spatial information. The clearly different distortions in the visual versus haptic tasks suggest that these tasks are carried out by separate, modality-specific processes. Nevertheless, the link between modality-specific squaring errors reported here suggests that central neural processes associated with memory storage and coordinate transformations are shared between the two.

3D object perception in microgravity

Although the egocentric patterns observed for visual and haptic errors would suggest that an external cue, such as gravity, should not influence shape perception, the strong microgravity effects observed in parabolic flight clearly show the contrary. How can these apparently contradictory results be reconciled? We have shown that the observed effects of microgravity on both haptic and visual perceptive distortions are not directly ascribable to a decrease in perceptual precision, nor to the mechanical action of gravity on the arm in the haptic task (arm-support and supine conditions). Moreover, the remarkable similarity between microgravity's effects on visual and haptic distortions makes it unlikely that they are caused by independent effects on the two sensory systems, such as modifications of proprioceptive-tactile receptor output for haptic tasks (Lipshits et al., 1994) or alterations of eye movement control for visual tests (Clément et al., 1989; Clarke et al., 2013). A more parsimonious and likely explanation is an effect of gravity on sensory processing shared by the two sensory modalities, which could only be hypothesized in previous unimodal studies (Clément et al., 2009, 2012, 2013).

Through what mechanism does gravity affect shape perception?

The observed modality-independent effects of gravity on shape perception can be associated with vestibular/otolithic projections toward the neural network that recurrently connects the brain areas involved in the haptic and visual representation of objects, whose existence has been revealed by various brain imaging and electrophysiological studies (Fig. 9A). The lateral occipital complex (LOC), known to be activated by 3D object images, is also active during haptic shape recognition. Similarly, the S1, S2, vPM, and BA5 areas, commonly associated with haptic object perception, are also activated by images of manipulable objects. These cross-modal activations are mediated by the intraparietal sulcus (IPS), whose activity is enhanced during cross-modal, visuo-haptic object recognition. That the IPS plays a role in reconstructing a visual representation of a haptically sensed object, and vice versa, is supported by electrophysiological activity consistent with recurrent neural networks able to perform cross-modal sensory re-encoding (Pouget et al., 2002; Avillac et al., 2005). The coexistence of visual and haptic object representations, as depicted in Figure 9B, is consistent with behaviorally observed concurrent representations of reaching/grasping tasks (McGuire and Sabes, 2009, 2011; Tagliabue and McIntyre, 2011, 2012, 2013, 2014; Tagliabue et al., 2013; Arnoux et al., 2017) and with the link that we observed here between haptic and visual perceptive errors in normo-gravity conditions.

Figure 9.

A, Evidence of neural activation associated with haptic (blue), visual (red), and cross-modal (orange) object perception. The regions primarily involved in haptic object representation are the primary and secondary somatosensory areas (S1 and S2), Brodmann area 5 (BA5), and the ventral premotor (vPM) area. The visual representation of 3D objects is known to reside in the lateral occipital complex (LOC). The font size of the numbers qualitatively represents the intensity of the neural activation during object perception tasks: 1 Sakata et al., 1973; 2 Koch and Fuster, 1989; 3 Moore and Engel, 2001; 4 James et al., 2002; 5 Grefkes et al., 2002; 6 Amedi et al., 2002; 7 Grill-Spector, 2003; 8 Deshpande et al., 2008; 9 Stilla and Sathian, 2008; 10 Vingerhoets, 2008; 11 Lacey et al., 2009; 12 Meyer et al., 2011; 13 Snow et al., 2014; 14 Sun et al., 2016; 15 Yau et al., 2016. Green letters represent studies reporting otolithic projections to the intraparietal sulcus (IPS) area: a Blanke et al., 2000; b Miyamoto et al., 2007; d-e Chen et al., 2011, 2013; c Schlindwein et al., 2008. B, Proposed schematic of the information processing underlying object perception. A space/body internal representation reciprocally connects the concurrent haptic and visual object representations and allows building a visual representation of the object from haptic signals and vice versa. Otolithic signals affect the body/space internal representation, distorting both the haptic and visual object representations. Beneath the blocks, their identified cortical locations are reported, based on electrophysiological and brain imaging findings in the literature.

We propose that the trans-modal processing performed by the IPS, as depicted in Figure 9, is the source of the modality-independent distortions observed when performing the experiment in 0G. To transform a visually acquired object into a stable haptic representation (and vice versa), despite potentially independent movements of the two sensory systems, the IPS network must use a stable internal representation of the body and/or peripersonal space (Andersen et al., 1997; Cohen and Andersen, 2002; Land, 2014), built by constantly integrating signals about the eye-hand kinematic chain and the position of the body in space, including vestibular inputs. Clear evidence that internal models of the body/space affect the interpretation of incoming sensory information in a Bayesian fashion has been extensively reported, e.g., the "Ames room" and Müller-Lyer visual illusions, which rely on prior knowledge about the geometry of constructed environments (O'Reilly et al., 2012), or the cutaneous rabbit illusion (Goldreich, 2007). The contribution of gravitational signals to the body/space representation concurs with (a) the vestibular (i.e., otolithic) projections to the IPS area reported in numerous electrophysiological studies (Blanke et al., 2000; Miyamoto et al., 2007; Schlindwein et al., 2008; Chen et al., 2011, 2013), (b) the observed interference of head tilt with the re-encoding of sensory signals between visual and haptic space (Burns and Blohm, 2010; Tagliabue and McIntyre, 2011, 2013; Bernard-Espina et al., 2022), and (c) the effect of vestibular stimulation on the perception of one's own body size (Mast et al., 2014).
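
This Bayesian reading can be made explicit with a minimal formulation (the symbols below are ours, introduced for illustration only):

$$
\hat{s} \;=\; \operatorname*{arg\,max}_{s}\; p(s \mid x, g) \;=\; \operatorname*{arg\,max}_{s}\; p(x \mid s)\, p(s \mid g),
$$

where $x$ is the visual or haptic measurement of an edge, $s$ the internally represented length, and $g$ the gravity-related state of the body/space model. Because the prior $p(s \mid g)$ belongs to the shared internal representation, a change in $g$, such as the loss of tonic otolithic input in 0G, biases the estimate $\hat{s}$ in the same way regardless of which modality supplied $x$.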

The similar effect of microgravity on both visual and haptic object perception observed here could hence be explained by a deformation of the body schema and/or of the internal representation of peripersonal 3D space due to the unusual lack of gravity. Indeed, the recurrent connections of the IPS network are set/learned for operation in the presence of tonic, gravity-dependent otolithic inputs. If the network lacks this input, then without appropriate adjustments to the synaptic weights the cross-modal transformations, and thus the concurrent object representations, would be unavoidably and similarly affected. Experiments studying visual perception in microgravity have indeed observed that distortions of perceived object size are accompanied by a modification of subjective eye-height estimation (Clément et al., 2008, 2013; Bourrelly et al., 2015, 2016), which, in light of our hypothesis, would reflect a distortion of the internal representation of the body and/or peripersonal space.
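
The logic of this explanation can be illustrated with a minimal numerical sketch (purely hypothetical: the linear map, the gravity channel, and all weights below are illustrative choices, not a model fitted to our data). A cross-modal map is "learned" with a tonic gravity input present and then applied with that input silenced; without retuning, its output shifts identically for any input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cross-modal map (haptic -> visual coordinates), "learned" while a
# tonic, gravity-dependent input g is present (all values hypothetical).
G = 1.0                      # tonic otolithic input during learning (1G)
n = 500                      # training samples (3D edge vectors)
X = rng.normal(size=(n, 3))  # haptic encodings of object edges

# Target re-encoding includes a gravity-scaled component along the body
# longitudinal axis (an arbitrary modeling choice for illustration).
W_true = np.eye(3)
bias_true = np.array([0.0, 0.0, 0.3])
Y = X @ W_true + G * bias_true          # visual encodings during learning

# Least-squares "synaptic weights" fit with the gravity channel present.
A = np.hstack([X, np.full((n, 1), G)])  # sensory inputs + tonic g channel
W, *_ = np.linalg.lstsq(A, Y, rcond=None)

edge = np.array([1.0, 1.0, 1.0])        # a unit cube edge to re-encode

# Apply the learned map in 1G vs. 0G (gravity channel silenced, weights
# unchanged): the 0G output is shifted along the same axis for any input.
visual_1g = np.append(edge, 1.0) @ W
visual_0g = np.append(edge, 0.0) @ W
print("1G re-encoding:", visual_1g.round(3))   # ~[1.0, 1.0, 1.3]
print("0G re-encoding:", visual_0g.round(3))   # ~[1.0, 1.0, 1.0]
# The distortion lives in the shared map, not in either sensory channel,
# so it appears identically in both the visual and haptic representations.
```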

Conclusions

Our study offers a better understanding of the human perception of 3D geometry. We have provided evidence for separate, modality-specific representations underlying visual and haptic object perception in our tasks. Nevertheless, the observed link between the errors characterizing the two senses, together with recent findings on reciprocal activations of the visual and haptic cortical systems, indicates a tight interaction between concurrent visual and haptic object representations. Furthermore, the observation that microgravity has the same incremental effect on visual and haptic object perception argues for a modality-independent perceptive mechanism. Via this mechanism, modality-specific object information would be processed by neural networks of the parietal cortex and interpreted through an internal representation of the body and egocentric 3D space that is shaped by gravity-related (otolithic) signals. These microgravity experiments therefore provide fundamental clues for better understanding the neurophysiology of perception on Earth. They suggest that fully independent, modality-specific 3D object perception does not exist, as the modalities are inextricably linked by gravity. This implies that restricting future investigations to the brain areas associated with a single sensory modality, even when studying a modality-specific behavior, would clearly limit our understanding of the neural mechanisms underlying 3D object perception.

References

1. Amedi A, Jacobson G, Hendler T, Malach R, Zohary E (2002) Convergence of visual and tactile shape processing in the human lateral occipital complex. Cereb Cortex 12:1202–1212. 10.1093/cercor/12.11.1202
2. Andersen RA, Snyder LH, Bradley DC, Xing J (1997) Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annu Rev Neurosci 20:303–330. 10.1146/annurev.neuro.20.1.303
3. Armstrong L, Marks LE (1999) Haptic perception of linear extent. Percept Psychophys 61:1211–1226. 10.3758/BF03207624
4. Arnoux L, Fromentin S, Farotto D, Beraneck M, McIntyre J, Tagliabue M (2017) The visual encoding of purely proprioceptive intermanual tasks is due to the need of transforming joint signals, not to their interhemispheric transfer. J Neurophysiol 118:1598–1608. 10.1152/jn.00140.2017
5. Avery GC, Day RH (1969) Basis of the horizontal-vertical illusion. J Exp Psychol 81:376–380. 10.1037/h0027737
6. Avillac M, Denève S, Olivier E, Pouget A, Duhamel JR (2005) Reference frames for representing visual and tactile locations in parietal cortex. Nat Neurosci 8:941–949. 10.1038/nn1480
7. Barnett-Cowan M, Snow JC, Culham JC (2015) Contribution of bodily and gravitational orientation cues to face and letter recognition. Multisens Res 28:427–442. 10.1163/22134808-00002481
8. Bernard-Espina J, Dal Canto D, Beraneck M, McIntyre J, Tagliabue M (2022) How tilting the head interferes with eye-hand coordination: the role of gravity in visuo-proprioceptive, cross-modal sensory transformations. Front Integr Neurosci 16:788905. 10.3389/fnint.2022.788905
9. Blanke O, Perrig S, Thut G, Landis T, Seeck M (2000) Simple and complex vestibular responses induced by electrical cortical stimulation of the parietal cortex in humans. J Neurol Neurosurg Psychiatry 69:553–556. 10.1136/jnnp.69.4.553
10. Bourrelly A, McIntyre J, Luyat M (2015) Perception of affordances during long-term exposure to weightlessness in the international space station. Cogn Process 16:171–174. 10.1007/s10339-015-0692-y
11. Bourrelly A, McIntyre J, Morio C, Despretz P, Luyat M (2016) Perception of affordance during short-term exposure to weightlessness in parabolic flight. PLoS One 11:e0153598. 10.1371/journal.pone.0153598
12. Burns JK, Blohm G (2010) Multi-sensory weights depend on contextual noise in reference frame transformations. Front Hum Neurosci 4. 10.3389/fnhum.2010.00221
13. Chen A, DeAngelis GC, Angelaki DE (2011) Representation of vestibular and visual cues to self-motion in ventral intraparietal cortex. J Neurosci 31:12036–12052. 10.1523/JNEUROSCI.0395-11.2011
14. Chen X, DeAngelis GC, Angelaki DE (2013) Diverse spatial reference frames of vestibular signals in parietal cortex. Neuron 80:1310–1321. 10.1016/j.neuron.2013.09.006
15. Clarke AH, Just K, Krzok W, Schönfeld U (2013) Listing's plane and the 3D-VOR in microgravity – the role of the otolith afferences. J Vestib Res 23:61–70. 10.3233/VES-130476
16. Clément G, Andre-Deshays C, Lathan CE (1989) Effects of gravitoinertial force variations on vertical gaze direction during oculomotor reflexes and visual fixation. Aviat Space Environ Med 60:1194–1198.
17. Clément G, Bukley A (2008) Mach's square-or-diamond phenomenon in microgravity during parabolic flight. Neurosci Lett 447:179–182. 10.1016/j.neulet.2008.10.012
18. Clément G, Demel M (2012) Perceptual reversal of bi-stable figures in microgravity and hypergravity during parabolic flight. Neurosci Lett 507:143–146. 10.1016/j.neulet.2011.12.006
19. Clément G, Eckardt J (2005) Influence of the gravitational vertical on geometric visual illusions. Acta Astronaut 56:911–917. 10.1016/j.actaastro.2005.01.017
20. Clément G, Fraysse MJ, Deguine O (2009) Mental representation of space in vestibular patients with otolithic or rotatory vertigo. Neuroreport 20:457–461. 10.1097/WNR.0b013e328326f815
21. Clément G, Lathan C, Lockerd A (2008) Perception of depth in microgravity during parabolic flight. Acta Astronaut 63:828–832. 10.1016/j.actaastro.2008.01.002
22. Clément G, Loureiro N, Sousa D, Zandvliet A (2016) Perception of egocentric distance during gravitational changes in parabolic flight. PLoS One 11:e0159422. 10.1371/journal.pone.0159422
23. Clément G, Reschke MF (2010) Neuroscience in space. New York: Springer Science & Business Media.
24. Clément G, Skinner A, Lathan C (2013) Distance and size perception in astronauts during long-duration spaceflight. Life 3:524–537. 10.3390/life3040524
25. Clément G, Skinner A, Richard G, Lathan C (2012) Geometric illusions in astronauts during long-duration spaceflight. Neuroreport 23:894–899. 10.1097/WNR.0b013e3283594705
26. Clément G, Vieville T, Lestienne F, Berthoz A (1986) Modifications of gain asymmetry and beating field of vertical optokinetic nystagmus in microgravity. Neurosci Lett 63:271–274. 10.1016/0304-3940(86)90368-X
27. Cohen YE, Andersen RA (2002) A common reference frame for movement plans in the posterior parietal cortex. Nat Rev Neurosci 3:553–562. 10.1038/nrn873
28. Curry RE (1972) A Bayesian model for visual space perception. In Seventh Annual Conference on Manual Control, pp 187.
29. De Saedeleer C, Vidal M, Lipshits M, Bengoetxea A, Cebolla AM, Berthoz A, Cheron G, McIntyre J (2013) Weightlessness alters up/down asymmetries in the perception of self-motion. Exp Brain Res 226:95–106. 10.1007/s00221-013-3414-7
30. Deshpande G, Hu X, Stilla R, Sathian K (2008) Effective connectivity during haptic perception: a study using Granger causality analysis of functional magnetic resonance imaging data. NeuroImage 40:1807–1814. 10.1016/j.neuroimage.2008.01.044
31. Dupin L, Hayward V, Wexler M (2018) Radial trunk-centred reference frame in haptic perception. Sci Rep 8:13550. 10.1038/s41598-018-32002-3
32. Fasse ED, Hogan N, Kay BA, Mussa-Ivaldi FA (2000) Haptic interaction with virtual objects. Spatial perception and motor control. Biol Cybern 82:69–83. 10.1007/PL00007962
33. Girgus JS, Coren S (1975) Depth cues and constancy scaling in the horizontal-vertical illusion: the bisection error. Can J Psychol 29:59–65. 10.1037/h0082021
34. Goldreich D (2007) A Bayesian perceptual model replicates the cutaneous rabbit and other tactile spatiotemporal illusions. PLoS One 2:e333. 10.1371/journal.pone.0000333
35. Goltz HC, Irving EL, Steinbach MJ, Eizenman M (1997) Vertical eye position control in darkness: orbital position and body orientation interact to modulate drift velocity. Vision Res 37:789–798. 10.1016/S0042-6989(96)00217-9
36. Goodenough DR, Oltman PK, Sigman E, Cox PW (1981) The rod-and-frame illusion in erect and supine observers. Percept Psychophys 29:365–370. 10.3758/BF03207346
37. Grefkes C, Weiss PH, Zilles K, Fink GR (2002) Crossmodal processing of object features in human anterior intraparietal cortex: an fMRI study implies equivalencies between humans and monkeys. Neuron 35:173–184. 10.1016/S0896-6273(02)00741-9
38. Grill-Spector K (2003) The neural basis of object perception. Curr Opin Neurobiol 13:159–166. 10.1016/S0959-4388(03)00040-0
39. Gurfinkel VS, Lestienne F, Levik Y, Popov KE (1993) Egocentric references and human spatial orientation in microgravity. I. Perception of complex tactile stimuli. Exp Brain Res 95:339–342. 10.1007/BF00229791
40. Harris LR, Jenkin M, Jenkin H, Dyde R, Zacher J, Allison RS (2010) The unassisted visual system on earth and in space. J Vestib Res 20:25–30. 10.3233/VES-2010-0352
41. Harris LR, Mander C (2014) Perceived distance depends on the orientation of both the body and the visual environment. J Vis 14:17. 10.1167/14.12.17
42. Henriques DY, Klier EM, Smith MA, Lowy D, Crawford JD (1998) Gaze-centered remapping of remembered visual space in an open-loop pointing task. J Neurosci 18:1583–1594. 10.1523/JNEUROSCI.18-04-01583.1998
43. Henriques DYP, Soechting JF (2003) Bias and sensitivity in the haptic perception of geometry. Exp Brain Res 150:95–108. 10.1007/s00221-003-1402-z
44. Howard I (1982) Human visual orientation. New York: Wiley.
45. Howard IP, Bergström SS, Ohmi M (1990) Shape from shading in different frames of reference. Perception 19:523–530. 10.1068/p190523
46. James TW, Humphrey GK, Gati JS, Servos P, Menon RS, Goodale MA (2002) Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia 40:1706–1714. 10.1016/S0028-3932(02)00017-9
47. Kersten D, Mamassian P, Yuille A (2004) Object perception as Bayesian inference. Annu Rev Psychol 55:271–304. 10.1146/annurev.psych.55.090902.142005
48. Kersten D, Yuille A (2003) Bayesian models of object perception. Curr Opin Neurobiol 13:150–158. 10.1016/S0959-4388(03)00042-4
49. Koch KW, Fuster JM (1989) Unit activity in monkey parietal cortex related to haptic perception and temporary memory. Exp Brain Res 76:292–306. 10.1007/BF00247889
50. Lacey S, Tal N, Amedi A, Sathian K (2009) A putative model of multisensory object representation. Brain Topogr 21:269–274. 10.1007/s10548-009-0087-4
51. Land MF (2014) Do we have an internal model of the outside world? Philos Trans R Soc Lond B Biol Sci 369:20130045. 10.1098/rstb.2013.0045
52. Lee TS (2015) The visual system's internal model of the world. Proc IEEE Inst Electr Electron Eng 103:1359–1378. 10.1109/JPROC.2015.2434601
53. Leone G (1998) The effect of gravity on human recognition of disoriented objects. Brain Res Brain Res Rev 28:203–214. 10.1016/S0165-0173(98)00040-X
54. Lipshits MI, Gurfinkel EV, McIntyre J, Droulez J, Gurfinkel VS, Berthoz A (1994) Influence of weightlessness on haptic perception. In: Life sciences research in space, proceedings of the fifth European symposium held 26 September – 1 October, 1993 (Oser H, Guyenne TD, eds), pp 367. Arcachon, France: European Space Agency.
55. Lipshits M, McIntyre J (1999) Gravity affects the preferred vertical and horizontal in visual perception of orientation. Neuroreport 10:1085–1089. 10.1097/00001756-199904060-00033
56. Liu Y, Sexton BM, Block HJ (2018) Spatial bias in estimating the position of visual and proprioceptive targets. J Neurophysiol 119:1879–1888. 10.1152/jn.00633.2017
57. Loomis JM, Klatzky RL, Giudice NA (2013) Representing 3D space in working memory: spatial images from vision, hearing, touch, and language. In: Multisensory imagery (Lacey S, Lawson R, eds), pp 131–155. New York: Springer. 10.1007/978-1-4614-5879-1_8
58. Loomis JM, Philbeck JW (1999) Is the anisotropy of perceived 3-D shape invariant across scale? Percept Psychophys 61:397–402. 10.3758/BF03211961
59. Luyat M, Gentaz E (2002) Body tilt effect on the reproduction of orientations: studies on the visual oblique effect and subjective orientations. J Exp Psychol Hum Percept Perform 28:1002–1011. 10.1037/0096-1523.28.4.1002
60. Marendaz C, Stivalet P, Barraclough L, Walkowiac P (1993) Effect of gravitational cues on visual search for orientation. J Exp Psychol Hum Percept Perform 19:1266–1277. 10.1037/0096-1523.19.6.1266
61. Mast FW, Preuss N, Hartmann M, Grabherr L (2014) Spatial cognition, body representation and affective processes: the role of vestibular information beyond ocular reflexes and control of posture. Front Integr Neurosci 8:44. 10.3389/fnint.2014.00044
62. McFarland J, Soechting JF (2007) Factors influencing the radial-tangential illusion in haptic perception. Exp Brain Res 178:216–227. 10.1007/s00221-006-0727-9
63. McGuire LMM, Sabes PN (2009) Sensory transformations and the use of multiple reference frames for reach planning. Nat Neurosci 12:1056–1061. 10.1038/nn.2357
64. McGuire LMM, Sabes PN (2011) Heterogeneous representations in the superior parietal lobule are common across reaches to visual and proprioceptive targets. J Neurosci 31:6661–6673. 10.1523/JNEUROSCI.2921-10.2011
65. McIntyre J, Lipshits M (2008) Central processes amplify and transform anisotropies of the visual system in a test of visual-haptic coordination. J Neurosci 28:1246–1261. 10.1523/JNEUROSCI.2066-07.2008
66. McIntyre J, Stratta F, Lacquaniti F (1997) Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space. J Neurophysiol 78:1601–1618. 10.1152/jn.1997.78.3.1601
67. McIntyre J, Stratta F, Lacquaniti F (1998) Short-term memory for reaching to visual targets: psychophysical evidence for body-centered reference frames. J Neurosci 18:8423–8435. 10.1523/jneurosci.18-20-08423.1998
68. Meyer K, Kaplan JT, Essex R, Damasio H, Damasio A (2011) Seeing touch is correlated with content-specific activity in primary somatosensory cortex. Cereb Cortex 21:2113–2121. 10.1093/cercor/bhq289
69. Mierau A, Girgenrath M, Bock O (2008) Isometric force production during changed-Gz episodes of parabolic flight. Eur J Appl Physiol 102:313–318. 10.1007/s00421-007-0591-8
70. Miyamoto T, Fukushima K, Takada T, de Waele C, Vidal PP (2007) Saccular stimulation of the human cortex: a functional magnetic resonance imaging study. Neurosci Lett 423:68–72. 10.1016/j.neulet.2007.06.036
71. Moore C, Engel SA (2001) Neural response to perception of volume in the lateral occipital complex. Neuron 29:277–286. 10.1016/S0896-6273(01)00197-0
72. Murdison TS, Leclercq G, Lefèvre P, Blohm G (2019) Misperception of motion in depth originates from an incomplete transformation of retinal signals. J Vis 19:1–15. 10.1167/19.12.21
73. O'Reilly JX, Jbabdi S, Behrens TEJ (2012) How can a Bayesian approach inform neuroscience? Eur J Neurosci 35:1169–1179. 10.1111/j.1460-9568.2012.08010.x
74. Pouget A, Deneve S, Duhamel JR (2002) A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci 3:741–747. 10.1038/nrn914
75. Prinzmetal W, Beck DM (2001) The tilt-constancy theory of visual illusions. J Exp Psychol Hum Percept Perform 27:206–217. 10.1037/0096-1523.27.1.206
76. Reschke MF, Kolev OI, Clément G (2017) Eye-head coordination in 31 space shuttle astronauts during visual target acquisition. Sci Rep 7:14283. 10.1038/s41598-017-14752-8
77. Reschke MF, Wood SJ, Clément G (2018) Ocular counter rolling in astronauts after short- and long-duration spaceflight. Sci Rep 8:7747. 10.1038/s41598-018-26159-0
78. Sakata H, Takaoka Y, Kawarasaki A, Shibutani H (1973) Somatosensory properties of neurons in the superior parietal cortex (area 5) of the rhesus monkey. Brain Res 64:85–102. 10.1016/0006-8993(73)90172-8
79. Schlindwein P, Mueller M, Bauermann T, Brandt T, Stoeter P, Dieterich M (2008) Cortical representation of saccular vestibular stimulation: VEMPs in fMRI. NeuroImage 39:19–31. 10.1016/j.neuroimage.2007.08.016
80. Snow JC, Strother L, Humphreys GW (2014) Haptic shape processing in visual cortex. J Cogn Neurosci 26:1154–1167. 10.1162/jocn_a_00548
81. Stilla R, Sathian K (2008) Selective visuo-haptic processing of shape and texture. Hum Brain Mapp 29:1123–1138. 10.1002/hbm.20456
82. Sun HC, Welchman AE, Chang DHF, Di Luca M (2016) Look but don't touch: visual cues to surface structure drive somatosensory cortex. NeuroImage 128:353–361. 10.1016/j.neuroimage.2015.12.054
83. Tagliabue M, Arnoux L, McIntyre J (2013) Keep your head on straight: facilitating sensori-motor transformations for eye-hand coordination. Neuroscience 248:88–94. 10.1016/j.neuroscience.2013.05.051
84. Tagliabue M, McIntyre J (2011) Necessity is the mother of invention: reconstructing missing sensory information in multiple, concurrent reference frames for eye-hand coordination. J Neurosci 31:1397–1409. 10.1523/JNEUROSCI.0623-10.2011
85. Tagliabue M, McIntyre J (2012) Eye-hand coordination when the body moves: dynamic egocentric and exocentric sensory encoding. Neurosci Lett 513:78–83. 10.1016/j.neulet.2012.02.011
86. Tagliabue M, McIntyre J (2013) When kinesthesia becomes visual: a theoretical justification for executing motor tasks in visual space. PLoS One 8:e68438. 10.1371/journal.pone.0068438
87. Tagliabue M, McIntyre J (2014) A modular theory of multisensory integration for motor control. Front Comput Neurosci 8:1. 10.3389/fncom.2014.00001
88. Todd JT, Norman JF (2003) The visual perception of 3-D shape from multiple cues: are observers capable of perceiving metric structure? Percept Psychophys 65:31–47. 10.3758/BF03194781
89. van Beers RJ, Sittig AC, Gon JJ (1999) Integration of proprioceptive and visual position-information: an experimentally supported model. J Neurophysiol 81:1355–1364. 10.1152/jn.1999.81.3.1355
90. Vetter P, Goodbody SJ, Wolpert DM (1999) Evidence for an eye-centered spherical representation of the visuomotor map. J Neurophysiol 81:935–939. 10.1152/jn.1999.81.2.935
91. Villard E, Garcia-Moreno FT, Peter N, Clément G (2005) Geometric visual illusions in microgravity during parabolic flight. Neuroreport 16:1395–1398. 10.1097/01.wnr.0000174060.34274.3e
92. Vingerhoets G (2008) Knowing about tools: neural correlates of tool familiarity and experience. NeuroImage 40:1380–1391. 10.1016/j.neuroimage.2007.12.058
93. Wagenmakers EJ, Marsman M, Jamil T, Ly A, Verhagen J, Love J, Selker R, Gronau QF, Šmíra M (2018) Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications. Psychon Bull Rev 25:35–57. 10.3758/s13423-017-1343-3
94. Wagner M (1985) The metric of visual space. Percept Psychophys 38:483–495. 10.3758/BF03207058
95. Welchman AE (2016) The human brain in depth: how we see in 3D. Ann Rev Vis Sci 2:345–376. 10.1146/annurev-vision-111815-114605
96. Wolbers T, Klatzky RL, Loomis JM, Wutte MG, Giudice NA (2011) Modality-independent coding of spatial layout in the human brain. Curr Biol 21:984–989. 10.1016/j.cub.2011.04.038
97. Wydoodt P, Gentaz E, Streri A (2006) Role of force cues in the haptic estimations of a virtual length. Exp Brain Res 171:481–489. 10.1007/s00221-005-0295-4
98. Yau JM, Kim SS, Thakur PH, Bensmaia SJ (2016) Feeling form: the neural basis of haptic shape perception. J Neurophysiol 115:631–642. 10.1152/jn.00598.2015
