PLoS Comput Biol. 2010 Feb 19;6(2):e1000680. doi: 10.1371/journal.pcbi.1000680

Self versus Environment Motion in Postural Control

Kalpana Dokka 1,*, Robert V Kenyon 2, Emily A Keshner 3,4, Konrad P Kording 5
Editor: Jörn Diedrichsen
PMCID: PMC2824754  PMID: 20174552

Abstract

To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.

Author Summary

Visual cues typically provide ambiguous information about the orientation of our body in space. When we perceive relative motion between ourselves and the environment, it could have been caused by our own movement within the environment, by movement of the environment around us, or by simultaneous movement of both. The nervous system must resolve this ambiguity to control our body posture efficiently during stance. Here, we show that the nervous system could solve this problem by optimally combining visual signals with physical motion cues. Sensory ambiguity is a central problem during cue combination. Our results thus have implications for how the nervous system could resolve sensory ambiguity in other cue combination tasks.

Introduction

Our visual system senses the movement of objects relative to ourselves. Barring contextual information, a car approaching us rapidly while we stand still may produce the same visual motion cues as if we and the car were approaching each other. The nervous system thus needs to deal with this problem of ambiguity, which is reflected in the way we control our body posture [1]–[3]. Consequently, neuroscientists have extensively studied such situations. In a typical study, a subject stands in front of a visual display and postural reactions to varied movements of the displayed visual scene are measured [4]–[11]. Even in the absence of direct physical perturbations, subjects actively produce compensatory body movements in response to the movement of the visual scene. This indicates that subjects attribute part of the visual motion to their own body while they resolve the ambiguity in the visual stimuli.

Here we constructed a Bayesian attribution model (Fig. 1A) to examine how the nervous system may solve this problem of sensory ambiguity. This model shows that optimal solutions will generally take on the form of power laws. We found that the results from experiments with both healthy subjects and patients suffering from vestibular deficits are well fit by power laws. The nervous system thus appears to combine visual and physical motion cues to estimate our body movement for the control of posture in a fashion that is close to optimal.

Figure 1. Sensory ambiguity influences postural behavior.


(A) Graphical model, a compact way of describing the assumptions made by a Bayesian model. $v_b$ is the velocity of the body motion, while $v_e$ is the velocity of the environment motion. $v_{phys}$ represents a noisy estimate of the body velocity that is sensed by kinesthetic and vestibular signals. $v_{vis}$ represents the visually perceived velocity of the relative motion between the body and the environment. The attribution model estimates $\hat{v}_b$ from these perceived cues. (B) Distribution of body velocities during unperturbed stance, averaged across subjects tested in our experiment. (C) Experimental data and model fits for healthy subjects tested in our experiment.

Results

To test our Bayesian attribution model, we considered data from two published experiments with healthy subjects [4],[5], as well as a new experiment we performed to cover the range of visual scene velocities that is relevant to the model predictions. Any purely linear model, for example a Kalman controller, predicts that the gain of the postural response, that is, the influence of visual scene motion on the amplitude of postural reactions, remains constant. For these datasets, however, the gain of the postural response decreased with increasing velocities of visual scene motion (Fig. 1C, 2A and 2C; slope = −0.78±0.15 s.d. across datasets, p<0.005). At low velocities, the gain was close to one, which would be expected if the nervous system viewed the body as the sole source of the visually perceived motion. At higher velocities, though, the gain decreased, which would be expected if the nervous system no longer attributed all of the visually perceived motion to the body. The nervous system thus does not appear to simply assume that visually perceived motion is fully attributable to the body.

Figure 2. Gain of the postural response of healthy subjects and patients with vestibular loss.


(A) and (B) represent the experimental data and model fits of healthy subjects and patients tested in ref. 5 (Mergner et al. 2005). (C) and (D) represent the experimental data and model fits of healthy subjects and patients tested in ref. 4 (Peterka and Benolken 1995).

To explain this nonlinear influence of visual scene velocity on the postural response, we constructed a model that describes how the nervous system could solve the problem of sensory ambiguity (Fig. 1A). The nervous system can combine visual cues with physical motion cues, such as vestibular and kinesthetic inputs, to estimate our body movement [12]–[16]. However, our sensory information is not perfect, and recent studies have emphasized the importance of uncertainty in such cue combination problems [17]–[19]. Visual information has little noise when compared with physical motion cues [20]. However, it is ambiguous, as it does not directly reveal whether the body, the environment, or both are the source of the visually perceived movement. In comparison to visual cues, physical motion cues are typically noisier, but they are not characterized by the same kind of ambiguity. For these reasons, the nervous system can never be certain about the velocity of the body movement, but can at best estimate it using principles of optimal Bayesian inference [21]–[25]. To solve the ambiguity problem, the model estimated the velocity of the body's movement for which the perceived visual and physical motion cues were most likely.

Such estimation is only possible if the nervous system has additional information about two factors: typical movements in the environment and the typical uncertainty about body movements [26]. For example, if a car sometimes moves fast and our body typically moves slowly, then the nervous system would naturally attribute fast movement to the car and slow movement to our body. Indeed, recent research has indicated that human subjects use the fact that slow rather than fast movements are more frequent in the environment when they estimate the velocities of moving visual objects [27]–[30]. This distribution, used by human subjects, is called a prior. Following these studies, our model used a sparse prior for movements in the visual environment, that is, a prior that assigns high probability to slower movements in the environment and low probability to faster ones [29].

We wanted to estimate the form of the prior over body movements from our experimental data. We found that when subjects maintained an upright body posture while viewing a stationary visual scene, the distribution of their body velocity was best described by a Gaussian (Fig. 1B). Therefore, we used a Gaussian to represent the prior over body velocity.

The attribution model derives from five assumptions. We assume the above sparse prior over movements in the environment [29]. We assume that for vivid, high-contrast movement of the visual environment, visual cues provide an estimate of relative movement with vanishing uncertainty. We assume a Gaussian prior over body movement (see Methods for details). We also assume a Gaussian likelihood for the physical motion cues, which indicate that the body is not actually moving and is close to the upright position. Lastly, we assume that visual scene velocities are large in comparison to the uncertainty in our detection of our body movements [31]. Under these assumptions, we can analytically derive that the best solution has a gain that varies as a power law with the visual scene velocity (see Methods for details). We thus obtain a compact, two-parameter model that predicts the influence of visual perturbations on the estimates of body movement.

Our attribution model calculates how the nervous system should combine information from visual and physical senses to optimally estimate the velocity of body movement. However, the nervous system does not need to solve its problems in an optimal way, but may use simple heuristics [32]. We thus proceeded to compare the attribution model with other models in its ability to explain the decrease in the gain of postural reactions. For this purpose, we compared models using the Bayesian Information Criterion (BIC) which is a technique that allows the comparison of models with different numbers of free parameters [33]. For the gains observed in our experiment (Fig. 1C), the Bayesian model had a BIC of −7.5±1.84 (mean BIC±s.e.m. across subjects). We found that a linear model that predicted constant gain of postural reactions could not explain the observed results (BIC = 1.08±0.59, p<0.001, paired t-test between BIC values).

We then considered a model in which the amplitude of postural response increased logarithmically up to a threshold stimulus velocity and then saturated. This model predicted the response gains observed at higher scene velocities more poorly than the attribution model (BIC = −3.45±0.99, p<0.05). We also tested another model in which the gain was initially constant but decreased monotonically with increasing visual scene velocities. This model did worse at predicting the gain than the Bayesian model (BIC = 5.82±0.04, p<0.001). Thus, the Bayesian model that estimated the velocity of the body movement best fit the available data.

The attribution model can also be applied to human behavior in disease states. Patients with bilateral vestibular loss have vestibular cues of inferior quality [34]. The attribution model suggests that these patients' postural behavior would rely more strongly on visual feedback and that their gain should decrease less steeply as a function of stimulus velocity. Indeed, patients tested in previous studies [4],[5] showed a greater influence of vision on posture and gains that decreased less steeply (Fig. 2B, 2D; slope = −0.22±0.1 s.d. across datasets, p<0.005) when compared with healthy subjects, a phenomenon that is well mimicked by the attribution model.

The postural behavior of patients showed marked differences from that of healthy subjects [4]. At low visual scene velocities, patients and healthy subjects had similar gain values. However, at higher scene velocities, patients exhibited larger gains than healthy subjects. If the postural responses in patients were only influenced by elevated noise in the vestibular channels, the gain should vary in a similar manner at all visual scene velocities; that is, the gain of patients should be higher than that of healthy subjects at all visual scene velocities. The increased gain of patients only at higher scene velocities instead points to a change in how patients interact with large movements in the visual environment. In our model, the best fit to the data of healthy subjects corresponds to a prior of about $p(v_e) \propto \exp(-|v_e|^{1.2})$, while the fit to the patients' data corresponds to a prior of about $p(v_e) \propto \exp(-|v_e|^{1.8})$ (see Methods for details). It would thus appear that rather than a sparse prior, patients have a prior that is closer to a Gaussian. It is not surprising that patients interact with the extrinsic environment differently from healthy subjects. In fact, such patients can develop space and motion phobia, particularly in situations where there is a conflict between visual and vestibular cues, and may actively avoid such conflicting environments [35]–[37]. Our model fits suggest that patients may seek out environments that are devoid of fast movement of large-field stimuli. This is a prediction that can be tested in future research, for example by equipping patients with telemetric devices with cameras that record velocities in their environment.

Discussion

When we visually perceive displacement between ourselves and the environment, it may be caused by the movement of our body, movement of the environment, or both. In this paper, we have presented a model that formalizes how the nervous system could solve the problems of both ambiguity (self vs environment) and noise in perceived sensory cues. We suggest that the nervous system could solve these problems by estimating the movement of the body as per the principles of Bayesian calculations. We found that the model can account for the gain of postural responses when both healthy subjects and patients with vestibular loss viewed movement of a visual scene at various velocities. Importantly, our model predicts a simple functional form, power laws, as the best cue combination strategy. This makes it easy to test predictions without having to implement complicated estimation procedures.

Postural stabilization during stance is a two-step process comprising estimation and control; in this paper we have focused only on estimation. Computational models in the past have examined how the nervous system implements this two-step process and have explained a wide range of data [5],[8],[34],[38]–[40]. In these models, cue combination was implemented as a change in sensory weights [8],[14] and through the incorporation of nonlinear elements [5],[34],[39]. The control aspect was typically implemented by approximating the human body as a single- or double-link inverted pendulum, linearized about the upright position. These models are powerful tools for describing human behavior, as they can describe changes both in amplitude and in phase as stimulus parameters are varied. As current models already largely separate postural control into an estimation part and an estimation-dependent control part, it would be straightforward to combine our estimation system with a dynamical control system.

When the control strategy is linear, any nonlinearity has to come from the estimation stage. If control is nonlinear, then there will be interactions between nonlinearities in estimation and control, and parts of the effects we describe here may be due to nonlinearities in control and parts due to estimation. Our attribution model exclusively focuses on the nonlinearity inherent in the estimation process. The influence of the nonlinearity in each stage could be tested by experiments that decouple estimation from control. Importantly, though past models have assumed nonlinearities in the estimation part of the model [5],[8],[14],[34], we give a systematic reason for why this nonlinearity should exist and why it should have approximately the form that has been assumed in past studies.

To test our model, we used visual scene velocities that were, in all likelihood, larger than the uncertainty in our perception of our body sway. Our model analytically demonstrates that for these velocities, the gain is proportional to a power law over the visual scene velocity. This raises the question of how the model would perform over a different range of scene velocities. There are two possibilities. First, the nervous system may use power laws to estimate the gain of the postural responses at all visual scene velocities. However, this solution is untenable, as it would predict infinite gain near zero velocity. Second, at very small scene velocities, the nervous system may adopt a strategy different from power laws. We argue in favor of the latter possibility. We predict that at scene velocities close to our perceptual threshold of body sway, our attribution model would fail to explain the gain of postural responses. In this situation, the Taylor series expansion that we use can no longer be truncated after the first-order term, and quadratic terms need to be considered (see Methods). The attribution model will predict power laws as long as the prior over visual movements is locally smooth within the range of uncertainty in our perception of body movement.

Ambiguity is a central aspect of various cue combination problems in perception and motor control and here we have characterized its influence on postural control. The success of the attribution model in predicting human behavior suggests that the nervous system may employ simple schemes, such as power laws, to implement the best solution to the problem of sensory ambiguity. While recent research indicates how the nervous system could integrate cues that have Gaussian likelihoods [41] or priors [29], little is known about the way non-Gaussian probability distributions may be represented at the neuronal level. The nonlinearity in cue combination that we observed here raises interesting questions about the underlying neural basis of these computations in the nervous system.

Methods

Ethics statement

Ten healthy young adults (age: 20–34 years) participated in our experiment. Subjects had no history of neurological or postural disorders and had normal or corrected-to-normal vision. Subjects were informed about the experimental procedures and informed consent was obtained as per the guidelines of the Institutional Review Board of Northwestern University.

Experimental setup

A computer-generated virtual reality system was used to simulate the movement of the visual environment. Subjects viewed a virtual scene projected via a stereo-capable projector (Electrohome Marquis 8500) onto a 2.6 m×3.2 m back-projection screen. The virtual scene consisted of a 30.5 m wide by 6.1 m high by 30.5 m deep room containing round columns, patterned rugs, and a painted ceiling. Beyond the virtual scene was a landscape consisting of mountains, meadows, sky and clouds. Subjects were asked to wear liquid crystal stereo shutter glasses (Stereographics, Inc.), which separated the field-sequential stereo images into right- and left-eye images. Reflective markers (Motion Analysis, Inc.) attached to the shutter glasses provided the real-time orientation of the head, which was used to compute correct perspective and stereo projections of the scene. Consequently, virtual objects retained their true perspective and position in space regardless of the subject's movement.

Subjects stood in front of the visual scene with their feet shoulder-width apart and their arms bent approximately 90° at the elbows. The location of the subjects' feet on the support surface was marked; subjects were instructed to stand at the same location at the beginning of each trial. During each trial, subjects were instructed to maintain an upright posture while looking straight ahead at the visual scene. Subjects viewed anterior-posterior sinusoidal oscillation of the scene at 0.2 Hz and five peak amplitudes: 1, 3, 25, 100 and 150 cm. The visual scene thus oscillated at peak velocities of 1.2, 3.7, 31, 125 and 188 cm/s, respectively. Subjects viewed each scene velocity once, for a period of 60 s, in random order. In addition, subjects experienced a control condition in which they viewed a stationary visual scene.
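Since the scene position follows $A \sin(2\pi f t)$, its peak velocity is $2\pi f A$; the quoted peak velocities follow directly from the five amplitudes. A quick check in Python (illustrative only, not part of the original analysis):

```python
import math

f = 0.2  # oscillation frequency of the visual scene (Hz)
for A in (1, 3, 25, 100, 150):  # peak amplitudes (cm)
    # peak velocity of A*sin(2*pi*f*t) is 2*pi*f*A
    print(f"A = {A:3d} cm -> peak velocity = {2 * math.pi * f * A:.2f} cm/s")
# 1.26, 3.77, 31.42, 125.66, 188.50 cm/s, i.e., the quoted 1.2, 3.7, 31, 125, 188
```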

Reflective markers were placed on the shoulder joints and the fifth lumbar vertebra. A six-camera infra-red system (Motion Analysis, Inc.) was used to record the displacement of the reflective markers at 120 Hz. Displacement data of the markers were low-pass filtered using a fourth-order Butterworth digital filter with a cutoff at 6 Hz. Trunk displacement, chosen as the indicator of the postural response, was calculated from the displacement of the shoulder and spine markers [42]. The amplitude of the postural response at the frequency of the visual scene motion, that is 0.2 Hz, was calculated in a manner adopted from neurophysiological studies [43],[44]: the amplitude and phase of a 0.2 Hz sinusoid were estimated such that the squared error between the trunk displacement and the fitted sinusoid was minimized. The amplitude of the fitted sinusoid thus indicated the amplitude of the postural response at the frequency of the visual scene motion. The gain of the trunk displacement was then computed as the ratio of the amplitude of the fitted sinusoid to the amplitude of the visual scene motion.
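The amplitude extraction just described is a linear least-squares problem, because a sinusoid of known frequency is linear in its sine and cosine coefficients. A minimal sketch, assuming `trunk_disp` holds the filtered trunk displacement sampled at 120 Hz (the variable names are ours, not from the original analysis):

```python
import numpy as np

def response_gain(trunk_disp, scene_amplitude, f=0.2, fs=120.0):
    """Amplitude of the best-fitting sinusoid at frequency f, divided by the
    amplitude of the visual scene motion (i.e., the gain of the response)."""
    t = np.arange(len(trunk_disp)) / fs
    # a*sin(2*pi*f*t) + b*cos(2*pi*f*t) is linear in the coefficients (a, b)
    X = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    (a, b), *_ = np.linalg.lstsq(X, trunk_disp, rcond=None)
    amplitude = np.hypot(a, b)   # amplitude of the fitted sinusoid
    phase = np.arctan2(b, a)     # phase of the fitted sinusoid (not used here)
    return amplitude / scene_amplitude
```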

Bayesian model of ambiguity resolution

We formalize the ambiguity problem encountered by the nervous system with the help of a graphical model (Fig. 1A). The visual scene projected on the display oscillates sinusoidally with velocity $v_e$, while the velocity of the body movement is $v_b$. $v_{phys}$ represents a noisy estimate of body velocity that is sensed by vestibular and kinesthetic signals. On the other hand, $v_{vis}$ represents the visually perceived velocity of the relative movement between the body and the environment. Our Bayesian model combines the sensory cues, $v_{vis}$ and $v_{phys}$, to obtain the best estimate of body velocity, $\hat{v}_b$. As the amplitude of postural reactions is influenced by the subject's perceived body movement [2],[45], we assume that the nervous system produces body movements proportional to the estimated body velocity $\hat{v}_b$.
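As an aside, the generative process of Fig. 1A can be sketched in a few lines of Python. This is a minimal simulation under the distributional assumptions introduced below (a sparse prior over $v_e$, a Gaussian prior over $v_b$, Gaussian physical noise, and the sign convention $v_{vis} = v_e - v_b$); all parameter values are illustrative, not fitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 1.32     # exponent of the sparse prior over environment velocity
sigma_b = 1.0    # s.d. of the Gaussian prior over body velocity (illustrative)
sigma_p = 1.0    # s.d. of vestibular/kinesthetic noise (illustrative)

# gennorm has density proportional to exp(-|x|**alpha), i.e., the sparse prior
v_e = stats.gennorm(alpha).rvs(random_state=rng)  # environment velocity
v_b = rng.normal(0.0, sigma_b)                    # body velocity from its prior
v_phys = v_b + rng.normal(0.0, sigma_p)           # noisy physical estimate of v_b
v_vis = v_e - v_b                                 # perceived relative velocity
```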

Using Bayes' rule we obtain:

$$p(v_b \mid v_{vis}, v_{phys}) \propto p(v_{vis}, v_{phys} \mid v_b)\, p(v_b) \qquad (1)$$

We assume that the visual and physical channels are affected by independent noise. Therefore, we get:

$$p(v_b \mid v_{vis}, v_{phys}) \propto p(v_{vis} \mid v_b)\, p(v_{phys} \mid v_b)\, p(v_b) \qquad (2)$$

We estimated the form of the prior over body velocity, $p(v_b)$, from our data. In our experiment, subjects experienced a control condition in which they maintained upright body posture while viewing a stationary visual scene. We computed the average velocity of the trunk displacement across all subjects [42]. We then computed a histogram of the body velocity and observed that a Gaussian best described the distribution of body velocity (Fig. 1B). We therefore assumed that subjects' prior over body movements would be represented by a Gaussian. While the actual body movements during unperturbed stance are large, the more relevant quantity is the underlying uncertainty in our perception of our body sway, which is much narrower than the width of the distribution of actual body velocities seen in Fig. 1B [31]. This is because, for small body movements during normal stance, the nervous system may not constrain the body even though it is aware that the body has moved away from the upright position [46].
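A minimal sketch of this step in Python; `body_velocity` stands in for the recorded trunk-velocity samples (synthesized here, since the raw data are not reproduced):

```python
import numpy as np
from scipy import stats

# Stand-in for pooled trunk-velocity samples from the stationary-scene condition
body_velocity = np.random.default_rng(1).normal(0.0, 0.5, size=5000)

mu, sd = stats.norm.fit(body_velocity)  # maximum-likelihood Gaussian fit
# Compare the fitted density against the empirical histogram (cf. Fig. 1B)
counts, edges = np.histogram(body_velocity, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
gaussian = stats.norm.pdf(centers, mu, sd)
```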

As the likelihood of the physical motion cues, $p(v_{phys} \mid v_b)$, can also be represented by a Gaussian, we define:

$$p(v_{phys} \mid v_b)\, p(v_b) \propto \exp\!\left(-\frac{(v_b - \mu)^2}{2\sigma^2}\right) \qquad (3)$$

Here the right-hand side is a Gaussian for the combined prior-and-likelihood, with mean $\mu$ and variance $\sigma^2$.

The likelihood of the visual motion cues, $p(v_{vis} \mid v_b)$, is given by:

$$p(v_{vis} \mid v_b) = \int p(v_{vis} \mid v_b, v_e)\, p(v_e)\, dv_e \qquad (4)$$

Humans expect visual objects in their environment to move slowly more often than rapidly. This bias has been interpreted as a prior in a Bayesian system. We therefore use a sparse prior of the functional form $p(v_e) \propto \exp(-|v_e|^{\alpha})$, with exponent $\alpha < 2$ [29]. As visual cues are precise when compared with other sensory cues, we assume that the variance of the noise in the visual channels is negligible. Furthermore, in the experimental situations we model here, movement of the visual display is relatively fast in comparison to the typical uncertainty subjects may have about their body velocity [31].

Because the visual noise is negligible, $p(v_{vis} \mid v_b, v_e)$ is sharply peaked at $v_{vis} = v_e - v_b$. We therefore marginalize over all possible $v_e$ to obtain:

$$p(v_{vis} \mid v_b) = p(v_e = v_{vis} + v_b) \propto \exp\!\left(-|v_{vis} + v_b|^{\alpha}\right) \qquad (5)$$

Substituting Equations 3 and 5 in Equation 2, we get:

$$p(v_b \mid v_{vis}, v_{phys}) \propto \exp\!\left(-\frac{(v_b - \mu)^2}{2\sigma^2}\right) \exp\!\left(-|v_{vis} + v_b|^{\alpha}\right) \qquad (6)$$

In the situations we model here, subjects stood on a stationary support surface. Thus, the physical motion cues indicated that the body was close to the upright position; that is, $v_{phys} = 0$ and hence $\mu = 0$.

We therefore get:

$$p(v_b \mid v_{vis}) \propto \exp\!\left(-\frac{v_b^2}{2\sigma^2}\right) \exp\!\left(-|v_{vis} + v_b|^{\alpha}\right) \qquad (7)$$

For body movements close to the upright position, we can expand the second exponent in Equation 7 in a Taylor series around $v_b = 0$ and drop terms of order 2 and higher. For $v_{vis} > 0$ we thus get:

$$p(v_b \mid v_{vis}) \propto \exp\!\left(-\frac{v_b^2}{2\sigma^2}\right) \exp\!\left(-v_{vis}^{\alpha} - \alpha\, v_{vis}^{\alpha - 1}\, v_b\right) \qquad (8)$$

Importantly, when visual scene velocities are large in comparison to the typical uncertainty in our perception of our body movements, the maximum of the (visual) environmental prior is far away. Because that maximum is far away and the uncertainty in the perception of body movement is narrow, the approximation that keeps only the zero- and first-order terms is well justified.

The resulting estimate represents a Gaussian with a maximum at:

$$\hat{v}_b = -\,\sigma^2 \alpha\, v_{vis}^{\alpha - 1} \qquad (9)$$

Thus, as long as the environment velocity is large in comparison to the typical uncertainty in our perception of body sway, the best estimate of the body velocity $\hat{v}_b$ can be represented as a power law over the environment velocity $v_e$ (for a nearly stationary body, $|v_{vis}| \approx |v_e|$):

$$|\hat{v}_b| = \sigma^2 \alpha\, |v_e|^{\alpha - 1} \qquad (10)$$
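The accuracy of this approximation can be checked numerically by maximizing the full posterior of Equation 7 and comparing the maximum against the closed form of Equation 9. A minimal sketch, using the healthy-subject parameter values reported below:

```python
import numpy as np
from scipy.optimize import minimize_scalar

sigma2, alpha = 0.34, 1.32  # fitted values for healthy subjects (see below)

def neg_log_posterior(v_b, v_vis):
    # Negative log of Equation 7: Gaussian term plus sparse visual term
    return v_b**2 / (2 * sigma2) + abs(v_vis + v_b) ** alpha

for v_vis in (5.0, 30.0, 120.0):  # velocities well above sway uncertainty
    full = minimize_scalar(neg_log_posterior, args=(v_vis,)).x
    power_law = -sigma2 * alpha * v_vis ** (alpha - 1)  # Equation 9
    print(f"v_vis = {v_vis:5.1f}: full = {full:+.3f}, power law = {power_law:+.3f}")
```

The two estimates agree to within a few percent at the smallest velocity and become indistinguishable as the scene velocity grows, which is exactly the regime the derivation assumes.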

Our model thus has two free parameters: $\sigma^2$, the variance of the noise in the combined prior-and-likelihood of the physical motion cues, and $\alpha$, the exponent of the prior over environmental velocities.

We fitted the model (Equation 10) to the experimentally measured gain of healthy subjects tested in our experiment. We then fitted the model to the experimentally measured gains of healthy subjects and vestibular-deficient patients tested in previous studies [4],[5]. We chose the model parameters such that the mean squared error between the model fits and the experimental data was minimized.

For healthy subjects, the values of the free parameters were as follows: $\sigma^2 = 0.34$ and $\alpha = 1.32$ (for subjects tested in our experiment); $\sigma^2 = 0.37$ and $\alpha = 1.28$ (for subjects tested by Peterka and Benolken); $\sigma^2 = 0.33$ and $\alpha = 1.03$ (for subjects tested by Mergner et al.). For vestibular-deficient patients, the values of the free parameters were as follows: $\sigma^2 = 0.46$ and $\alpha = 1.7$ (for patients tested by Peterka and Benolken); $\sigma^2 = 0.524$ and $\alpha = 1.85$ (for patients tested by Mergner et al.).
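A minimal sketch of this fitting step, using `scipy.optimize.curve_fit` to minimize the squared error; the gain values below are placeholders rather than the published data:

```python
import numpy as np
from scipy.optimize import curve_fit

def model_gain(v_e, sigma2, alpha):
    # Equation 10 expressed as a gain: |v_b_hat| / |v_e|
    return sigma2 * alpha * v_e ** (alpha - 2.0)

v_e = np.array([1.2, 3.7, 31.0, 125.0, 188.0])   # scene velocities (cm/s)
gain = np.array([0.90, 0.55, 0.25, 0.10, 0.07])  # placeholder measurements
(sigma2_hat, alpha_hat), _ = curve_fit(model_gain, v_e, gain, p0=(0.3, 1.3))
```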

Model comparisons

To test the performance of our attribution model, we compared it with other simple models of postural control.

We first considered a linear model in which the gain of the postural response was constant (Fig. 3A). This model had a single free parameter, the gain $g$, and had the functional form:

$$\hat{v}_b = g\, v_e$$

Figure 3. Alternative models of postural control.


(A) Model that predicts constant gain of the postural response. (B) Model where the amplitude of the postural response increases logarithmically with visual scene velocity and then saturates. (C) Model where the gain is initially constant and then monotonically decreases with the visual scene velocity. Here $v_e$ represents the visual scene velocity, $v_b$ represents the true body velocity, and $\hat{v}_b$ indicates the estimate of the body velocity calculated by the models.

We then developed a nonlinear model that incorporated the findings of published empirical and modeling studies. The amplitude of postural reaction is known to increase logarithmically with the visual scene velocity until it saturates [4]. We tested a model of the functional form (Fig. 3B):

$$\hat{v}_b = s \log(v_e), \qquad v_e \le v_{sat}$$
$$\hat{v}_b = s \log(v_{sat}), \qquad v_e > v_{sat}$$

Here $v_{sat}$ represents the visual scene velocity at which saturation occurs. We chose $v_{sat} = 2.8$ cm/s based on previous findings in the literature [5]. This model had a single free parameter, the slope $s$.

We considered another model where the gain of postural reactions is initially constant, but decreases monotonically with increasing visual scene velocities (Fig. 3C). This model, with three free parameters, has the functional form:

$$\hat{v}_b = g\, v_e, \qquad v_e \le v_t$$
$$\hat{v}_b = g\, v_t^{\,c}\, v_e^{\,1 - c}, \qquad v_e > v_t$$

Here $g$ is the initial gain, $v_t$ is the scene velocity beyond which the gain decreases, and $c > 0$ sets the rate of decrease, so that the gain falls off as $(v_t / v_e)^{c}$ beyond $v_t$.

We fitted these models to the gain values of each subject tested in our experiment. We computed the Bayesian Information Criterion for each subject and for each model. We then performed a paired t-test to determine if there was a significant difference in the BIC values for different models.
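For least-squares fits with Gaussian residuals, the BIC can be computed from the residual sum of squares as $n \ln(\mathrm{RSS}/n) + k \ln(n)$, up to an additive constant. A minimal sketch of the comparison; the arrays are placeholders, not the measured gains:

```python
import numpy as np
from scipy import stats

def bic(observed, predicted, n_params):
    """BIC of a least-squares fit with Gaussian residuals (up to a constant)."""
    n = len(observed)
    rss = np.sum((observed - predicted) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

gain_obs = np.array([0.90, 0.55, 0.25, 0.10, 0.07])  # placeholder gains
gain_fit = np.array([0.88, 0.57, 0.24, 0.11, 0.08])  # placeholder model fit
print(bic(gain_obs, gain_fit, n_params=2))

# Per-subject BICs of two models would then be compared with a paired t-test:
# stats.ttest_rel(bic_bayes_per_subject, bic_linear_per_subject)
```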

Footnotes

The authors have declared that no competing interests exist.

This work was supported by NIH grants RO1 DC05235 (awarded to EAK), 5RO1 NS057814 and 1R01NS063399 (awarded to KPK), and by the Falk Trust and the Chicago Community Trust (awarded to KPK). The funding agencies did not have any role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Fushiki H, Kobayashi K, Asai M, Watanabe Y. Influence of visually induced self-motion on postural stability. Acta Otolaryngol. 2005;125:60–64. doi: 10.1080/00016480410015794.
2. Thurrell AE, Bronstein AM. Vection increases the magnitude and accuracy of visually evoked postural responses. Exp Brain Res. 2002;147:558–560. doi: 10.1007/s00221-002-1296-1.
3. Keshner EA, Dokka K, Kenyon RV. Influences of the perception of self-motion on postural parameters. Cyberpsychol Behav. 2006;9:163–166. doi: 10.1089/cpb.2006.9.163.
4. Peterka RJ, Benolken MS. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway. Exp Brain Res. 1995;105:101–110. doi: 10.1007/BF00242186.
5. Mergner T, Schweigart G, Maurer C, Blumle A. Human postural responses to motion of real and virtual visual environments under different support base conditions. Exp Brain Res. 2005;167:535–556. doi: 10.1007/s00221-005-0065-3.
6. Dokka K, Kenyon RV, Keshner EA. Influence of visual scene velocity on segmental kinematics during stance. Gait Posture. 2009;30:211–216. doi: 10.1016/j.gaitpost.2009.05.001.
7. Keshner EA, Kenyon RV. The influence of an immersive virtual environment on the segmental organization of postural stabilizing responses. J Vestib Res. 2000;10:207–219.
8. Peterka RJ. Sensorimotor integration in human postural control. J Neurophysiol. 2002;88:1097–1118. doi: 10.1152/jn.2002.88.3.1097.
9. Jeka J, Oie KS, Kiemel T. Multisensory information for human postural control: integrating touch and vision. Exp Brain Res. 2000;134:107–125. doi: 10.1007/s002210000412.
10. Lee DN, Lishman JR. Visual proprioceptive control of stance. Journal of Human Movement Studies. 1975;1:87–95.
11. Lishman JR, Lee DN. The autonomy of visual kinaesthesis. Perception. 1973;2:287–294. doi: 10.1068/p020287.
12. Wright WG, DiZio P, Lackner JR. Vertical linear self-motion perception during visual and inertial motion: more than weighted summation of sensory inputs. J Vestib Res. 2005;15:185–195.
13. Horak FB, MacPherson JM. Postural orientation and equilibrium. In: Rowell LB, Sheperd JT, editors. Exercise: Regulation and Integration of Multiple Systems. New York: Oxford University Press; 1996. pp. 255–292.
14. Oie KS, Kiemel T, Jeka JJ. Multisensory fusion: simultaneous re-weighting of vision and touch for the control of human posture. Brain Res Cogn Brain Res. 2002;14:164–176. doi: 10.1016/s0926-6410(02)00071-x.
15. Keshner EA, Kenyon RV, Langston J. Postural responses exhibit multisensory dependencies with discordant visual and support surface motion. J Vestib Res. 2004;14:307–319.
16. Blumle A, Maurer C, Schweigart G, Mergner T. A cognitive intersensory interaction mechanism in human postural control. Exp Brain Res. 2006;173:357–363. doi: 10.1007/s00221-006-0384-z.
17. Landy MS, Maloney LT, Johnston EB, Young M. Measurement and modeling of depth cue combination: in defense of weak fusion. Vision Res. 1995;35:389–412. doi: 10.1016/0042-6989(94)00176-m.
18. Sober SJ, Sabes PN. Flexible strategies for sensory integration during motor planning. Nat Neurosci. 2005;8:490–497. doi: 10.1038/nn1427.
19. Ernst MO, Banks MS. Humans integrate visual and haptic information in a statistically optimal fashion. Nature. 2002;415:429–433. doi: 10.1038/415429a.
20. van Beers RJ, Sittig AC, Denier van der Gon JJ. The precision of proprioceptive position sense. Exp Brain Res. 1998;122:367–377. doi: 10.1007/s002210050525.
21. Ernst MO, Bulthoff HH. Merging the senses into a robust percept. Trends Cogn Sci. 2004;8:162–169. doi: 10.1016/j.tics.2004.02.002.
22. Knill DC. Robust cue integration: a Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. J Vis. 2007;7(7):5.1–24. doi: 10.1167/7.7.5.
23. Wolpert DM, Ghahramani Z, Jordan MI. An internal model for sensorimotor integration. Science. 1995;269:1880–1882. doi: 10.1126/science.7569931.
24. Knill DC, Richards W. Perception as Bayesian Inference. New York: Cambridge University Press; 1996.
25. Hartung B, Schrater PR, Bulthoff HH, Kersten D, Franz VH. Is prior knowledge of object geometry used in visually guided reaching? J Vis. 2005;5:504–514. doi: 10.1167/5.6.2.
26. Kording KP, Wolpert DM. Bayesian decision theory in sensorimotor control. Trends Cogn Sci. 2006;10:319–326. doi: 10.1016/j.tics.2006.05.003.
27. Simoncelli EP, Adelson EH, Heeger DJ. Probability distributions of optic flow. In: Proc IEEE Comput Soc Conf Comput Vision and Pattern Recogn; 1991. pp. 310–315.
28. Stocker AA, Simoncelli EP. Sensory adaptation within a Bayesian framework for perception. In: Advances in Neural Information Processing Systems, Volume 18; 2006.
29. Stocker AA, Simoncelli EP. Noise characteristics and prior expectations in human visual speed perception. Nat Neurosci. 2006;9:578–585. doi: 10.1038/nn1669.
30. Weiss Y, Simoncelli EP, Adelson EH. Motion illusions as optimal percepts. Nat Neurosci. 2002;5:598–604. doi: 10.1038/nn0602-858.
31. Fitzpatrick R, McCloskey DI. Proprioceptive, visual and vestibular thresholds for the perception of sway during standing in humans. J Physiol. 1994;478(Pt 1):173–186. doi: 10.1113/jphysiol.1994.sp020240.
32. Gigerenzer G, Todd PM, ABC Research Group. Simple heuristics that make us smart. New York: Oxford University Press; 1999.
33. MacKay D. Information Theory, Inference, and Learning Algorithms. Cambridge University Press; 2002.
34. Kuo AD. An optimal state estimation model of sensory integration in human postural balance. J Neural Eng. 2005;2:S235–249. doi: 10.1088/1741-2560/2/3/S07.
35. Cohen H. Vestibular rehabilitation improves daily life function. Am J Occup Ther. 1994;48:919–925. doi: 10.5014/ajot.48.10.919.
36. Jacob RG, Furman JM, Durrant JD, Turner SM. Panic, agoraphobia, and vestibular dysfunction. Am J Psychiatry. 1996;153:503–512. doi: 10.1176/ajp.153.4.503.
37. Beidel DC, Horak FB. Behavior therapy for vestibular rehabilitation. J Anxiety Disord. 2001;15:121–130. doi: 10.1016/s0887-6185(00)00046-3.
38. van der Kooij H, Jacobs R, Koopman B, Grootenboer H. A multisensory integration model of human stance control. Biol Cybern. 1999;80:299–308. doi: 10.1007/s004220050527.
39. van der Kooij H, Jacobs R, Koopman B, van der Helm F. An adaptive model of sensory integration in a dynamic environment applied to human stance control. Biol Cybern. 2001;84:103–115. doi: 10.1007/s004220000196.
40. Carver S, Kiemel T, van der Kooij H, Jeka JJ. Comparing internal models of the dynamics of the visual environment. Biol Cybern. 2005;92:147–163. doi: 10.1007/s00422-004-0535-x.
41. Ma WJ, Beck JM, Latham PE, Pouget A. Bayesian inference with probabilistic population codes. Nat Neurosci. 2006;9:1432–1438. doi: 10.1038/nn1790.
42. Winter DA. Biomechanics and control of human movement. Wiley; 2004.
43. Bryan AS, Angelaki DE. Optokinetic and vestibular responsiveness in the macaque rostral vestibular and fastigial nuclei. J Neurophysiol. 2009;101:714–720. doi: 10.1152/jn.90612.2008.
44. Liu S, Angelaki DE. Vestibular signals in macaque extrastriate visual cortex are functionally appropriate for heading perception. J Neurosci. 2009;29:8936–8945. doi: 10.1523/JNEUROSCI.1607-09.2009.
45. Tanahashi S, Ujike H, Kozawa R, Ukai K. Effects of visually simulated roll motion on vection and postural stabilization. J Neuroeng Rehabil. 2007;4:39. doi: 10.1186/1743-0003-4-39.
46. Collins JJ, De Luca CJ. Open-loop and closed-loop control of posture: a random-walk analysis of center-of-pressure trajectories. Exp Brain Res. 1993;95:308–318. doi: 10.1007/BF00229788.
