2019 Nov 20;123(1):243–258. doi: 10.1152/jn.00882.2018

Table 3.

Model parameters to predict axis of tilt/translation based on grouped OORs

Predicted Value (y)        a       b       c        d        e         RMSE   F Statistic   P Value

Combined tilt-translation model equation:
y = [a × cos(b × TiltAxis + c) + d] + (e × TypeOfStimulus)

    Magnitude ratio        −0.75   1.96    93.0     1.35     −4e-4     0.64   0.01          0.92
    Right eye X-component  −0.86   1.08    −20.36   −0.06    −4.2e-5   0.28   8.87e-5       0.99
    Right eye Y-component  −0.72   1.04    −77.34   −0.012   3e-4      0.25   0.01          0.94
    Right eye Z-component   0.39   1.02    −46.7    0.005    3e-4      0.34   0.03          0.87
    Left eye X-component   −0.83   1.01    19.8     −0.01    −3.6e-5   0.28   7.18e-5       0.99
    Left eye Y-component    0.75   0.98    92.1     0.003    −2.9e-5   0.23   5.46e-5       0.99
    Left eye Z-component    0.32   1.09    27.37    0.02     −3e-4     0.37   0.03          0.86

Model output for a nonlinear mixed-effects model created to predict a component of eye movement from the fixed effects (inputs) of tilt/translation axis and type of stimulus. The model equation is listed above the parameter rows. An ANOVA comparing this nonlinear mixed-effects model with a similar one fit after eliminating "TypeOfStimulus" showed no detectable difference between the two models (all P > 0.86), indicating that whether the eye movement was recorded during a tilt or a translation did not play a significant role in the model fit. Model results are shown in Fig. 11. OOR, otolith-ocular reflex; RMSE, root mean square error.
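As a minimal sketch of how the tabulated model is evaluated, the snippet below computes the predicted value y for one row of the table from its fitted parameters a–e. The assumptions that TiltAxis is expressed in degrees and that TypeOfStimulus is a 0/1 indicator (translation vs. tilt) are ours, not stated explicitly in this excerpt.

```python
import math

def predict(tilt_axis_deg, a, b, c, d, e, type_of_stimulus):
    """Evaluate y = [a*cos(b*TiltAxis + c) + d] + (e*TypeOfStimulus).

    Angles (TiltAxis and the phase offset c) are assumed to be in degrees;
    type_of_stimulus is assumed to be a 0/1 indicator.
    """
    return a * math.cos(math.radians(b * tilt_axis_deg + c)) + d + e * type_of_stimulus

# Magnitude-ratio row of the table: a = -0.75, b = 1.96, c = 93.0, d = 1.35, e = -4e-4
y = predict(0.0, -0.75, 1.96, 93.0, 1.35, -4e-4, 0)
```

Note that because e is tiny relative to a and d for every row, switching TypeOfStimulus between 0 and 1 shifts the prediction by only |e|, which is consistent with the ANOVA finding that stimulus type adds no detectable explanatory power.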