PLOS ONE. 2020 Aug 19;15(8):e0236732. doi: 10.1371/journal.pone.0236732

Determining mean and standard deviation of the strong gravity prior through simulations

Björn Jörges 1,*, Joan López-Moliner 2
Editor: Robin Baurès
PMCID: PMC7446919  PMID: 32813686

Abstract

Humans expect downwards moving objects to accelerate and upwards moving objects to decelerate. These results have been interpreted as humans maintaining an internal model of gravity. We have previously suggested an interpretation of these results within a Bayesian framework of perception: earth gravity could be represented as a Strong Prior that overrules noisy sensory information (Likelihood) and therefore attracts the final percept (Posterior) very strongly. Based on this framework, we use published data from a timing task involving gravitational motion to determine the mean and the standard deviation of the Strong Earth Gravity Prior. To obtain its mean, we refine a model of mean timing errors we proposed in a previous paper (Jörges & López-Moliner, 2019), while expanding the range of conditions under which it yields adequate predictions of performance. This underscores our previous conclusion that the gravity prior is likely to be very close to 9.81 m/s2. To obtain the standard deviation, we identify different sources of sensory and motor variability reflected in timing errors. We then model timing responses based on quantitative assumptions about these sensory and motor errors for a range of standard deviations of the earth gravity prior, and find that a standard deviation of around 2 m/s2 makes for the best fit. This value is likely to represent an upper bound, as there are strong theoretical reasons, along with supporting empirical evidence, for the standard deviation of the earth gravity prior being lower than this value.

Introduction

There is ample evidence that humans represent earth gravity and use it for a variety of tasks such as interception [1–10], time estimation [11], the perception of biological motion [12] and many more. Recently, we have shown that gravity-based prediction for motion during an occlusion matched performance under a 1g expectation not only qualitatively, but also quantitatively [13]. This was an important finding in support of our interpretation of the above results as a strong prior in a Bayesian framework of perception [14]. The results presented in [13] indicate that temporal errors in a timing task were consistent with a mean of 1g (9.81 m/s2) when occlusions were long enough. In the present paper, we extend the simulations brought forward in our previous paper: First, we consider how accounting for the Aubert-Fleischl effect, which leads humans to perceive moving objects at about 80% of their actual speed when they pursue the target with their eyes [15–17], can extend our simple 1g-based model to shorter occlusions. Furthermore, to fully characterize a prior, we need to indicate not only its mean, but also its standard deviation. The second goal of the present paper is thus to determine the standard deviation of the strong gravity prior. We aim to achieve this goal through simulations based on assumptions about the different sources of noise relevant to the task at hand.

In this paper, we adopt a constructivist-computational framework [18, 19]; we view perception as a process by which humans infer the state of the world around them based on both prior knowledge and online sensory information in order to guide their interactions with the external world. Please note that other psychological traditions, such as ecological perception [20], deny the necessity of prior knowledge. Within our constructivist framework, we envision (visual) perception as a two-step process: Encoding and Decoding [21, 22]. During Encoding, low-level signals such as luminosity, retinal velocities or orientation are picked up by the perceptual system and represented as neural activity. However, these low-level sensory signals, and the neural activity they are represented as, can be ambiguous with respect to the state of the world: for example, the same retinal velocity can correspond to vastly different physical velocities, depending on the distance between observer and object. An object that moves 6 m in front of the observer in the fronto-parallel plane with a physical speed of 1 m/s elicits a retinal speed of about 9.5°/s when fixation is maintained. The same retinal speed could correspond to a target that moves at a physical speed of 1.2 m/s 7 m in front of the observer. Decoding is the process of interpreting optic flow information. In Decoding, humans often combine sensory input with previous (prior) knowledge to obtain a more accurate and precise estimate of the observed state of the world. For example, we use knowledge about the size of an object to recover its most likely distance from the observer, thus providing a key to recover its physical velocity from retinal motion. If we, for example, know that we are observing a basketball, know from experience that its radius is 0.12 m, and perceive that the target occupies a visual angle of 0.5°, we know that the target moves 7 m in front of us. We then also know that the physical velocity of the ball is 1.2 m/s, not 1 m/s. In some, if not many instances, this combination occurs according to Bayes’ formula:

P(A|B) = P(B|A) · P(A) / P(B) [1]

The probability of a state of the world A given evidence B is the probability of observing evidence B given the state of the world A, multiplied by the probability of the state of the world (A) and divided by the probability of the evidence (B). In a Bayesian framework, sensory input (Likelihood), corresponding to the term P(B|A)/P(B) in Eq 1, and prior knowledge (Prior), corresponding to P(A) in Eq 1, are combined according to their respective precisions to yield a more precise and more accurate final percept (Posterior). Under many circumstances, Prior, Likelihood and Posterior can be represented as normal distributions whose standard deviations correspond to the representations’ reliability. If an organism has a high sensitivity to the sensory input, that is, if it can reliably distinguish one stimulus strength from a very similar stimulus strength, the standard deviation of the Likelihood is very low, which corresponds to a very narrow distribution. Conversely, if the organism has a very precise representation of the most likely state of the world, the Prior is very narrow. Finally, the standard deviation of the Posterior depends on the precisions of Likelihood and Prior. Usually, both the Prior and the Likelihood contribute to the Posterior: for example, when we know that our opponent in a tennis match usually, but not always, serves into the right corner of the court (Prior), and we have good visibility of their serving motion but, because the motion is so quick, little time to acquire evidence (Likelihood), we take sensory input (e.g., about their body posture while serving) into account only to some extent (see “Normal Prior” scenario in Fig 1). In the case of gravity, however, it seems that the expectation of Earth Gravity overrules all sensory information that humans collect on the law of motion of an observed object [6, 7, 23–25]. On a theoretical level, this is a sensible assumption, since all of human evolution and each human’s individual development occurred under Earth Gravity. In Bayesian terms, the Prior is extremely precise and thus overrules all sensory information represented as the Likelihood. According to our interpretation, we would thus expect an extremely low value for the standard deviation of the earth gravity prior (“Strong Prior” scenario in Fig 1). We would expect this value to be represented more precisely than linear velocities, which generally elicit Weber fractions of 10%, corresponding to a standard deviation of about 15% of the mean represented stimulus strength.
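The precision-weighted combination described above can be sketched numerically. The following Python snippet is illustrative only; the parameter values are invented for demonstration. It shows how a very narrow prior pulls the posterior almost entirely toward the prior mean:

```python
def combine_gaussian(mu_prior, sd_prior, mu_like, sd_like):
    """Precision-weighted combination of a Gaussian Prior and Likelihood;
    returns mean and SD of the Gaussian Posterior."""
    w_prior = 1.0 / sd_prior ** 2  # precision of the Prior
    w_like = 1.0 / sd_like ** 2    # precision of the Likelihood
    mu_post = (w_prior * mu_prior + w_like * mu_like) / (w_prior + w_like)
    sd_post = (w_prior + w_like) ** -0.5
    return mu_post, sd_post

# "Normal Prior": Prior and Likelihood are equally reliable, so the
# Posterior lands halfway between them.
print(combine_gaussian(9.81, 2.0, 7.0, 2.0))

# "Strong Prior": an extremely precise gravity Prior dominates, and the
# Posterior is pulled almost entirely to 9.81 m/s^2.
print(combine_gaussian(9.81, 0.1, 7.0, 2.0))
```

Note that the posterior is always at least as precise as the more precise of the two inputs, which is why combining a strong prior with noisy evidence barely shifts the percept away from the prior.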

Fig 1. Graphical illustration of likelihood, prior and posterior in a Bayesian framework, for both a normal, relatively shallow prior, and a strong, extremely precise prior.

Fig 1

In the following, we use the data from our previous study [13] to simulate the variability of responses under different assumptions about the standard deviation of the gravity prior.

Methods

In this paper, we use previously published data [13]. The pre-registration for the original hypotheses can be viewed on the Open Science Framework (https://osf.io/8vg95/). All data relevant to this project are available in our GitHub repository (https://github.com/b-jorges/SD-of-Gravity-Prior).

Participants

We tested ten participants (n = 10) overall, including one of the authors (BJ) who was excluded from the analyses in this paper. The remaining participants were between 23 and 34 years old and had normal or corrected-to-normal vision. Three (n = 3) of the included participants were women and six (n = 6) were men. All participants gave their informed consent. The research in this study was part of an ongoing research program that has been approved by the local ethics committee of the University of Barcelona. The experiment was conducted in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki).

Stimuli

Participants were shown targets of tennis-ball size (r = 0.033 m), shape and texture in an immersive 3D environment (see Fig 2). The 3D environment was meant to help participants perceive the stimulus at the correct distance and to activate the internal model of gravity [11]. The targets moved along parabolic trajectories in the fronto-parallel plane 6.15 m in front of the observer. The trajectories were determined by the simulated gravity (0.7g, 0.85g, 1g, 1.15g, 1.3g or -1g), the initial vertical velocity (4.5 or 6 m/s) and the initial horizontal velocity (3 or 4 m/s). Air drag was simulated according to Eqs [2] and [3] (see http://www.demonstrations.wolfram.com/ProjectileWithAirDrag/) in line with the air drag at the location of the experiment (Barcelona, Spain, at sea level), and the ball did not spin.

Fig 2. 2D depiction of the visual scene used as environment for stimulus presentation.

Fig 2

The stimulus was always presented in front of the white wall and never crossed other areas (such as the lamps or tables) that could introduce low-level differences in contrast etc. The lines denote the different parabolic trajectories along which the targets travelled. Figure from Jörges & López-Moliner (2019).

x(t) = (vxi² + vyi²)^0.5 · (m·g)/(g·c) · cos(asin(vyi / (vxi² + vyi²)^0.5)) · (1 − e^(−g·t·c/(m·g))) [2]
y(t) = (m/c) · ((vxi² + vyi²)^0.5 · sin(asin(vyi / (vxi² + vyi²)^0.5)) + (m·g)/c) · (1 − e^(−g·t·c/(m·g))) − (m·g·t)/c [3]

x(t) is the horizontal position over time, y(t) is the vertical position over time, vxi is the initial horizontal velocity, vyi is the initial vertical velocity, m is the mass of the object (0.057 kg), g is the simulated gravity and c is the drag coefficient (0.005). Targets always moved from left to right. When gravity acted downwards, the target started 0.5 m above the simulated ground, and when it acted upwards, the target started 3.5 m above the ground. The final positions were marked with tables for downwards gravities and by lamps hanging from the ceiling for upwards gravities. The total flight time was the time it took for the ball to return to its initial height. The target disappeared either between 75% and 80% (Short Occlusion) or between 50% and 55% (Long Occlusion) of the total flight time. Each of the conditions was repeated 24 times, for a total of 1344 trials across four blocks. Within each block, the kinematic profiles were presented in a random order. From the participant’s perspective, the trajectories always unfolded in front of the white wall, that is, low-level cues such as contrast and brightness were equal across all trajectories and conditions. Fig 2 shows the trajectories projected on the visual scene.
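For readers who want to reproduce the trajectories, Eqs [2] and [3] reduce to the standard linear-drag solution once the cancelling g factors are removed. The following Python sketch implements that simplified form; the launch speeds in the example are arbitrary illustration values, not stimulus parameters:

```python
import math

def drag_trajectory(t, vxi, vyi, g=9.81, m=0.057, c=0.005):
    """Position at time t under linear air drag (simplified form of
    Eqs 2 and 3, with the cancelling g factors removed). y is measured
    upward from the launch point; g acts downward."""
    decay = 1.0 - math.exp(-c * t / m)
    x = (m * vxi / c) * decay
    y = (m / c) * (vyi + m * g / c) * decay - m * g * t / c
    return x, y

# With the ball's small drag coefficient, the path stays close to the
# vacuum parabola x = vxi*t, y = vyi*t - (g/2)*t^2 (within a few percent
# over half a second). The launch speeds here are arbitrary.
x, y = drag_trajectory(0.5, 3.0, 6.0)
```

This closeness to the vacuum parabola is what later justifies neglecting air drag in the simulations.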

Apparatus

We used two Sony laser projectors (VPL-FHZ57) to present overlaid images on a back-projection screen (244 cm high and 184 cm wide). The images had a resolution of 1920 x 1080 pixels and were refreshed at 85 Hz. Participants wore glasses with polarizing filters to provide stereoscopic images. They stood 2 m in front of the screen. The disparity between the two projectors’ images was adapted to each participant’s interocular distance. The stimuli were programmed in PsychoPy [26]. The projectors introduced a delay of 0.049259 s (SD = 0.001894 s) that we accounted for in the analysis of timing responses. Eye-tracking data were also acquired for a different hypothesis; see [13].

Participant responses were collected with a regular computer mouse. It has been shown that commodity input devices often lack temporal accuracy and precision for response capture [27]. To mitigate such issues, we used pyglet, a gaming-oriented OpenGL library for Python, which aims for maximum precision both for stimulus frames and for input recording. We accessed the mouse time stamps directly through the iohub Python library (which integrates with PsychoPy); it circumvents the main system event loop and uses clock_gettime(CLOCK_MONOTONIC) on Unix-like systems (such as macOS, which we used). The precision is sub-millisecond. iohub can be used with or without PsychoPy for real-time access to input devices. Importantly, it runs its own thread devoted to continuously sampling the input device state, independently of the video (stimulus) thread.

Procedure

We asked participants to follow the target closely with their gaze and to indicate with a mouse click when they believed the target had returned to its initial height. Participants first completed 48 familiarization trials in which the ball reappeared when they pressed the button, which allowed them to assess their spatial error. Then the main experiment followed. It consisted of four blocks: three blocks with 320 trials each (the five positive gravities – 0.7g, 0.85g, 1g, 1.15g, 1.3g –, two initial vertical velocities, two initial horizontal velocities, two occlusion conditions, eight repetitions per condition) and one block with 384 trials (as the other blocks, but with 1g and -1g as gravities and 24 repetitions per condition). Each block took 15–20 minutes and participants could rest after each block. We counterbalanced across participants whether the -1g/1g block or the 0.7g–1.3g blocks were presented first.

Results

We have reported mean differences in a previous paper [13]. In the following, we thus limit ourselves to analyzing the influence of gravity on the precision of responses, in preparation for the simulations reported below. We used a slightly different, more liberal outlier analysis for this project to make sure that we did not lose any variability present in participants’ responses. We also excluded all data collected from the author (s10; all 1344 trials). Further, we excluded all trials where subjects pressed the button before the target disappeared (38 trials) or where the temporal error was greater than 2 s (178 trials). Overall, we excluded 1.6% of all trials from the nine participants included in the analysis. To make it easier to compare temporal errors across conditions, we then computed the error ratio:

ErrorRatio = (Error + OccludedDuration) / OccludedDuration [4]
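As a concrete illustration of Eq 4 (a trivial helper, not the authors' analysis code):

```python
def error_ratio(error, occluded_duration):
    """Eq 4: timing response relative to the occluded duration.
    1.0 = perfectly timed, >1 = too late, <1 = too early."""
    return (error + occluded_duration) / occluded_duration

ratio = error_ratio(0.1, 0.5)  # responding 0.1 s late after a 0.5 s occlusion
```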

In Fig 3, we illustrate the response distributions. For an analysis and interpretation of the effect of gravitational motion on accuracy, please see our previous paper [13].

Fig 3. Temporal errors in the 0.7–1.3 g conditions.

Fig 3

The wings of each structure indicate the distribution of responses, while the boxplot in the middle of each structure indicates the 75% percentile and the mean per condition.

While we used Linear Mixed Modelling to assess accuracy, assessing precision differences between conditions is not straightforward with this method. Therefore, we employ Bayesian Linear Mixed Modelling to assess whether gravity has an impact on the precision of the timing responses. The R package brms [28], which provides a user-friendly interface to the package rstan [29], uses a syntax very similar to that of the better-known lme4 [30]. In addition to mean differences, this type of analysis also allows us to test for variability differences between conditions. We thus fit a mixed model to explain both the means and the standard deviations of the response distributions, with gravity as a fixed effect and varying intercepts per participant as random effects. In lme4/brms syntax, the test model is specified as:

ErrorRatio ~ Gravity + (1 | Subject)
sigma ~ Gravity + (1 | Subject) [5]

where the first line corresponds to the statistical structure for the means of the response distributions and the second line corresponds to the standard deviations of the response distributions. Unlike regular Linear Mixed Models, Bayesian Linear Mixed Models do not need to be compared to a Null Model. We can use the hypothesis() function from brms [28] to test hypotheses directly. We found a posterior probability of >0.999 that a lower gravity value is related to lower variability, the sigma coefficient for Gravity being 0.057 (SE = 0.004; 95% Confidence Interval = [0.051; 0.064]) in log space. In regular space, this corresponds to a standard deviation of 0.296 (95% CI = [0.282; 0.313]) for 0.7g, 0.321 (95% CI = [0.303; 0.344]) for 0.85g, 0.350 (95% CI = [0.326; 0.378]) for 1g, 0.382 (95% CI = [0.351; 0.416]) for 1.15g and 0.413 (95% CI = [0.378; 0.458]) for 1.3g. Table 1 lists all mean temporal errors and the respective standard errors across participants. Note that, unlike the results from the Bayesian Mixed Model, the variability values in Table 1 also include variability that the Mixed Model assigns to the individual.
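The reported log-space coefficient can be translated into the regular-space standard deviations as follows. This sketch assumes the Gravity predictor was expressed in m/s2, which we infer from the reported values rather than from an explicit statement in the text:

```python
import math

G = 9.81  # m/s^2

def sigma_at(gravity_factor, slope=0.057, sd_at_07g=0.296):
    """Regular-space SD of the error-ratio distribution implied by the
    log-space sigma slope of the mixed model, anchored at the reported
    SD for 0.7g. Assumes the Gravity predictor is scaled in m/s^2."""
    delta = (gravity_factor - 0.7) * G  # change in gravity in m/s^2
    return sd_at_07g * math.exp(slope * delta)

for factor in (0.7, 0.85, 1.0, 1.15, 1.3):
    print(factor, round(sigma_at(factor), 3))  # close to the reported SDs
```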

Table 1. Means and standard deviations observed for the temporal errors divided by gravities and initial vertical velocities.

0.7g–1.3g Block	-1g/1g Block
Long Occlusion
vyi 0.7g 0.85g 1g 1.15g 1.3g -1g 1g
4.5 m/s Mean 1.12 1.11 1.20 1.24 1.30 1.33 1.17
SD 0.47 0.49 0.53 0.42 0.44 0.53 0.38
6 m/s Mean 1.05 1.11 1.17 1.24 1.32 1.23 1.16
SD 0.49 0.55 0.57 0.54 0.57 0.56 0.46
Short Occlusion
vyi 0.7g 0.85g 1g 1.15g 1.3g -1g 1g
4.5 m/s Mean 1.22 1.31 1.34 1.41 1.52 1.68 1.35
SD 0.64 0.65 0.65 0.56 0.88 0.86 0.58
6 m/s Mean 1.26 1.33 1.37 1.47 1.49 1.51 1.35
SD 0.65 0.77 0.77 0.88 0.75 0.80 0.76

Interestingly, precision seems to be higher for 1g trials than for -1g trials. To test this observation statistically, we fitted a second Bayesian Linear Mixed Model to the -1g/1g data, with gravity as a fixed effect and by-subject random intercepts predicting the timing error:

ErrorRatio ~ Gravity + (1 | Subject)

We tested the hypothesis that 1g motion would be associated with lower variability than -1g motion. The posterior probability of this hypothesis being true was >0.999, with a sigma coefficient for Gravity of -0.011 (SE = 0.004; 95% Confidence Interval = [-0.014; -0.009]) in log space. That is, the standard deviation of the distribution of -1g responses in regular space is 0.426 (95% Confidence Interval = [0.414; 0.439]), while the standard deviation of the distribution of 1g responses in regular space is 0.344 (95% Confidence Interval = [0.334; 0.353]). This indicates that the absolute error is lower, and thus the precision higher, for 1g than for -1g. On a theoretical level, this is in line with previous findings [32] showing that the internal representation of gravity is not activated when upwards motion is presented, even when the absolute value of the acceleration acting on the object is equal to the absolute value of earth gravity (9.81 m/s2). The precision may thus be higher for 1g than for -1g because the internal model of gravity is utilized for 1g, but not for -1g trials.

Simulations

The physical formula for distance from initial velocity and acceleration (Eq 6) is the basis for both of our simulation procedures. This reflects the assumption that humans perform the task at hand accurately under most circumstances. This assumption is supported by our data, which show high accuracy for the earth gravity conditions.

We furthermore neglect the air drag for these simulations and use the equation for linearly accelerated motion as an approximation.

dy = (g/2)·t² + vy·t [6]
t1/2 = (−vy + (vy² + 4·(g/2)·dy)^0.5) / (2·(g/2)) [7]

As evidenced by a comparison between Eqs (2) and (3) and Eqs (6) and (7), the computational complexity increases significantly if we want to accommodate air drag, while the gains in accuracy are marginal (0.02 s in the condition with the most extreme differences).

Mean of the gravity prior

To characterize the mean of the Strong Gravity Prior, we build upon the model of mean timing errors presented in our previous paper [13]. Importantly, the predictions of our model matched the observed data only for the Long Occlusion condition. In the Short Occlusion condition, subjects displayed a tendency to respond slightly too late, while their responses should have been centered around zero. Our ad hoc explanation of this discrepancy was that subjects were often executing a saccade when the ball returned to its initial height, which may have interfered with their predictions [33]. An alternative explanation may be, however, that our subjects underestimated the target’s speed at disappearance due to the so-called Aubert-Fleischl phenomenon: humans estimate the speed of a target that they pursue with their eyes at about 80% of its actual speed [15, 16, 34–36]. Our subjects were specifically instructed to follow the target with their eyes, and the eye-tracking data we collected show that they generally did pursue the target [33]. An underestimation of the velocity at disappearance could explain the tendency of subjects to respond too late in the Short Occlusion condition. For the Long Occlusion condition, on the contrary, the vertical speed at disappearance is very low and has a nearly negligible influence on the final prediction. Setting the perceived velocity at 80% of the presented velocity should thus yield more accurate predictions for the Short Occlusion condition, while the accuracy for the Long Occlusion condition would be largely maintained. We thus employ the same procedure laid out in [33], but add a coefficient of 0.8 to the perceived velocity at disappearance to account for the Aubert-Fleischl phenomenon.

We will briefly summarize the procedure and then present how this tweak affects the results of our simulations. We used the physical formula for distance from accelerated motion (Eq 6, with dy being the height at disappearance, vy the vertical velocity at disappearance and g the gravity). For our simulations, we assume that humans use an earth gravity value of 9.81 m/s2 independently of the presented gravity value, as long as the display is roughly in line with a real-world scenario. We furthermore assume that the vertical velocity at disappearance is perceived at 80% of the presented velocity. Eq 7 thus becomes

t1/2 = (−vy,perceived + (vy,perceived² + 4·(gearth/2)·dy)^0.5) / (2·(gearth/2)) [8]

with vy,perceived = 0.8·vy,presented and gearth = 9.81 m/s2.

We use this formula to simulate the timing error for each trial separately without adding noise. We furthermore also simulate the responses without accounting for the Aubert-Fleischl phenomenon to compare performance for both models. Fig 4 shows the mean errors observed in our participants (“Obs. Error”), the mean errors when accounting for the Aubert-Fleischl phenomenon (“Sim. Error (AF)”), and the mean errors when not accounting for the Aubert-Fleischl phenomenon (“Sim. Error (No AF)”).
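A minimal sketch of the simulation just described, using Eqs 7 and 8 (the example values for vy and dy are illustrative, not taken from the data set):

```python
import math

def physical_time(vy, dy, g):
    """Eq 7: actual remaining flight time under the presented gravity.
    vy: downward speed at disappearance (m/s); dy: height above the
    reference height at disappearance (m)."""
    return (-vy + math.sqrt(vy ** 2 + 2.0 * g * dy)) / g

def predicted_response_time(vy, dy, af_factor=0.8, g_earth=9.81):
    """Eq 8: predicted timing response assuming a 1g prior and the
    Aubert-Fleischl underestimation of the last seen vertical speed."""
    v = af_factor * vy
    return (-v + math.sqrt(v ** 2 + 2.0 * g_earth * dy)) / g_earth

# Illustrative values (not from the data set): target occluded 0.3 m above
# the reference height, moving downward at 1 m/s, presented gravity 1g.
err = predicted_response_time(1.0, 0.3) - physical_time(1.0, 0.3, 9.81)
# err > 0: underestimating the speed at disappearance predicts late responses
```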

Fig 4. Mean temporal errors that we observed in our participants (across participants in blue, and for each participant separately in shades of grey), simulated taking the Aubert-Fleischl phenomenon into account (light red) and simulated without taking the phenomenon into account for the different conditions.

Fig 4

The right column represents values for the Long Occlusion condition, while the left column represents the Short Occlusion condition. The upper row shows values for an initial vertical velocity of 4.5 m/s, while the lower row represents initial vertical velocities of 6 m/s. Note that the standard errors for the observed errors are so small that all error bars fall well within the area covered by the dots.

The overall Root Mean Squared Error between AF model predictions and observed behavior is 0.2, and for the non-AF model predictions substantially higher, at 0.265. Table 2 shows the error for each of the conditions. Including the AF phenomenon thus vastly improves the model’s generalizability.

Table 2. Root Mean Squared Errors (RMSEs) between simulated and observed mean errors for simulations including the Aubert-Fleischl phenomenon (AF) and simulations that don’t (No AF).

Lower values signify a better fit.

	Long Occlusion	Short Occlusion
vyi	AF	No AF	AF	No AF
4.5 m/s	0.150	0.160	0.236	0.333
6 m/s	0.148	0.158	0.246	0.344

This improvement upon our previous model lends further support to the idea that the mean of the strong gravity prior is at or very close to 9.81 m/s2.

Standard deviation of the gravity prior

The second value needed to characterize a normal distribution, which we assume the strong gravity prior to be represented as, is its standard deviation. There are two different ways to approach this problem: First, we can simulate the temporal responses of our subjects assuming different standard deviations for the gravity prior and minimize the difference between the standard deviations of the responses we observed in our subjects and the model standard deviations. In this case, we would draw the values for vy, dy and gearth from distributions with given means and standard deviations, and compute a simulated temporal response from these values. The mean for vy would be the last observed velocity in y direction, corrected by a factor of 0.8 for the Aubert-Fleischl phenomenon, and the standard deviation can be computed based on Weber fractions for velocity discrimination from the literature. The mean for dy is the distance in y direction between the point of disappearance and the reference height. The mean for gearth is 9.81 m/s2, and we optimize over its standard deviation to match the standard deviation observed in the subjects’ temporal responses.
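The first approach can be sketched as a small Monte Carlo simulation. The structure below is illustrative rather than the authors' exact code; the noise parameters follow the values motivated later in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_responses(vy, dy, sd_g_prior, n=1000,
                       weber_sd=0.148, af=0.8, motor_sd=0.058):
    """Draw vy, dy and g from Gaussians, push each sample through Eq 7,
    and add motor noise. The noise values follow those discussed in the
    text; the structure is an illustrative sketch."""
    vy_s = rng.normal(af * vy, weber_sd * af * vy, n)  # perceived speed
    dy_s = rng.normal(dy, weber_sd * dy, n)            # perceived height
    g_s = rng.normal(9.81, sd_g_prior, n)              # represented gravity
    disc = np.maximum(vy_s ** 2 + 2.0 * g_s * dy_s, 0.0)
    t = (-vy_s + np.sqrt(disc)) / g_s
    return t + rng.normal(0.0, motor_sd, n)            # motor noise

# Wider gravity priors yield more variable simulated timing responses,
# which is what the fitting procedure exploits.
sd_narrow = simulate_responses(1.0, 0.3, 0.1).std()
sd_wide = simulate_responses(1.0, 0.3, 2.0).std()
```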

A second approach would be to solve Eq (6) for gearth, and then compute its mean and standard deviation analytically based on the means and standard deviations of t, vy and dy. For the addition, subtraction and multiplication of two normal distributions, there are analytic solutions to compute mean and standard deviation of the resulting distribution.

gearth = 2·(dy − vy·t) / t² [9]

However, as evident from Eq 9, this method requires computing the standard deviation of the quotient of two distributions. To our knowledge, this is not possible in an analytical fashion and would entail simulations by itself. We will thus focus on the simulation approach.

Assumptions

For this approach, we need to make several assumptions. In the following, we will outline each and provide the rationale for the chosen values. Please note that we conduct these simulations in absolute terms (i.e., absolute errors) to mimic the processes more closely, but convert quality metrics (such as model fits) and results into relative terms (i.e., error ratios).

Use of Eq (6). In our previous paper, we showed that predictions based on Eq 6 fit observed temporal errors reasonably well [13]. This is particularly the case when subjects extrapolated motion over larger time frames in the Long Occlusion condition. The difference in predictions between this equation and Eq (2) is at most 3 ms, and the added computational complexity does not justify the added accuracy, especially since our main concern is precision.

vy. The velocity term in Eq 6 (vy*t) refers to the part of the full distance the target moved because of its initial velocity. Our targets disappeared right after peak, therefore their initial velocity was very low. The velocity term thus contributes less to the full estimate than the gravity term, especially in the Long Occlusion condition (see also Fig 5C). Importantly, the vertical velocity component is not perceived directly. Rather, it has to be recovered from the tangential speed (vtan,perceived) and the angle between the tangential speed vector and the vertical speed vector (αperceived) by means of the equation:

vy,perceived = cos(αperceived) · vtan,perceived [10]
Fig 5. Predictions for different standard deviations chosen for different parameters in our model.

Fig 5

Dots represent the standard deviation for each gravity (0.7g-1.3g), divided by Occlusion category (Long and Short) and initial vertical velocities (4.5 and 6 m/s). The color gradient indicates different values of the (standardized) standard deviation for the perceived distance, the perceived velocity, the represented gravity and the remaining error. The baseline values are 0.148 for distance and velocity, 0.1 for gravity and 0.05 for the remaining (motor) error. A. Predictions for five standardized standard deviations for the perceived distance (0.1–0.3 m). B. Predictions for five standard deviations for the remaining (motor) error (0.02–0.1 s), modelled as independent of and constant across initial velocities, gravities and occlusion conditions. C. Predictions for five different standardized standard deviations for the last perceived velocity (0.1–0.3 m/s). D. Predictions for five different standardized standard deviations for the represented gravity (0.02–0.18 m/s2).

Weber fractions for the discrimination of angular velocities reported in the literature are about 10% [37]. To calculate the standard deviation of the distribution of perceived velocities from the Weber fraction, we have to find the normal distribution for which a difference of 10% from its mean leads to a proportion of responses of 25/75%. For a standardized normal distribution with a mean of 1, this is a standard deviation of 0.148. Note that, by using a standardized normal distribution, we assume that Weber fractions are constant across the relevant range of stimulus strengths. Fig 5C shows how predictions vary with varying variability in perceived vertical velocity: the effect is negligible for the Long Occlusion condition, while in the Short Occlusion condition it increases response variability uniformly across gravities. Further variability is incurred in estimating αperceived. Following [38], the JND for orientation discrimination in untrained subjects is around 6° for oblique orientations. This corresponds to a standard deviation of 0.089.
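The conversion from a Weber fraction to the standard deviation of a standardized normal distribution can be written compactly: a stimulus one Weber fraction from the mean must fall at the 75th percentile, so sigma = W / Φ⁻¹(0.75). A minimal sketch:

```python
from statistics import NormalDist

def weber_to_sd(weber_fraction):
    """SD of a standardized normal distribution for which a stimulus one
    Weber fraction away from the mean yields 25%/75% discrimination
    responses: sigma = W / Phi^-1(0.75)."""
    return weber_fraction / NormalDist().inv_cdf(0.75)

sd_velocity = weber_to_sd(0.10)  # ~0.148, as used for velocity and distance
sd_gravity = weber_to_sd(0.20)   # ~0.297, close to the 0.295 used below
```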

Furthermore, we need to account for the Aubert-Fleischl phenomenon, which consists in an underestimation of the velocity of a moving target during smooth pursuit [15, 16, 34–36]. While this effect should in principle be partially offset by improved predictions for motion coherent with earth gravity – an empirical question that has, to our knowledge, not been addressed so far –, our simulations show that an Aubert-Fleischl correction factor of 0.8 yields an excellent fit to the observed mean errors. We thus proceed with a value of 0.8 also for the simulations concerning the standard deviation.

dy. For the distance term (dy), we choose the stimulus value as the mean distance, as we don’t expect any biases. In terms of precision, Weber fractions of 3% to 5% have been observed for distance estimates in the fronto-parallel plane [39]. However, since subjects have to estimate not the distance between two well-defined points, but rather the height above the simulated table, the precision of these estimates is likely lower than reported for the above task. We thus work with a Weber fraction of twice the reported value (10%). Using the above method, we determine that the standard deviation for this value is 0.148. Fig 5A shows how predictions vary with variability in perceived distance: there is a slight logarithmic pattern, where the response variability added by higher variability in perceived distance increases with decreasing gravity.

t. The response time t is measured directly in our task, both in mean and variability.

Remaining variability

For our simulations, we rely on accounting for every source of variability in the responses. One source of error beyond perceiving and representing g, vy and dy is the motor response. Motor responses are likely to vary strongly between tasks, for which reason variability reported in the literature is of limited use. To estimate the error introduced by these further factors, we thus take advantage of previous results indicating that the gravity model is not activated for upside-down motion [32], a hypothesis which is also supported by our data.

Under this assumption, we can use the responses in the inverted gravity condition to estimate the errors introduced by motor variability. An inactivation of the gravity prior would mean that the gravity acting upon the object should be represented with the same precision as arbitrary gravities. We previously found Weber fractions ranging from 13% to beyond 30% for arbitrary gravities [40], which is in line with those found for linear accelerations [41]. We thus proceed with a value of 20%, which corresponds to a normalized standard deviation of 0.295 (see procedure above).

There are further constraints: First, the motor variability should be lower than the overall variabilities observed for the absolute error in each condition (the minimum is just over 0.08 s for the short occlusion condition with 1.3g and an initial vertical velocity of 4.5 m/s). Second, the motor variability should be equal across conditions and be independent of gravity, initial velocity and Occlusion category (see Fig 5B).

We put these values for g, vy and dy into Eq 7 to simulate the temporal responses for each trial 1000 times. We then minimize the Root Mean Square Error (RMSE) between the standard deviations of the simulated and the observed timing errors, computed separately for each combination of gravity, initial vertical velocity, Occlusion condition and participant. We collapse the errors across initial horizontal velocities because results for both values were virtually the same, most likely because the horizontal velocity barely influences overall flight duration in the presence of air drag, and not at all in its absence. After visualizing a relevant range of candidate values for the standard deviation of the remaining errors (see Fig 6), we use the optim() function implemented in R with a lower bound of 0.01 s and an upper bound of 0.06 s to find the best fit for the observed data. We find the best fit for a standard deviation of 0.058 s, with an RMSE of 0.04.
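The per-trial simulation can be sketched as follows. This is a simplified stand-in, not the R code from our repository: we assume multiplicative normal noise on g, vy and dy with the normalized standard deviations discussed above, additive motor noise, and a fall time computed from dy = vy·t + ½·g·t², ignoring air drag and the Aubert-Fleischl correction that the full Eq 7 accounts for:

```python
import numpy as np

def simulate_timing_sd(g, vy, dy, sd_g=0.21, sd_vy=0.148, sd_dy=0.148,
                       sd_motor=0.04, n=1000, seed=1):
    """Simulate n timing responses for one condition and return their SD.
    g, vy, dy: physical gravity (m/s^2), vertical velocity at occlusion (m/s)
    and remaining fall distance (m); sd_* are the normalized (multiplicative)
    noise terms discussed in the text, sd_motor is additive noise in seconds.
    The fall time solves dy = vy*t + 0.5*g*t**2, a simplified stand-in for Eq 7."""
    rng = np.random.default_rng(seed)
    g_s = np.clip(g * rng.normal(1, sd_g, n), 0.1, None)   # keep gravity positive
    vy_s = vy * rng.normal(1, sd_vy, n)
    dy_s = np.clip(dy * rng.normal(1, sd_dy, n), 0.01, None)
    t = (-vy_s + np.sqrt(vy_s**2 + 2 * g_s * dy_s)) / g_s  # positive root
    return (t + rng.normal(0, sd_motor, n)).std()
```

Minimizing the RMSE between such simulated SDs and the observed SDs over the motor noise term then yields the fitted value.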

Fig 6.


A. Root Mean Square Errors (RMSE) between the standard deviation of timing errors simulated based on different motor errors (between 0.00 and 0.07 s) and the standard deviation of observed timing errors. B. Root Mean Square Errors (RMSE) between the standard deviation of timing errors simulated based on different normalized standard deviations of the gravity prior (between 0.15 and 0.25, i.e., 0.15 × 9.81 to 0.25 × 9.81 m/s2) and the standard deviation of observed timing errors.

The standard deviation of the gravity prior

We then proceed to apply these values to simulate data sets based on the above assumptions, obtain the standard deviations of the timing error, and compare them to the standard deviations of the observed timing errors (Method 1). We restrict this comparison to the 0.7g/0.85g/1g/1.15g/1.3g conditions, as we expect the gravity model not to be activated for inverted gravitational motion. For a discussion of factors impacting the performance of the model for short occlusions, see [40]. We first simulate a range of sensible normalized standard deviations (from 0, corresponding to an impossibly precise representation, to 0.28, corresponding to a quite imprecise representation with limited impact on the final percept, in steps of 0.03) to determine the lower and upper bounds of the optimization interval (see Fig 6B). Fig 5D furthermore highlights how changes in the simulated variability of the represented gravity change response variability.

We find the errors to be lowest around 0.21, and thus choose 0.16 as the lower bound and 0.26 as the upper bound (in normalized units). We then search for the standard deviation that minimizes the error between simulated and observed timing errors, using the optim() function implemented in R [31]. For each iteration, we simulate 1000 data sets and minimize the Root Mean Square Error (RMSE) between the standard deviations of simulated and observed timing errors across these 1000 data sets. The R code we used for these simulations, including extensive annotations, can be found on GitHub (https://github.com/b-jorges/SD-of-Gravity-Prior). We find a normalized standard deviation of 0.208 for the gravity prior, which corresponds to a standard deviation of about 2.04 m/s2 for a mean of 9.81 m/s2, and a Weber fraction of 14.1%. The RMSE is 0.024. In Fig 7, we illustrate how the simulated standard deviations relate to the observed ones. The light red dots correspond to this method (“Simulated (Method 1)”); as evident from the figure, the fits are better for the Long Occlusion condition, while the SDs are generally overestimated for the Short Occlusion condition.

Fig 7. Observed and simulated standard deviations separated by occlusion condition, initial vertical velocity and presented gravity.


Blue indicates the observed standard deviations across subjects, while the standard deviations simulated through the two-step process (Method 1) are coded light red and the standard deviations simulated through the two-parameter fit (Method 2) are coded solid red.

If the gravity prior were discarded completely for upwards motion, we might observe even larger errors for -1g motion; we elaborate on this issue in the discussion. As there is thus some reason to believe that the gravity prior is not completely inactive in upwards motion, which may bias the above method toward overestimating the standard deviation of the gravity prior, we furthermore conducted simulations in which both the motor variability and the standard deviation of the strong gravity prior are fitted to the data (Method 2). To this end, we use the optim() function implemented in R with the Nelder-Mead method [42] to determine those values for the motor standard deviation and the standard deviation of the gravity prior that yield the smallest errors between simulated and observed variability. This approach is suitable because variability in the gravity prior and motor variability affect the final variability differentially (see Fig 5): higher motor variability leads to uniformly higher standard deviations of the observed error, while higher gravity variability affects longer trajectories (Long Occlusion, higher initial vertical velocities and lower gravities) more strongly than shorter ones. Based on the above results, we chose 0.04 and 0.2 as starting parameters, but did not limit the parameter space. This method allots variability in slightly different proportions: the standard deviation for the motor error is 0.06 s and the normalized standard deviation of the gravity prior is 0.211 (which corresponds to a non-standardized standard deviation of 2.07 m/s2 and a Weber fraction of 14.2%), with an RMSE of 0.024. These values are extremely close to those found with Method 1.
While fitting both parameters to the data makes this method more susceptible to overfitting, the close agreement lends additional support to the tentative conclusion that the standard deviation of the gravity prior is just above 2 m/s2, or a Weber fraction of 14.2%. The simulated standard deviations for these conditions are depicted in solid red in Fig 7 (“Simulated (Method 2)”): the fits are much better for the long occlusions, at the cost of a slight overestimation of the variability for the short occlusions.
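The joint fit can be sketched as below. The objective shown is a hypothetical stand-in for the real one, which would simulate 1000 data sets per evaluation (as in Method 1) and return the RMSE against the observed variability; purely for illustration, its minimum is placed at the values reported above:

```python
import numpy as np
from scipy.optimize import minimize

def rmse(params):
    """Hypothetical objective: a quadratic bowl standing in for the RMSE
    between simulated and observed SDs, with its minimum placed at the
    fitted values reported in the text (sd_motor = 0.06 s, sd_prior = 0.211)."""
    sd_motor, sd_prior = params
    return np.hypot(sd_motor - 0.06, sd_prior - 0.211)

# Starting parameters as in the text (0.04 s and 0.2); parameter space unbounded.
fit = minimize(rmse, x0=[0.04, 0.2], method="Nelder-Mead")
sd_motor_hat, sd_prior_hat = fit.x
```

Nelder-Mead is a derivative-free simplex method, which suits this problem because the stochastic simulation makes the objective noisy and non-differentiable.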

Discussion

Humans assume in many tasks and circumstances that objects in their environment are affected by earth gravity. It has thus been suggested that we maintain a representation of this value, which we recruit to predict the behavior of objects in our environment. We recently interpreted this representation as a Strong Prior in a Bayesian framework [14], that is, a prior whose reliability is so high that it overrules any sensory input represented in the likelihood. Based on data from a timing task (previously reported in [33]), we attempt to determine the standard deviation of a hypothetical Strong Earth Gravity Prior. Our general approach is to account for the other sources of perceptuo-motor variability in the task based on thresholds reported in the literature, and to attribute the remaining variability to the Gravity Prior. Based on this approach, we find a standard deviation of 2.04 m/s2 (Method 1) or 2.07 m/s2 (Method 2) for a prior with a mean of 9.81 m/s2, which corresponds, mathematically, to a Weber fraction of 14.1% or 14.2%, respectively. This is considerably lower than Weber fractions generally observed for acceleration discrimination, but above Weber fractions for the discrimination of constant speeds [43].

Interestingly, when we simulated the timing errors with a fixed value of 9.81 m/s2 (i.e., in a non-Bayesian framework where the value of earth gravity is not represented as a distribution, but as a value fixed at 1g; see [13] and also above), we found that our results fit the observed timing errors quite well for each gravity value. That is, the observed gravity (corresponding to the Likelihood) had no discernible influence on the final percept (Posterior). In a Bayesian framework, however, this is only possible if the Likelihood is extremely shallow and the Prior is extremely precise. A Weber fraction of about 30% for the likelihood (which we assume for acceleration discrimination) and a Weber fraction of 14.1% or 14.2% for the prior (as modelled) would not result in discarding the likelihood completely (see also Fig 1; even for a strong prior and a rather shallow likelihood, the likelihood attracts the posterior to some extent). Our results thus reveal a mismatch between the means observed in our experiment, the modelled standard deviation and a Bayesian explanation.

We see two possible ways to explain this mismatch. Firstly, our estimated standard deviation for the gravity prior could be an upper bound. Our method relies on identifying all sources of variability and allotting variability in the response accordingly. Since we did not measure our participants’ Weber fractions for velocity and distance discrimination individually, but rather used averages reported in the literature for somewhat different tasks, we may have misjudged how much variability perceived distances and velocities at disappearance introduced into the responses. Furthermore, when estimating the variability introduced in the motor response, we start from the premise that the internal model of gravity is not activated at all for -1g motion. However, we observe a bias to respond too late in this condition, suggesting that humans expect objects to accelerate less when moving upwards. This could be taken as evidence that the internal model of gravity is still activated to some extent. In that case, we would need to allot more variability to the motor error, which in turn would lead to a lower standard deviation for the gravity prior. However, this pattern in our data is also consistent with humans insufficiently taking arbitrary accelerations into account in perceptuo-motor tasks, which has been reported repeatedly for tasks where the gravity prior is highly unlikely to be recruited [41, 44–46]. The values of 14.1% or 14.2% obtained above may thus be an upper bound for the standard deviation of the Earth Gravity Prior.

A second possibility is that prior knowledge and online perceptual input are combined in a non-Bayesian fashion (and we should thus avoid the terminology “Prior”, “Likelihood” and “Posterior”), where the mean of the final percept is set according to an acceleration of 9.81 m/s2, while its standard deviation is determined by a (not necessarily Bayesian) combination of prior knowledge and online sensory information.

Conclusion

In this paper, we build upon a simple model for coincidence timing of gravitational motion brought forward in [13]. By accounting for the Aubert-Fleischl phenomenon, we extend the domain of our model to include shorter extrapolation intervals. Furthermore, we propose a procedure to determine the standard deviation of a potential gravity prior, and apply it to pre-existing data from a timing task. Standard deviations of 2.04 m/s2 or 2.07 m/s2 (depending on the method) explain the behavior observed in our task best. However, considering the literature, we would expect an even lower standard deviation, as a Prior with a mean of 9.81 m/s2 and a standard deviation of this magnitude should not attract the Posterior as strongly as has been commonly observed. We thus believe that we were not able to fully disentangle the different sources of noise in our data; the value we find for the standard deviation of the earth gravity prior is therefore more likely an upper bound, and follow-up experiments may find lower values.

Data Availability

The data are available on GitHub (github.com/b-jorges/SD-of-Gravity-Prior).

Funding Statement

Funding was provided by the Catalan government (2017SGR-48; https://govern.cat/gov/) and the European Regional Development Fund's (https://ec.europa.eu/regional_policy/en/funding/erdf/) project ref. PSI2017-83493-R from AEI/Feder, UE. BJ was supported by the Canadian Space Agency (CSA). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.La Scaleia B., Zago M., Moscatelli A., Lacquaniti F., and Viviani P., “Implied dynamics biases the visual perception of velocity,” PLoS One, vol. 9, no. 3, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Senot P. et al. , “When Up Is Down in 0g: How Gravity Sensing Affects the Timing of Interceptive Actions,” J. Neurosci., vol. 32, no. 6, pp. 1969–1973, 2012. 10.1523/JNEUROSCI.3886-11.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Ceccarelli F. et al. , “Rolling motion along an incline: Visual sensitivity to the relation between acceleration and slope,” Front. Neurosci., vol. 12, no. JUN, pp. 1–22, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Zago M., McIntyre J., Senot P., and Lacquaniti F., “Internal models and prediction of visual gravitational motion,” Vision Res., vol. 48, no. 14, pp. 1532–1538, 2008. 10.1016/j.visres.2008.04.005 [DOI] [PubMed] [Google Scholar]
  • 5.McIntyre J., Zago M., and Berthoz A., “Does the Brain Model Newton’s Laws,” Nat. Neurosci., vol. 12, no. 17, pp. 109–110, 2001. [DOI] [PubMed] [Google Scholar]
  • 6.Zago M., Bosco G., Maffei V., Iosa M., Ivanenko Y. P., and Lacquaniti F., “Fast Adaptation of the Internal Model of Gravity for Manual Interceptions: Evidence for Event-Dependent Learning,” J. Neurophysiol., vol. 93, no. 2, pp. 1055–1068, 2004. 10.1152/jn.00833.2004 [DOI] [PubMed] [Google Scholar]
  • 7.Zago M., Bosco G., Maffei V., Iosa M., Ivanenko Y., and Lacquaniti F., “Internal Models of Target Motion: Expected Dynamics Overrides Measured Kinematics in Timing Manual Interceptions,” J. Neurophysiol., vol. 91, no. 4, pp. 1620–1634, 2004. 10.1152/jn.00862.2003 [DOI] [PubMed] [Google Scholar]
  • 8.Zago M. and Lacquaniti F., “Cognitive, perceptual and action-oriented representations of falling objects,” Neuropsychologia, vol. 43, no. 2 SPEC. ISS., pp. 178–188, 2005. 10.1016/j.neuropsychologia.2004.11.005 [DOI] [PubMed] [Google Scholar]
  • 9.Zago M., La Scaleia B., Miller W. L., and Lacquaniti F., “Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions,” J. Vis., vol. 11, no. 10, pp. 1–10, 2011. 10.1167/11.10.1 [DOI] [PubMed] [Google Scholar]
  • 10.Mijatovic A., La Scaleia B., Mercuri N., Lacquaniti F., and Zago M., “Familiar trajectories facilitate the interpretation of physical forces when intercepting a moving target,” Exp. Brain Res., vol. 232, no. 12, pp. 3803–3811, 2014. 10.1007/s00221-014-4050-6 [DOI] [PubMed] [Google Scholar]
  • 11.Moscatelli A. and Lacquaniti F., “The weight of time: Gravitational force enhances discrimination of visual motion duration,” J. Vis., vol. 11, no. 4, pp. 1–17, 2011. [DOI] [PubMed] [Google Scholar]
  • 12.Maffei V., Indovina I., Macaluso E., Ivanenko Y. P., Orban G. A., and Lacquaniti F., “Visual gravity cues in the interpretation of biological movements: Neural correlates in humans,” Neuroimage, vol. 104, no. October 2014, pp. 221–230, 2015. [DOI] [PubMed] [Google Scholar]
  • 13.Jörges B. and López-Moliner J., “Earth-Gravity Congruent Motion Facilitates Ocular Control for Pursuit of Parabolic Trajectories,” Sci. Rep., vol. 9, no. 1, pp. 1–13, 2019. 10.1038/s41598-018-37186-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Jörges B. and López-Moliner J., “Gravity as a Strong Prior: Implications for Perception and Action,” Front. Hum. Neurosci., vol. 11, no. 203, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Aubert H., “Die Bewegungsempfindung,” Pflüger, Arch. für die Gesammte Physiol. des Menschen und der Thiere, vol. 40, no. 1, pp. 459–480, December 1887. [Google Scholar]
  • 16.Fleischl V., “Physiologisch-optische Notizen,” Sitzungsberichte der Akad. der Wissenschaften Wien, no. 3, pp. 7–25, 1882. [Google Scholar]
  • 17.Dichgans J., Wist E., Diener H. C., and Brandt T., “The Aubert-Fleischl phenomenon: A temporal frequency effect on perceived velocity in afferent motion perception,” Exp. Brain Res., vol. 23, no. 5, pp. 529–533, November 1975. 10.1007/BF00234920 [DOI] [PubMed] [Google Scholar]
  • 18.Nanay B., “The Representationalism versus Relationalism Debate: Explanatory Contextualism about Perception,” Eur. J. Philos., vol. 23, no. 2, pp. 321–336, 2014. [Google Scholar]
  • 19.Marr D., Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W. H. Freeman, 1982. [Google Scholar]
  • 20.Gibson J. J., The Ecological Approach to Visual Perception. New York: Taylor & Francis, 1986. [Google Scholar]
  • 21.Gold J. I. and Shadlen M. N., “The Neural Basis of Decision Making,” Annu. Rev. Neurosci., vol. 30, pp. 535–574, 2007. [DOI] [PubMed] [Google Scholar]
  • 22.Schneidman E., Bialek W., and Berry M. J. II, “Synergy, Redundancy, and Independence in Population Codes,” J. Neurosci., vol. 23, no. 37, pp. 11539–11553, 2003. 10.1523/JNEUROSCI.23-37-11539.2003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.McIntyre J., Zago M., Berthoz A., and Lacquaniti F., “The Brain as a Predictor: On Catching Flying Balls in Zero-G,” in The Neurolab Spacelab Mission: Neuroscience Research in Space, Buckey J. C. and Homick J. L., Eds. National Aeronautics and Space Administration, Lyndon B. Johnson Space Center, 2003, pp. 55–61. [Google Scholar]
  • 24.La Scaleia B., Zago M., and Lacquaniti F., “Hand interception of occluded motion in humans: A test of model-based versus on-line control,” J. Neurophysiol., vol. 114, pp. 1577–1592, 2015. 10.1152/jn.00475.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Zago M. and Lacquaniti F., “Internal Model of Gravity for Hand Interception: Parametric Adaptation to Zero-Gravity Visual Targets on Earth,” J. Neurophysiol., vol. 94, no. 2, pp. 1346–1357, 2005. 10.1152/jn.00215.2005 [DOI] [PubMed] [Google Scholar]
  • 26.Peirce J. et al. , “PsychoPy2: Experiments in behavior made easy,” Behav. Res. Methods, vol. 51, no. 1, pp. 195–203, 2019. 10.3758/s13428-018-01193-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Plant R. R. and Turner G., “Millisecond precision psychological research in a world of commodity computers: New hardware, new problems?,” Behav. Res. Methods, vol. 41, no. 3, pp. 598–614, 2009. 10.3758/BRM.41.3.598 [DOI] [PubMed] [Google Scholar]
  • 28.Bürkner P. C., “Advanced Bayesian multilevel modeling with the R package brms,” R J., vol. 10, no. 1, pp. 395–411, 2018. [Google Scholar]
  • 29.Stan Development Team, “Stan: the R interface to Stan. R package version 2.14.1,” pp. 1–23, 2016. [Google Scholar]
  • 30.Bates D., Mächler M., Bolker B. M., and Walker S. C., “Fitting linear mixed-effects models using lme4,” J. Stat. Softw., vol. 67, no. 1, 2015. [Google Scholar]
  • 31.R Core Team, “A Language and Environment for Statistical Computing. R Foundation for Statistical Computing,.” Vienna, Austria, 2017. [Google Scholar]
  • 32.Indovina I., Maffei V., Bosco G., Zago M., Macaluso E., and Lacquaniti F., “Representation of visual gravitational motion in the human vestibular cortex.,” Science, vol. 308, no. April, pp. 416–419, 2005. [DOI] [PubMed] [Google Scholar]
  • 33.Jörges B. and López-Moliner J., “Earth-Gravity Congruent Motion Facilitates Ocular Control for Pursuit of Parabolic Trajectories,” Sci. Rep., vol. 9, no. 1, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Wertheim A. H. and Van Gelder P., “An acceleration illusion caused by underestimation of stimulus velocity during pursuit eye movements: Aubert-Fleischl revisited.,” Perception, vol. 19, no. 4, pp. 471–82, 1990. 10.1068/p190471 [DOI] [PubMed] [Google Scholar]
  • 35.de Graaf B., Wertheim A. H., and Bles W., “The Aubert-Fleischl paradox does appear in visually induced self-motion,” Vision Res., vol. 31, no. 5, pp. 845–849, 1991. 10.1016/0042-6989(91)90151-t [DOI] [PubMed] [Google Scholar]
  • 36.Spering M. and Montagnini A., “Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: A review,” Vision Res., vol. 51, no. 8, pp. 836–852, 2011. 10.1016/j.visres.2010.10.017 [DOI] [PubMed] [Google Scholar]
  • 37.Kaiser M. K., “Angular velocity discrimination,” Percept. Psychophys., vol. 47, no. 2, pp. 149–156, 1990. 10.3758/bf03205979 [DOI] [PubMed] [Google Scholar]
  • 38.Schoups A. A., Vogels R., and Orban G. A., “Human perceptual learning in identifying the oblique orientation: retinotopy, orientation specificity and monocularity.,” J. Physiol., vol. 483, no. 3, pp. 797–810, 1995. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Norman J. F., Todd J. T., Perotti V. J., and Tittle J. S., “The Visual Perception of Three-Dimensional Length,” J. Exp. Psychol. Hum. Percept. Perform., vol. 22, no. 1, pp. 173–186, 1996. 10.1037//0096-1523.22.1.173 [DOI] [PubMed] [Google Scholar]
  • 40.Jörges B., Hagenfeld L., and López-Moliner J., “The use of visual cues in gravity judgements on parabolic motion,” Vision Res., vol. 149, pp. 47–58, August 2018. 10.1016/j.visres.2018.06.002 [DOI] [PubMed] [Google Scholar]
  • 41.Werkhoven P., Snippe H. P., and Alexander T., “Visual processing of optic acceleration,” Vision Res., vol. 32, no. 12, pp. 2313–2329, 1992. 10.1016/0042-6989(92)90095-z [DOI] [PubMed] [Google Scholar]
  • 42.Nelder J. A. and Mead R., “A Simplex Method for Function Minimization,” Comput. J., vol. 7, no. 4, pp. 308–313, 1965. [Google Scholar]
  • 43.McKee S. P., “A local mechanism for differential velocity detection,” Vision Res., vol. 21, no. 4, pp. 491–500, 1981. 10.1016/0042-6989(81)90095-x [DOI] [PubMed] [Google Scholar]
  • 44.Bennett S. J. and Benguigui N., “Is Acceleration Used for Ocular Pursuit and Spatial Estimation during Prediction Motion?,” PLoS One, vol. 8, no. 5, 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Benguigui N., Ripoll H., and Broderick M. P., “Time-to-contact estimation of accelerated stimuli is based on first-order information.,” J. Exp. Psychol. Hum. Percept. Perform., vol. 29, no. 6, pp. 1083–1101, 2003. 10.1037/0096-1523.29.6.1083 [DOI] [PubMed] [Google Scholar]
  • 46.Brenner E. et al. , “How can people be so good at intercepting accelerating objects if they are so poor at visually judging acceleration?,” Iperception., vol. 7, no. 1, pp. 1–13, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Robin Baurès

14 Apr 2020

PONE-D-20-06855

Characterizing the Strong Earth Gravity Prior

PLOS ONE

Dear Mr. Joerges,

I hope you, your co-author, and your families are doing well in this strange period.

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Having now received two reviews of your manuscript, I could hardly have received two more divergent assessments, and I now stand in a very uncomfortable position to make a decision.

Where do we stand, just to act in full transparency: R1 has no comments to make on your article and would accept it as is, minus a very few details. On the contrary, R2 has many negative comments and recommends rejection. To make things even harder, it turns out that this is not an independent manuscript: you re-analyze here data that was published previously (which has called for a specific warning from the editorial team).

Regarding R2’s comments, I do not see any that points to a major issue which would definitely call for immediate rejection. I think all the comments are important, but you should have a chance to answer them.

I am more concerned with the data reanalysis issue. I do not see a clear explanation of why the current analysis was not proposed in the first article. Is there anything new (an article that got published, a new method, etc.) that led you to conduct the current analysis? At the moment, one may have the feeling that the current paper is just the second part of the first paper, and I think this is detrimental to its acceptance. Could you specifically add why these analyses were not done initially, and why they are carried out now? Or, for instance, that the conclusion of the first paper seems at odds given that you did not take into account the Aubert-Fleischl phenomenon, and that you here attempt to address this possible gap? I reviewed an article some time ago, Makin (2018), in which the author re-analyzes some of his own data (see specifically supplementary part 2), and I think we currently miss an explanation like this.

Finally, having read myself your paper, a scientific comment as well:

We do not know how long the trajectory is occluded in your experiment, depending on the occlusion ratio, which makes your results hard to compare to the literature. Doing TTC experiments myself, with occlusion times within 0.5–3 s, I often get constant errors between -1 s and +1 s, even though I mostly use constant velocities (the errors would, I think, be of a higher magnitude with acceleration). Hence, excluding trials with errors > 0.5 s, and one participant because he has a mean error of 0.23 s, is a major bias to me. I see no reason to exclude a participant because his performance differs from the others’; this is clearly not a valid exclusion criterion. I would exclude him because the devices did not work properly, because he did not understand the task, forgot his glasses, etc. Excluding trials in which the participants answered before the occlusion is perfectly fine to me, for example. If I had to use a performance criterion, I would choose one that cannot be questioned, and this one is obviously not acceptable. In the experiments I run, I generally remove errors above 3 s. Here you are removing errors/participants that I would never dare to consider outliers. I think that by excluding these trials and participants, thereby reducing the error toward 0 ms and the variability, it is not surprising to confirm that the participants perform well and to confirm the existence of a 1g model. If it exists, this model should deal with the complete variability of the data, not only with the selected trials. I would therefore strongly suggest including these trials/participant and seeing whether this affects the outcome of the analysis.

Makin, A. D. J. (2018). The common rate control account of prediction motion. Psychonomic Bulletin & Review, 25, 1784-1797.

We would appreciate receiving your revised manuscript by May 29 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Robin Baurès, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements:

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.plosone.org/attachments/PLOSOne_formatting_sample_main_body.pdf and http://www.plosone.org/attachments/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please modify the title to ensure that it is meeting PLOS’ guidelines (https://journals.plos.org/plosone/s/submission-guidelines#loc-title). In particular, the title should be "specific, descriptive, concise, and comprehensible to readers outside the field" and in this case it is not informative and specific about your study's scope and methodology.

3. We noted in your submission details that a portion of your manuscript may have been presented or published elsewhere.

"We have published about this data before (Jörges & López-Moliner 2019). Our previous publication focussed on the eye-movement component of the project. We also presented the mean differences in timing errors and established a very simple model to account for these errors.

The present paper addresses a different research question than our previous publication: rather than comparing mean errors, we use the variability in responses to estimate how precisely humans represent the value of earth gravity.

We adapted the methods section for this manuscript to focus on the timing task. However, a strong overlap is unavoidable. Furthermore, Figure 2 from the present manuscripts was also used in the previous publication. We noted this in the manuscript."

Please clarify whether this [conference proceeding or publication] was peer-reviewed and formally published. If this work was previously peer-reviewed and published, in the cover letter please provide the reason that this work does not constitute dual publication and should be included in the current manuscript.

4. Please ensure that you refer to Figure 2 in your text as, if accepted, production will need this reference to link the reader to the figure.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The target manuscript aims to expand a Bayesian model, previously put forth by the authors, to account for timing errors of a ball undergoing parabolic trajectories either in accordance with or counter to earth's gravity. As an improvement on previous efforts, the authors explicitly include what could be expected from the Aubert-Fleischl effect. While doing so, the authors uncover a mismatch between the mean timings, the normal distribution used to model gravity as a prior, and a Bayesian explanation of the found variability. Overall, the manuscript is very interesting and well written. The assumptions for the modelizations are very well clarified and I found several of the steps particularly clever. I honestly do not think I can provide any feedback which would further improve this manuscript, apart from a few minor details:

1. Abstract, "while expands the range" - there seems to be an error in this sentence;

2. Page 3: While both the Bayes Theorem and the explanation of the Bayesian framework are clear by themselves, it would be better to link them more closely (e.g., by explicitly stating to which parameters in equation 1 does the prior [P(A)] and the Likelihood [P(B|A)/P(B)] refer to - I know this should be obvious for most readers, but it would help the readability of this section);

3. Page 8: "Figure 3: Temporal errors (...)" - this excerpt seems to be a figure caption. In fact, the first sentence reads the same as the caption for Figure 3. There is, however, an isolated sentence in this paragraph which does not seem to fit the text or the caption: "illustrated the distributions...". Please either remove these sentences or, if they are meant to convey what can be seen in the Figure, please reformulate them;

4. Page 11: "Figure 4 visualizes the mean errors" - I am not an English native speaker, but the verb "visualizes" sounds odd in this context (to a non-native speaker it sounds as if the Figure is actively visualizing the mean errors); if this turns out to be a correct usage of the term, please ignore this comment;

Reviewer #2: The authors report an experiment testing the existence of a Strong Gravity Earth Prior (SGEP) in a coincidence timing task. The authors aim at determining the mean and the standard deviation of the SGEP. The submitted paper draws on two other articles already published by the authors, one of which reports part of the collected data (eye-tracking) gained in a single experiment. The remaining data (temporal errors) are used in the present submitted paper. The first result of the experiment is that the Aubert-Fleischl phenomenon can account for the mean of the SGEP. The second result of the experiment is that the variability of the SGEP can be retrieved from participants’ button-press temporal variability.

I agree that the work is in some aspects interesting. Neither the experimental paradigm nor the measurements are very innovative, but the scientific approach sounds good. Concerning the flow of the paper, the motivation to assess the mean of the SGEP can be easily understood by the non-specialist reader. The results can be easily understood too. Indeed, there is a theoretical value of g and one can compare it to the participants' mean percept. The second goal of the experiment (“determine the standard deviation of the strong gravity prior”, p. 2) is for me much more obscure. Are there any examples of measuring the standard deviation of priors in the literature? What would be the expected value of it? This leads to a difficult interpretation of the authors' conclusions: “we are not able to fully disentangle different sources of noise in our data”.

Additionally, I do not much like splitting a single experiment into several papers, since it misleads the reader about the original experiment. Here, the reader can, for instance, not be aware of the experimental constraints (e.g., the calibration process) linked to eye-tracking, which were part of the experiment but are too briefly reported. Moreover, this looks to me like a profitability-driven rather than a scientifically motivated approach. Temporal judgment and oculomotor behavior were linked, so why separate them? Finally, some paragraphs are strictly identical between papers without any further checks. For instance, can the authors justify that “The projectors introduced a delay of 0.049259 s (SD = 0.001894 s)“ (p. 6)? I would be interested in an instrument able to measure events shorter than a ms… I noticed several other points that at least request a cross-check. This gave me a weird first impression of the work.

Experimentally, I’m very annoyed by several aspects of the experiment, even though some parts of the experimental protocol and data were already published elsewhere.

- My first concern is about the sample size. I cannot understand how one of the authors can participate in such a psychological experiment, and I strongly suggest removing their data. All participants must be naïve in most experiments in visual psychology. Additionally, I found the remaining sample of 8 participants (given that s9 was excluded from analysis, results section p. 9) too weak. The question looks to have already been raised by a reviewer of the 2019 paper, since a “justification of sample size” paragraph appears in it. This, however, does not convince me. Finally, if the data used in the present paper were gained in a previous experiment (Jörges & López-Moliner, 2019), as claimed on p. 4 (“we use the data from our previous study (Jörges & López-Moliner, 2019)”), why do the genders of the participants differ between this paper (5 females) and the previous paper (3 females)?

- My second concern is about the number of experimental conditions (section Procedure, p. 6-7). It looks like participants performed 48 training trials + 3×320 trials + 64 trials for the “1 block of 1g/-1g motion” = 1168 trials. How long did the experiment take? Do you think that such an amount of trials did not lead to a standardized perception? Did any participants report any fatigue or boredom?

- My third concern is about the repetition number. Since the paper focuses on the variability of the prior, why not test fewer trajectories but many more repetitions? You’re not dealing with 3D motion variability, just button-presses. 8 repetitions of a temporal error are too few to support the authors' test of the variability of the SGEP.

- My fourth concern addresses the data collection. First, can the authors guarantee that the mouse button can accurately measure the temporal error? In such an experimental paradigm, high temporal-accuracy software such as E-Prime is usually required, since the USB port and the internal clock of the computer can delay the monitoring of the USB mouse signal. Did the authors perform any test-bed measurements? Moreover, at no moment do the authors report the duration of the stimulus, which makes the temporal data difficult to understand. Second, in the apparatus section (p. 6), why not offer any details about the eye-tracking device, the post-processing methods, or the type of dependent variable analyzed? This is an odd oversight because some interpretations resort to eye-tracking data (cf. simulations section, p. 10: “Our ad hoc explanation of this discrepancy was that subjects were often executing a saccade when the ball returned to initial height” and “Our subjects were specifically instructed to follow the target with their eyes, and the eye-tracking data we collected that they generally did pursue the target”). The authors should at least mention their previous paper and report its results.

I would finally like the authors to carefully explain the ballistics of the ball trajectory and the related perceptual information available for perceiving g. Indeed, in a fully visible trajectory, g can be retrieved from different sources. In their experiments, the authors occlude some parts of the trajectory. Naïve readers must understand which of the usually available information remains accessible. This must also be connected to real-world illustrations that convince the reader that humans usually have to succeed in performing such tasks (e.g., in sports) and that the experimental paradigm can mirror human perceptual processes.

Minor comments :

- Participant section (p.4), why “remaining participants”?

- Apparatus section (p. 6). Is the virtual scene slaved to the participants' viewpoint? Perhaps the variability of judgments is related to a change in the perception of altitude?

- Stimuli section (p.4). Does the ball spin during its trajectory?

- Stimuli section (p. 5). Please detail why the -6.15 m depth from the observer was chosen and the perceptual consequences of such a parameter (FOV, role of stereoscopy, usual perceptual processes operating at this range of distance, etc.).

- I found the paragraph “On a theoretical level […] to recover its physical velocity from retinal motion” (p.3) very crude, without any theoretical references to any framework while it is connected to a specific field of visual psychology. Optic Flow carries a lot of visual information that alone can be used to disambiguate environment perception for ecological psychologists. Also, for cyberneticians, “precise estimate of the observed world” is achieved through internal models. Please add references and pragmatic examples that support the authors' claims.

- Result section (p.8). The reference to Figure 3 is weirdly inserted in connection to the following explanation in the text. Please correct.

- Result section (p. 8). The “following test model” looks to be identical to the first equation (p. 7). Why go back to the previous equation?

- Result section (p.9). I would prefer expressing the temporal errors as a function of the duration of the trajectory to figure out their magnitude.

- Result section. Please provide all data corresponding to the statistical tests (especially effect size).

- The sentence “However, in the case of gravity it seems that the expectation of Earth Gravity overrules all sensory information that humans collect on the law of motion of an observed object. “(p3-4) looks to be a claim without any experimental evidence.

- Figure 1 is so underexploited in the text that readers unfamiliar with Bayesian theory might be lost when interpreting it.

- The reference to “figure 1” in the Section “stimuli”, p.5 looks to be incorrect in the paragraph that refers to the virtual scene (cf. figure 2)

- A missing figure reference has to be corrected on p. 17.

- “we expect the gravity model not to be activated for inverted gravitational motion” (p. 16) seems to contradict “there is some reason to believe that the gravity prior is not completely inactive in upwards motion” (p. 17).

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nuno De Sá Teixeira

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 19;15(8):e0236732. doi: 10.1371/journal.pone.0236732.r002

Author response to Decision Letter 0


11 May 2020

Response to the editor

I am more concerned with the data reanalysis issue. I do not see a clear explanation of why the current analysis was not proposed in the first article. Is there anything new (an article that got published, a new method, etc.) that led you to make the current analysis? At the moment, one may have the feeling that the current paper is just the second part of the first paper, and I think this is detrimental to its acceptance. Can it be specifically added why these analyses were not done initially, and why they are carried out now? Or something like: the conclusion of the first paper seems at odds given that you did not take into account the Aubert-Fleischl phenomenon, and you here attempt to address this possible lack? I reviewed an article some time ago, Makin (2018), in which the author re-analyzes some of his own data (see specifically supplementary part 2), and I think we are currently missing an explanation like this.

The first paper addressed the question whether the internal representation of gravity was used to guide eye-movements and motion extrapolation (see also our preregistration at https://osf.io/8vg95/). We also introduced a very simple model, which, however, only predicted responses for trials with a long occlusion. It broke down for shorter occlusions and we suggested a possible explanation for this lacking fit for shorter occlusions (an interaction with saccades). Importantly, we never touched upon variability in timing responses, mostly because the main thrust of this project were eye-movements.

This second project uses a different approach (modelling and simulations) to answer a different research question: what are the parameters of this strong earth gravity prior? From the literature and our previous analysis, it was quite clear that the mean of this prior had to be around 9.81 m/s². However, it hadn’t been attempted to characterize the standard deviation of this prior. What we see as main contribution of this paper is thus both the method by which we obtain the standard deviation and the standard deviation itself.

It is important to add that we only included the methods and a repetition of a part of the results for the convenience of the reader. We realized that this could give the impression that we were presenting (new) experimental results. We reduced the methods to the bare minimum and removed the biggest part of the results from the resubmission, and added a note that readers that were interested in experimental design and accuracy results could consult our earlier publication. The contributions we aim to make with this manuscript are, as stated above, the refinement of the model and the simulations that enable us to extract the standard deviation of the gravity prior.

We do not know how long the trajectory gets occluded in your experiment, depending on the occlusion ratio, which makes your results hard to compare to the literature. Doing TTC experiments myself, using occlusion times within 0.5 - 3 s, I often get constant errors between -1 s and +1 s, even though I mostly use constant velocities (I think the errors would be of a higher magnitude with acceleration). Hence, if you exclude trials with errors > 0.5 s and one participant because he has a mean error of 0.23 s, this is a major bias to me. I see no reason to exclude a participant because his performance is different from the others'; this is clearly not a valid exclusion criterion. I would exclude him because the devices did not work properly, because he did not understand the task, forgot his glasses, etc. Excluding trials in which the participants answered before the occlusion is perfectly fine to me, for example. If I had to use a performance criterion, I would choose one that cannot be questioned, and this one is obviously not acceptable. In the experiments I am doing, I generally remove errors above 3 s. Here you are removing errors/participants that I would never dare consider outliers. I think that by excluding these trials and participants, reducing the error toward 0 ms and the variability as you do, it is not surprising to confirm that the participants perform well and to confirm the existence of a 1g model. If it exists, this model should deal with the complete variability of the data, not only the selected trials. I would therefore strongly suggest including these trials/this participant and seeing if this affects the outcome of the analysis.

Applying the criterion of an absolute error of < 0.5 s after removing subject 9 results in a loss of only 269 trials, that is, 2.2% of the remaining trials. In other words, this criterion was relatively liberal. Please note that our occlusion times are between 0.2 and 0.8 s, that is, quite short (we also added this information to the manuscript).
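As a quick sanity check of this percentage (assuming ten participants at 1344 trials each, a figure stated elsewhere in this response, so that removing subject 9 leaves 9 × 1344 trials):

```python
# Hypothetical counts: 10 participants ran 1344 trials each;
# removing one participant (s9) leaves 9 * 1344 trials.
remaining_trials = 9 * 1344   # 12096 trials remain
excluded = 269                # trials beyond the 0.5 s error criterion
pct = round(excluded / remaining_trials * 100, 1)
print(pct)  # → 2.2
```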

Regarding subject 9, we strongly suspect that their performance was heavily influenced by other biases (due to the VR presentation); as visible in the (adjusted) Figure 4 in the manuscript, their performance differs markedly from that of the other subjects (they are the only participant whose error ratio consistently lies above 2, whereas the other participants barely ever exceed 1.25). For the resubmission, we have nonetheless included s09 (but excluded the author, s10, as per Reviewer 2’s request) and adjusted the exclusion criterion to 2 seconds, which represents about 250% of the longest occlusion duration. This led to the removal of 216 trials, or 1.7% of all trials that remained after exclusion of the author’s data.

Response to the Reviewers

Reviewer #1:

The target manuscript aims to expand a Bayesian model, previously put forth by the authors, to account for timing errors of a ball undergoing parabolic trajectories either in accordance with or counter to earth's gravity. As an improvement on previous efforts, the authors explicitly include what could be expected from the Aubert-Fleischl effect. While doing so, the authors uncover a mismatch between the mean timings, the normal distribution used to model gravity as a prior, and a Bayesian explanation of the found variability. Overall, the manuscript is very interesting and well written. The assumptions for the modelizations are very well clarified and I found several of the steps particularly clever. I honestly do not think I can provide any feedback which would further improve this manuscript, apart from a few minor details:

1. Abstract, "while expands the range" - there seems to be an error in this sentence;

Addressed, thank you.

2. Page 3: While both the Bayes Theorem and the explanation of the Bayesian framework are clear by themselves, it would be better to link them more closely (e.g., by explicitly stating to which parameters in equation 1 does the prior [P(A)] and the Likelihood [P(B|A)/P(B)] refer to - I know this should be obvious for most readers, but it would help the readability of this section);

Thanks for raising this point. We expanded the whole section.

3. Page 8: "Figure 3: Temporal errors (...)" - this excerpt seems to be a figure caption. In fact, the first sentence reads the same as the caption for Figure 3. There is, however, an isolated sentence in this paragraph which does not seem to fit the text or the caption: "illustrated the distributions...". Please either remove these sentences or, if they are meant to convey what can be seen in the Figure, please reformulate them;

Thank you, addressed.

4. Page 11: "Figure 4 visualizes the mean errors" - I am not an English native speaker, but the verb "visualizes" sounds odd in this context (to a non-native speaker it sounds as if the Figure is actively visualizing the mean errors); if this turns out to be a correct usage of the term, please ignore this comment;

Addressed.

Reviewer #2:

The authors report an experiment testing the existence of a Strong Gravity Earth Prior (SGEP) in a coincidence timing task. The authors aim at determining the mean and the standard deviation of the SGEP. The submitted paper draws on two other articles already published by the authors, one of which reports part of the collected data (eye-tracking) gained in a single experiment.

We have published one paper based on this dataset (in Scientific Reports), not two. The first paper contains analyses of both behavioral tasks (ocular pursuit and timing responses). After publication of that paper, we became interested in characterizing the gravity prior not only in terms of its mean, but also in terms of its standard deviation, and realized that our published data could be used for this purpose. While the simulations proposed in this manuscript are of course related to the previous analyses, we believe that we go beyond (re-)reporting the behavioral data, both in terms of its research question (“can we fully characterize the gravity prior?” rather than “Do humans rely on gravity to guide eye-movements and interceptive timing?”) and its modelling/simulation-based (rather than behavioral) approach.

The remaining data (temporal errors) are used in the present submitted paper. The first result of the experiment is that the Aubert-Fleischl phenomenon can account for the mean of the SGEP. The second result of the experiment is that the variability of the SGEP can be retrieved from participants’ button-press temporal variability. I agree that the work is in some aspects interesting. Neither the experimental paradigm nor the measurements are very innovative, but the scientific approach sounds good. Concerning the flow of the paper, the motivation to assess the mean of the SGEP can be easily understood by the non-specialist reader. The results can be easily understood too. Indeed, there is a theoretical value of g and one can compare it to the participants' mean percept. The second goal of the experiment (“determine the standard deviation of the strong gravity prior”, p. 2) is for me much more obscure. Are there any examples of measuring the standard deviation of priors in the literature? What would be the expected value of it? This leads to a difficult interpretation of the authors' conclusions: “we are not able to fully disentangle different sources of noise in our data”.

There are studies on optimal integration of prior information with online evidence. These studies typically manipulate the standard deviation of the prior by manipulating the stimulus distribution it represents. We, in turn, are attempting to measure the standard deviation of a prior that is formed by our interactions with the natural environment. “Natural” priors have been studied before, e.g., a slow-motion prior, a light-from-above prior, and a bigger-is-heavier prior (Adams, Graf, & Ernst, 2004; Peters, Ma, & Shams, 2016; Stocker & Simoncelli, 2006; Thornton & Lee, 2000). However, to our knowledge, we are the first to fully characterize such a prior in terms of both its mean and its standard deviation. We expanded on our explanation of the Bayesian framework to make clearer what motivated our hypothesis about the standard deviation.

We expect the standard deviation of this prior to be very low. This is because only a very narrow prior can attract the mean of the posterior as strongly as reported in the literature, and as we found in our experiment (see the analysis in our Scientific Reports paper). If we were pressed to put a number on our prior expectation of its standard deviation, we would choose 10% of its mean (i.e., 1 m/s²). That is, gravity would be represented more precisely than linear velocities, which are represented with a standard deviation of 10-15% of the respective means. We added a sentence specifying this expectation to the introduction.

Additionally, I do not much like splitting a single experiment into several papers, since it misleads the reader about the original experiment. Here, the reader can, for instance, not be aware of the experimental constraints (e.g., the calibration process) linked to eye-tracking, which were part of the experiment but are too briefly reported. Moreover, this looks to me like a profitability-driven rather than a scientifically motivated approach. Temporal judgment and oculomotor behavior were linked, so why separate them?

We understand the reviewer's concern and want to make clear that none of the authors of this manuscript likes “slicing” or producing more than one paper from a single project. We are aware that this impression might arise. The motivation for the present simulations arose after the behavioral results were already published. The focus of this manuscript goes beyond the scope of the previous paper, and we believe that the use of published datasets is acceptable if the paper adds sufficiently novel results. In our opinion, both the modelling-based procedure we use to recover the standard deviation of the strong earth gravity prior and its tentative value satisfy this condition of novelty. Please note that we did not split ocular behavior and manual judgments into two papers: we report the results for both tasks in the previous paper. The present manuscript, in turn, focuses on the computational aspects behind these behavioral results.

Please note that we re-described the methods for the convenience of the reader. We now realize that this might lead readers to believe that these were new experimental results. This was not our intention and we apologize. We reduced the methods to the bare minimum necessary to understand the task and added a reference to the previous paper.

Finally, some paragraphs are strictly identical between papers without any further checks. For instance, can the authors justify that “The projectors introduced a delay of 0.049259 s (SD = 0.001894 s)“ (p. 6)? I would be interested in an instrument able to measure events shorter than a ms… I noticed several other points that at least request a cross-check. This gave me a weird first impression of the work.

The delay refers to the relative difference between the projector's output and the audio output, measured with an analog oscilloscope (HAMEG HM 1505) with a bandwidth of 150 MHz. The estimate of the relative delay is based on 100 samples; the value reported here corresponds to the mean across these samples. We adjusted the manuscript to reflect that this is a mean value.

Experimentally, I’m very annoyed by several aspects of the experiment, even though some parts of the experimental protocol and data were already published elsewhere.

- My first concern is about the sample size. I cannot understand how one of the authors can participate in such a psychological experiment, and I strongly suggest removing their data. All participants must be naïve in most experiments in visual psychology. Additionally, I found the remaining sample of 8 participants (given that s9 was excluded from analysis, results section p. 9) too weak. The question looks to have already been raised by a reviewer of the 2019 paper, since a “justification of sample size” paragraph appears in it. This, however, does not convince me.

While certainly not the most recommended approach, being one’s own participant is, in our understanding, not an uncommon situation in our field. It is generally assumed that there is not enough voluntary control over performance, and in fact, the author’s performance is very similar to the other participants’ performance (if somewhat less variable, possibly due to their experience with the experiment). However, we are very sympathetic to this concern and repeated the simulations without the author’s data. As per the editor’s request, we include s09, which brings us back to an n of 9. The results are very similar.

The power analysis (reported in the justification of sample size section) was conducted before data collection (please see also our preregistration under https://osf.io/8vg95/). While we agree that a higher n is always desirable, we found under quite conservative assumptions that a sample size of 10 was sufficient to detect effects. Note that the power analysis was conducted with the eye-tracking task in mind, not for the time-to-contact estimation. However, considering the effect sizes observed in the timing task, 9 participants (with 1344 trials each) would be enough to achieve a power of nearly 1. This is, of course, assuming that the inter-subject variability we find in our dataset is representative of the variability in the overall population.

Finally, if the data used in the present paper were gained in a previous experiment (Jörges & López-Moliner, 2019), as claimed on p. 4 (“we use the data from our previous study (Jörges & López-Moliner, 2019)”), why do the genders of the participants differ between this paper (5 females) and the previous paper (3 females)?

That was a mistake in the current manuscript, and we apologize for the oversight. The numbers from the published paper are correct. We corrected this mistake.

- My second concern is about the number of experimental conditions (section Procedure, p. 6-7). It looks like participants performed 48 training trials + 3×320 trials + 64 trials for the “1 block of 1g/-1g motion” = 1168 trials. How long did the experiment take? Do you think that such an amount of trials did not lead to a standardized perception? Did any participants report any fatigue or boredom?

The experiment comprised about one hour of testing overall. Calibrating the eye-tracker before the session and between the blocks added, on average, another 30 minutes. Participants did report being bored by the task at times, but dividing the experiment into four blocks meant that they could take breaks every 15 minutes.

We address the issue of a regression to the mean/central tendency, which may be a consequence of standardized perception, in the Scientific Reports paper. In brief, we present three pieces of evidence: (1) Errors from trials with different gravities but the same time-to-contact should be biased strongly towards a common mean error. This is not the case; there is a bias, but it is very small. (2) The overall pattern of responses fits our gravity model much better than a central tendency model. (3) Variability in our data is heavily correlated with flight time; if participants used the same response criterion independently of the presented trajectory, variability should be equal across conditions.
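Point (3) can be illustrated with a toy simulation (the noise levels below are assumptions chosen for illustration, not values fitted to our data): Weber-like timing noise that scales with flight time yields response standard deviations that grow with trajectory duration, whereas a fixed response criterion with constant absolute noise predicts flat variability across conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
flight_times = [0.6, 0.8, 1.0, 1.2]  # hypothetical trajectory durations (s)
n = 2000                             # simulated responses per condition

# Weber-like model: timing noise is a constant fraction (10%) of flight time
weber_sd = [np.std(rng.normal(t, 0.10 * t, n)) for t in flight_times]

# Fixed-criterion model: the same absolute noise for every trajectory
fixed_sd = [np.std(rng.normal(t, 0.08, n)) for t in flight_times]
```

Under the Weber-like model, `weber_sd` increases monotonically with flight time, while `fixed_sd` stays roughly constant; this is the qualitative signature the correlation in our data discriminates between.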

- My third concern is about the number of repetitions. Since the paper focuses on the variability of the prior, why not test fewer trajectories but many more repetitions? You're not dealing with 3D motion variability, just button presses. Eight repetitions per temporal error are too few to support the authors' test of the variability of the SGEP.

We present each combination of stimulus variables 24 times overall: eight times in each of the three 0.7g-1.3g blocks. In the -1g/1g condition, we had only two gravity values and could thus accommodate all 24 repetitions in one block. Furthermore, differences in contact timing were negligible between the two initial horizontal velocities (very likely because they barely affect flight duration). For this reason, we collapsed across them and thus obtained standard deviations across 48 trials for each condition combination (gravity × initial vertical velocity × occlusion category). We realize that this was ambiguous in the manuscript and have adjusted it accordingly.

- My fourth concern addresses the data collection. First, can the authors guarantee that the mouse button can accurately measure temporal error? In such an experimental paradigm, high-temporal-accuracy software such as E-Prime is usually required, since the USB port and the computer's internal clock can delay the registration of USB mouse signals. Did the authors perform any test-bed measurements?

Commercial stimulus presentation software like E-Prime or Presentation is not suitable for complex 3D stimulus presentation; such programs usually need to be written in C or Python (as in our case). We used the OpenGL engine in Python (pyglet), which is devoted to gaming and aims for maximum precision both in stimulus frames and input recording. We accessed the mouse time stamps directly via the ioHub Python library (which integrates with PsychoPy); it circumvents the main system event loop and uses clock_gettime(CLOCK_MONOTONIC) on Unix-like systems (such as OS X, the one we used). The precision is sub-millisecond. ioHub can be used with or without PsychoPy for real-time access to input devices. Importantly, it runs its own thread devoted to continuously sampling the input device state, independently of the video (stimulus) thread.
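To illustrate the general approach, here is a minimal sketch (not our actual recording code; the class and its polling logic are hypothetical stand-ins for what ioHub does internally) of timestamping input-state changes on a monotonic clock from a dedicated thread, decoupled from the rendering loop:

```python
import threading
import time

class InputSampler:
    """Polls an input state on its own thread, timestamping state changes
    with a monotonic clock (unaffected by system clock adjustments)."""

    def __init__(self, poll_interval=0.001):
        self.poll_interval = poll_interval
        self.events = []            # (monotonic_timestamp, state) pairs
        self._pressed = False       # stand-in for the real device state
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        prev = False
        while not self._stop.is_set():
            state = self._pressed   # in reality: read the device
            if state != prev:       # record only transitions
                self.events.append((time.monotonic(), state))
                prev = state
            time.sleep(self.poll_interval)

    def start(self):
        self._thread.start()

    def press(self):                # simulate a button press
        self._pressed = True

    def stop(self):
        self._stop.set()
        self._thread.join()

sampler = InputSampler()
sampler.start()
time.sleep(0.05)                    # stimulus thread does its own work here
sampler.press()
time.sleep(0.05)
sampler.stop()
```

Because the sampling loop never waits on the video thread, a press is timestamped within roughly one polling interval, regardless of the frame rate of the stimulus.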

Moreover, at no point do the authors report the duration of the stimulus, which makes the temporal data difficult to interpret.

This was indeed an oversight, which we corrected. Thank you!

Second, in the Apparatus section (p. 6), why not offer any details about the eye-tracking device, the post-processing methods, or the type of dependent variable analyzed? This is an odd oversight because some interpretations resort to eye-tracking data (cf. Simulations section, p. 10: "Our ad hoc explanation of this discrepancy was that subjects were often executing a saccade when the ball returned to initial height" and "Our subjects were specifically instructed to follow the target with their eyes, and the eye-tracking data we collected that they generally did pursue the target"). The authors should at least mention their previous paper and report its results.

We did not want to repeat results from our previous paper excessively (for the reasons the reviewer outlined above); we added references to the previous paper to the passages in question.

I would finally like the authors to carefully explain the ballistics of the ball trajectory and the related perceptual information available for perceiving g. Indeed, in a fully visible trajectory, g can be retrieved from different sources. In their experiments, the authors occlude parts of the trajectory. Naïve readers must understand what information remains available out of all the information usually present. This must also be connected to real-world illustrations that convince the reader that humans commonly succeed at such tasks (e.g., in sports) and that the experimental paradigm can mirror human perceptual processes.

Minor comments :

- Participant section (p.4), why “remaining participants”?

This was an error; we tested 10 participants overall. We apologize for the oversight.

- Apparatus section (p. 6). Is the virtual scene slaved to the participants' viewpoint? Perhaps the variability of judgments is related to a change in the perception of altitude?

We did not adjust the stimulus to the height of the participants. After excluding the author, the tallest participant was about 1.80 m and the shortest about 1.65 m. At a 6 m distance to the stimulus, the difference in viewing angle between the eyes of the tallest and the shortest participant is at most 2°. While this may have some influence on between-participant variability, any effect should cancel out due to the within-participant design of the experiment.
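That bound follows from simple trigonometry; a quick illustrative check (eye height is approximated by stature here, and the 6 m figure is the viewing distance stated above):

```python
import math

viewing_distance = 6.0   # m, distance from observer to stimulus
height_tallest = 1.80    # m, eye height approximated by stature
height_shortest = 1.65   # m

# Vertical visual-angle offset between the two viewpoints for a point
# at the taller observer's eye height:
delta_h = height_tallest - height_shortest
angle_deg = math.degrees(math.atan2(delta_h, viewing_distance))
print(round(angle_deg, 2))  # ≈ 1.43, i.e. within the 2° bound
```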

- Stimuli section (p.4). Does the ball spin during its trajectory?

The ball does not spin. Since we reduced the methods part greatly, we did not add this information to the manuscript.

- Stimuli section (p. 5). Please detail why the -6.15 m depth from the observer was chosen and the perceptual consequences of this parameter (FOV, role of stereoscopy, the perceptual processes usually operating at this range of distance, etc.).

We mostly cut the description of the methods to minimize overlap with what has been published before, which would include this information.

To answer the reviewer’s questions nonetheless: when looking at the center of the screen, the field of view was 50° horizontally and 62° vertically, and stimuli were presented in a range of about 40° horizontally and 20° vertically.

For the task at hand, binocular cues are mostly relevant to estimating the distance of the object. While they are quite noisy at a distance of 6.15 m, the high accuracy in responses suggests that our participants managed to recover the distance correctly, probably using prior knowledge about the target (López-Moliner & Keil, 2012), the internal model of gravity (Lacquaniti et al., 2015) and the visual environment. In our task, binocular cues are nearly constant throughout the trajectories, which is why it is very likely that participants rely mostly on monocular cues.

- I found the paragraph "On a theoretical level […] to recover its physical velocity from retinal motion" (p. 3) very crude, without any theoretical reference to a framework, although it is connected to a specific field of visual psychology. For ecological psychologists, optic flow carries a lot of visual information that can on its own disambiguate perception of the environment. Also, for cyberneticians, a "precise estimate of the observed world" is achieved through internal models. Please add references and pragmatic examples that support the authors' claims.

We added references for the concept of “Encoding/Decoding”. We furthermore made our theoretical commitments more explicit and expanded on our (previously very rudimentary) example.

- Result section (p.8). The reference to Figure 3 is weirdly inserted in connection to the following explanation in the text. Please correct.

Thank you, we corrected this mistake.

- Result section (p. 8). The "following test model" looks to be identical to the first equation (p. 7). Why go back to the previous equation?

These equations referred to different types of analysis (accuracy vs. precision). However, since we cut the accuracy analysis to minimize overlap, the first one is no longer needed.

- Result section (p. 9). I would prefer expressing the temporal errors as a function of the duration of the trajectory, to make their magnitude easier to gauge.

We transformed the error to ratios ((error + occluded time)/occluded time).
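A minimal sketch of this transformation (the function name and example values are ours, for illustration only):

```python
def error_to_ratio(error, occluded_time):
    """Map a temporal error (response time minus true end of occlusion,
    in seconds) to a ratio: 1.0 = perfectly timed, <1 = early, >1 = late."""
    return (error + occluded_time) / occluded_time

# Hypothetical values for a 500 ms occlusion:
on_time = error_to_ratio(0.0, 0.5)   # 1.0: response exactly at reappearance
early   = error_to_ratio(-0.1, 0.5)  # 0.8: responded 100 ms early
late    = error_to_ratio(0.1, 0.5)   # 1.2: responded 100 ms late
```

The advantage is that a given ratio has the same interpretation across occlusion durations, which makes errors comparable across trajectories.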

- Result section. Please provide all data corresponding to the statistical tests (especially effect size).

To minimize overlap with our previous paper, we cut the analysis regarding means. Furthermore, we transformed the results of the Bayesian analyses from log space into normal space, which should make them much more interpretable, especially together with Table 1, which lists the standard deviations for each condition, including the variability that the Bayesian mixed model assigned to each individual. (This is why the values in the table are higher than those reported for the model.)

- The sentence “However, in the case of gravity it seems that the expectation of Earth Gravity overrules all sensory information that humans collect on the law of motion of an observed object. “(p3-4) looks to be a claim without any experimental evidence.

We added references to substantiate this claim.

- Figure 1 is so underexploited in the text that readers unfamiliar with Bayesian theory might get lost interpreting it.

We expanded on the explanation of the Bayesian framework and relied more on Figure 1.

- The reference to “figure 1” in the Section “stimuli”, p.5 looks to be incorrect in the paragraph that refers to the virtual scene (cf. figure 2)

Corrected, thank you.

- A missing figure reference had to be corrected in p. 17

We are not sure which figure reference the reviewer is referring to exactly, but we carefully checked all references to make sure that all are in order.

- "we expect the gravity model not to be activated for inverted gravitational motion" (p. 16) seems to contradict "there is some reason to believe that the gravity prior is not completely inactive in upwards motion" (p. 17).

This issue required a longer explanation, and we felt that this was not the right place to elaborate. Instead, we have dedicated a paragraph of the Discussion to the issue and added a reference to that section in the place in question.

Bibliography:

Adams, W. J., Graf, E. W., & Ernst, M. O. (2004). Experience can change the “light-from-above” prior. Nature Neuroscience, 7(10), 1057–1058. https://doi.org/10.1038/nn1312

Lacquaniti, F., Bosco, G., Gravano, S., Indovina, I., La Scaleia, B., Maffei, V., & Zago, M. (2015). Gravity in the Brain as a Reference for Space and Time Perception. Multisensory Research, 28(5–6), 397–426. https://doi.org/10.1163/22134808-00002471

López-Moliner, J., & Keil, M. (2012). People Favour Imperfect Catching by Assuming a Stable World. PLoS ONE, 7(4), e35705. https://doi.org/10.1371/journal.pone.0035705

Peters, M. A. K., Ma, W. J., & Shams, L. (2016). The Size-Weight Illusion is not anti-Bayesian after all: a unifying Bayesian account. PeerJ, 4, e2124. https://doi.org/10.7717/peerj.2124

Stocker, A. A., & Simoncelli, E. P. (2006). Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9(4), 578–585. https://doi.org/10.1038/nn1669

Thornton, A., & Lee, P. (2000). Publication bias in meta-analysis: Its causes and consequences. Journal of Clinical Epidemiology, 53(2), 207–216. https://doi.org/10.1016/S0895-4356(99)00161-4

Decision Letter 1

Robin Baurès

2 Jul 2020

PONE-D-20-06855R1

Determining Mean and Standard Deviation of the Strong Gravity Prior through Simulations

PLOS ONE

Dear Dr. Joerges,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

As you will see in the reviews, I could possibly not receive more opposed reviews from the two reviewers. To help me getting a decision, I have contacted and been help by another academic editor that I wish to thank here, who has read and commented the reviews. We both agree that you have made substantial modifications that were asked by the second reviewer, in particular re-included S09 and excluded S10, which appears unnoticed by the second reviewer. It however appears that in referring to the previous article to describe the experimental design, you might have over-reacted. We feel that with the current article alone, it would be hard to a reader to understand the task. We should think that all readers might not have access to the previous publication, and hence the description of the task should stand on its own in the present manuscript.

Finally, the second reviewer still disagrees with the use of the mouse and the keyboard to collect temporal response. It seems to both me and the additional editor that with cautions on the software side, which you did, these two devices remain appropriate to collect these answers. However, I would recommend adding a few lines to present the reviewer's concern, with the citation, so that any reader can shape his own mind on the debate.

Please submit your revised manuscript by Aug 16 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Robin Baurès, Ph.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: No

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors successfully addressed all the points raised in my previous review. I do believe that the current version of the manuscript is fit for publication.

Reviewer #2: I'm disappointed in this manuscript. The introduction has been reworked, the methods removed, the first part of the results reanalyzed with a more suitable dependent variable, and the discussion has been kept almost intact. Given the abundance of errors in the previous version, I would have expected a cautious verification by the authors, but errors persist in the references of the figures (cf. lines 83, 91, 143, 145), in the typing errors (cf. line 67). I found it very disturbing to hide all the information about the methods (e.g., participants, task), especially because they hide important experimental questions raised in the previous expertise. This hides the nature of the participants. In response to the authors, I urge them to distance themselves from colleagues who commonly integrate themselves as participants in their psychological experiences. It is absolutely necessary to avoid the authors being part of the population in psychological experiments in order to avoid introducing certain voluntary and involuntary biases in the behaviour of the experimenters. In the revised manuscript, it is not clear that the experimenters are part of the data analysed. In addition, the deletion of the method section hides the instruments used to monitor participant response. I continue to argue that the mouse and keyboard are not good tools for recording temporal responses despite the argument provided. Please read Plant & Turner, Behavior Research Methods, 2009. This makes the publication methodologically unacceptable. Finally, this deletion has led the authors to merge information regarding the exclusion of trials in the results section. I don't understand why the inclusion/exclusion of about 1% of the trials seems to be a necessity for the authors. What is behind all these efforts?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nuno Alexandre De Sá Teixeira

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 19;15(8):e0236732. doi: 10.1371/journal.pone.0236732.r004

Author response to Decision Letter 1


10 Jul 2020

Editor:

"As you will see in the reviews, I could possibly not receive more opposed reviews from the two reviewers. To help me getting a decision, I have contacted and been help by another academic editor that I wish to thank here, who has read and commented the reviews. We both agree that you have made substantial modifications that were asked by the second reviewer, in particular re-included S09 and excluded S10, which appears unnoticed by the second reviewer. It however appears that in referring to the previous article to describe the experimental design, you might have over-reacted. We feel that with the current article alone, it would be hard to a reader to understand the task. We should think that all readers might not have access to the previous publication, and hence the description of the task should stand on its own in the present manuscript."

-- We have extended the Methods section to reflect all relevant information.

"Finally, the second reviewer still disagrees with the use of the mouse and the keyboard to collect temporal response. It seems to both me and the additional editor that with cautions on the software side, which you did, these two devices remain appropriate to collect these answers. However, I would recommend adding a few lines to present the reviewer's concern, with the citation, so that any reader can shape his own mind on the debate."

-- We have included Reviewer 2’s concern in the manuscript and have explained our strategy to mitigate delays and variability added by the input device.

Reviewer 2:

"Given the abundance of errors in the previous version, I would have expected a cautious verification by the authors, but errors persist in the references of the figures (cf. lines 83, 91, 143, 145), in the typing errors (cf. line 67)."

-- We apologize. Some of these errors were introduced into the manuscript in the process of formatting the document for resubmission. We addressed these errors in this version.

"I found it very disturbing to hide all the information about the methods (e.g., participants, task), especially because they hide important experimental questions raised in the previous expertise. This hides the nature of the participants."

-- We had reduced the Methods section to minimize overlap with the published paper. As per the reviewer’s and the editor’s request, we have re-included this information in this version of the manuscript.

"In response to the authors, I urge them to distance themselves from colleagues who commonly integrate themselves as participants in their psychological experiences. It is absolutely necessary to avoid the authors being part of the population in psychological experiments in order to avoid introducing certain voluntary and involuntary biases in the behaviour of the experimenters. In the revised manuscript, it is not clear that the experimenters are part of the data analysed."

-- We thank the reviewer for their advice. As stated in our previous response, we fully agree with this concern and have removed the author from our analyses. The overall number of included participants remains the same (n = 9) because, based on the editor's feedback, we re-included one participant we had previously excluded.

"In addition, the deletion of the method section hides the instruments used to monitor participant response. I continue to argue that the mouse and keyboard are not good tools for recording temporal responses despite the argument provided. Please read Plant & Turner, Behavior Research Methods, 2009. This makes the publication methodologically unacceptable."

-- As per the editor’s suggestion, we are making possible constraints as well as our mitigation efforts much clearer in the resubmitted version of the manuscript.

"Finally, this deletion has led the authors to merge information regarding the exclusion of trials in the results section. I don't understand why the inclusion/exclusion of about 1% of the trials seems to be a necessity for the authors. What is behind all these efforts?"

-- These trials include, for example, trials where the program froze, participants did not pay attention or other situations that led to extremely large recorded errors. These excluded trials do not reflect participant performance in any meaningful way.

Attachment

Submitted filename: Response to Reviewers.pdf

Decision Letter 2

Robin Baurès

14 Jul 2020

Determining Mean and Standard Deviation of the Strong Gravity Prior through Simulations

PONE-D-20-06855R2

Dear Dr. Joerges,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Robin Baurès, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Robin Baurès

23 Jul 2020

PONE-D-20-06855R2

Determining Mean and Standard Deviation of the Strong Gravity Prior through Simulations

Dear Dr. Jörges:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Robin Baurès

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: Response to Reviewers.pdf

    Data Availability Statement

    The data are available on GitHub (github.com/b-jorges/SD-of-Gravity-Prior).

