Author manuscript; available in PMC: 2012 Oct 1.
Published in final edited form as: Atten Percept Psychophys. 2011 Oct;73(7):2205–2217. doi: 10.3758/s13414-011-0170-2

The underestimation of egocentric distance: evidence from frontal matching tasks

Zhi Li, John Phillips, and Frank H. Durgin
PMCID: PMC3205207  NIHMSID: NIHMS316169  PMID: 21735313

Abstract

There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (a gain of 1.5). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.

Keywords: Distance perception, Height perception, Gaze declination, Perceptual scale expansion, Virtual reality


There is some controversy over how to construe the perception of egocentric distance along the ground under full cue conditions (such as in a grassy field). On the one hand, magnitude estimation studies suggest that egocentric distance is linearly compressed. That is, egocentric distances are normally underestimated by explicit verbal measures, but those measures can typically be fit with a power function with an exponent very close to 1.0 (e.g., Da Silva, 1985; R. Teghtsoonian & Teghtsoonian, 1970). On the other hand, two other distinctive patterns have emerged from the seminal work of Loomis and colleagues (Loomis, Da Silva, Fujita, & Fukusima, 1992; Loomis & Philbeck, 1999; see also Ooi & He, 2007), using nonverbal measures. When participants were asked to walk to a previewed target without visual feedback, Loomis et al. (1992; see also Thomson, 1983) found that walking was fairly accurate and scaled linearly with distance, at least out to 20 m. But when Loomis et al. (1992) had participants adjust exocentric extents along the ground to match frontal extents arranged to form an L-shape, they found that the adjusted depth intervals tended to become progressively larger as egocentric distance increased (see also Gilinsky, 1951).

One puzzle about these dichotomous findings is how it could be that the egocentric distances to the two ends of an exocentric extent could be judged linearly, while the extent itself was not. A plausible explanation was proposed by Loomis, Philbeck, and Zahorik (2002), hypothesizing a dissociation in visual representation of location and shape (see Andre & Rogers, 2006, for a related account). We have proposed an alternative mechanistic account for this dissociation that does not depend on dissociated visual representations. Specifically, the discrepancy may arise because observers use different visual cues for the different tasks (Durgin & Li, in press; Philbeck, 2000). For judging egocentric distance on the ground, participants could have mainly used gaze declination or angular declination from horizontal (e.g., Ooi, Wu, & He, 2001). In contrast, for the L-shape ratio task, participants may have used visual information relevant to recovering optical slant (surface orientation relative to the direction of gaze; Gibson & Cornsweet, 1952; Sedgwick, 1986).

The two angular variables in our hypothesis (i.e., gaze declination and optical slant) have each proven to be important to the perception of spatial extent in different circumstances. A variety of studies have shown that gaze declination (e.g., “slope of regard”; Wallach & O’Leary, 1982), angular declination (Ooi et al., 2001; Philbeck & Loomis, 1997), or angular declination relative to the visible horizon (Messing & Durgin, 2005) is an important source of information for egocentric distance perception. In contrast, direct measures of perceived optical slant provide a good fit to exocentric distance-matching data (Li & Durgin, 2010). An important distinction between these two angular variables is that whereas recovering optical slant utilizes binocular cues to depth (Li & Durgin, 2010; Norman, Crabtree, Bartholomew, & Ferrell, 2009; Norman, Todd, & Phillips, 1995), gaze declination is a monocular cue. Consistent with our interpretation, Loomis et al. (2002) found that exocentric aspect ratio (shape) estimates differed as a function of whether viewing was binocular or monocular, whereas egocentric distance tasks, such as visually directed walking, were fairly robust to whether viewing was monocular or binocular.

Li and Durgin (2009) and Durgin and Li (in press) found that the perceived angle of gaze declination is exaggerated with a gain of about 1.5. That is, participants looking downward with a 30° declination of gaze will normally report a perceived declination of about 45°. Moreover, when asked to indicate the bisection point between horizontal and vertical gaze, they also indicate a direction that is actually about 30° from horizontal (Durgin & Li, in press). If observers use gaze declination to judge egocentric distance and the geometry of their experience of egocentric distance is roughly consistent with their angular estimates, the misperception of perceived declination of gaze predicts a compression of perceived egocentric distance, as is shown schematically in Fig. 1. That compression should be approximately linear (i.e., have an exponent of about 1.0), with a magnitude of about 0.7, as will be discussed below. Such a magnitude of underestimation of egocentric distance is surprisingly consistent with the findings of many magnitude estimation studies (e.g., Foley, Ribeiro-Filho, & Da Silva, 2004; see Loomis & Philbeck, 2008, for a recent summary), and our model provides a mechanistic basis for the underestimation (caused by the measured bias in perceived gaze or angular declination). We note that the relevant range of gaze declinations is probably less than 50° and that farther gaze declinations result in a nearly frontal view of the ground near one’s feet, for which angular estimates are less important to the assessment of distance.
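To make the predicted compression concrete, the geometry of Fig. 1 can be worked through numerically. The short sketch below is ours, not the authors' code; it assumes a fixed eye height of 1.6 m (purely illustrative) and applies the 1.5 gain to the declination angle. The implied ratio of perceived to actual distance sits near 0.62–0.67 over a 4- to 25-m range, in line with the compression of roughly 0.7 described above.

```python
import math

def perceived_distance(actual_distance, eye_height=1.6, gain=1.5):
    """Distance implied by the angular scale-expansion geometry of Fig. 1.

    The actual gaze declination to a ground target at distance d is
    gamma = atan(eye_height / d); the account assumes the perceived
    declination is gain * gamma and that distance is read off the same
    ground geometry, so d' = eye_height / tan(gain * gamma).
    The 1.6-m eye height is an illustrative assumption, not a value
    taken from the paper.
    """
    gamma = math.atan2(eye_height, actual_distance)
    return eye_height / math.tan(gain * gamma)

for d in (4, 7, 10, 16, 25):
    d_perceived = perceived_distance(d)
    print(f"{d:5.1f} m -> perceived {d_perceived:5.2f} m "
          f"(ratio {d_perceived / d:.2f})")
```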

Fig. 1.

A schematic representation of the theory that egocentric distance is misperceived as a consequence of angular scale expansion (Durgin & Li, in press). Over a rather large range of declination (± ~50°), the perceived gaze declination angle, γ′, is 1.5× the actual gaze declination angle, γ, with a resulting compression in perceived distance, D′, along the ground by about 0.7 of the true distance, D

Thus, whereas egocentric distance underestimation in studies using explicit magnitude estimation of distance has sometimes been dismissed as an artifact of judgmental scaling, it is intriguing that the bias found in magnitude estimation of perceived gaze declination is quantitatively consistent with other evidence that egocentric distance is perceived as linearly compressed. This evidence contrasts with the view that egocentric distance is perceived linearly and accurately (e.g., Loomis et al., 1992). It also contrasts with the view that perceived egocentric distance is nonlinearly compressed (e.g., Gilinsky, 1951).

Experiment 1: the egocentric L task

Most of the data that suggest an underestimation of perceived distance (e.g., Foley et al., 2004) or overestimation in perceived gaze declination (e.g., Durgin & Li, in press; Li & Durgin, 2009) are based on magnitude estimation (verbal report). Because verbal report of egocentric distance might be affected by cognitive biases, more solid evidence is called for to support our interpretation of the dichotomous findings in past studies of distance perception. In the present study, we developed a nonverbal method for measuring perceived egocentric distance. In particular, we preferred a method that was extremely similar to the method used by others to measure perceived exocentric extents (i.e., an extent-matching task). Loomis et al. (1992), for example, measured exocentric extents using an L-shaped arrangement of rods, asking participants to assess the ratios between frontal extents on the ground and extents in depth along the ground. We adapted this procedure to measure perceived egocentric distance by having participants compare frontal extents with egocentric extents. In particular, we asked participants to adjust their own position until they felt that their egocentric distance from a frontal extent was identical to the length of that frontal extent.

Foley (1972) used a related task and measured strong distance compression, but it was in a reduced-cue (completely dark) environment. Higashiyama (1996) developed an egocentric distance task similar to ours in an outdoor environment and found evidence of linear underestimation of egocentric distance, as we would expect. However, the urban setting used by Higashiyama (his frontal extents were marked on the face of a building) differed substantially from the grassy fields employed by Loomis et al. (1992), and the underestimation he reported was not statistically reliable. We therefore conducted our egocentric L-shape task on a large, level grass field, to better match the conditions normally used to investigate egocentric distance perception (see also Norman, Crabtree, Clayton, & Norman, 2005). We will show that for an egocentric L-shape task, participants seem to underestimate egocentric distance in a manner that is quantitatively consistent with the overestimation of gaze declination measured by Durgin and Li (in press), but not with the nonlinear compression normally found using exocentric L-shape tasks.

Method

Task

We had people walk forward or backward in an open field until they felt that they stood at the same (radial) distance from one experimenter as the (frontally observed) distance between that experimenter and a second experimenter. The basic configuration is shown in Fig. 2. If egocentric distance is perceptually compressed, as proposed in our hypothesis, observers should position themselves much too far away because of the foreshortening of the perceived egocentric distance. Moreover, our hypothesis predicted that perceived egocentric distance measured in this way would vary approximately linearly with distance; the ratio of underestimation should be fairly invariant across scale.

Fig. 2.

The egocentric L-shape task, viewed from above. The participant (bottom) moves forward or backward until he or she feels that he or she is the same (egocentric) distance from the main experimenter (top left) as the distance between the two experimenters (top left and right)

Participants

Twenty-four undergraduates (16 of them female) participated voluntarily. (All but 1 participant was naïve as to the hypotheses.)

Setting

The experiment took place on level playing fields on the Swarthmore College campus. The experimental layout was moved at intervals in order to minimize wear on the grass and to vary the background view, which included distant buildings or fences.

Design

A linear range of eight frontal distances was tested (4–25 m, by increments of 3 m). Paired pseudorandom orders were created so that although each participant matched each distance only once, across participants, each distance was approached half the time from a farther distance and half the time from a nearer distance. To make this possible, two extreme distance trials (2.5 and 30 m) were embedded in the order to simply cause the participants to move closer to or farther from the actual extremes of the design. A single initial practice trial at the middle distance of 14.5 m was used to familiarize the participants with the procedure. Half the participants approached this middle distance from a near position, and half from a farther position to which they had been led for initial instruction. A further constraint on the design was that consecutive experimental trials were allowed to differ by as little as a single interval only once per participant.

Procedure

On each trial, the participant was required to turn his or her back while the mobile experimenter positioned himself at the predetermined target distance from the stationary experimenter. When signaled, the participant turned and walked toward or away from the stationary experimenter until he or she felt that the two legs of the L were the same. (Although a strategy of seeking to place the mobile experimenter at a 45° angle from the direction to the stationary experimenter ought to suffice, those few who reported attempting such a strategy during a postexperiment interview responded no differently than those who did not). Participants were not hurried and could adjust back and forth as much as they wished. None of the participants adopted the strategy of walking up to the center so as to observe the frontal distance as an egocentric distance, although such a strategy was not explicitly prevented.

Measurement

A laser range finder mounted on a tripod at the central position was used to measure the distances to the participant and the mobile experimenter once they were set. (Participants carried a lightweight foam board with them that they held over their face while the laser was in operation.) The measurement was taken at waist level to the nearest centimeter.

Postexperiment interview

Participants were interviewed orally at the conclusion of the experiment about their strategies and beliefs (see Durgin, Baird, Greenburg, Russell, Shaughnessy, & Waymouth, 2009). Only a few reported using unusual strategies, but their data did not differ from those of the other participants.

Results and discussion

If there is perceptual underestimation of egocentric distance, participants should place themselves too far from the center of the L in order to compensate for the perceptual underestimation. Figure 3 shows that the average egocentric settings were much larger than the frontal intervals, consistent with the underestimation of egocentric distance. A power function fit had an exponent of nearly 1 (0.96) and a constant multiplier of 1.43, as expected. Thus, a nonverbal egocentric L task reproduces the common finding from magnitude estimation studies that perceived egocentric distance is compressed, but not compressive.
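Power functions of the form match = a·F^b, such as the fit reported above, are conveniently estimated by linear regression in log–log coordinates. The sketch below is illustrative only: the match values are synthetic (generated to resemble the reported multiplier of about 1.4 and an exponent near 1), not the data of Fig. 3.

```python
import numpy as np

# Frontal extents tested in Experiment 1 (4-25 m in 3-m steps).
frontal = np.arange(4.0, 26.0, 3.0)

# Synthetic group-mean egocentric settings, for illustration only
# (roughly 1.4x the frontal extent, as the compression account predicts).
matches = 1.4 * frontal ** 0.97

# Fit match = a * frontal^b by linear regression on the logs.
b, log_a = np.polyfit(np.log(frontal), np.log(matches), 1)
print(f"exponent b = {b:.2f}, multiplier a = {np.exp(log_a):.2f}")
```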

Fig. 3.

Egocentric matches to frontal exocentric extents and a power function fit of the data. Error bars represent standard errors of the means

For comparison with typical findings for exocentric extent-matching tasks, such as Gilinsky’s (1951), we show in Fig. 4 the predictions of her model using the 28.5-m constant (i.e., A = 28.5) derived from her data. These predictions approximate a large class of exocentric L-shape tasks. We also show a model based on typical verbal reports (summarized by Loomis & Philbeck, 2008). The data are now plotted on reversed axes so that the imputed perceived distance is along the y-axis and actual egocentric distance is along the x-axis. It is clear that the shape of the data predicted by models based on exocentric comparisons, such as Gilinsky’s hyperbolic model, does not fit the egocentric matching data. The data are fairly consistent with prior verbal report data, however. A power function fit has an exponent of essentially 1 (1.04) and a multiplier of 0.69.

The transformation of the egocentric L data in Fig. 4 can be treated as a measure of perceived distance only if we assume size constancy for frontal intervals. However, underconstancy is often found for far distances. Foley et al. (2004) have published the most comprehensive verbal estimation data for egocentric distance and frontal intervals at a similar range of egocentric distances. Using Foley et al.’s mean data for intervals less than 5° from frontal (i.e., within 0.5% of frontal length), we computed a correction factor for our match data to take into account the underestimation of frontal intervals at a distance. The corrected estimates of perceived egocentric distance are plotted in Fig. 5, along with Foley et al.’s egocentric distance estimates and our gaze declination model (Durgin & Li, in press). The gaze model shown here has no free parameters. If the gain is altered slightly (e.g., 1.43 instead of 1.5) or a tiny error in the perceived horizontal is introduced (i.e., 0.5° downward, O’Shea & Ross, 2007), the model nearly perfectly coincides with the corrected match data.
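The logic of that correction can be sketched in a few lines. If a frontal extent F viewed at the match distance is perceived as only c·F of its true length (c < 1, i.e., underconstancy), then at the match point the perceived egocentric distance equals c·F rather than F, and the implied compression of the matched distance M is c·F/M. The constancy ratios and match values below are illustrative placeholders, not the values derived from Foley et al. (2004).

```python
# (frontal extent F in m, observed match M in m, assumed constancy ratio c)
# All three columns are hypothetical, for illustration only.
examples = [
    (4.0, 5.7, 0.97),
    (13.0, 18.6, 0.92),
    (25.0, 35.8, 0.87),
]

for f, m, c in examples:
    implied = c * f   # perceived egocentric distance at the match point
    print(f"F = {f:4.1f} m, match M = {m:4.1f} m, c = {c:.2f} -> "
          f"implied perceived distance {implied:5.2f} m ({implied / m:.2f} of M)")
```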

Fig. 4.

Egocentric distance perception inferred from present data and compared with average verbal data from eight studies (Loomis & Philbeck, 2008) and with predictions based on Gilinsky’s (1951) hyperbolic space model using her estimated constant of 28.5 m

Fig. 5.

Frontal match data, corrected for likely underconstancy of perceived frontal extents, are plotted along with egocentric verbal report data based on a similar range of egocentric distances (from Foley et al. 2004). The predictions of a simple gaze declination model are shown in blue (Durgin & Li, in press)

Experiment 2: the vertical egocentric L task

A limitation of the egocentric L task for purposes of modeling is that we have to base our comparisons on a frontal interval whose scale is unknown. One way to circumvent this problem is to use a vertical frontal interval instead of a horizontal one. Our angular scale-expansion theory of egocentric distance underestimation predicts that vertical extents should be exaggerated relative to an egocentric extent by a specific amount. Indeed, it is well known that vertical extents are exaggerated, and Higashiyama and Ueyama (1988) collected similar data previously, but without considering an angular interpretation of their data.

On the basis of the geometry shown in Fig. 6 (left panel), and assuming an angular scale expansion of 1.5 (Durgin & Li, in press), we can predict that egocentric matches to vertical extents will produce the parameter-free function shown in Fig. 6 (right panel). The derivation of the model is shown in the Appendix. To provide an initial test of the model, we asked participants to do a vertical version of the egocentric L task.

Fig. 6.

Geometry (left) and model predictions (right) for egocentric distance (D) matching to a vertical extent (H). The solid line shows the prediction with the parameter-free model with a perceptual gain of 1.5 applied to the angular variables γ and θ. For comparison, model predictions with gains of 1.2 and 1.0 are shown

Method

After finishing Experiment 1, the 23 participants who were naïve as to the hypothesis also performed a pole-height-matching task. We had preselected four poles near the open field where Experiment 1 was conducted as the targets. Two of the poles were playing-field lights consisting of a long straight pole and a large lamp frame. For the taller lamp, the pole below the lamp frame was used as the target extent (22.5 m). For the other lamp, a crossbar 7.4 m from the ground was used to mark the intended height. The third target was a flagpole, which was 12.7 m tall. The fourth target was a fence post, which was 3.75 m tall. Each participant was led to the four targets in a randomized sequence. The experimenter indicated each target while the participant stood at a distance of about 2 to 3 times the target height. The participant was then asked to adjust his or her distance to the pole until that distance matched the height of the target. The participant’s physical distance from the pole was marked and measured later.

Results and discussion

The mean egocentric distance matches to vertical frontal extents are shown in Fig. 7. For all but one of the poles, the matched egocentric distances were quite close to the model prediction. Because participants approached all poles from a far distance, the procedure used may have tended to elevate estimates. However, we note that the data collected by Higashiyama and Ueyama (1988, Experiments 1 and 3), using similar procedures but a more heterogeneous set of stimuli (e.g., buildings, trees, and phone booths) and with a control for starting position, also fit our angular scale expansion model. Their data are also replotted in Fig. 7.

Fig. 7.

Results of the vertical egocentric L task in Experiment 2 (X ± SE) conducted with poles in an open field. The solid line shows the prediction with the parameter-free model with a perceptual angular gain of 1.5 (see the Appendix). The data of Higashiyama and Ueyama (1988) using a similar method are replotted for comparison and also suggest an excellent quantitative fit to the model

Experiment 3: replication in a virtual environment

Whereas outdoor experiments provide a measure of ecological validity, the methodological control afforded by virtual environments provides an important additional tool for studying space perception. Although distance perception in virtual reality is normally found to be compressed when assessed by walking (e.g., Loomis & Knapp, 2003), the use of relative-distance strategies has proven effective in studying space perception in the past (e.g., Durgin & Li, in press; Li & Durgin, 2009, 2010; Messing & Durgin, 2005). One of the conflicting depth cues in most binocular head-mounted displays (HMDs) is that the frame of the display is simulated as being binocularly fused at optical infinity even though it (necessarily) occludes near objects that are rendered in the scene. One successful strategy for avoiding this problem is to render a false occluding frame in near space (Durgin & Li, in press; Li & Durgin, 2010). Another possibility is to use a panoramic display with overlapping fields of view for which the screen boundaries are monocular. In the present experiment, we tested a panoramic display to evaluate whether it could be used to reproduce the pattern of results found in Experiments 1 and 2.

Method

Participants

A total of 42 Swarthmore College undergraduates (19 of them female) participated in Experiment 3 either for $5 or to fulfill a course requirement. All had normal or corrected-to-normal vision. None had participated in Experiments 1 or 2. Twenty-one participated in a virtual version of Experiment 1 (horizontal egocentric L task), but one had to be excluded for misunderstanding the directions; 21 participated in a virtual version of Experiment 2 (vertical egocentric L task).

Horizontal egocentric L task

In this version of the task, participants stood in a virtual environment that correctly specified their eye height and the visual horizon and simulated a grassy field with two people (avatars) in it, as shown in Fig. 8 (upper panel). Participants translated through the world by using a toggle button to move toward or away from the central avatar. Their instruction was to set themselves the same distance from the female avatar as the male avatar was from the female avatar. Participants were allowed to look around but were warned that they would not be able to see their own body in the virtual environment.

Fig. 8.

The virtual environments for the horizontal (upper panel) and vertical (lower panel) egocentric L tasks in Experiment 3. The full (panoramic) field of view is not depicted. Note that the view shown in the upper panel is downward toward the avatar’s feet, whereas the lower panel shows a view looking straight ahead at eye level

Eight frontal distances (from 4 to 25 m at 3-m intervals, as in Experiment 1) were tested twice each. The initial distance from the participant to the female avatar was randomized, so that on half the trials (once for each frontal extent), it was longer than the distance between the two avatars, and on the other half, it was shorter. The precision of virtual environments might have made alternative geometrical solutions more salient (i.e., setting the visual angle between the two avatars to 45°, as implied by a right isosceles triangle). We therefore added 9 filler trials in order to discourage the participant from adopting an angular strategy. The filler trials were interleaved with the 16 experimental trials (with the constraint that the first 2 trials were always filler trials). In the filler trials, the angle formed by the two avatars and the participant was not a right angle but was increased or decreased by a random amount between 11.3° and 31°. Between trials, the screen was blanked for a couple of seconds.
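For readers who wish to reproduce this kind of design, the sketch below shows one way a trial list with this structure could be generated. It is our own illustration: the randomization code, the filler distances, and the bookkeeping details are assumptions, not the authors' software.

```python
import random

frontal_distances = list(range(4, 26, 3))   # 4-25 m in 3-m steps

# 16 experimental trials: each frontal extent once from a nearer and once
# from a farther starting position, with the avatar/participant angle at 90 deg.
experimental = [
    {"kind": "experimental", "frontal_m": d, "start": start, "angle_deg": 90.0}
    for d in frontal_distances
    for start in ("near", "far")
]

# 9 filler trials: the angle deviates from a right angle by 11.3-31 deg,
# discouraging a "set the visual angle to 45 deg" strategy.
fillers = [
    {
        "kind": "filler",
        "frontal_m": random.choice(frontal_distances),   # assumed, not specified in the paper
        "start": random.choice(("near", "far")),
        "angle_deg": 90.0 + random.choice((-1, 1)) * random.uniform(11.3, 31.0),
    }
    for _ in range(9)
]

# Interleave, with the constraint that the first two trials are fillers.
trials = experimental + fillers[2:]
random.shuffle(trials)
trials = fillers[:2] + trials
```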

Vertical egocentric L task

In this version of the vertical egocentric L task, participants stood in the same virtual environment but saw a silver pole (20-cm diameter). A depiction of the scene from the participant’s point of view is shown in Fig. 8 (lower panel). Their instruction was to set their distance to the pole to match the height of the pole. Because of some complaints of motion sickness in the horizontal version, we had participants move the pole, rather than themselves, in the vertical matching task. Again, eight frontal extents (from 3 to 24 m in 3-m intervals) were tested twice each. The initial distance from the participant to the pole was randomized, so that on half the trials (once for each extent), it was longer than the height of the pole, and on the other half, it was shorter. The alternative geometrical strategy (set to 45°) was not a concern, because the viewpoint was not at an apex of the relevant triangle, so no filler trials were used.

Displays

A realistic grass texture (a photograph) was used to tile a ground plane of about 150 × 150 m. A random noise signal was superimposed to provide a nonrepeating low-spatial-frequency modulation of luminance. To simulate the horizon, the grass field was surrounded by a green cylinder with a diameter of 150 m. The upper edge of the cylinder was always held at the participant’s eye height, which was monitored by the optical tracker. The color of the cylinder was chosen so that it blended seamlessly with the distant grass field. A blue sky with clouds was depicted in the far distance. The two realistic avatars used for the horizontal egocentric L task, selected from the Vizard toolbox, continuously adjusted their posture so as to appear alive.

Apparatus

An xSight HMD (Sensics, Inc.) was used in our VR system. This HMD has a factory-calibrated horizontal field of view of 126° (90° per eye, with 54° binocular overlap) and a vertical field of view of 44°. The large field of view is achieved by combining six small screens into a single image for each eye. The optics of the xSight are free of the pincushion distortion present in most immersive HMDs. A Hiball 3000 optical tracking system was used to update the position and orientation of the headset at the 60-Hz display frame rate. The virtual scenes were rendered by a two-computer cluster using Vizard (V.4 beta 2, WorldViz, LLC). A radio mouse was used by participants to adjust either their position (horizontal egocentric L task) or the position of a virtual cylinder (vertical egocentric L task).

Results

The mean egocentric distance matches to horizontal frontal extents (between avatars) and vertical frontal extents (of poles) are shown in Fig. 9. These data suggest that using a tightly controlled panoramic virtual environment can produce the same patterns of egocentric matching data as we observed in the real world.

Fig. 9.

Results of the horizontal (left) and vertical (right) egocentric L tasks in Experiment 3. The solid line in the left panel represents the data from Experiment 1. Matches in the virtual environment were essentially identical to those outdoors. The solid line in the right panel represents the parameter-free model that fits the real-world data. Again, the data from the virtual environment closely match the model as well as the real-world data. Standard error bars are shown

Experiment 4: egocentric versus exocentric L (in a virtual environment)

The logic of our argument so far is that egocentric distance perception is linearly compressed due to the perceptual expansion of angular declination information, but that exocentric extents (i.e., along the ground in the sagittal plane) are increasingly compressive because (1) they are measured by doing inverse geometry on estimates of optical slant of the extent and (2) estimates of optical slant become increasingly distorted with distance (Li & Durgin, 2010). However, most exocentric tasks previously reported have used relatively small exocentric distances (a notable exception is the report of Norman et al., 2005), and so a more direct comparison would be useful using similar ranges of extents.

Having established that egocentric distance tasks in our virtual environment replicated the finding we and others had observed outdoors, we sought to directly compare egocentric and exocentric extents of the same magnitudes in a virtual environment. A clear advantage of using a virtual environment is that precise control over multiple avatars allows us to measure both kinds of extent perception, using the same task and the same environment.

Method

Participants

A total of 42 Swarthmore College undergraduates (15 of them female) participated in Experiment 4, either for $5 or to fulfill a course requirement. All had normal or corrected-to-normal vision. Twenty-one participated in a virtual horizontal egocentric L task, and 21 participated in a corresponding exocentric L task. None had participated in the previous experiments.

Apparatus and displays

The same hardware, software, and virtual environment displays were used as in Experiment 3, except for two changes. First, in the exocentric L task, a second male avatar was presented at twice the egocentric distance of the female avatar to form an L-shape among the three avatars. A view of the display from the observer’s point of view is shown in Fig. 10 (top). Second, in both tasks, the participant now manipulated the laterally displaced near male avatar in order to match either the egocentric distance to the female avatar (egocentric L task) or the exocentric distance between the female avatar and the far male avatar (exocentric L task). The motion of the avatar simulated walking, although the rate of displacement was continuous and could be halted midstride by releasing the movement button on the controller.

Fig. 10.

Depiction of the virtual environment (top) used for the exocentric L task in Experiment 4. Schematic representations of the egocentric (left) and exocentric (right) L tasks are shown at the bottom. Participants (represented at the bottom of the schematic diagrams) adjusted the lateral position of the rightmost avatar until its distance from the central (female) avatar was the same as the extent in depth between the central avatar and the participant (egocentric L) or between the central avatar and the far avatar (exocentric L). The far avatar in the exocentric L task was always offset 0.5 m to one side or the other, as shown, so as to be clearly visible

Design and procedure

The design of the egocentric L task was modified to eliminate filler trials (because the avatar now walked a frontal path) and to include nine egocentric distances (3–27 m in intervals of 3 m). Each distance was tested twice in random order, and the adjustable frontal extent was randomly larger or smaller than each egocentric extent. The same design was employed in the exocentric task. Unbeknownst to the participants, the female avatar was always at the true bisection point between the participant and the far male avatar. Thus, for example, the 15-m exocentric extent started 15 m away and ended 30 m away (see Fig. 10). Each participant completed 18 matching trials in the condition to which he or she was assigned.

Results

Figure 11 shows the mean frontal matches to egocentric extents and exocentric extents of the same physical magnitudes. As was expected, power functions fit to the two sets of data have very different exponents. For egocentric extents, the frontal matches have an exponent of essentially 1, which is consistent with a linear compression of egocentric distance perception. For equally large exocentric extents, on the other hand, frontal matches have an exponent of about 0.67, reflecting the increasing compression of exocentric extents with distance. Because the frontal extents for the two functions were presented at the same simulated distances, the difference between the two functions cannot be attributed to scaling errors in frontal extents.

Fig. 11.

Results of Experiment 4. Frontal extents matched to egocentric distances (open circles) can be fit with a power function with an exponent of 0.97 (essentially 1). Frontal extents matched to exocentric extents show that exocentric extents are compressive, with a power function exponent of 0.67. Standard errors of the means are shown

Were the exocentric portions of the egocentric extents judged any differently than the exocentric extents? To derive estimates of the exocentric half-portions of egocentric estimates, we derived “frontal matches” to the 6-, 9-, and 12-m exocentric portions of the 12-, 18-, and 24-m egocentric extents, respectively. We did this by subtracting the frontal matches to the egocentric half distances (i.e., 6, 9, and 12 m) from the frontal matches to each of the larger egocentric distances (i.e., 12, 18, and 24 m). Paired t-tests showed that the means for these derived exocentric portions (5.0, 8.0, and 9.8 m) did not differ systematically from the frontal matches to the near egocentric distances (5.1, 7.8, and 10.1 m), p > .20. Between-group tests, however, showed that in each case, the derived matches (to exocentric portions of egocentric extents) were larger than the actual frontal matches to the corresponding exocentric extents (4.1, 5.3, and 6.4 m), t(40) = 2.77, p = .0085; t(40) = 4.01, p = .0003; t(40) = 4.36, p < .0001. This implies that different information was used to estimate isolated exocentric extents than was used to estimate egocentric extents.
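A sketch of that analysis is given below, with synthetic per-participant match vectors standing in for the real data (the means and t statistics reported above come from the actual participants, not from this code).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 21   # participants per group

# Synthetic per-participant frontal matches (m), for illustration only.
ego_full = {d: rng.normal(0.70 * d, 1.0, n) for d in (12, 18, 24)}  # 12-, 18-, 24-m egocentric extents
ego_half = {d: rng.normal(0.72 * d, 0.8, n) for d in (6, 9, 12)}    # 6-, 9-, 12-m egocentric extents
exo = {d: rng.normal(0.55 * d, 0.8, n) for d in (6, 9, 12)}         # 6-, 9-, 12-m exocentric extents

for full, half in zip((12, 18, 24), (6, 9, 12)):
    # Derived "match" to the far exocentric portion of the egocentric extent.
    derived = ego_full[full] - ego_half[half]
    # Within-group: derived portion vs. match to the near egocentric distance.
    t_rel, p_rel = stats.ttest_rel(derived, ego_half[half])
    # Between-group: derived portion vs. match to the same-sized exocentric extent.
    t_ind, p_ind = stats.ttest_ind(derived, exo[half])
    print(f"{half}-m portion: paired p = {p_rel:.3f}, independent p = {p_ind:.4f}")
```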

Although it would be premature to assume that the exocentric data from our virtual environment would exactly match that from an outdoor study, we have shown elsewhere that a model based on optical slants estimated in a similar virtual environment provides an excellent fit to outdoor slant perception data (Li & Durgin, 2010). Our exocentric data seem to be consistent with those in other exocentric studies, such as the pattern measured by Gilinsky (1951), as depicted in Fig. 4.

General discussion

There is one parameter of particular note in our data. The relationship between egocentric distance and horizontal frontal extents can be fit with a power function with an exponent of nearly 1.0. This exponent is consistent with prior studies of egocentric distance perception but is not consistent with models derived from exocentric distance judgments, such as exocentric L-shape tasks (e.g., Beusmans, 1998) or exocentric distance productions (e.g., Gilinsky, 1951), which would have an exponent much less than 1. The compression we observed (e.g., by a constant factor of about 0.7 in the outdoor environment) is consistent with that predicted by the angular scale-expansion model of Durgin and Li (in press; see also Li & Durgin, 2010), which supposes that (1) perceived gaze declination is exaggerated by a factor of 1.5 (out to 50° or so) and that (2) perceived egocentric distance along the ground is partly a function of perceived gaze declination (i.e., “slope of regard”, Wallach & O’Leary, 1982; angular declination, Ooi et al., 2001). The consistent role of these angular variables is supported by evidence that the exponent of estimated and walked distance is appropriately altered by artificially lowering the horizon in virtual environments (Messing & Durgin, 2005).

The observed pattern of data is therefore consistent with the idea that egocentric distance is normally perceived fairly linearly but suggests that the perception of egocentric distance is far from accurate. According to our data, perceived egocentric distance measured nonverbally is compressed, but not compressive. Perception underestimates egocentric distance but does so by a nearly constant ratio. Thus, the present data support the distinction between the perception of exocentric extents in depth and the perception of egocentric extents in depth proposed by Loomis et al. (1992). Inferring egocentric distance perception from studying the perception of exocentric extent, as Gilinsky (1951) sought to do, provides a different function than does studying egocentric perception directly (see also Ooi & He, 2007; Purdy & Gibson, 1955).

Biases in the evaluation of exocentric extents can sometimes be explained by biases in the perception of optical slant (Li & Durgin, 2010). To model optical slant perception, Li and Durgin (2010) measured perceived slant both with explicit verbal reports (relative to gaze) and with an aspect ratio task for L-shapes presented on slanted surfaces. They derived a model of perceived slant that successfully fit real-world slant data, in which perceived slant (β′) was shown to be a function of actual slant (β), with a gain of about 1.5, but was also elevated in proportion to log distance (D). The aspect ratio task they used was based on three small spheres in an L configuration. Such configurations are fairly typical of exocentric distance tasks, but the two bottom spheres were simulated at eye level, to minimize the influence of perceived gaze declination.

However, the present exocentric task differs from those in most published studies. We used life-sized human avatars to define large (3- to 27-m) exocentric extents, whereas most prior exocentric L tasks have used rods or balls to define relatively small (e.g., 1- to 3-m) extents (Beusmans, 1998; Li & Durgin, 2010; Loomis & Philbeck, 1999; but see Norman et al., 2005). Human forms might provide additional depth information (relative size), as well as a basis for cognitive compensation for expected visual errors (familiar size). Although the magnitude of local (foveal) optical slant on a horizontal ground plane is identical to the magnitude of gaze declination, the visual system may not normally depend on this relationship to estimate optical slant. The fact that exocentric distances are increasingly compressed with increasing viewing distance seems to implicate a role for perceived optical slant, which is more biased at greater distances (Li & Durgin, 2010). In Fig. 12, we have plotted a pure gaze model (Durgin & Li, in press) and a pure optical slant model in which perceived optical slant is increased with the log of viewing distance (Li & Durgin, 2010, Eq. 7). Neither of these pure models predicts the exocentric L data from Experiment 4, which fall in between them. However, assuming a fixed angular gain of 1.5 and a somewhat weaker influence of log viewing distance on perceived slant yields a one-parameter model (depicted in Fig. 12) that provides an excellent fit to the exocentric L data. The fact that the function seems to be distance dependent shows that it depends on information other than gaze declination alone.

Fig. 12.

Modeling the results of the exocentric L task of Experiment 4. A one-parameter model manually fit to the data based on the optical slant models described by Li and Durgin (2010, Eq. 6) shows that the exocentric L data for large exocentric extents defined by human forms fall in between the pure optical-slant model (Li & Durgin, 2010, Eq. 7) and a pure gaze declination model (Durgin & Li, in press)

The contrast between direct egocentric distance functions and the perceptual compression of exocentric distance intervals (i.e., Gilinsky’s [1951] method of computing egocentric distance) can be explained by the increasing compression in the perception of exocentric extents due to foreshortening errors in the evaluation of stereoscopic depth intervals (Palmisano, Gillam, Govan, Allison, & Harris, 2010), which distorts the perceived local optical slant (Li & Durgin, 2010). But what could explain the discrepancy between accurate motor performance (e.g., Loomis et al., 1992) and the present egocentric distance results? One hypothesis is that perceptual and motor responses are differentiated by two distinct neural representations of egocentric distance. Andre and Rogers (2006) reported that prism glasses, which distorted angular declination, had a much larger influence on motor measures than on explicit verbal estimates of distance. However, it should be noted that the prisms they used would also have caused a misperception of ground surface orientation, which would have led to potentially disruptive conflict from motor feedback during spatial updating (see also Ooi et al., 2001). In contrast, Messing and Durgin (2005) found that verbal estimates of distance and motor estimates produced by walking were both affected by about the same amount when the visual horizon was subtly lowered (by 1.5°), leaving other near space orientation coding undistorted (see also Philbeck & Loomis, 1997).

Locomotor calibration theory

An alternative to the two-systems perspective (see also Durgin, Hajnal, Li, Tonge, & Stigliani, 2010, in press) argues that the apparent scaling discrepancy between action measures, such as visually directed walking (which is largely accurate), and explicit verbal estimates (which are normally biased) may be due to the continuous calibration of walking by visual feedback. Thus, if something 10 m away appears to be only 7 m away, and the observer also perceives his or her stride length as only 70% of what it truly is as a result of constant calibration to a compressed perceptual environment, action measures in normal circumstances would be expected to be accurate. Consider the analogy of trying to hit a pitched ball with a bat. If the batter systematically misperceives the location of the ball but also systematically mispredicts (and misperceives) the location of the swung bat, hitting may be successful in the presence of a systematic but matched perceptual error. Indeed, there is no obvious reason such an error should be detectable by the batter.
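The arithmetic of the calibration account is simple enough to state explicitly. In the sketch below (ours, with the 0.7 compression factor carried over from the estimates above), both the perceived target distance and the felt self-displacement are scaled by the same factor, so blind walking stops at the correct physical distance even though the target was misperceived.

```python
COMPRESSION = 0.7   # shared perceptual scaling of distance and felt self-motion

def blind_walk(target_distance_m):
    """Distance walked if the walker stops when accumulated felt
    displacement equals the remembered (perceived) target distance."""
    perceived_target = COMPRESSION * target_distance_m
    # Each metre actually walked feels like only 0.7 m, so the walker
    # covers perceived_target / COMPRESSION metres before stopping.
    return perceived_target / COMPRESSION

print(blind_walk(10.0))   # 10.0: accurate despite the misperception
```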

Support for the calibration view is easy to find. It is well documented that exposure to altered perceptual feedback concerning self-motion changes the calibration of visually directed walking performance but does not affect throwing performance (Rieser, Pick, Ashmead, & Garing, 1995). For example, after treadmill jogging (during which locomotor action is perceived to produce no forward self-motion), participants asked to walk to previewed targets walk too far (Durgin et al., 2005). This is not because the targets appear farther away, but because the participants feel as if they are going slower than they are (see also Philbeck, Woods, Arthur, & Todd, 2008). Evidence that the adaptation is not of distance perception comes from studies of hopping to targets, following hopping on a treadmill. Only hopping on the adapted leg produced overshoot (Durgin, Fox, & Kim, 2003). These locomotor recalibration studies demonstrate that accurate egocentric actions might result from normal locomotor calibration during normal (visually guided) walking, even if egocentric distance is normally misperceived (e.g., hypothesis 2 in Loomis et al., 1992, p. 915).

The reader may easily replicate our basic observation outdoors by using, for example, horizontal distances between fence posts to represent a horizontal frontal extent. The egocentric distance from the fence necessary to subjectively match the frontal distance can be marked and then observed from the side. We should note that our observations of differences in scale between egocentric distance and frontal extents are consistent with an alternative interpretation. For example, Foley et al. (2004) argued that frontal extents were slightly exaggerated (although that result may be related to overestimation effects under objective instructions [e.g., Carlson, 1960] or other forms of cognitive correction [Granrud, 2009]). Perhaps perceived frontal extents are exaggerated by a factor of 1.5, while egocentric distance is accurate. While our present data do not rule out this interpretation, such a view has little to recommend it, as compared to the vast evidence that egocentric distance is consistently underestimated in explicit verbal judgments. Moreover, the comparison of egocentric extents with vertical frontal extents has proven consistent with a parameter-free model of perceptual angular expansion.

A functional account of distance underestimation

Our hypothesis is that locomotor actions, such as visually directed walking, that are framed in body coordinates are controlled by a representation of egocentric distance that may often be derived primarily from angular variables. Because we have shown that these angular variables are biased, this provides a mechanistic account of distance underestimation, but it is based on a further functional (coding) account that we have laid out elsewhere: The distortion of angular variables (i.e., declination of gaze or angular declination relative to the perceived horizontal) may serve the functional purpose of maximizing the precision of discrimination available in internal perceptual codes, relevant to action (Durgin, 2009; Durgin & Li, in press; Hajnal, Abdul-Malak, & Durgin, 2011; Li & Durgin, 2009, 2010).

Our emphasis on angular measures, such as gaze declination and perceived optical slant, is not to be confused with an emphasis on subtended visual angles. Levin and Haber (1993) and Kudoh (2005) have proposed that exocentric errors can be explained in terms of subtended retinal angles. However, Li and Durgin (2010) showed that a model of perceived exocentric depth extents expressed in terms of perceived optical slant provided a much richer prediction of the details of Kudoh’s data, for example (see Li & Durgin, 2010, Fig. 14).

Egocentric distance perception measured both by verbal estimates and by action measures has been shown to be affected by subtle changes in the visible horizon level (Messing & Durgin, 2005). Findings of slightly elevated power functions for magnitude estimates of indoor environments (e.g., Lappin, Shelton, & Rieser, 2006; M. Teghtsoonian & Teghtsoonian, 1969) or bounded outdoor environments (Witt, Stefanucci, Riener, & Proffitt, 2007) can be accounted for if it is assumed that certain environmental structures (e.g., the bases of walls) distort the apparent horizon and, thus, the perceived declination of gaze (e.g., Matin & Li, 1992). Messing and Durgin found that lowering the visual horizon by 1.5° in virtual reality produced power function exponents greater than 1 for both verbal magnitude estimation tasks and visually directed walking tasks, consistent with the accelerating function reported by Lappin et al., for example. Thus, both the intercept and the gain of perceived gaze declination seem to play a role in judgments of egocentric distances. The expanded scaling of explicit estimates of gaze declination (Durgin & Li, in press) can quantitatively account for the compressed perception of egocentric distance, relative to frontal extents.
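One way to see how an intercept error of this kind can push the exponent above 1 is to add a constant offset to the declination term in the Fig. 1 geometry. The sketch below is our own illustration (the 1.6-m eye height and the particular way the 1.5 gain is combined with a 1.5° horizon drop are assumptions, not the authors' model); fitting the resulting perceived distances with a power function yields an exponent near 1 with the true horizon and well above 1 with the lowered one.

```python
import math
import numpy as np

EYE_HEIGHT = 1.6                      # m, illustrative assumption
GAIN = 1.5                            # angular expansion gain (Durgin & Li, in press)
HORIZON_DROP = math.radians(1.5)      # visible horizon lowered by 1.5 deg

def perceived(d, drop=0.0):
    gamma = math.atan2(EYE_HEIGHT, d)   # true declination from horizontal
    # Declination is taken relative to the (possibly lowered) visible horizon,
    # expanded by the gain, and distance is read off the ground geometry.
    return EYE_HEIGHT / math.tan(GAIN * (gamma - drop))

distances = np.arange(4.0, 26.0, 3.0)
for label, drop in (("true horizon", 0.0), ("horizon lowered 1.5 deg", HORIZON_DROP)):
    implied = [perceived(d, drop) for d in distances]
    exponent, _ = np.polyfit(np.log(distances), np.log(implied), 1)
    print(f"{label}: power-function exponent = {exponent:.2f}")
```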

Conclusion

By using egocentric L-shape tasks on a grassy field, we can draw strong inferences concerning the comparison of egocentric distance and frontal extents. Relative to frontal viewing, egocentric distance perception is compressed. However, unlike exocentric depth extents, which are compressive (increasingly compressed with distance), the perception of egocentric distance is linearly compressed (i.e., by a roughly constant factor) over the range of distances tested here (5–30 m). We interpret this as consistent with our conjecture that estimates of egocentric distance or location may be principally (but not exclusively) informed by the perceived declination of gaze, relative to the apparent horizon (or vanishing point) of a ground surface (Durgin & Li, in press). Strong support for the angular scale-expansion explanation of egocentric distance errors comes from the fit of a parameter-free model for the vertical egocentric L task.

Acknowledgments

This research was supported by Award Number R15 EY021026-01 from the National Eye Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Eye Institute or the National Institutes of Health. Thanks to Jack Loomis and John Philbeck for comments on an earlier version of the manuscript.

Appendix

As depicted in Fig. 6 of the main text, consider that the eye height of the participant is h, the distance between the participant and the vertical pole is D, the pole height is H, the gaze declination to the bottom of the pole is γ, and the gaze angle from horizontal to the top of the pole is θ.

From the trigonometry,

D = h / tan(γ) (1)

and

H = h + D·tan(θ) (2)

imply that

H = h + h·tan(θ) / tan(γ) (3)

and, from Eqs. 1 and 3 we get

H/D = tan(γ) + tan(θ) (4)

If we assume that the perceived variables (denoted by adding a prime to the physical variables) stand in the same relationship, and that the perceived gaze angles equal the actual gaze angles times a constant multiplier, kv, then we have

H′/D′ = tan(γ′) + tan(θ′) = tan(kv·γ) + tan(kv·θ) (5)

Given that the observer’s task is to match the perceived egocentric distance to the pole to the perceived height of the pole (i.e., H′/D′ = 1), Eq. 5 can be used to predict D for each given pole height H (note that, when eye height, h, is known, the angles γ and θ are uniquely specified by D), assuming that kv is 1.5, on the basis of the findings of Durgin and Li (in press). These predictions were plotted in Figs. 6, 7, and 9 of the main text.
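Because Eq. 5 cannot be solved for D in closed form, the predictions plotted in Figs. 6, 7, and 9 have to be obtained numerically. The sketch below shows one way to do so by bisection; the 1.6-m eye height is an illustrative assumption rather than a value taken from the experiments.

```python
import math

def matched_distance(pole_height, eye_height=1.6, kv=1.5, iters=60):
    """Distance D at which the perceived height-to-distance ratio equals 1,
    i.e. tan(kv*gamma) + tan(kv*theta) = 1 (Eq. 5 with H'/D' = 1),
    where gamma = atan(h/D) and theta = atan((H - h)/D).

    Solved by bisection; eye_height = 1.6 m is an illustrative assumption.
    """
    def f(d):
        gamma = math.atan2(eye_height, d)
        theta = math.atan2(pole_height - eye_height, d)
        return math.tan(kv * gamma) + math.tan(kv * theta) - 1.0

    # Lower bound keeps both scaled angles below 90 deg, so the tangents are
    # finite and f decreases monotonically with D.
    limit = math.tan(math.radians(90.0 / kv))
    lo = max(eye_height, pole_height - eye_height) / limit * 1.001
    hi = 100.0 * pole_height        # far enough that f(hi) < 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:            # still "looks taller than far": move farther away
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for h_pole in (3.75, 7.4, 12.7, 22.5):
    print(f"pole {h_pole:5.2f} m -> predicted matching distance "
          f"{matched_distance(h_pole):6.2f} m")
```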

References

1. Andre J, Rogers S. Using verbal and blind-walking distance estimates to investigate the two visual systems hypothesis. Perception & Psychophysics. 2006;68:353–361. doi: 10.3758/bf03193682.
2. Beusmans JM. Optic flow and the metric of the visual ground plane. Vision Research. 1998;38:1153–1170. doi: 10.1016/s0042-6989(97)00285-x.
3. Carlson VR. Overestimation in size-constancy judgments. The American Journal of Psychology. 1960;73:199–213.
4. Da Silva JA. Scales for perceived egocentric distance in a large open field: Comparison of three psychophysical methods. The American Journal of Psychology. 1985;98:119–144.
5. Durgin FH. When walking makes perception better. Current Directions in Psychological Science. 2009;18:43–47.
6. Durgin FH, Baird JA, Greenburg M, Russell R, Shaughnessy K, Waymouth S. Who is being deceived? The experimental demands of wearing a backpack. Psychonomic Bulletin & Review. 2009;16:964–969. doi: 10.3758/PBR.16.5.964.
7. Durgin FH, Fox LF, Kim DH. Not letting the left leg know what the right leg is doing: Limb-specific locomotor adaptation to sensory-cue conflict. Psychological Science. 2003;16:567–572. doi: 10.1046/j.0956-7976.2003.psci_1466.x.
8. Durgin FH, Hajnal A, Li Z, Tonge N, Stigliani A. Palm boards are not action measures: An alternative to the two-systems theory of geographical slant perception. Acta Psychologica. 2010;134:182–197. doi: 10.1016/j.actpsy.2010.01.009.
9. Durgin FH, Hajnal A, Li Z, Tonge N, Stigliani A. An imputed dissociation might be an artifact: Further evidence for the generalizability of the observations of Durgin et al. 2010. Acta Psychologica. (in press). doi: 10.1016/j.actpsy.2010.09.002.
10. Durgin FH, Li Z. Perceptual scale expansion: An efficient angular coding strategy for locomotor space. Attention, Perception, & Psychophysics. (in press). doi: 10.3758/s13414-011-0143-5.
11. Durgin FH, Pelah A, Fox LF, Lewis J, Kane R, Walley KA. Self-motion perception during locomotor recalibration: More than meets the eye. Journal of Experimental Psychology: Human Perception and Performance. 2005;31:398–419. doi: 10.1037/0096-1523.31.3.398.
12. Foley JM. The size–distance relation and intrinsic geometry of visual space: Implications for processing. Vision Research. 1972;12:323–332. doi: 10.1016/0042-6989(72)90121-6.
13. Foley JM, Ribeiro-Filho NP, Da Silva JA. Visual perception of extent and the geometry of visual space. Vision Research. 2004;44:147–156. doi: 10.1016/j.visres.2003.09.004.
14. Gibson JJ, Cornsweet J. The perceived slant of visual surfaces—optical and geographical. Journal of Experimental Psychology. 1952;44:11–15. doi: 10.1037/h0060729.
15. Gilinsky AS. Perceived size and distance in visual space. Psychological Review. 1951;58:460–482. doi: 10.1037/h0061505.
16. Granrud CE. Development of size constancy in children: A test of the metacognitive theory. Attention, Perception, & Psychophysics. 2009;71:644–654. doi: 10.3758/APP.71.3.644.
17. Hajnal A, Abdul-Malak DT, Durgin FH. The perceptual experience of slope by foot and by finger. Journal of Experimental Psychology: Human Perception and Performance. 2011;37:709–719. doi: 10.1037/a0019950.
18. Higashiyama A. Horizontal and vertical distance perception: The discorded-orientation theory. Perception & Psychophysics. 1996;58:259–270. doi: 10.3758/bf03211879.
19. Higashiyama A, Ueyama E. The perception of vertical and horizontal distances in outdoor settings. Perception & Psychophysics. 1988;44:151–156. doi: 10.3758/BF03208707.
20. Kudoh N. Dissociation between visual perception of allocentric distance and visually directed walking of its extent. Perception. 2005;34:1399–1416. doi: 10.1068/p5444.
21. Lappin JS, Shelton AL, Rieser JJ. Environmental context influences visually perceived distance. Perception & Psychophysics. 2006;68:571–581. doi: 10.3758/bf03208759.
22. Levin CA, Haber RN. Visual angle as a determinant of perceived interobject distance. Perception & Psychophysics. 1993;54:250–259. doi: 10.3758/bf03211761.
23. Li Z, Durgin FH. Downhill slopes look shallower from the edge. Journal of Vision. 2009;9(11, Art. 6):1–15. doi: 10.1167/9.11.6.
24. Li Z, Durgin FH. Perceived slant of binocularly viewed large-scale surfaces: A common model from explicit and implicit measures. Journal of Vision. 2010;10(14, Art. 13):1–16. doi: 10.1167/10.14.13.
25. Loomis JM, Da Silva JA, Fujita N, Fukusima SS. Visual space perception and visually guided action. Journal of Experimental Psychology: Human Perception and Performance. 1992;18:906–921. doi: 10.1037//0096-1523.18.4.906.
26. Loomis JM, Knapp JM. Visual perception of egocentric distance in real and virtual environments. In: Hettinger LJ, Haas MW, editors. Virtual and adaptive environments. Hillsdale: Erlbaum; 2003. pp. 21–46.
27. Loomis JM, Philbeck JW. Is the anisotropy of perceived 3-D shape invariant across scale? Perception & Psychophysics. 1999;61:397–402. doi: 10.3758/bf03211961.
28. Loomis JM, Philbeck JW. Measuring spatial perception with spatial updating and action. In: Klatztky RL, Behrmann M, McWhinney B, editors. Embodiment, ego-space and action. New York: Taylor and Francis; 2008. pp. 1–44.
29. Loomis JM, Philbeck JW, Zahorik P. Dissociation between location and shape in visual space. Journal of Experimental Psychology: Human Perception and Performance. 2002;28:1202–1212.
30. Matin L, Li W. Visually perceived eye level: Changes induced by a pitched-from-vertical 2-line visual field. Journal of Experimental Psychology: Human Perception and Performance. 1992;18:257–289. doi: 10.1037//0096-1523.18.1.257.
31. Messing RM, Durgin FH. Distance perception and the visual horizon in head-mounted displays. ACM Transactions on Applied Perception. 2005;2:234–250. doi: 10.1145/1077399.1077403.
32. Norman JF, Crabtree CE, Bartholomew AN, Ferrell EL. Aging and the perception of slant from optical texture, motion parallax, and binocular disparity. Attention, Perception, & Psychophysics. 2009;71:116–130. doi: 10.3758/APP.71.1.116.
33. Norman JF, Crabtree CE, Clayton AM, Norman HF. The perception of distances and spatial relationships in natural outdoor environments. Perception. 2005;34:1315–1324. doi: 10.1068/p5304.
34. Norman JF, Todd JT, Phillips F. The perception of surface orientation from multiple sources of optical information. Perception & Psychophysics. 1995;57:629–636. doi: 10.3758/bf03213268.
35. Ooi TL, He ZJ. A distance judgment function based on space perception mechanisms: Revisiting Gilinsky’s (1951) equation. Psychological Review. 2007;114:441–454. doi: 10.1037/0033-295X.114.2.441.
36. Ooi TL, Wu B, He ZJ. Distance determined by the angular declination below the horizon. Nature. 2001;414:197–200. doi: 10.1038/35102562.
37. O’Shea RP, Ross HE. Judgments of visually perceived eye level (VPEL) in outdoor scenes: Effects of slope and height. Perception. 2007;36:1168–1178. doi: 10.1068/p5569.
38. Palmisano S, Gillam B, Govan DG, Allison RS, Harris JM. Stereoscopic perception of real depths at large distances. Journal of Vision. 2010;10(6, Art. 19):1–16. doi: 10.1167/10.6.19.
39. Philbeck JW. Visually directed walking to briefly glimpsed targets is not biased toward fixation location. Perception. 2000;29:259–272. doi: 10.1068/p3036.
40. Philbeck JW, Loomis JM. A comparison of two indicators of perceived egocentric distance under full-cue and reduced-cue conditions. Journal of Experimental Psychology: Human Perception and Performance. 1997;23:72–85. doi: 10.1037//0096-1523.23.1.72.
41. Philbeck JW, Woods AJ, Arthur J, Todd J. Progressive locomotor recalibration during blind walking. Perception & Psychophysics. 2008;70:1459–1470. doi: 10.3758/PP.70.8.1459.
42. Purdy J, Gibson EJ. Distance judgment by the method of fractionation. Journal of Experimental Psychology. 1955;50:374–380. doi: 10.1037/h0043157.
43. Rieser JJ, Pick HL Jr, Ashmead D, Garing A. Calibration of human locomotion and models of perceptual–motor organization. Journal of Experimental Psychology: Human Perception and Performance. 1995;21:480–497. doi: 10.1037//0096-1523.21.3.480.
44. Sedgwick HA. Space perception. In: Boff KR, Kaufman L, Thomas JP, editors. Handbook of perception and human performance. New York: Wiley; 1986. pp. 21.1–21.57.
45. Teghtsoonian M, Teghtsoonian R. Scaling apparent distance in natural indoor settings. Psychonomic Science. 1969;16:281–283.
46. Teghtsoonian R, Teghtsoonian M. Scaling apparent distance in a natural outdoor setting. Psychonomic Science. 1970;21:215–216.
47. Thomson JA. Is continuous visual monitoring necessary in visually guided locomotion? Journal of Experimental Psychology: Human Perception and Performance. 1983;9:427–443. doi: 10.1037//0096-1523.9.3.427.
48. Wallach H, O’Leary A. Slope of regard as a distance cue. Perception & Psychophysics. 1982;31:145–148. doi: 10.3758/bf03206214.
49. Witt JK, Stefanucci JK, Riener CR, Proffitt DR. Seeing beyond the target: Environmental context affects distance perception. Perception. 2007;36:1752–1768. doi: 10.1068/p5617.
