PLOS One. 2024 Mar 8;19(3):e0289855. doi: 10.1371/journal.pone.0289855

Coordination of gaze and action during high-speed steering and obstacle avoidance

Nathaniel V Powell 1,2, Xavier Marshall 1, Gabriel J Diaz 3, Brett R Fajen 1,*
Editor: Lei Zhang
PMCID: PMC10923441  PMID: 38457388

Abstract

When humans navigate through complex environments, they coordinate gaze and steering to sample the visual information needed to guide movement. Gaze and steering behavior have been extensively studied in the context of automobile driving along a winding road, leading to accounts of movement along well-defined paths over flat, obstacle-free surfaces. However, humans are also capable of visually guiding self-motion in environments that are cluttered with obstacles and lack an explicit path. An extreme example of such behavior occurs during first-person view drone racing, in which pilots maneuver at high speeds through a dense forest. In this study, we explored the gaze and steering behavior of skilled drone pilots. Subjects guided a simulated quadcopter along a racecourse embedded within a custom-designed forest-like virtual environment. The environment was viewed through a head-mounted display equipped with an eye tracker to record gaze behavior. In two experiments, subjects performed the task in multiple conditions that varied in terms of the presence of obstacles (trees), waypoints (hoops to fly through), and a path to follow. Subjects often looked in the general direction of things that they wanted to steer toward, but gaze fell on nearby objects and surfaces more often than on the actual path or hoops. Nevertheless, subjects were able to perform the task successfully, steering at high speeds while remaining on the path, passing through hoops, and avoiding collisions. In conditions that contained hoops, subjects adapted how they approached the most immediate hoop in anticipation of the position of the subsequent hoop. Taken together, these findings challenge existing models of steering that assume that steering is tightly coupled to where actors look. We consider the study’s broader implications as well as limitations, including the focus on a small sample of highly skilled subjects and inherent noise in measurement of gaze direction.

Introduction

In the long history of research on the topic of steering through cluttered environments, scientists have studied visual control in the context of a variety of tasks, such as walking through crowded spaces, walking over complex terrain, and driving a car. In general, the focus has been on visual-locomotor tasks that humans do often and with relative ease, and for good reason. However, new insights can also be gleaned by studying what humans are capable of doing when pushed to the extreme.

One example of a visual-locomotor task that pushes the limits of human performance is first-person view (FPV) drone racing. This task involves using a remote controller to steer a quadcopter through a densely cluttered environment based on a video feed from a drone-mounted camera to a head-mounted display that is worn by the pilot. The FPV drone piloting task provides a novel but rich context within which to study high-speed visual control of locomotion by skilled actors and is ripe for exploration of numerous aspects of visual control. In the present study, the focus was on gaze and control strategies for steering through waypoints, following a path, and avoiding obstacles.

Gaze and steering in automobile driving

A key component to the skillful negotiation of complex environments in humans is the ability to actively direct one’s gaze to reveal task-relevant information and to coordinate gaze behavior with movement [1–4]. The critical role of active gaze is well established for many perceptual-motor skills, including sports such as cricket, racquetball, and baseball [5–7], walking over complex terrain [8, 9], and steering an automobile along a winding road [10–12].

In studies of gaze behavior during automobile driving, a common finding is that drivers spend much of their time looking at the road and road edges. This has led to the formulation of various hypotheses about why drivers look where they do. The most well-known is the tangent point hypothesis from [10], which states that drivers fixate the tangent point of the upcoming bend because the visual direction of that point specifies the upcoming road curvature [13, 14]. More recent evidence favors an alternative hypothesis, which is that drivers fixate waypoints along their desired future path. During waypoint fixation, information from both the visual direction of the fixated waypoint and the surrounding retinal flow specifies whether one is on track to steer through the waypoint [15, 16]. As such, drivers could simply fixate points on the road through which they want to travel, essentially using the gaze control system to select waypoints [4, 16]. The information that is made available during such fixations can be used to adjust steering, such that the resulting trajectory passes through the waypoints.

The picture of steering that is painted by this hypothesis entails a close coupling of gaze and steering. It is one in which gaze is directed precisely toward locations where drivers want to steer, and steering is biased toward where drivers look. Evidence of the former was reported by [16], who found that subjects looked at different parts of the road depending on where in the lane (inside, center, outside) they were instructed to drive. Likewise, [17] provided evidence that people steer in the direction that they look by instructing subjects to look at a fixation point at one of several lateral positions on the road. On wider roads, trajectories were biased toward the inside edge when the fixation point was positioned toward the inside of the curve and vice-versa.

Taken together, these studies support a theory of steering that is aptly summarized by the phrase “look where you want to go” [4, 16]. From this perspective, the availability of information, and hence the successful control of steering, depends critically on the precise fixation of certain key reference points on the upcoming road or road edges. This presupposes that the upcoming road is plainly visible and easy to find even as the actor is continually moving through the scene. This condition is met in almost all previous studies of gaze behavior during steering, in which the only features present in the environment are often road edges on a flat, open ground surface, allowing the upcoming road to be easily discriminated from the background. However, humans and other animals also move through densely cluttered environments, where other objects may occlude the upcoming path, shadows may obscure it, and the appearance of the path may be similar to that of other parts of the scene. In such scenarios, fixations may often fall off the desired future path because actors may need to visually scan the scene to find the path within the clutter.

This raises several questions about the relation between gaze and steering. First, if the information that is used to control steering is normally made available by fixating points on the desired future path, how is the ability to control steering affected when consistent fixation of the future path is not possible because the future path does not immediately pop out? Second, how good are people at finding the upcoming path in scenes that contain dense clutter? Third, dense clutter may not only visually occlude the upcoming path but also serve as an obstacle that has to be circumvented. If precise fixation of points on the future path is critical for following a path, is precise fixation of obstacles also critical for successful obstacle avoidance? Lastly, navigation through densely cluttered environments sometimes involves steering through a series of openings rather than following an explicit path. In such situations, the goal (i.e., the opening) is not an actual object to which gaze can be anchored and therefore fixation of a point on the future path is not possible. Where do people look under such conditions?

Look-ahead fixations and anticipatory steering adjustments

The need to search for navigable spaces within the clutter may not be the only reason why gaze is not consistently locked onto the future path. Closely related to this is the idea that in certain situations, people might look ahead and anticipate the trajectory adjustments needed to reach goals that lie beyond the immediate future [18, 19]. The ability to change speed and direction to avoid collisions or steer through waypoints is limited by factors such as sensorimotor delays, neuromuscular lags, limitations on action capabilities, and inertial constraints. If actors waited until they passed the most immediate waypoint before taking future waypoints into consideration, they might be forced to make large or erratic trajectory adjustments, miss an upcoming waypoint, or collide with obstacles. As such, performing such tasks well would seem to require actors to look ahead and to adapt their trajectory in anticipation of having to reach successive waypoints.

One of the few studies on this topic was conducted by [19]. They had subjects steer along a course through a series of gates in a virtual environment, similar to slalom skiing. They found that subjects tended to fixate and track the nearest gate until it was within a certain range (about 1.5 s), and then make a look-ahead fixation to the next gate. Occasionally, subjects looked back to the nearest gate before once again shifting gaze ahead to the subsequent gate. They referred to such behavior as gaze polling and proposed that it plays a critical role in setting up a smooth trajectory through successive gates. However, analyses of steering behavior suggested that only some subjects followed trajectories that took future gates into consideration while others appeared to focus only on the most immediate gate. Furthermore, for subjects who did adapt their trajectory to anticipate future waypoints, steering behavior was only slightly smoother. Lastly, manipulations that affected how far in advance upcoming gates appeared had at most weak effects on steering smoothness.

Thus, whereas the execution of look-ahead fixations was a robust finding in [19] and has been reported in other studies [e.g., 18], the evidence for anticipatory steering adjustments is less clear. The second aim of this study was to determine whether subjects made anticipatory steering adjustments.

The FPV drone piloting task

Rather than focusing on the experimental task of automobile driving as others have done for decades, we chose the task of first-person view (FPV) drone racing. This task involves steering a quadcopter through a densely cluttered environment based on a video feed from a drone-mounted camera to a head-mounted display that is worn by the pilot. The pilot uses a remote controller to adjust the thrust, altitude, yaw rate, and roll of the quadcopter.

The drone piloting task is ideal for the purposes of the present study because it provides a naturalistic context in which humans move at high speeds through densely cluttered environments without risk of injury. In addition, the likelihood of observing anticipatory steering adjustments may be greater because the motion of drones is significantly constrained by inertia, making rapid trajectory adjustments more difficult, and because subjects were skilled drone pilots with extensive experience maneuvering drones in tight spaces.

While the drone piloting task is in many ways an ideal context for studying high-speed steering and obstacle avoidance, studying drone piloting in the real world poses some significant challenges: tracking the movement of the drone in a large space, limited ability to control and manipulate the environment, the safety of both human subjects and equipment, and dependence on weather (wind, rain) and varying amounts of sunlight. As such, we developed a drone-piloting simulator. The virtual environment was created using the Unity game engine and was visually realistic and highly customizable. Subjects viewed the scene through an HTC Vive Pro head-mounted display (HMD), which was equipped with a Pupil Labs eye tracker that enables eye tracking at up to 200 Hz. Steering and speed were controlled using a standard RC controller of the kind that is typically used by FPV drone pilots. The flight dynamics were not identical to those of a real-world racing drone, but they closely approximated a fast-moving photography drone, allowing experienced drone pilots to skillfully maneuver the drone after just a few minutes of practice.

To the best of our knowledge, the only previous study of gaze behavior in drone pilots is [20]. They had subjects repeatedly steer through a series of gates positioned along a figure-8 track. Subjects consistently directed gaze toward the upcoming gate and shifted gaze toward the next gate as soon as they passed through the previous one. They also found strong cross-correlations between gaze angle, camera angle, and thrust angle. The temporal offsets at peak correlation suggested that gaze angle changes preceded camera angle changes, which preceded thrust angle changes. These findings further reinforce the coupling of gaze and steering and provide additional evidence of look-ahead fixations, but the study was not designed to investigate questions about gaze behavior during navigation through cluttered environments.

The present study

We sought to study gaze and steering behavior under conditions that closely resemble the naturalistic task that drone pilots perform. As such, rather than conducting a traditional experiment with hundreds of individual trials each of which lasted a few seconds, we had subjects steer continuously along a racecourse. The course was an irregularly shaped closed loop embedded in a forest-like virtual environment and took approximately 60–75 s to complete. Each experimental session comprised five blocks, each of which consisted of two laps per condition. Subjects traveled in a clockwise direction in Blocks 1–4 and a counterclockwise direction in Block 5.

Experiment 1 included three conditions. The Path Only condition was most similar to automobile driving but with nearby trees that served as obstacles and occluded the path (Fig 1A). In the Hoops Only condition, hoops were placed at various locations along the path but the path was not rendered (Fig 1B). The task was to steer through the hoops. In the Hoops and Path condition, both features were present, and subjects were instructed to follow the path and steer through hoops while avoiding collisions with nearby trees. In Experiment 2, the conditions were similar to the Path Only condition of Experiment 1, but we explicitly manipulated the density of trees on and near the path. Fig 1C and 1D show screenshots from the two conditions (Dense Trees and Sparse Trees, respectively) in that experiment.

Fig 1. Screenshots of the Path Only (A) and Hoops Only (B) conditions from Experiment 1 and the Dense Trees (C) and Sparse Trees (D) conditions from Experiment 2.


Experiment 1 also included a Hoops and Path condition (not pictured) that combined the two key features of the Path Only and Hoops Only conditions.

Methods and materials

Participants

Sample size was limited because a very small percentage of the population has FPV drone piloting experience. In addition, Covid-related precautions prohibited recruitment of subjects from outside the RPI community. Six subjects (ages 19–29 years) participated in Experiment 1 and three subjects (all of whom had completed Experiment 1) participated in Experiment 2. Subject recruitment took place between September 9, 2021 and February 7, 2022. One of the three subjects who participated in both experiments was a co-author of the study. All subjects were experienced quadcopter pilots with at least 10 hours (mean = 108 hours, range = 10–250 hours) of flight time in FPV drone piloting, either real or simulated. All subjects had normal or corrected-to-normal vision and no known visual or motor impairments.

Hardware

The experiments were conducted using an Alienware Aurora R12 desktop PC running Microsoft Windows 10 and equipped with an Intel i9 (11 series) ten-core processor, NVIDIA GeForce RTX 3090 24GB graphics processor, and 32GB of 3400 MHz DDR4 RAM. The environment was viewed through an HTC Vive Pro head-mounted display (HMD) with head tracking turned off to mimic the conditions experienced by FPV drone pilots. The HTC Vive Pro has a total resolution of 2160x1200 (1080x1200 per eye), a refresh rate of 90Hz, and 110-degree horizontal and vertical fields of view. A Pupil-Labs VR/AR extension eye tracker [21] was mounted inside of the HMD. The eye tracker was set up so that each eye recorded at 120Hz with a resolution of 196x196. The simulated quadcopter was controlled using a Taranis Q X7 RC controller with a micro-USB connection.

Virtual environment

The virtual environment was designed using the Unity game engine (version 2019.3.14f) and minimally consisted of a ground surface, trees, and a partly cloudy sky. The ground surface was created using a terrain generation tool (Gaia Pro, which was purchased from the Unity Asset Store) and included natural-looking ground textures and small variations in elevation. The terrain was discretized into a grid-like pattern and coniferous trees of various types and sizes were placed at random positions within each section of the grid. If trees were spawned too close to the path or too close to one another, they were moved by hand in the Unity editor.

This base environment was augmented to create different experimental conditions by changing aspects of the scene, such as adding a path, placing hoops along the path, and changing the locations and density of trees. The path was present in the Path Only and Hoops + Path conditions of Experiment 1 as well as in both conditions of Experiment 2. The path was hand-crafted using the Path Painter II asset to form an irregularly shaped loop with a width of ~1 m and a circumference of approximately 740 m. In the Hoops Only and Hoops + Path conditions of Experiment 1, 42 yellow hoops (~1 m in diameter) were placed at irregular intervals along the path. The Unity Experiment Framework (UXF) [22], an open-source Unity-based experimental interface tool, was used to move subjects through each phase of the experiment.

The quadcopter’s physics relied on two assets from the Unity store: the “red” drone from the Drones Bundle Package for the physical object and FPV Drone Controller for the controls. The quadcopter mirrored a typical photography drone with the maximum speed set to approximately 17 m/s. The left joystick controlled the altitude (Y axis) and the yaw rate (X axis) while the right joystick controlled forward/backward thrust (Y axis) and lateral (X axis) thrust. The quadcopter model automatically compensated for the force of gravity such that it stabilized at a fixed altitude as long as the left joystick was centered along the Y axis. If the vehicle collided with a tree, hoop, or the ground, it bounced back for a brief period, after which time normal flight resumed. Rarely, the quadcopter would get stuck in the branches of a tree. When this happened, the experimenter aborted the current lap and restarted the drone at the beginning of the course.

Design and procedure

In both experiments, subjects first provided written consent, after which they listened to a set of brief instructions. They were then asked to place the HMD on their head and adjust the head straps so that they could see the full view of the screen in the HMD and so that the eye tracker camera had adequate coverage of their eyes. Subjects then followed the eye tracker calibration process as described in the subsection on eye tracking below. Next, subjects completed a practice phase during which they flew freely in an open environment to familiarize themselves with the controls and learn how the virtual quadcopter responded to joystick inputs. After three minutes, they were asked if they were sufficiently comfortable with the controls to continue on to the testing blocks. If they were not yet comfortable, they were given additional time to continue practicing until they were ready for testing.

The main part of Experiment 1 comprised five blocks, each of which included three sub-blocks. Within each sub-block, subjects completed two laps around the loop in one of the three conditions (Hoops Only, Hoops + Path, Path Only), yielding 10 laps per condition across the entire experiment. The Hoops Only and Path Only conditions alternated between the first and third sub-blocks, and the Hoops + Path condition was always the second sub-block. Subjects were asked to recalibrate the eye tracker at the beginning of each sub-block. The quality of each calibration was assessed and recorded.

Subjects were instructed to fly as quickly as they could while staying close to the path (in the Path Only and Hoops + Path conditions), attempting to pass through each hoop (in the Hoops Only and Hoops + Path conditions), and avoiding collisions. Nothing in the instructions or the task itself indicated to subjects that they should prioritize speed over accuracy or vice-versa. They were further instructed that if they missed a hoop, they should continue on rather than backtrack. If the quadcopter collided with any object in the scene (e.g., tree, branch, leaves, the ground), its velocity was altered to simulate bouncing off the object. In the case of a head-on collision, the quadcopter bounced backwards and its velocity sharply dropped for a brief period. If the quadcopter collided with a tree and became stuck, the experimenter pressed a button to abort the trial and restart from the beginning of the current lap. This only occurred twice in Experiment 1 and never in Experiment 2. The simulation was designed such that if the quadcopter strayed too far off the path (approximately 10 meters to the left or right of the path and 20 meters in the vertical direction), it was automatically teleported to the start of the current lap and the recorded data for that lap were overwritten. However, this never occurred in any session of either experiment. In Blocks 1–4, subjects traveled around the loop while moving in the clockwise direction. In the fifth block, subjects started each condition facing the reverse direction and were instructed to fly in the opposite (counterclockwise) direction.

Experiment 2 was identical to Experiment 1 with the exception that there were two conditions (Dense Trees and Sparse Trees) rather than three. Both conditions were similar to the Path Only condition of Experiment 1, but the density of trees on and near the path was manipulated. As such, each of the five blocks included two sub-blocks, one for the Sparse Trees condition and one for the Dense Trees condition. Within each block, subjects completed two laps per condition, for a total of ten laps per condition across the entire experiment. The order of conditions was reversed in consecutive blocks to minimize order effects. All other aspects of the procedure were the same as in Experiment 1.

Simulator sickness

It is well established that certain individuals experience symptoms of simulator sickness in virtual environments, especially during simulated self-motion [23, 24]. In the present study, the risk of simulator sickness was minimized because the subjects had previous experience with FPV drone piloting. During pilot testing, we found that a small number of subjects with fewer hours of previous FPV drone piloting experience encountered mild symptoms of simulator sickness within the first 5–10 minutes of the experiment. In these cases, the experiment was stopped immediately.

Eye tracking

Eye movements were recorded and post-processed using the Pupil Labs Core software. Eye tracker calibration was performed using an 18-point depth-mapped calibration routine provided by the Pupil Labs open-source software. In addition to calibration, an assessment routine was created that used nine points covering the visual field to determine the accuracy and quality of calibration. After calibration assessment, the average eye-tracking error in visual degrees was displayed on the screen. The experimenter then chose to move the participant on to the experiment or have them recalibrate. If recalibration was chosen, the previous calibration recording was overwritten. The mean eye tracking error exceeded five degrees for three of the six subjects in Experiment 1. The data from these subjects were excluded from the analyses of gaze behavior (but not from the analyses of performance measures or anticipatory steering behavior). For the remaining three subjects, the mean eye tracking error in Experiment 1 was 3.05 degrees. Error was greatest in the top left and right sectors and lowest in the center of the display (mean = 2.55 degrees), which is where subjects tended to spend the most time looking. For the three subjects in Experiment 2, the mean eye tracking error was 3.80 degrees (mean = 2.19 degrees in the center). Two of the three subjects in Experiment 2 were among the three from Experiment 1 with accurate gaze data.

Data analyses

Analyses were based on raw data from the following sources: (1) the Pupil Labs eye tracker (recorded at 120 Hz), (2) the RC controller joystick positions (90 Hz), (3) the drone position and orientation (90 Hz), and (4) the positions and orientations of hoops.

To prepare the data for analysis, the raw data were used to compute the values of several variables. The direction of the thrust vector in the horizontal plane was calculated based on the offset of the controller’s right joystick on both the X and Y axes (which determined forward/backward and lateral motion). The instantaneous heading direction was calculated based on the change in drone position over consecutive frames. The direction of the forward-facing camera was derived directly from the drone’s yaw angle. The approach angle of the quadcopter relative to the upcoming hoop was calculated as the angle between the instantaneous heading vector and the normal vector passing through that hoop.
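To make these definitions concrete, the following is a minimal sketch in R (the language used for our analyses) of how the heading and approach angle could be computed. The variable names and example values are hypothetical stand-ins for the logged data.

    # Hypothetical example data: drone (x, z) positions on consecutive frames
    # and the unit normal of the upcoming hoop in the horizontal plane
    pos <- rbind(c(0.0, 0.0), c(0.1, 0.15), c(0.2, 0.31))
    hoop_normal <- c(0, 1)

    unit <- function(v) v / sqrt(sum(v^2))

    # Instantaneous heading: change in drone position over consecutive frames
    i <- 1
    heading <- unit(pos[i + 1, ] - pos[i, ])

    # Approach angle (deg): angle between the heading vector and the hoop normal
    approach_angle <- acos(sum(heading * unit(hoop_normal))) * 180 / pi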

In the Hoops Only and Hoops and Path conditions, flight trajectories were parsed into segments that were defined by pairs of consecutive hoops. The first frame in each segment corresponded to the first time step after the drone passed through the previous hoop, and the last frame corresponded to the time step before it passed through the next hoop.
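Given the frame indices at which each hoop was passed, this parsing reduces to pairing consecutive passage frames. A sketch, with a hypothetical pass_frames vector:

    # Hypothetical frame indices at which consecutive hoops were passed
    pass_frames <- c(120, 260, 415)

    # Each segment runs from the frame after one hoop passage to the frame
    # before the next hoop passage
    segments <- Map(function(a, b) seq(a + 1, b - 1),
                    head(pass_frames, -1), tail(pass_frames, -1))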

All repeated-measures ANOVAs were run using ezANOVA from the ez package in the R programming language. If the sphericity assumption was violated (as indicated by a significant Mauchly’s test), the Greenhouse-Geisser correction was applied to the degrees of freedom. In all figures, error bars correspond to 95% confidence intervals around means with between-subject variation removed. We do not report statistical tests for the measures of gaze behavior because we obtained accurate gaze data from only three of the six subjects. All linear mixed-effects models used to conduct the analysis of anticipatory steering were run using the lmer function from the lme4 package in the R programming language.
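For concreteness, a call of the kind described above might look as follows; the data frame d and its column names are hypothetical stand-ins for the aggregated per-subject measures. ezANOVA reports Mauchly’s test alongside Greenhouse-Geisser-corrected results.

    # Sketch: two-way repeated-measures ANOVA (hypothetical data frame `d`
    # with one row per subject x condition x block, e.g., mean speed)
    library(ez)
    ezANOVA(data = d, dv = speed, wid = subject,
            within = .(condition, block))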

Results

Basic performance measures

Before analyzing gaze behavior, we examine four measures of task performance to determine how successfully subjects were able to perform the task in each condition and how their performance changed over blocks. Subjects were instructed to fly as quickly as possible, so speed of travel is one of the key metrics of task performance. The mean speed of travel averaged over blocks and conditions ranged from 8.37 m/s for the slowest subject to 15.00 m/s for the fastest subject. The overall mean speed was 12.19 m/s, which is equivalent to 43.9 km/h or 27.3 mph.

A two-way repeated-measures ANOVA revealed significant main effects of condition (F1.02, 5.12 = 7.57, p = .039, η2 = 0.60 [0.16, 1.00]) and block (F1.33, 6.64 = 10.75, p < .05, η2 = 0.68 [0.42, 1.00]), as well as a significant condition x block interaction (F3.01, 15.06 = 6.45, p < .01, η2 = 0.56 [0.32, 1.00]) (see Fig 2A). Bonferroni-corrected contrasts revealed that speed was slowest in Block 1 in all three conditions, increased significantly between Blocks 1 and 2, and did not significantly increase between pairs of blocks beyond 1 and 2. Speed in Block 5 (in which subjects flew around the path in the opposite direction) was not significantly different from Block 4. Subjects also flew significantly faster in the Path Only condition compared to the other two conditions, although the effect sizes were moderate (d = 0.53 and d = 0.55).

Fig 2. Analyses of performance measures in Experiment 1: Mean speed (A), path deviation (B), number of collisions (C), and proportion of hoops completed (D).


Each measure is plotted as a function of block number, with the different colors representing different conditions. Error bars represent 95% CIs.

Fig 2B depicts mean deviation from the center of the path broken down by condition and block. The mean overall path deviation was 0.83 m. For reference, the width of the path was approximately 1 m, which means that subjects remained on or close to the path for much of the time. While it may seem surprising that path deviation was similar in the Hoops Only condition, when the path was not rendered, it is worth noting that the hoops were positioned on the path. Neither the main effect of condition (F < 1) nor the effect of block (F4,20 = 2.70, p = .06, η2 = 0.35 [0.00, 1.00]) was significant, but the condition x block interaction was (F8, 40 = 2.77, p < .05, η2 = 0.36 [0.05, 1.00]). Looking at Fig 2B, the source of the interaction appears to be the elevated path deviation in Block 5 of the Hoops Only condition. Closer inspection of the individual subject data revealed that two of the six subjects had substantially greater path deviations in this condition x block combination. Upon further inspection, we noticed that there was at least one turn where it took longer for the upcoming hoop to become visible when traveling in the opposite direction due to occlusion by trees. In the absence of a visible path (i.e., in the Hoops Only condition), certain subjects may have strayed a bit farther before detecting the upcoming hoop. This could explain why path deviation was slightly higher in the Hoops Only condition when the loop was completed in the opposite direction.

The number of collisions per lap (Fig 2C) was greatest in Block 1 in all three conditions, decreased in Block 2, and remained relatively stable between Blocks 2 and 5. The main effect of condition was not significant (F < 1), nor was the main effect of block (F1.11, 5.53 = 4.61, p = .07, η2 = 0.48 [0.12, 1.00]), but the condition x block interaction (F1.76, 8.79 = 5.09, p < .05, η2 = 0.50 [0.24, 1.00]) was significant.

The mean overall proportion of hoops completed (Fig 2D) was 0.955. Neither of the main effects (F1,5 = 1.41, p = .29, η2 = 0.12 [0.00, 1.00] for condition; F < 1 for block) nor the interaction (F4, 20 = 1.17, p = .36, η2 = 0.19 [0.00, 1.00]) was significant. If we use a stricter definition of success that only includes hoops that were completed without the quadcopter colliding with or grazing the edge of the hoop before passing through, the overall proportion drops, but only to 0.847. In other words, subjects cleanly passed through the large majority of hoops. Taken together, this suggests that subjects were able to successfully fly through a high percentage of hoops right from the beginning of the experiment, regardless of whether the path was present or absent.

To summarize, the analyses of performance measures revealed that subjects performed the task well. They flew at a fast speed, successfully followed the path and passed through a high percentage of hoops, and infrequently collided with objects. Performance tended to be worse in Block 1 but improved and remained relatively stable beyond that block, including Block 5, in which subjects traveled around the loop in the opposite direction. These trends were present across all three conditions, with small differences emerging in just a few cases.

In each of the analyses above, all five blocks were included even though subjects flew around the loop in the opposite direction in Block 5. One might feel that the condition x block interaction should be evaluated without Block 5 included. We re-ran the analyses without Block 5 and found that none of the interaction effects changed from significant to non-significant or vice-versa, with one exception. In the analysis of path deviation, the condition x block interaction was not significant when Block 5 was excluded (F6, 30 = 0.79, p = .58, η2 = 0.14 [0.00, 1.00]).

Did subjects cut the corner on sharp turns?

In assessing the hypothesis that humans look where they want to go, one cannot assume that subjects always intended to follow the path. Although subjects were instructed to follow the path as closely as possible, they were also told to maintain a high speed. As such, it remains possible that subjects attempted to “cut the corner” on sharp turns. If they did, that would complicate the analysis of gaze behavior since fixating points that lie off the path while cutting the corner would not necessarily be inconsistent with the general hypothesis that people look where they want to go and steer in the direction of gaze [16].

To determine whether subjects exhibited corner-cutting behavior, we identified nine individual segments of the loop where the path was curved (see Fig 3A) and measured the signed deviation from the path center at each time step within those segments (see Fig 3B). A positive path deviation indicates an outside bias, and a negative path deviation indicates an inside bias. In the Path Only condition, there was an inside bias on four of the nine curves (1, 3, 6, and 9), but the 95% CI overlapped with the inside edge of the path (-0.5 m) for all but one curve (Curve #3). In addition, signed path deviation was positive (indicating an outside bias) on just as many curves (2, 4, 5, and 7). This is not surprising because the path was surrounded by trees on either side, so it was difficult or impossible on most curves to cut the corner without risking a collision. Signed path deviation was even closer to zero in the Hoops Only and Hoops + Path conditions. This makes sense because the hoops were positioned on the path, so in general, subjects could not cut the corner without missing a hoop. Taken together, these results rule out the possibility that subjects aggressively cut the corner when traveling around sharp bends in the path.
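A signed deviation of this kind can be computed from the 2D cross product of the local path tangent and the drone’s offset from the path center. A minimal sketch with hypothetical names; note that mapping the resulting left/right sign onto inside/outside depends on the direction of the turn, which is left to the caller.

    # Signed lateral deviation from the path center in the ground plane.
    # `tangent` is the local path direction; `offset` is the vector from the
    # nearest path-center point to the drone. The cross-product sign encodes
    # left vs. right of the path, which must then be mapped to inside/outside
    # according to the turn direction.
    signed_deviation <- function(tangent, offset) {
      side <- sign(tangent[1] * offset[2] - tangent[2] * offset[1])
      side * sqrt(sum(offset^2))
    }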

Fig 3. (A) Bird’s eye view of the virtual environment with the path highlighted by the dashed yellow line. The solid yellow lines indicate the curved sections of the path that were used in the analysis of signed path deviation. Curves are numbered 1 through 9. (B) Signed path deviation for each numbered section of the path in the Path Only, Hoops Only, and Hoops + Path conditions. Positive values correspond to an outside bias and negative values correspond to an inside bias.

Where did subjects look?

Next, we consider how much time subjects spent looking at each of the following categories of objects: the path, the edge of the hoop, through the center of the hoop, nearby trees, and the background. Nearby trees were defined as those that fell within a distance of 34.4 m of the subject, which corresponds to two times the average distance between hoops. We chose this distance because we wanted the proportion of time spent fixating nearby trees to reflect fixations to objects that subjects may be attempting to avoid at that moment. We reasoned that if gaze fell on trees that were more than two hoops ahead, it was unlikely to be for the purposes of obstacle avoidance. The background included the ground surface other than the path, trees other than those that were classified as nearby, and the sky.

In the Path Only condition, gaze was aimed at the path on only about half of the frames (49%). Equally often, subjects looked at either nearby trees (20%) or the surrounding terrain (29%) (see Fig 4A). Further analyses revealed that when subjects were not looking at the path, they were looking near the path. We quantified this by calculating, for each frame on which gaze fell on the ground, the distance between the point of fixation in the 3D environment and the center of the path. Fig 4B shows the density of fixations as a function of distance from the center of the path. The median distance was 1.11 m, which was slightly greater than the width of the path (approximately 1 m), suggesting that the majority of fixations were either on or near the path. Thus, in the Path Only condition, subjects looked directly at the path only 49% of the time and gaze often fell elsewhere (on trees and nearby terrain), but subjects spent most of their time looking in close proximity to the path.
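This distance measure can be implemented as the distance from each ground fixation to the nearest sample of the path centerline. A sketch with hypothetical inputs:

    # Hypothetical (x, z) coordinates: ground fixations and path-center samples
    fix_xz  <- rbind(c(1.2, 3.4), c(0.8, 5.0))
    path_xz <- cbind(seq(0, 100, by = 0.5), rep(2, 201))

    # Distance from each fixation to the nearest point on the path centerline
    dist_to_path <- apply(fix_xz, 1, function(p) {
      min(sqrt((path_xz[, 1] - p[1])^2 + (path_xz[, 2] - p[2])^2))
    })
    median(dist_to_path)  # the reported median was 1.11 m in the Path Only condition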

Fig 4. (A) Proportion of frames spent looking at the path, nearby trees, background, hoop center, and hoop edge in the Path Only, Hoops Only, and Hoops + Path conditions of Experiment 1. (B) Density plot of distance from fixation point to center of path. (C) Proportion of frames spent looking through the center of the hoop as a function of the amount of time until passing through the hoop. In B and C, the thin colored curves represent individual subject data and the thicker black curve represents the mean across subjects.

In the Hoops Only condition, the desired future path travels through the center of each hoop, which is empty space. As such, there was no visible object on the desired future path to anchor gaze, as there was when following a path. The things that were nearest to the desired future path were the edges of the hoops, but those were fixated only 17% of the time (see Fig 4A). The object category that drew the highest percentage of fixations (41%) was the hoop center, indicating that subjects spent more time looking at distant objects that were visible through the center of the hoop than they did at any other object category.

Before we can conclude that looking through the hoop center was a deliberate gaze strategy, it is necessary to rule out an alternative explanation for the observed results. As subjects approached a hoop, the center of the hoop took up an increasingly larger part of the visual field. Eventually, there was no place to look other than through the center of the hoop. Nevertheless, the hoop did not occupy the majority of the visual field until the last few frames. Fig 4C shows the proportion of frames spent looking through the hoop center as a function of the amount of time remaining until passing through the hoop, cut off at -2 s because the median time between hoops was 1.6 s. At -1 s, the largest visual angle that the hoop could occupy (assuming a head-on approach to the hoop at maximum speed) was approximately 5 deg of visual angle along the horizontal or vertical axis. Yet subjects were looking through the hoop center ~26% of the time. Even at -¼ s, when subjects were looking through the hoop about 70% of the time, the maximum visual angle that the hoop could occupy was ~19 deg. Hence, looking through the center of the hoop does not appear to be a necessary consequence of steering through the hoop but rather part of the gaze strategy for guiding steering.
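The visual angles cited above follow from the standard formula for an object of diameter d viewed head-on at distance D, with D approximated by speed multiplied by time-to-passage. A sketch; the exact values depend on the effective hoop diameter and approach speed assumed.

    # Visual angle (deg) subtended by an object of diameter d at distance D
    visual_angle_deg <- function(d, D) 2 * atan(d / (2 * D)) * 180 / pi

    # e.g., a head-on approach at the 17 m/s maximum speed, 1 s before passage
    visual_angle_deg(d = 1, D = 17 * 1.0)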

In addition, gaze fell on trees 19% of the time and on background scenery 23% of the time. Such fixations may have occurred when subjects were searching the scene for upcoming hoops.

Lastly, gaze behavior in the Hoops + Path condition was very similar to the Hoops Only condition (Fig 4A). This is interesting because the path was visible in this condition, yet subjects spent very little time (~4% of frames) looking at it. Recall from the analyses of performance measures (average speed, path deviation, proportion of hoops completed, and number of collisions) that the differences between the Hoops Only and Hoops + Path condition were negligible. Thus, when the task involved steering through waypoints (i.e., hoops), the presence of the path had minimal impact on gaze behavior and task performance.

To summarize, gaze often fell on things that were not the path or the center of the upcoming hoop. This was almost certainly due to the presence of dense clutter, which meant that the upcoming path and the upcoming hoops did not always pop out—they had to be searched for. (It is also possible that the estimated proportion of fixations to the path and upcoming hoops was reduced due to error in the measurement of gaze direction. We return to this point in the Discussion.) Nevertheless, subjects often looked in the general direction of things that they wanted to move toward—the future path or the center of the upcoming hoop. In addition, subjects performed the task extremely well. They moved at ~12 m/s, stayed within less than 1 m of the center of the path, passed through more than 95% of the hoops, and avoided collisions. Lastly, gaze was aimed at trees only about 25% of the time. By itself, this does not tell us much about whether people need to fixate and track obstacles to avoid them. As such, we turn to the results of Experiment 2, in which the density of trees was manipulated.

The proportion of time spent looking at trees in Experiment 2 was quite low: only about 18% and 7% in the Dense Trees and Sparse Trees conditions, respectively (Fig 5). This is actually lower than in Experiment 1, although the small sample size prohibits statistical comparisons. Thus, despite the large increase in the density of trees near and on the path, the amount of time spent looking at trees did not increase. This suggests that subjects can successfully avoid collisions with obstacles near their future path without sustained tracking of those objects. S1 Movie is from a representative subject in the Dense Trees condition of Experiment 2 and illustrates the basic findings from that condition: Gaze fell near but not always on the path and there were infrequent fixations on obstacles (the trees). Taken together, the findings support the hypothesis that a critical role for gaze during high-speed navigation in cluttered environments is to search for the path. Humans are capable of steering through waypoints and following a path at high speeds even without persistent fixation of these parts of the scene.

Fig 5. Proportion of frames spent looking at the path, nearby trees, and background in the Dense Trees and Sparse Trees conditions of Experiment 2.


Did subjects make anticipatory steering adjustments?

The second aim of this study was to determine whether subjects made anticipatory steering adjustments. Whereas evidence for look-ahead fixations has been found in previous studies, the evidence for anticipatory steering adjustments is less clear. Next, we explain how we went about testing for anticipation. If subjects adapted their approach to the upcoming hoop (Hoop N) in anticipation of the subsequent hoop (Hoop N+1), then their approach trajectory to Hoop N should depend on the position, or perhaps the orientation, of Hoop N+1 relative to Hoop N. That is, the approach trajectory that they take when Hoop N+1 is in one location should be different than when Hoop N+1 is in a different location (see Fig 6A). In other words, the relative position of Hoop N+1 should be a significant predictor of the approach angle to Hoop N.

Fig 6. (A) Illustration of how the trajectory to the most immediate hoop (Hoop N) may depend on the position of the subsequent hoop (Hoop N+1). (B) Predictor variables (angular position and relative orientation) and outcome variable (approach angle), and (C) covariates (heading direction relative to N at N-1, angular position of N relative to N-1) used in the linear model. (D) Illustration of the outcome variable (approach angle) measured at different positions leading up to Hoop N.

This prediction can be expressed in the form of a linear model with the approach angle (αN) to Hoop N as the outcome variable and the angular position of Hoop N+1 relative to Hoop N (θN+1,N) as the predictor. Both angles are defined relative to the normal vector of Hoop N (see Fig 6B). The approach angle to Hoop N may also depend on the initial conditions for the segment of the path between Hoop N-1 and Hoop N, including the drone’s heading direction relative to Hoop N as it passes through Hoop N-1 (ϕN), the drone’s orientation relative to Hoop N as it passes through Hoop N-1 (ρN), and the angular position of Hoop N relative to Hoop N-1 (θN,N-1) (Fig 6C). Hence, we included these variables as covariates in the model, which was a linear mixed-effects model with subjects as a random factor:

\alpha_N \sim \phi_N + \rho_N + \theta_{N,N-1} + \theta_{N+1,N} \quad (1)

Fitting this model to the data, we found that the angular position of Hoop N+1 accounted for 23.6% of the variance in approach angle (measured on the last frame before passing through Hoop N) that was left unexplained by the covariates. We also compared the model in Eq 1 to another model with the covariates alone (i.e., without θN+1,N). The likelihood ratio test was statistically significant (χ²(1) = 413.70, p < .001, ΔAIC = -412), indicating that the angular position of Hoop N+1 was a significant predictor of the approach angle to Hoop N. This is consistent with a strategy that involves anticipation of upcoming steering constraints.
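A minimal sketch of this analysis in R follows; the data frame segs and its column names are hypothetical stand-ins for the per-segment variables defined above, and the last line expresses the partial variance explained as the reduction in residual variance relative to the covariate-only model, one common formulation.

    # Sketch of the mixed-effects model in Eq 1, with subject as a random factor
    library(lme4)
    full    <- lmer(alpha_N ~ phi_N + rho_N + theta_N_Nm1 + theta_Np1_N +
                      (1 | subject), data = segs, REML = FALSE)
    reduced <- update(full, . ~ . - theta_Np1_N)  # covariates only

    anova(reduced, full)  # likelihood ratio test for the Hoop N+1 term

    # One way to express the partial variance explained by theta_Np1_N
    1 - sum(residuals(full)^2) / sum(residuals(reduced)^2)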

Next, we asked how far in advance of reaching Hoop N subjects started to anticipate Hoop N+1. We repeated the analysis with the outcome variable (approach angle) measured at four additional points in time: when the subject was 75%, 50%, 25%, and 0% of the way to reaching Hoop N (Fig 6D). Not surprisingly, the proportion of variance explained decreased as approach angle was measured farther back in time (Fig 7A). Nevertheless, there was still some evidence of anticipation early on in the segment.

Fig 7. Partial R² as a function of the percentage of the segment, with angular position (A) and relative orientation (B) as the predictor variable.


Colored curves show partial R² based on individual subject data.

Lastly, we fit the model in Eq 1 to the data from individual subjects. The variation in partial R² values (see Fig 7A) suggests that some subjects anticipated upcoming hoops to a greater degree than others, especially toward the end of each segment (i.e., at 100%). However, all six subjects exhibited evidence of anticipation.

This analysis suggests that subjects took the relative position of the upcoming hoop into account. Next, we consider whether they also took relative orientation into account. In this analysis, the main predictor is the orientation of hoop N+1 relative to hoop N (ωN+1,N in Fig 6B). We found that only a small proportion of the variance in approach angle was explained by relative orientation (Fig 7B), suggesting that subjects did not adapt their approach trajectory to Hoop N in anticipation of the relative orientation of Hoop N+1.

Discussion

In this section, we summarize the main findings and expand upon their broader significance for our understanding of the gaze and control strategies for high-speed steering through cluttered environments. Our summary is organized into six key findings.

First, in the conditions that required subjects to steer through hoops, the predominant gaze behavior was to look through the center of the upcoming hoop. This is consistent with the findings of [20], which is the one previous study on gaze behavior in drone pilots. However, in the Hoops Only condition of the present study, subjects also spent a significant portion of time looking elsewhere (e.g., at nearby trees and surrounding terrain), which was not reported by [20]. Such differences can be attributed to the additional complexity of the scene used in the present study compared to the relatively sparse environment used in [20], which lacked obstacles other than hoop edges and had less complex background scenery. The presence of clutter in the present study meant that subjects had to actively search for upcoming hoops by scanning the scene.

Second, in the conditions that required following the path, subjects performed the task quite well despite the fact that the proportion of frames spent fixating the path was only about 0.25 to 0.50. As we acknowledge in the Limitations section below, it is possible that this range is deflated due to error in measurement of gaze direction. However, the presence of dense clutter ensured that gaze fell off the path on a substantial proportion of frames. This suggests that sustained fixation of points on the future path is not necessary to successfully control steering. As mentioned in the introduction, many theories and models of steering assume that the information that is needed to skillfully control steering is revealed by fixating points on the ground over which the actor intends to travel. Given the results of the present study, an important goal for future work is to explain how humans are able to successfully negotiate a winding path when gaze often falls off the path. One possibility is that steering is coupled to gaze but only when the gaze point falls on the desired future path. In our view, a more likely possibility is that the information that is relevant for controlling steering can also be picked up using peripheral vision. If so, precise fixation of points on the future path may not be as critical as previously assumed.

Third, although subjects spent less than half the time looking at the path, gaze often fell near the path in the conditions that required following the path. More generally, subjects were able to direct gaze at or near the things that they were instructed to follow or move toward. This is interesting because subjects were moving at such high speeds, and because the path and the hoops blended into the background scenery and were often hidden in shadows or occluded by trees and hills. This points to a significant gap in our understanding of the role of gaze in steering [4]. Existing theories and models attempt to capture how steering is regulated based on currently available information, which is assumed to be made available by looking at the future path. However, because the focus has been on steering in uncluttered environments, the question of how humans know where to look to find the future path has received much less attention.

Fourth, pilots did not often direct gaze at things that they wanted to avoid, such as trees and the edges of the hoops. This suggests that obstacle avoidance does not require sustained tracking. A tentative conclusion is that the information needed to guide obstacle avoidance can be detected using peripheral vision. It is worth noting, however, that all the obstacles in this study were stationary. Avoidance of moving obstacles may require sustained tracking [25].

Fifth, pilots made anticipatory steering adjustments; that is, they adapted how they approached the upcoming waypoint in anticipation of having to steer through future waypoints. As explained in the introduction, evidence of anticipatory steering adjustments in previous studies is inconsistent. The fact that such behavior was exhibited in this study could reflect the presence of significant inertial constraints or the high level of piloting skill of the subjects. Future research should explore the conditions in which actors do and do not exhibit anticipatory steering adjustments, the role of skill, and the control strategies that capture such behavior. It is common in the literature to assume that anticipatory steering behavior reflects a path planning process that relies on internal forward and inverse models [4]. In principle, however, such behavior could also emerge from the coupling of information and action according to some control law [26].

Lastly, we found no significant differences on any performance measure in Block 5 (in which subjects traveled around the loop in the counterclockwise direction) compared to Block 4. This suggests that subjects’ knowledge of the spatial layout of the environment did not play a detectable role in their ability to perform the task. It could be that four blocks of trials were not sufficient for subjects to learn the course well enough for that knowledge to be useful in guiding steering. Alternatively, it is possible that actors rely primarily on currently available information rather than knowledge of spatial layout. Further research is needed to tease apart these possibilities.

Implications for control of automobile steering

On the surface, it may seem that the findings of the present study are of little relevance to the visual control of steering along a winding road. After all, steering a car along a winding road differs from flying a quadcopter along a winding path in a forest in several respects. The controller dynamics differ, elevation is fixed in a car but variable in a quadcopter, and many of the sources of clutter that occluded the path in the present study (e.g., branches) are not found on roads. Given the absence of clutter on roadways, one could argue that the problem of searching for the desired future path is not an issue in the context of automobile driving as it was in the present study. While that may be true when driving under ideal conditions, people also frequently drive under less idealized conditions in which the upcoming road may be hard to distinguish from the surrounding terrain due to poor visibility. For example, when driving at night, in fog, and in heavy rain or snow, the road does not always immediately pop out. This is especially true if the lane markers are faded or if the driver has poor vision. As such, the problem of searching for the upcoming road is an important component of automobile driving in certain scenarios. Identifying the strategies that drivers use to search for the road in such conditions should be an important goal for future research. Likewise, it would be useful to expand models of steering to better account for human steering performance under conditions in which sustained fixation of the road is not possible. Such models could have useful applications for predicting driver performance under conditions in which visibility is degraded, which is an important but less well understood area of driver behavior modeling. By improving models of driver behavior, engineers could conduct more accurate offline virtual safety evaluations of driver assistance systems [27].

Limitations

Caution must be exercised when generalizing the findings due to the small sample size and the focus on subjects with extensive FPV drone piloting experience. For example, subjects in the present study were able to perform the task well despite the presence of dense clutter, which prevented continual fixation of the upcoming path. Nevertheless, it is possible that for less skilled actors, successful steering depends more critically on the ability to continually fixate the future path. Likewise, there may be individual differences or differences across skill levels in anticipatory steering behavior.

Another reason for caution stems from error in the measurement of gaze direction from the eye tracker. We attempted to mitigate the impact of error by focusing our analyses of gaze behavior on the three subjects whose mean eye tracking error during the calibration phase was less than 5 degrees overall and 2 to 3 degrees in the center of the display. It remains possible that the actual proportion of time spent looking at the upcoming path, the hoop edges, or the hoop center was greater than what was found in our analyses. However, given the speed at which subjects were moving and the fact that hoops and sections of the path were often occluded by trees, branches, or small hills or hidden in shadows, it is highly likely that gaze fell on the background or nearby trees or terrain on a substantial proportion of frames.

Conclusions

Although gaze and steering appear to be tightly coupled when driving along a winding road on a flat, obstacle-free ground surface, the control of steering is not likely to be as directly linked to gaze direction when moving through cluttered environments. The need to search for navigable spaces within the clutter means that some fixations will fall on objects and surfaces that are not in the intended direction of locomotion. Furthermore, humans adapt how they approach the most immediate waypoint in anticipation of future waypoints, suggesting that non-fixated waypoints also influence steering. It follows that the control of steering is not always guided by the direction of gaze and that sustained fixation of the desired future path is not necessary for following paths, steering through waypoints, and avoiding obstacles. Either intermittent fixation of the future path is sufficient, or actors can also rely on information picked up by peripheral vision.

Supporting information

S1 Movie. Recording of a sample subject from the Dense Trees condition of Experiment 2, in which subjects were instructed to follow the path while avoiding trees and overhanging branches.

The red crosshair shows the point of fixation.

(MP4)


Data Availability

All data files are available on Open Science Framework (https://osf.io/5dtxh/).

Funding Statement

This material is based upon work supported by the National Science Foundation (https://www.nsf.gov/) under Grant No. 2218220 to BRF and by the Office of Naval Research (https://www.nre.navy.mil/) under Grant No. N00014-18-1-2283 to BRF. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Office of Naval Research. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Hayhoe MM. Vision and action. Annual Review of Vision Science. 2017;3:389–413. doi: 10.1146/annurev-vision-102016-061437
  • 2. Hayhoe MM, Matthis JS. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection. Interface Focus. 2018;8:20180009. doi: 10.1098/rsfs.2018.0009
  • 3. Land M, Tatler B. Looking and acting: vision and eye movements in natural behaviour. Oxford University Press; 2009.
  • 4. Lappi O, Mole C. Visuomotor control, eye movements, and steering: a unified approach for incorporating feedback, feedforward, and internal models. Psychological Bulletin. 2018;144:981–1001. doi: 10.1037/bul0000150
  • 5. Diaz G, Cooper J, Rothkopf C, Hayhoe M. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task. Journal of Vision. 2013;13:1–14.
  • 6. Land MF, McLeod P. From eye movements to actions: how batsmen hit the ball. Nature Neuroscience. 2000;3:1340–1345. doi: 10.1038/81887
  • 7. Sarpeshkar V, Abernethy B, Mann DL. Visual strategies underpinning the development of visual–motor expertise when hitting a ball. Journal of Experimental Psychology: Human Perception and Performance. 2017;43:1744. doi: 10.1037/xhp0000465
  • 8. Domínguez-Zamora FJ, Marigold DS. Motives driving gaze and walking decisions. Current Biology. 2021;31:1632–1642.e4. doi: 10.1016/j.cub.2021.01.069
  • 9. Matthis JS, Yates JL, Hayhoe MM. Gaze and the control of foot placement when walking in natural terrain. Current Biology. 2018;28:1–16.
  • 10. Land MF, Lee DN. Where we look when we steer. Nature. 1994;369:742–744. doi: 10.1038/369742a0
  • 11. Tuhkanen S, Pekkanen J, Rinkkala P, Mole C, Wilkie RM, Lappi O. Humans use predictive gaze strategies to target waypoints for steering. Scientific Reports. 2019;9:8344. doi: 10.1038/s41598-019-44723-0
  • 12. Wilkie R, Wann J. Controlling steering and judging heading: retinal flow, visual direction, and extraretinal information. Journal of Experimental Psychology: Human Perception and Performance. 2003;29:363. doi: 10.1037/0096-1523.29.2.363
  • 13. Kandil FI, Rotter A, Lappe M. Car drivers attend to different gaze targets when negotiating closed vs. open bends. Journal of Vision. 2010;10(4):24. doi: 10.1167/10.4.24
  • 14. Wann J, Land M. Steering with or without the flow: is the retrieval of heading necessary? Trends in Cognitive Sciences. 2000;4:319–324. doi: 10.1016/s1364-6613(00)01513-8
  • 15. Lappi O. Future path and tangent point models in the visual control of locomotion in curve driving. Journal of Vision. 2014;14(12):21. doi: 10.1167/14.12.21
  • 16. Wilkie RM, Kountouriotis GK, Merat N, Wann JP. Using vision to control locomotion: looking where you want to go. Experimental Brain Research. 2010;204:539–547. doi: 10.1007/s00221-010-2321-4
  • 17. Robertshaw KD, Wilkie RM. Does gaze influence steering around a bend? Journal of Vision. 2008;8(4):18. doi: 10.1167/8.4.18
  • 18. Lehtonen E, Lappi O, Kotkanen H, Summala H. Look-ahead fixations in curve driving. Ergonomics. 2013;56:34–44. doi: 10.1080/00140139.2012.739205
  • 19. Wilkie RM, Wann JP, Allison RS. Active gaze, visual look-ahead, and locomotor control. Journal of Experimental Psychology: Human Perception and Performance. 2008;34:1150. doi: 10.1037/0096-1523.34.5.1150
  • 20. Pfeiffer C, Scaramuzza D. Human-piloted drone racing: visual processing and control. IEEE Robotics and Automation Letters. 2021;6:3467–3474.
  • 21. Kassner M, Patera W, Bulling A. Pupil: an open source platform for pervasive eye tracking and mobile gaze-based interaction. In: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication; 2014. p. 1151–1160.
  • 22. Brookes J, Warburton M, Alghadier M, Mon-Williams M, Mushtaq F. Studying human behavior with virtual reality: the Unity Experiment Framework. Behavior Research Methods. 2020;52:455–463. doi: 10.3758/s13428-019-01242-0
  • 23. Rangelova S, Andre E. A survey on simulation sickness in driving applications with virtual reality head-mounted displays. PRESENCE: Virtual and Augmented Reality. 2018;27:15–31.
  • 24. Saredakis D, Szpak A, Birckhead B, Keage HAD, Rizzo A, Loetscher T. Factors associated with virtual reality sickness in head-mounted displays: a systematic review and meta-analysis. Frontiers in Human Neuroscience. 2020;14:96. doi: 10.3389/fnhum.2020.00096
  • 25. Jovancevic-Misic J, Hayhoe M. Adaptive gaze control in natural environments. Journal of Neuroscience. 2009;29:6234–6238. doi: 10.1523/JNEUROSCI.5570-08.2009
  • 26. Fajen BR, Jansen AJ. Steering through multiple waypoints without model-based trajectory planning. Journal of Vision. 2023;23:5019.
  • 27. Svärd M. Computational driver behavior models for vehicle safety applications [dissertation]. Gothenburg, Sweden: Chalmers University of Technology; 2023.

Decision Letter 0

Markus Lappe

28 Sep 2023

PONE-D-23-23553
Coordination of gaze and action during high-speed steering and obstacle avoidance
PLOS ONE

Dear Dr. Fajen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Nov 12 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Markus Lappe

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

4. We note that Figure 1 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

1. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license.

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

2. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The study explores gaze and steering behavior of drone pilots engaged in first-person view drone racing within a dense forest environment. While participants effectively completed the racing tasks, they looked at their future waypoints less often than expected. Participants anticipated upcoming hoops, optimizing their steering accordingly. Overall, the paper is well-written and provides valuable insights into human navigation behavior in FPV drone racing.

My main issue revolves around statistics. I am worried about the presentation of the statistical analysis.

First, the sample size in Experiment 1, involving only six subjects, and particularly the inclusion of only three subjects in Experiment 2 (one of them an author), raises concerns. PLOS One places great emphasis on appropriate sample sizes. Given the requirement for drone piloting expertise, it's understandable that recruiting a larger sample was challenging. Now that the pandemic has eased, it may be worthwhile to explore the possibility of obtaining additional subjects to enhance the robustness of the study. But I realize this might not be possible. For now, the decision to primarily focus on the results of Experiment 1 appears justified.

Second, the statistical details in the paper require further clarification. For instance, in line 316, it would be helpful to specify which post-hoc tests were employed and whether any correction method for multiple comparisons was applied. Additionally, when discussing the repeated-measures ANOVA, it should be explicitly mentioned that "condition" was treated as a between-subject factor.

The inverted Block 5 really stretches the definition of repeated measures. However, it doesn't seem to be statistically different from the other blocks, except for what the authors describe as a "spurious finding." Does the analysis yield consistent results when Block 5 is excluded, apart from this "spurious" interaction?

The absence of statistical tests in the "Where did subjects look?" section could be clarified. Is this related to the exclusion of subjects with imprecise eye-tracking data? For Experiment 2, the small sample size is mentioned in line 440. Were the three subjects excluded from the eye movement analysis the same three who did not participate in Experiment 2?

In the section on anticipatory steering adjustments, an LMM is used, but no specific model results are presented. It seems like the analysis involved adding the angular position of Hoop N+1 to a separate LMM and calculating an LRT. This procedure should be explicitly documented for clarity.
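
If the procedure is as I have described, it could be documented with something like the following lme4 sketch (the variable and data names here are hypothetical, and the authors' actual model specification may differ):

    # Hypothetical sketch of the nested-model comparison: does adding the
    # angular position of Hoop N+1 improve the fit over a model without it?
    library(lme4)
    m0 <- lmer(approach_angle ~ hoopN_angle + (1 | subject),
               data = dat, REML = FALSE)
    m1 <- lmer(approach_angle ~ hoopN_angle + hoopN1_angle + (1 | subject),
               data = dat, REML = FALSE)
    anova(m0, m1)  # likelihood-ratio test for the Hoop N+1 term

Stating the model formulas and the test in this form would remove any ambiguity about what was fit and compared.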

While the statistical methodology appears sound, the presentation would benefit from some refinement to ensure transparency and enable readers to have a clearer understanding of the results.

Apart from these issues, the discussion already addresses the problem of eye tracking precision, and the authors are best positioned to assess its implications for the study.

Notes:

1. In line 257, Experiment 2 is described as identical to Experiment 1, but it remains unclear to me which configuration of hoops and path from Experiment 1 was used.

2. The eye tracking errors (exceeding 5 degrees for excluded subjects and averaging 3.05 degrees for the remaining subjects) are notably large. While the standards for eye tracking in VR differ from stationary eye trackers, it could be beneficial to include some discussion regarding why these errors were this large.

3. This is a minor point, but in line 575, a comparison with car steering is drawn: while some arguments against this comparison are given, the absence of elevation in automobile steering could be acknowledged, and the possibility of testing this with a larger sample size could be pointed out. It might be helpful to clarify the intended message here.

4. The paper lacks mention of motion sickness, despite the nature of the task, which could typically lead to such issues. While the Vive Pro is a solid device, motion sickness can still occur. Was motion sickness measured during the study? Were any countermeasures taken beforehand to mitigate its effects on the participants?

5. Figures 3 and 4 suffer from excessive JPEG compression in the PDF version.

Reviewer #2: I read this paper with interest. The topic is timely and relevant, and the general approach is novel.

I do, however, have a number of concerns that would appear to weaken many of the conclusions of the paper.

The first set of concerns focuses on the author-acknowledged weaknesses. The limitations section of the paper identifies three aspects of the design that limit the study (sample size, participant expertise, and error in gaze measures); however, these limitations do not seem to be fully considered elsewhere. The abstract does not state the sample size or acknowledge these weaknesses, and throughout the manuscript conclusions are drawn from the data as if these issues do not exist. In the analysis I would have been really interested to see more examination of the individual patterns of behaviour in both steering trajectories and gaze behaviours. Currently the only hint is provided in Figure 3B, which shows the extent of variation in gaze performance for 2 of the 3 individuals (1 dataset is masked by the average line). Instead, the manuscript focusses on presenting group analyses for each measure, which masks the fact that performance will be highly influenced by a single participant. Similarly, the limitations section flags that eye-tracking measures had an error of 2-5 deg, but this is not addressed in the eye-tracking analysis in terms of the likelihood of misclassification of target objects/regions or of error in estimating gaze distance from path center.

The second set of concerns ties in with the novelty of the task used and the extent to which flying a drone in the manner specified in the experiment is comparable to the experiments referenced in the introduction looking at steering along paths.

The task given to participants was: “fly as quickly as possible while staying close to the path, and attempting to pass through each hoop (when present)”. This task explicitly prioritises speed over accuracy, and would appear to reduce the importance of spatial accuracy with respect to the rendered path and the hoop.

With these instructions I would expect trajectories to only loosely align with the rendered path, since the priority is taking the quickest route (that passes through the hoops). The equivalent instruction when driving would cause drivers to take the "racing line", which leads to 'corner cutting' (deviation away from the center of the path). This is crucial for interpreting the presented data because research shows that such instructions can alter gaze patterns; e.g., Reference [16] showed that when drivers were instructed to take the racing line, gaze fell on the inside edge of the demarcated road.

The task instructions cause two problems for the manuscript in its current form: (A) It is unclear that the analyses of steering and gaze behaviours are sufficient/appropriate to identify whether the above behaviours occur, or to confirm whether gaze is being directed toward the paths actually taken (rather than the paths rendered on the screen); and (B) The conclusion that gaze is only weakly coupled with steering could simply be a result of the reduced importance/relevance of spatial information in this task.

I will expand in detail on some of the issues around (A) and (B):

It seems that the analysis of steering does not really show whether there is good compliance with the rendered path. The value of 0.83 m deviation from path center is reported and used to suggest that the drone remained on or close to the path; however, deviation of more than +/- 0.5 m from the path center could be considered leaving the path (given the whole path is only 1 m wide). The fact that path deviation metrics do not differ when the path is no longer rendered (and only hoops are present) would seem to suggest that the rendered path is only weakly influencing steering because it is not prioritised in the task instructions. Further examination of the shape of the trajectories relative to the path may be able to provide a better sense of whether the drone pilots were merely taking the shortest/quickest route through each lap (as instructed), or whether trajectories are in fact constrained by the rendered path.

The manuscript states in a few places that "subjects performed the task extremely well"; however, because the priority instruction was to travel quickly, interpreting spatial errors is difficult because there will clearly be speed-accuracy trade-offs. Trials where there were large spatial errors (>10 m left/right, or 20 m vertically) were halted and a new trial was run instead, though no report of how often this occurred is given, making it hard to interpret whether the data are representative of most flight attempts. The number of collisions reported in Figure 2 varies, but after very high values in the first block they seem to settle to approximately 5-10 per lap; the authors interpret this as "infrequent", but I find this value hard to interpret. Compared to driving a vehicle on a road (where no collisions are tolerated), this value still seems extraordinarily high; however, it is possible that a very low threshold is being used to classify a collision (i.e., is clipping a branch/leaf on a tree a collision?) or that this could be considered low given how many potential collisions with trees/hoops there might be during a lap.

Unfortunately, I could see no definition of what constitutes a collision in the manuscript, or of how long a lap takes (on average), nor was it clear whether anything happened to the drone as a result of a collision (i.e., did it just fly through the collision object, or get slowed down, or stopped?). Mention is made of the drone getting stuck in branches, which suggests that some collision detection was used to alter movement of the drone. Related to this, it is unclear whether the path/ground is itself a potential collision object; it seems likely it would need to be, or the drone could pass through the ground. However, this does then undermine a key conclusion that pilots did not need to look at collision obstacles. The fact that the ground could be considered an 'attractor' along 2 axes and a 'repellor' in the vertical axis would seem to be of potential interest, but I could not see any consideration of this characteristic (i.e., how high did the pilots fly above the ground, and how consistently was this height maintained?). Depending on how collisions/spatial errors are handled in terms of error feedback, you might expect different behaviours from the pilots; e.g., participants might be inclined to reduce spatial errors/collisions if that led to slower trials, or if they knew for sure from error feedback that they had failed the task of passing through each hoop. The number of collisions approximately halved when hoops were removed (in Block 5), so it seems possible that around half of collisions (~5) were with the hoops themselves, suggesting that pilots often clipped the hoop rather than passing through the centre. This also highlights an issue with the 'hoops completed' metric: what was the spatial cut-off for success? Did colliding with the hoop constitute 'completion'? If there was no temporal penalty for collisions, then it seems likely that trajectories will again be biased toward the quickest trajectory at the cost of passing cleanly through the hoops, which again would seem to weaken the case for gaze and steering being tightly coupled.

Currently, the reported metrics would seem to reinforce the idea that the drone pilots were accepting spatial trajectory errors as a trade-off for increasing their speed in completing laps. This is important since it could explain why gaze does not appear to be directed to the rendered path or hoop as much as would be expected from the literature.

In summary, whilst the experimental design appears to have the potential to reveal some interesting findings, in its current form this manuscript does not provide sufficient evidence/justification to support the claims made.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Richard Wilkie

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Mar 8;19(3):e0289855. doi: 10.1371/journal.pone.0289855.r002

Author response to Decision Letter 0


5 Dec 2023

Our responses to the reviewers' comments are included in the "Response to Reviewers" document.

Attachment

Submitted filename: Response To Reviewers.docx

pone.0289855.s002.docx (31KB, docx)

Decision Letter 1

Lei Zhang

16 Jan 2024

PONE-D-23-23553R1
Coordination of gaze and action during high-speed steering and obstacle avoidance
PLOS ONE

Dear Dr. Fajen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 01 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Lei Zhang, PhD

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

The revised manuscript has been largely improved, but there are still some places that need further revision.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #3: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The latest changes in the paper show a more careful consideration of limitations, a more transparent statistical analysis, and a discussion of "corner cutting behavior." The revised ANOVA is better explained, addressing assumptions and factors appropriately.

I'm not sure if Supporting Information is allowed, but if possible, consider presenting the additional analysis without Block 5 there. If there is no Supporting Information, maybe you find an elegant way of addressing it in the paper. If not, the current version is okay.

However, I'm confused about why information about mixed linear models was removed from the end of the Data Analyses section. The paper didn't claim that ANOVAs were run with LME4, but MLMs (still used in the "Did subjects make anticipatory steering adjustments?" section) were mentioned.

The main issue that still stands out is the small sample size. It's good that the focus on a small group of skilled individuals is clearly stated in the abstract, making it easy for any reader to understand.

Overall, I appreciate the improvements in this version, especially in addressing limitations and making the statistical analysis clearer.

Reviewer #3: 1. The abstract of this article needs to be summarized.

2. In conclusion, it is recommended that the authors point out the main future application scenarios of the main results of this paper.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Gianni Bremer

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Mar 8;19(3):e0289855. doi: 10.1371/journal.pone.0289855.r004

Author response to Decision Letter 1


28 Jan 2024

Response to reviewers

Manuscript number: PONE-D-23-23553R2

Manuscript title: Coordination of gaze and action during high-speed steering and obstacle avoidance

Authors: Powell, Marshall, Diaz, & Fajen

We thank the reviewers for their careful reading of the revised manuscript. Our response to each comment can be found below in blue font. Changes to the manuscript are also indicated in the marked-up version. Note that line numbers below refer to those in the marked-up version of the manuscript with changes tracked. The line numbers in the unmarked version differ because MS Word includes deleted text in its count of line numbers.

Response to Editor instructions

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

We updated Reference 26, which was previously in press and is now published. We also added Reference 27 to support a point that was added in response to Reviewer 3’s second comment below.

Response to Reviewer #1 comments:

The latest changes in the paper show a more careful consideration of limitations, a more transparent statistical analysis, and a discussion of "corner cutting behavior." The revised ANOVA is better explained, addressing assumptions and factors appropriately.

I'm not sure if Supporting Information is allowed, but if possible, consider presenting the additional analysis without Block 5 there. If there is no Supporting Information, maybe you find an elegant way of addressing it in the paper. If not, the current version is okay.

We added a paragraph toward the end of the section on analyses of basic performance measures (see Lines 395-401) that summarizes the analyses without Block 5.

However, I'm confused about why information about mixed linear models was removed from the end of the Data Analyses section. The paper didn't claim that ANOVAs were run with LME4, but MLMs (still used in the "Did subjects make anticipatory steering adjustments?" section) were mentioned.

The reviewer is correct. The analyses in the section on anticipatory steering adjustments were run using the lmer function from the LME4 package. We added a sentence to the Data Analysis section to tell readers which R function was used (see Lines 325-327).

The main issue that still stands out is the small sample size. It's good that the focus on a small group of skilled individuals is clearly stated in the abstract, making it easy for any reader to understand. Overall, I appreciate the improvements in this version, especially in addressing limitations and making the statistical analysis clearer.

We are happy to hear that the reviewer is satisfied with the improvements and thank them for their helpful suggestions.

Response to Reviewer #3 comments:

1. The abstract of this article needs to be summarized.

We are unsure what the reviewer means by this comment. The abstract is a summary of the article, so it is unclear to us what it would mean to summarize the abstract. We wondered whether the reviewer meant to write “shortened” rather than “summarized” but the abstract is already under the 300-word limit, so that wouldn’t make sense. We also wondered if they meant that the main take-away of the article needs to be summarized in the main text, but that wouldn’t make sense either since that is exactly the point of the last paragraph of the article. We apologize for not being able to respond to this comment, but we simply don’t understand what the reviewer is asking us to do.

2. In conclusion, it is recommended that the authors point out the main future application scenarios of the main results of this paper.

We explained how the findings could inform the development of models of driver performance under conditions in which visibility is degraded, which is an important but neglected area of driver behavior modeling (see Lines 679-683).

Attachment

Submitted filename: Response To Reviewers.docx

pone.0289855.s003.docx (20.8KB, docx)

Decision Letter 2

Lei Zhang

7 Feb 2024

Coordination of gaze and action during high-speed steering and obstacle avoidance

PONE-D-23-23553R2

Dear Dr. Fajen,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Lei Zhang, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

The revised manuscript can be accepted for publication.


Acceptance letter

Lei Zhang

28 Feb 2024

PONE-D-23-23553R2

PLOS ONE

Dear Dr. Fajen,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Lei Zhang

Academic Editor

PLOS ONE
