Psychonomic Bulletin & Review. 2022 May 17;29(5):1531–1557. doi: 10.3758/s13423-022-02117-w

Peripheral vision in real-world tasks: A systematic review

Christian Vater, Benjamin Wolfe, Ruth Rosenholtz
PMCID: PMC9568462  PMID: 35581490

Abstract

Peripheral vision is fundamental for many real-world tasks, including walking, driving, and aviation. Nonetheless, there has been no effort to connect these applied literatures to research in peripheral vision in basic vision science or sports science. To close this gap, we analyzed 60 relevant papers, chosen according to objective criteria. Applied research, with its real-world time constraints, complex stimuli, and performance measures, reveals new functions of peripheral vision. Peripheral vision is used to monitor the environment (e.g., road edges, traffic signs, or malfunctioning lights), in ways that differ from basic research. Applied research uncovers new actions that one can perform solely with peripheral vision (e.g., steering a car, climbing stairs). An important use of peripheral vision is that it helps compare the position of one’s body/vehicle to objects in the world. In addition, many real-world tasks require multitasking, and the fact that peripheral vision provides degraded but useful information means that tradeoffs are common in deciding whether to use peripheral vision or move one’s eyes. These tradeoffs are strongly influenced by factors like expertise, age, distraction, emotional state, task importance, and what the observer already knows. These tradeoffs make it hard to infer from eye movements alone what information is gathered from peripheral vision and what tasks we can do without it. Finally, we recommend three ways in which basic, sport, and applied science can benefit each other’s methodology, furthering our understanding of peripheral vision more generally.

Keywords: Peripheral vision, Walking, Aviation, Driving, Sports science

Introduction

Peripheral vision, the visual field beyond our current point of gaze (i.e., outside the parafovea or the central 4–5° around the fovea; Larson & Loschky, 2009), provides information that is essential for a vast range of tasks in everyday life. For example, walking and driving require us to be aware of the behavior of others so as not to collide with them (see Fig. 1 for a driving example). It is impossible to always fixate the most relevant visual information at the right time; our environment sometimes changes in an unpredictable manner, and the relevant information may not be localized to a single location. That peripheral vision is vital to our everyday life also becomes apparent from clinical cases of its absence. Patients suffering from retinitis pigmentosa, a disease that progressively robs the patient of peripheral input, have profound difficulties navigating the world, since so much happens outside their field of view (Crone, 1977; Pagon, 1988).

Fig. 1

Illustration of an urban street scene (“Crowded Street With Cars Passing By,” by Suzukii Xingfu; sourced from Pexels.com, under CC0) with cars, motorbikes, and pedestrians. (a) The entire scene. (b) A visualization of a “useful field” of approximately 15° radius around fixation, illustrating the commonly held misconception that observers can only perceive visual information within this region around the point of fixation; the surrounding region is faded out to illustrate how much information would be missing

What we can or cannot do with peripheral vision has mostly been studied in fundamental rather than applied research. This work has shown that we acquire information from the entire visual field when the task requires it, as when perceiving the gist of a scene (Boucart et al., 2013; Ehinger & Rosenholtz, 2016; Geuzebroek & van den Berg, 2018; Larson et al., 2014; Larson & Loschky, 2009; Loschky et al., 2019; Trouilloud et al., 2020; Wang & Cottrell, 2017). In fact, we use peripheral input to guide search (Hulleman & Olivers, 2017), and it can help us identify objects away from fixation, even when they are present in complex environments (Wijntjes & Rosenholtz, 2018; but see Ringer et al., 2021, and Sanocki et al., 2015, for cases where identification performance was impaired). Many experiments in basic vision research place few demands on participants that would push them to use peripheral vision, inadvertently encouraging interpretations that focus on foveal vision (a point discussed in Gegenfurtner, 2016; Rosenholtz, 2016). If, for example, you are participating in a classic visual search experiment, looking for Ts among Ls, the task requires you to search sequentially through the array of letters; it is tempting to focus on that sequence of fixations rather than on what informs the gaze shifts and how they are planned, both of which rely on peripheral vision. That, then, raises the question: What might happen if we did not have the luxury of focusing on a single task at a time, as we seldom can in life outside the laboratory?

Our goal in this review is to discuss how peripheral vision is used in driving, walking, and aviation, tasks that cannot be completed successfully without it. This builds on our previous work on peripheral vision in a range of contexts, from basic vision science (Rosenholtz, 2016, 2020) to driving (B. Wolfe et al., 2017; B. Wolfe et al., 2020) to sports (Vater et al., 2020). Because the task demands inherent in walking, driving, and aviation draw on the same fundamental processes and attributes discussed in that earlier work, we extend it here to identify how peripheral vision is used in a trio of very different real-world activities. In doing so, we aim not only to see how peripheral vision is used in these applications, but to spur future research in both applied and basic areas to deepen our understanding of peripheral vision.

To provide context for our discussion of peripheral vision, we first give a brief orientation to peripheral vision in basic vision science research, focused on the mechanisms of visual perception. Second, we discuss how peripheral vision is used in sports, and how the very different visual demands of sport push players to adopt strategies not seen in simple laboratory experiments. Together, these brief reviews set the stage for the present review and contextualize our conclusions. We then dive into the topic of this review in earnest, examining how peripheral vision is used in tasks where it is an integral component, and what impacts our ability to use it. Finally, we conclude by discussing how our understanding of peripheral vision has been enhanced by this exercise, and provide three suggestions regarding future peripheral-vision research and the information required for different tasks, which we hope will foster new and innovative research.

The basics of peripheral vision

In order to understand why peripheral vision differs from foveal vision, we need to start with anatomy. The fovea, the small central region of the retina onto which the fixated part of the scene is projected, has the highest photoreceptor density; it comprises 1% or less of the total surface of the retina, but accounts for 50% of visual cortex (Curcio et al., 1990; Tootell et al., 1982). Given this anatomical bias, peripheral visual input must be represented differently than foveal input, and phenomenologically, we notice that we are less able to resolve fine detail in the periphery (Anstis, 1974; Strasburger et al., 2011), that we have somewhat poorer color vision (Abramov & Gordon, 1977; Gordon & Abramov, 1977; Hansen et al., 2009), and that, in general, our experience of vision away from our point of gaze is quite different (for a review, see Rosenholtz, 2016), even if we do not think about it much (Rosenholtz, 2020).

How, then, do the differences between foveal and peripheral vision impact our perceptual experience and abilities? Perhaps the most noticeable of these impacts is the phenomenon of visual crowding (Bouma, 1970), where objects near each other in the periphery become difficult to identify. This is not a lack of acuity or resolution, but a consequence of other differences between foveal and peripheral vision. While crowding is often studied with letters, it occurs for all objects in the periphery (e.g., letters, shapes, objects, patterns; for reviews, see Ringer et al., 2021; Rosenholtz, 2016, 2020; Sanocki et al., 2015). While it can be difficult to identify objects in the periphery because of crowding, we do not want to give the impression that the periphery is just a jumble of unrecognizable objects; the information present is useful and is used for a range of tasks.

Given the problem of crowding, one may think that peripheral vision only provides information for saccade planning, since if crowding renders peripheral objects unidentifiable, recognizing them requires making a saccade to bring them to the fovea. This is one role of peripheral vision, but only one among many. A key part of this process is covertly attending to the target of an impending saccade (i.e., by making use of peripheral vision) before the eye moves; this process of presaccadic attention (cf., Deubel & Schneider, 1996; Kowler et al., 1995) is necessary to plan accurate saccades to peripheral targets. However, the act of planning a saccade alone (i.e., without foveation of the target) can make peripherally crowded objects easier to identify and seems to access peripheral information that is otherwise inaccessible (Golomb et al., 2010; Harrison, Mattingley, & Remington, 2013a; B. Wolfe & Whitney, 2014). In fact, this peripheral information is remapped prior to the eye moving (Harrison, Retell, et al., 2013; B. Wolfe & Whitney, 2015), and is likely a key component of how we maintain a stable percept of the world in spite of making several saccades per second (Stewart et al., 2020).

In addition, even without planning a saccade, peripheral vision provides a great deal of useful information (Rosenholtz, 2016). For example, recognition of crowded objects can be improved by perceptual grouping (e.g., Banks & White, 1984; Bernard & Chung, 2011; Livne & Sagi, 2007; Manassi et al., 2012), and scene context can help resolve ambiguous peripheral information (Wijntjes & Rosenholtz, 2018). In addition, though crowding makes it difficult to recognize letters flanked by other letters (and likewise objects in complex real-world scenes; cf. Ringer et al., 2021; Sanocki et al., 2015), it preserves sufficient information to support a range of tasks, for example tracking multiple objects at once (Pylyshyn & Storm, 1988) and understanding the gist of a scene at a glance (Boucart et al., 2013; Ehinger & Rosenholtz, 2016; Geuzebroek & van den Berg, 2018; Larson et al., 2014; Larson & Loschky, 2009; Loschky et al., 2019; Trouilloud et al., 2020; Wang & Cottrell, 2017). In both tasks, the distributed nature of the information needed for the task, as well as the need to keep up with temporal constraints, requires using peripheral vision. In other tasks, we do not have to look at each individual item in a group (B. Wolfe et al., 2015) to determine mean object size (Ariely, 2001) and orientation (Dakin & Watt, 1997), facial emotion (Haberman & Whitney, 2012; Yamanashi Leib et al., 2014), or the heading direction of walking figures (Sweeny et al., 2013).

For that matter, the information we can glean from peripheral vision can be impacted by attention, and the two are often considered together. At a relatively simple level, covert attention (i.e., attending to an object away from the point of gaze) can modestly improve contrast sensitivity (Cameron et al., 2002; Carrasco, 2011), processing speed (Carrasco et al., 2006), change-detection performance (Vater, 2019), and even the perception of an object (Carrasco & Barbot, 2019). There have been attempts to quantify the space around the locus of gaze within which covert attention facilitates object recognition: the functional visual field (Mackworth & Morandi, 1967), alternatively known as the useful field of view (UFOV; Ball et al., 1988; see also Ringer et al., 2016, for a recent UFOV study using natural scenes). It should, however, be noted that highly salient stimuli (i.e., stimuli that are unusual or different from their surroundings) can be particularly easy to detect with covert attention (Itti & Koch, 2000).

In summary, basic vision science tells us that although peripheral vision might be limited, it remains useful for a number of tasks. For example, we can plan saccades, track multiple objects at once, perceive the gist of a scene or set, and perform some object-recognition tasks. These results suggest that peripheral vision is a powerful foundation on which many of our actions in daily life are constructed. It can be hard, particularly in the laboratory, to see the extent to which this is true, since many vision experiments simplify the world as much as possible, but if we step outside the laboratory, we might gain a better appreciation for how we really use peripheral vision.

Peripheral vision in sports

We can learn more about the use of peripheral vision by studying vision in sports. Players do not have the luxury of simple visual environments: in most sports, multitasking is required and actions must be made quickly in order to be effective. As an example, football players often look at the player with the ball and use peripheral vision to monitor other players (opponents and teammates) and to position themselves optimally to prevent the opposing team from scoring a goal (Vater et al., 2019). Vater et al. (2020) provide an overview of how athletes from different sports use peripheral vision and discuss three gaze strategies they use. In some settings, a player might need to monitor multiple locations, each of which requires information only available with central vision. In this situation, players adopt a visual pivot strategy, choosing a gaze location that minimizes the time required to move their eyes to fixate a target once the player decides which one needs fixating. However, this strategy comes with its own costs, since visual information is suppressed during a saccade, and while these intervals of suppression are brief, the lack of information can prove decisive. To avoid this, a player might adopt a gaze anchor strategy, keeping their gaze in one location and relying exclusively on peripheral vision to monitor other locations, in spite of the differences between foveal and peripheral vision.

Finally, similar to the vision-science notion of the functional visual field, in the foveal spot strategy players optimize their fixation to gather information from both the target of fixation and its surround. For example, in a one-on-one situation in soccer, a defender fixates the hip of the opposing player with the ball, since the hip provides information about that player’s direction of travel (cf. Vaeyens et al., 2007). Fixating the hip rather than, for example, the head also reduces the risk of falling for a head fake (Weigelt et al., 2017).

On the whole, the gaze strategies adopted when playing sports suggest that, in complex situations, under time pressure, we leverage peripheral vision in a way that we simply do not in the lab, although we can see echoes of laboratory behavior on the sports field. A player adopting the visual pivot strategy is using a similar approach to research participants in the lab who are told to monitor multiple moving targets in a multiple-object-tracking task (Fehd & Seiffert, 2008, 2010; Vater, Kredel, & Hossner, 2016a). A gaze anchor, where the player’s gaze stays in one spot, is not dissimilar to what participants might do with unpredictable or briefly presented objects, or in scene-gist studies, where there is simply no time to move the eyes before the stimulus vanishes. For that matter, a foveal spot strategy looks a great deal like functional-visual-field strategies in search (Motter & Simoni, 2008; J. M. Wolfe, 2021; Wu & Wolfe, 2018).

Goals of the current review

Taking inspiration from discussions of peripheral vision in sports, and building on our interest in peripheral vision in a wide range of situations, we asked what everyday tasks might have unacknowledged peripheral vision components. In this paper, we review how drivers, pedestrians, and pilots use peripheral information, and which factors change our ability to use it. In doing so, we aim to elucidate patterns of behavior that indicate the use of peripheral vision and to draw connections between fundamental and applied research.

Method

Identification

To conduct this systematic review, we followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) procedure (Moher et al., 2009), performing a systematic literature search in April 2019 using PubMed, Scopus, ScienceDirect, and Web of Knowledge. The results of each search were exported as RIS or TXT files and imported into Citavi (version 6, 2018). We only included peer-reviewed articles, written in English, with accessible full texts. Where the databases offered filters, we used them to exclude conference abstracts, dissertations, book chapters, and reviews. We defined the search terms a priori and combined them with Boolean operators (“AND”, “OR”, “NOT”) as follows: "attention* OR peripheral*" AND “eye movement” OR “eye tracking” OR "gaze*" OR “visual search” AND "walking* OR driving* OR aviation*" NOT “sport*”. The “*” is a wildcard operator (e.g., a search for sport* also matches “sports” or “sportsmen”). We searched for these terms in the title, abstract and, if available, keywords (for details, see Table 1, “Identification”).
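For concreteness, the combined query can be written out programmatically. The sketch below (Python) is illustrative rather than a literal database query; the exact quoting, wildcard, and field-tag syntax differs between PubMed, Scopus, ScienceDirect, and Web of Knowledge:

```python
# Sketch of the a-priori search string. Each database applies its own syntax
# for quoting, wildcards, and field restrictions (title/abstract/keywords).
blocks = [
    'attention* OR peripheral*',
    '"eye movement" OR "eye tracking" OR gaze* OR "visual search"',
    'walking* OR driving* OR aviation*',
]

# Combine the OR-blocks with AND, then exclude the sports literature with NOT.
query = " AND ".join(f"({b})" for b in blocks) + " NOT sport*"
print(query)
# (attention* OR peripheral*) AND ("eye movement" OR ...) AND (walking* OR ...) NOT sport*
```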

Table 1.

Search strategy used for including papers in the reviewed set

Identification
Databases searched: PubMed (title, abstract); Scopus (title, abstract, keywords); ScienceDirect (title, abstract, keywords); Web of Knowledge (title, abstract, keywords)
Inclusion criteria: peer-reviewed; full text available; English language
Exclusion criteria: conference abstracts; dissertations; book chapters; reviews
Search terms used: "attention* OR peripheral*" AND “eye movement” OR “eye tracking” OR "gaze*" OR “visual search” AND "walking* OR driving* OR aviation*" NOT “sport*”

Screening
Abstract exclusion criteria, with examples: diseases (e.g., Parkinson, dementia); drugs (e.g., alcohol, cannabis, ecstasy); fatigue (e.g., fatigue during driving, car accidents); ageing (e.g., cognitive impairments in older people); radiographs (e.g., scanning radiographs)

Eligibility
Search full text for terms: "peripheral" OR "covert" OR "attention"
Other exclusion criteria: no car driving, aviation, or walking; not empirical (e.g., review); no points in evaluation scheme (see Table 2)

Identification lists the databases searched, the filter criteria, and the search terms used. Screening names the excluded topics, with examples. The first Eligibility entry shows the terms searched for in the full texts; studies in which none of these terms were found were excluded from the analyses. The second Eligibility entry lists further reasons for exclusion

Screening

Using this search strategy, we found 975 unique articles. Of these, 850 were primarily focused on topics outside the scope of this review (e.g., diseases, drugs, fatigue, aging, and radiography; for examples, see Table 1, “Screening”). Excluding these, we then searched the remaining 125 full texts (86 driving, 15 aviation, 24 walking) for the keywords “peripheral,” “covert,” or “attention”. If none of these terms was found, the article was removed from the set. In addition, we manually excluded any remaining papers that did not focus on driving, aviation, or walking.
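A minimal sketch of this keyword screen, assuming the 125 full texts were available as plain-text files (the file layout and names here are hypothetical):

```python
import re
from pathlib import Path

# The three terms of which at least one had to appear in the full text
# (matching "attention" also catches "attentional", mirroring a wildcard).
KEEP_TERMS = re.compile(r"peripheral|covert|attention", re.IGNORECASE)

def passes_screen(path: Path) -> bool:
    """Keep a paper only if at least one keyword occurs in its full text."""
    return KEEP_TERMS.search(path.read_text(errors="ignore")) is not None

# Hypothetical layout: one plain-text file per full text, grouped by domain
# (fulltexts/driving/*.txt, fulltexts/aviation/*.txt, fulltexts/walking/*.txt).
fulltexts = sorted(Path("fulltexts").glob("*/*.txt"))
kept = [p for p in fulltexts if passes_screen(p)]
print(f"{len(kept)} of {len(fulltexts)} full texts contain at least one keyword")
```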

While this procedure risks missing articles that might inform our understanding of peripheral vision simply because they do not use the terms we required, finding such papers would require reading every paper even remotely related to the topics of this review, and potentially interpreting them in ways their authors did not, which is not feasible. While imperfect, our selection procedure gives us a formal process for including papers, and papers that focus on the role of peripheral vision are likely, in our estimation, to use the terms we searched for.

We included both simulated laboratory experiments and real-world experiments, because laboratory experiments provide important information that can be difficult to acquire in the real world. It is sometimes safer to bring real-world tasks into a controlled laboratory environment than to study them on the road or in flight, especially when forcing participants to use their peripheral vision or a specific gaze pattern. In a simulator, researchers can approximate the operational reality of driving a car with none of the risks to the driver or other road users. However, simulators have their limits: even the most high-fidelity simulation remains a simulation, and there are few consequences for failure, unlike on the road. While these approaches, and other laboratory-based paradigms (e.g., screen-based environments), have the potential to reveal key elements of how and why we use peripheral input, there will always be limits to what we can learn in the lab (Godley et al., 2002), and the results will need to be validated in the real world. Based on the full texts, we excluded papers that were not empirical studies (e.g., reviews) (see Table 1, “Eligibility”). While screening the full texts, we found three additional cited papers that fulfilled the inclusion criteria and added these to the set. This resulted in a final set of 60 papers (see Fig. 2).

Fig. 2

PRISMA flowchart showing the number of articles excluded and included in the different stages of the screening process. See Table 1 for inclusion and exclusion criteria

Quantitative analyses

Because the 60 included papers focused on peripheral vision to different degrees (for many, the main research question was not peripheral vision), we developed 11 binary criteria (see Table 2) for describing the papers and to help readers identify those most relevant to them. We consider these 11 criteria key points in the context of this review, but the result of the scoring procedure should not be interpreted as a measure of paper quality; the score merely indicates whether the authors mentioned the topics listed. If a paper met a criterion, we scored that criterion with a one; if not, with a zero. For example, if a paper described inhomogeneities in the human retina, or discussed visual crowding and its impact on peripheral vision and visual perception, it was noted as having “characterized visual capabilities.” As in Vater et al. (2020), we use the term “functionality” to describe what peripheral vision is used for. The first author provided initial assessments of all studies on the 11 criteria, after which all three authors discussed the assessments for each paper until consensus was reached.
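In effect, each paper’s score is the sum of 11 binary flags. A minimal sketch of this bookkeeping (the abbreviated criterion names and the example paper are ours, for illustration):

```python
# Each paper is scored 0 or 1 on 11 criteria; its "points" value is the sum.
CRITERIA = [
    "visual_capabilities", "predictions", "pv_manipulation", "att_manipulation",
    "manipulation_check", "foveal_vs_peripheral", "limited_pv", "att_load",
    "own_results", "functionality", "actions",
]

def points(paper: dict) -> int:
    """Sum the binary criterion scores; a focus metric, not a quality score."""
    return sum(paper.get(criterion, 0) for criterion in CRITERIA)

# Invented example mirroring a row of Table 3 (three criteria met -> 3 points).
example = {"pv_manipulation": 1, "foveal_vs_peripheral": 1, "own_results": 1}
print(points(example))  # -> 3
```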

Table 2.

Overview of the criteria used to compare reviewed papers

Criteria 1–2 concern each paper’s Introduction, 3–5 its Methods, 6–8 its Results, and 9–11 its Discussion. For each criterion, the conditions that could satisfy it are listed after the dash; meeting one or more conditions earned a score of 1.

1. Visual capabilities characterized – visual acuity; crowding; saccade properties; visual field; retina characteristics

2. Predictions on peripheral-vision usage – differences between peripheral-vision conditions; effects of peripheral events on eye movements; effects of peripheral events on performance; changes in useful field of view

3. Peripheral-vision manipulation – eccentricity of objects; moving window; peripheral vision blocked

4. Attentional manipulation – spatial cueing to periphery; attentional/cognitive load/demands

5. Peripheral-vision manipulation check – peripheral event detection; changes in saccade behavior

6. Compares foveal and peripheral vision – task in fovea vs. periphery; eccentricity differences; dual tasks (with additional foveal task)

7. Compares with limited peripheral vision – occlusion of fovea or periphery; limited field of vision

8. Different attentional load/demands – effects of secondary task; high vs. low risk; visual demands increased (e.g., additional pedestrians); cognitive workload; demands of environment (e.g., walking surface)

9. Discussions based on own results – discussion of reported results in the paper

10. Functionality discussed – what is peripheral vision good for (or not)?; when is it used?; why does the useful field of view change?; perceptual performance changes

11. Effects on actions discussed – reference to actions performed; reference to motor control; effects on performance

The 11 criteria are organized by the paper section in which the relevant content would appear. For each criterion, we list the conditions that could satisfy it; one or more of these conditions had to be met for the criterion to be scored with a 1 (see Table 3 for the scores of each included paper)

Quantitative results

Study characteristics

Of the 60 included studies, six examined questions in aviation, 36 in driving, and 18 in walking. These studies investigated the use of peripheral vision in a variety of ways, mostly in real-world situations (22 studies) or simulators (24 studies), but also with head-mounted displays (3 studies) or desktop computer paradigms (13 studies); these counts sum to 62 because two studies used two testing modalities. Most of the walking studies (72%) examined peripheral vision in real-life situations (11% head-mounted display/HMD, 11% screen, 6% simulator/treadmill). In contrast, 50% of the driving studies used a driving simulator, with the remainder using other modalities (26% on-road, 21% screen-based, 3% head-mounted displays). In aviation, simulators and screens were each used in 50% of the studies. There were also differences in the use of eye tracking to monitor eye movements: while eye-tracking devices were used in 79% of the included studies, the three research areas used them to different extents (aviation: 100%, driving: 81%, walking: 67%).

Criteria

Our criteria (see Table 2) are a tool for categorizing whether a study discussed a functionality of peripheral vision; Table 3 lists the scores of each study. The last column of Table 3 shows the sum of points each study received across our predefined peripheral-vision criteria. The studies that met the most criteria (10 of 11) were the walking study by Miyasike-daSilva and McIlroy (2016) and the driving study by Gaspar et al. (2016).

Table 3.

Overview of reviewed papers in aviation, driving, and walking (this table is available as an Excel file on the Open Science Framework at https://osf.io/vea5r/?view_only=ba8597fef6514be68082d9e878fff5d2)

Study characteristics Introduction Methods Results Discussion Points
Task First author Year Environment Eye-Tracking Visual capabilities characterized Predictions on peripheral vision usage Peripheral vision manipulation Attentional manipulation Peripheral vision manipulation check Compares foveal and peripheral vision Compares with limited peripheral vision Different attentional load/demands Discussions based on own results Functionality discussed Effects on actions discussed Functionality of peripheral vision
Aviation Brams 2018 Screen (videos) yes 0 0 0 0 0 0 0 0 1 1 0 Detection and “global scan” (similar to scene gist) 2
Aviation Imbert 2014 Screen (videos) yes 1 1 1 0 0 0 0 0 1 0 0 - 4
Aviation Kim 2010 Simulator yes 0 0 0 0 0 0 0 0 1 0 0 - 1
Aviation Robinski 2013 Simulator yes 0 0 0 0 0 0 0 0 1 0 0 - 1
Aviation Schaudt 2002 Screen (videos) yes 0 1 0 0 0 0 0 1 1 0 1 - 4
Aviation Yu 2014 Simulator yes 0 0 0 0 0 0 0 0 1 1 1 Control keys 3
Driving Alberti 2014 Simulator yes 0 1 1 0 0 0 0 0 1 1 1 Speed estimation 5
Driving Beh 1999 Screen (videos) no 0 0 0 0 0 0 0 1 1 0 0 - 2
Driving Bian 2010 Simulator no 0 0 1 1 0 1 0 1 1 0 0 - 5
Driving Briggs 2016 Screen (videos) yes 0 0 1 0 0 1 0 0 1 1 1 Dual-tasking leads to visual and cognitive tunneling 5
Driving Cooper 2013 Simulator yes 0 0 0 1 1 1 0 1 0 1 0 Peripheral vision used for lane keeping 5
Driving Crundall 2002 Screen (videos) yes 0 1 1 0 1 1 0 1 1 0 0 - 6
Driving Crundall 2004 Simulator yes 0 1 1 1 0 0 0 1 0 0 0 - 4
Driving Danno 2011 Real world, Simulator yes 1 1 0 1 1 1 0 1 1 1 1 Peripheral preview 9
Driving Doshi 2012 Simulator yes 0 0 1 0 0 0 0 0 0 0 0 Covert attention attracted by peripheral event 1
Driving Edquist 2011 Simulator yes 0 0 0 0 0 0 0 1 0 1 0 Peripheral monitoring 2
Driving Gaspar 2016 Simulator yes 1 1 1 1 1 1 1 1 1 1 0 Peripheral monitoring 10
Driving Harbluk 2007 Real world yes 0 0 0 1 0 0 0 1 0 0 0 - 2
Driving Huestegge 2016 Screen (single images) yes 1 1 1 0 1 1 0 0 1 1 1 Peripheral preview 8
Driving Janelle 1999 Simulator yes 0 1 0 0 0 0 0 1 1 0 0 - 3
Driving Kountouriotis 2011 Simulator yes 0 0 1 0 1 1 0 0 1 1 1 Visual feedback of road edges 6
Driving Kountouriotis 2016 Simulator yes 0 0 0 1 0 0 0 1 0 1 1 Avoiding costs of saccades 4
Driving Lamble 1999 Real world no 0 1 1 1 0 1 0 0 1 1 1 Eccentricity costs 7
Driving Lehtonen 2014 Real world yes 0 1 1 0 1 1 0 0 1 1 1 Knowledge/memory (expert advantage) affects the use of peripheral vision 7
Driving Lehtonen 2018 Real world yes 1 1 1 0 1 1 0 0 1 1 1 Uncertainty affecting gaze transitions back to relevant information, eccentricity costs 8
Driving Lin 2010 Simulator yes 0 0 0 1 0 0 0 1 1 0 0 - 3
Driving Luoma 1983 Screen (single images) yes 1 0 0 0 0 0 0 0 1 1 0 Peripheral preview 3
Driving Mayeur 2008 Simulator no 1 0 1 0 0 1 0 1 1 0 0 - 5
Driving Mourant 1970 Real world yes 0 0 0 0 0 0 0 0 1 1 0 Monitoring and preview 2
Driving Patten 2006 Real world no 0 0 1 1 0 0 0 1 1 0 0 - 4
Driving Seya 2013 Simulator yes 1 0 1 0 1 1 0 0 1 1 0 Avoid costs of saccades 6
Driving Shahar 2012 Screen (videos) yes 0 0 0 0 0 0 0 0 1 1 1 Peripheral preview 3
Driving Shinoda 2001 HMD, Simulator yes 1 1 1 0 1 1 0 0 1 1 1 Peripheral preview (especially in situations with high probability) 8
Driving Strayer 2003 Simulator yes 0 0 0 1 0 0 0 1 0 1 0 Peripheral preview 3
Driving Summala 1996 Real world no 0 0 1 1 0 1 0 1 1 1 0 Eccentricity costs and dual-tasking costs 6
Driving Tsai 2007 Simulator yes 0 0 0 1 0 0 0 1 0 0 0 - 2
Driving Underwood 2003 Real world yes 0 0 0 1 0 0 0 1 1 1 0 Lead vehicle as "pivot"; peripheral preview 4
Driving Underwood 2005 Screen (videos) yes 0 1 0 1 0 0 0 1 0 0 0 - 3
Driving Victor 2005 Simulator yes 1 0 0 1 0 0 0 1 1 0 0 Peripheral monitoring under higher cognitive load 4
Driving Zhang 2016 Simulator yes 0 0 0 1 0 0 0 1 1 1 0 Anger reduces the ability to process peripheral information 4
Driving Zhao 2014 Screen (single images) yes 0 0 0 1 1 0 0 0 1 0 0 Distribution of attention as expertise characteristic 3
Driving Zwahlen 1989 Real world no 0 0 1 0 0 1 0 0 1 0 0 - 3
Walking Bardy 1999 Screen (videos) no 1 0 1 0 0 1 0 0 1 1 1 Functional use of optic flow 6
Walking Berencsi 2005 Screen (videos) no 1 0 1 0 0 1 0 0 1 1 1 Reduce body sway 6
Walking Cinelli 2009 Real world yes 0 0 0 0 0 0 0 0 1 0 0 - 1
Walking Feld 2019 Real world yes 0 0 0 0 0 0 0 0 1 1 0 Monitor environment 2
Walking Hasanzadeh 2018 Real world yes 0 0 0 0 0 0 0 0 1 0 0 - 1
Walking Ioannidou 2017 Real world yes 0 0 0 0 0 0 0 0 1 0 1 - 2
Walking Jovancevic 2006 HMD yes 0 1 0 1 0 0 1 0 1 1 1 Top-down monitoring of pedestrians 6
Walking King 2009 Real world yes 1 0 0 0 0 0 0 0 1 0 0 - 2
Walking Luo 2008 Real world yes 0 0 0 0 0 0 0 0 0 1 0 Top-down influence on saccade behavior 1
Walking Marigold 2007 Simulator yes 0 0 1 0 1 1 1 0 1 1 1 Obstacle detection 7
Walking Marigold 2008 Real world no 0 1 1 1 1 0 1 1 1 1 1 Monitor environment and adjust steps 9
Walking Miyasike-daSilva 2011 Real world yes 0 0 0 0 0 0 0 0 1 1 1 Detection of handrail and control of limb movements 3
Walking Miyasike-daSilva 2016 Real world yes 0 1 1 1 1 1 1 1 1 1 1 Monitoring of stairs and controlling steps 10
Walking Miyasike-daSilva 2019 Real world no 0 1 1 0 1 1 1 0 1 1 1 Online control of stair locomotion 8
Walking Murray 2014 Real world yes 1 1 1 0 1 0 1 0 1 1 1 Provides egocentric information 8
Walking Patla 1998 Real world no 0 0 1 0 1 0 1 0 1 1 1 Fine-tuning of limb trajectory during obstacle avoidance 6
Walking Timmis 2017 Real world no 0 0 0 0 0 0 0 0 1 1 1 Path planning 3
Walking Tong 2017 HMD yes 1 0 0 0 0 0 0 0 1 1 0 Guide future eye-movements 3

Studies are sorted first by domain and, within each domain, alphabetically by the first author’s surname. If a criterion in the 11 categories was met (see Table 2), the value for that category was set to 1. In the second-to-last column, we summarize how the paper discussed peripheral vision and its functionality (i.e., how it is used). In the last column, we display the sum of these binary values for each paper. Note that this is not a quality assessment of the paper, but rather a metric of the extent to which the paper focused on peripheral vision.

The criteria we formulated were met by varying subsets of studies from our total set (see Table 4). The columns “met” and “% met” show the absolute and relative number of papers that met each of the criteria in the “criteria” column. As an example, the aggregated data show that 25% of all included papers characterized visual capabilities, and that 40% of the papers compared conditions with different attentional loads or demands. The highest value (83%) was observed for the criterion “discussions based on own results,” which we consider important because papers that do not meet this criterion only refer to other papers on peripheral vision, rather than discussing it directly based on their own results. The table also shows how often each combination of two criteria was met. For example, of the 83% of the studies that fulfilled the criterion “discussions based on own results,” 62% also discussed a specific functionality of peripheral vision.

Table 4.

Number and percentage of papers meeting the 11 content criteria (columns 1–3) and percentages of papers within each criterion category meeting a second criterion (columns 4–14)

Criteria Met Percent met Visual capabilities characterized Predictions on peripheral-vision usage Peripheral-vision manipulation Attentional manipulation Peripheral vision manipulation check Compares foveal and peripheral vision Compares with limited peripheral vision Different attentional load/demands Discussions based on own results Functionalities discussed Effects on actions discussed
Visual capabilities characterized 15 25.00 100.00 46.67 66.67 20.00 46.67 60.00 13.33 26.67 100.00 66.67 46.67
Predictions on peripheral-vision usage 19 31.67 36.84 100.00 73.68 42.11 57.89 52.63 31.58 47.37 89.47 63.16 68.42
Peripheral-vision manipulation 27 45.00 37.04 51.85 100.00 29.63 51.85 70.37 25.93 33.33 92.59 66.67 59.26
Attentional manipulation 21 35.00 14.29 38.10 38.10 100.00 28.57 33.33 19.05 85.71 66.67 52.38 28.57
Peripheral vision manipulation check 17 28.33 41.18 64.71 82.35 35.29 100.00 76.47 41.18 35.29 94.12 82.35 70.59
Comparison foveal and peripheral vision 21 35.00 42.86 47.62 90.48 33.33 61.90 100.00 19.05 38.10 95.24 76.19 61.90
Comparison with and without (or limited) peripheral vision 8 13.33 25.00 75.00 87.50 50.00 87.50 50.00 100.00 37.50 100.00 87.50 87.50
Differences attentional load/demands 24 40.00 16.67 37.50 37.50 75.00 25.00 33.33 12.50 100.00 66.67 41.67 20.83
Discussions based on own results 50 83.33 30.00 34.00 50.00 28.00 32.00 40.00 16.00 32.00 100.00 62.00 48.00
Functionalities discussed 36 60.00 27.78 33.33 50.00 30.56 38.89 44.44 19.44 27.78 86.11 100.00 63.89
Effects on actions discussed 25 41.67 28.00 52.00 64.00 24.00 48.00 52.00 28.00 20.00 96.00 92.00 100.00

In column 1, the 11 criteria are listed. In columns 2 and 3, the number and percentage of studies meeting each criterion are displayed, respectively. In columns 4–14, the studies that met each criterion are further characterized: within a row, each percentage below 100% gives the share of papers meeting the row criterion that also met the column criterion. As an example, 46.67% of the papers that characterized visual capabilities also made predictions on peripheral-vision usage (first criterion row, column 5)
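Each cell of Table 4 is a conditional percentage computed over the binary matrix underlying Table 3. A minimal sketch of that computation, reusing the hypothetical CRITERIA list and paper dictionaries from the scoring sketch above:

```python
def cooccurrence(papers: list[dict]) -> dict:
    """For each pair (a, b): of the papers meeting criterion a, the
    percentage that also meet criterion b."""
    table = {}
    for a in CRITERIA:
        met_a = [p for p in papers if p.get(a, 0) == 1]
        for b in CRITERIA:
            both = sum(p.get(b, 0) for p in met_a)
            table[(a, b)] = 100.0 * both / len(met_a) if met_a else 0.0
    return table

# Given the full 60-paper binary matrix, table[("visual_capabilities",
# "predictions")] would reproduce the 46.67% in the first row of Table 4.
```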

Discussed functionalities

Table 5 shows a summary of the peripheral-vision functionalities discussed in each study. As the last column shows, across the three research areas the monitoring functionality (13/62) and the presaccadic-preview functionality (10/62) were mentioned most often. Walking studies mainly mentioned a monitoring functionality (7/19), while driving studies focused more on the presaccadic-preview functionality (8/37). In contrast, these functionalities were rarely mentioned in aviation, where the monitoring and action-planning functionalities were each mentioned only once. Overall, 23 studies did not mention a specific functionality, which should not be taken to mean that they ignored peripheral vision, merely that they did not focus on it particularly.

Table 5.

Overview of the functionalities discussed in the included papers (literature sources in parentheses)

Discussed functionality Aviation Driving Walking All areas
Monitoring 1 (Brams et al., 2018SA) 5 (Doshi & Trivedi, 2012MD; Edquist et al., 2011A,E; Gaspar et al., 2016MD,CL; Kountouriotis et al., 2011; Mourant & Rockwell, 1970E,MD) 7 (Feld & Plummer, 2019MD; Jovancevic et al., 2006MD; Marigold et al., 2007; Marigold & Patla, 2008O; Miyasike-daSilva et al., 2011; Miyasike-daSilva & McIlroy, 2016MD; Murray et al., 2014O) 13
Presaccadic preview 0 8 (Danno et al., 2011ES; Huestegge & Bröcker, 2016AF; Luoma, 1984; Mourant & Rockwell, 1970E,MD; Shahar et al., 2012E; Shinoda et al., 2001; Strayer et al., 2003MD; Underwood et al., 2003E) 2 (Luo et al., 2008; Tong et al., 2017) 10
Saccade/eccentricity costs 0 5 (Kountouriotis & Merat, 2016MD; Lamble et al., 1999E, MD; Lehtonen et al., 2018E; Seya et al., 2013CL; Summala et al., 1996E,CL) 0 5
Action planning 1 (Yu et al., 2014SA) 1 (Cooper et al., 2013CL) 4 (Berencsi et al., 2005O; Marigold & Patla, 2008O; Miyasike-daSilva et al., 2019O; Patla, 1998O) 6
Other 0 3 (Alberti et al., 2014E; Lehtonen et al., 2014E; Zhang et al., 2016E,ES) 2 (Bardy et al., 1999; Timmis et al., 2017MD) 5
None* 4 (Imbert et al., 2014; Kim et al., 2010E; Robinski & Stein, 2013E; Schaudt et al., 2002MD) 15 (Beh & Hirst, 1999MD; Bian et al., 2010MD,CL; Briggs et al., 2016MD,CL; Crundall et al., 2002E,MD; Crundall et al., 2004CL; Harbluk et al., 2007MD; Janelle et al., 1999ES; Lin & Hsu, 2010MD; Mayeur et al., 2008CL; Patten et al., 2006E; Tsai et al., 2007MD; Underwood et al., 2005A; Victor et al., 2005MD,CL; Zhao et al., 2014E; Zwahlen, 1989) 4 (Cinelli et al., 2009; Hasanzadeh et al., 2018SA; Ioannidou et al., 2017MD; King et al., 2009A) 23
Sum 6 37 19 62

Two studies (Marigold & Patla, 2008; Mourant & Rockwell, 1970) mentioned two functionalities, so that the sum of functionalities is 62, although we only included 60 studies

* Studies in the “None” category did not explicitly mention a specific functionality; some discussed a functionality implicitly. Please see the text for these interpretations.

Abbreviations used in superscript notes: E – expertise; MD – multitasking and distraction; CL – cognitive load; A – age; AF – action before fixation; ES – emotions and stress; SA – situational awareness; O – occlusion

Qualitative results

This review is informed by our understanding that peripheral vision is so central to many real-world tasks that its role passes unremarked. Yet, by looking to research in driving, walking, and aviation, we might gain insights into peripheral vision and how it supports complex tasks that we undertake outside the laboratory. With this in mind, our review and discussion section is structured in two parts. In the first, we consider how drivers, pedestrians, and pilots use peripheral vision; that is, what information it provides and the evidence for its often unacknowledged role. In the second part, we ask what impacts our ability to use peripheral vision while driving, walking, and flying planes, why we do not always use it if it provides useful information, and how our ability to do so is limited.

How we use peripheral vision

Here, we will look across our three very different real-world tasks to identify the broad commonalities in how pilots, drivers, and pedestrians use peripheral vision. To organize the question, we have divided it into three subcases: first, how peripheral vision is used to monitor our surroundings, an inherent component of most, if not all, real-world tasks; second, how we use peripheral vision to plan action; and third, how peripheral vision informs eye movements. While this division imposes structure on an otherwise-unruly body of literature, we must also point out that these three functionalities of peripheral vision are intrinsically interwoven, and considering one without the others is likely to be an exercise in incompleteness and frustration.

Monitoring the environment

The use of peripheral vision for generalized monitoring can take many forms: pilots or air-traffic controllers may monitor the periphery for instrument failures (Brams et al., 2018; Imbert et al., 2014) or monitor instruments, like the speed indicator, while gazing out the windscreen (Schaudt et al., 2002). Similarly, drivers can use cues (e.g., warning lights or other simple visual alerts) that appear in the periphery to tell them when it is safe to change lanes, and drivers in fact perform better with these peripherally presented cues than with cues presented at fixation (Doshi & Trivedi, 2012), perhaps because drivers expect the hazards associated with a lane-changing maneuver to appear in their periphery. More broadly, a driver’s understanding of their overall environment no doubt leads them to expect hazards, like cyclists, in some parts of the scene, such as on the road rather than in an arbitrary location (Zwahlen, 1989).

How are we able to monitor for changes in our environment? Our knowledge about the environment and the predictability of changes in that environment likely play a considerable role. Cockpit instruments or alert lights in a car, for example, remain at a fixed position, which helps us to peripherally monitor a limited region of the visual field and allocate resources to that region, rather than monitoring the entire visual field all the time.

When the environment is less predictable, a wider visual field must be monitored with peripheral vision. That this is possible can be seen in a study by Marigold et al. (2007), in which pedestrians in the laboratory were quite capable of noticing obstacles that suddenly appeared in their path without looking down at them. Critically, the participants’ ability to react to these obstacles without fixating them shows that they must have been using peripheral vision. In another study, pedestrians who texted while walking necessarily used peripheral vision to avoid collisions, since the cell phone occluded their central vision (Feld & Plummer, 2019; see Table 5, “Monitoring,” for other references on peripheral monitoring).

Peripheral vision for action

People can also perform some actions while relying only on peripheral vision. Whether or not this is possible depends significantly on the environment. For example, when walking down a flight of stairs, we habitually fixate transitional steps, which define the point of change between a level surface and a staircase, but often rely on peripheral vision to provide enough information about intermediate steps (Miyasike-daSilva et al., 2011). On the other hand, some environments and some staircases (as shown on the right-hand side of Fig. 3) demand careful fixation of each step because they are neither level nor predictable. This makes ascending or descending such a staircase a much slower and more methodical process. Given a predictable environment, we have little trouble ascending a staircase using only peripheral vision (Miyasike-daSilva & McIlroy, 2016). If pedestrians are restricted from using peripheral vision by experimental manipulation – in particular, if they are unable to use the lower visual field (Marigold & Patla, 2008) – they behave much as when climbing an uneven staircase, that is, looking at each tread to plan a step (see also Miyasike-daSilva et al., 2019). On the other hand, restricting central vision (Murray et al., 2014) does not adversely impact stair-climbing behavior, although the lack of fine detail might prove problematic in less-predictable environments, and perhaps makes the transition between the stairs and a flat surface harder to navigate. It seems that climbing stairs is possible with peripheral vision only, but why do people not look at the stairs? Perhaps because they want to see the path ahead, avoiding collisions and planning their next steps, similar to how pedestrians change how far ahead they fixate as a function of the difficulty of the walking path (Matthis et al., 2018).

Fig. 3

The left image (Sara Kurfeß, CC0 1.0) shows easy-to-walk stairs while the right image (taken by Greenville, SC Daily Photo, CC0 1.0) shows difficult stairs. The easy stairs are regular and can likely be walked using only peripheral vision. In contrast, the stairs on the right are very uneven and narrow (and are likely slippery due to the wet leaves on them). Their irregular nature will not be represented in sufficient detail with peripheral vision, requiring a pedestrian to look at each step as they ascend or descend them

We can see a similar reliance on peripheral vision in drivers using the location of road markings in their periphery to help them center their vehicle in the lane (Robertshaw & Wilkie, 2008); their ability to do so suffers if the information is not available on both sides of the road (Kountouriotis et al., 2011). Small amounts of optic flow (a local motion signal) can indicate a lane departure, and one possibility is that this is the cue people use to stay in their lane: people are not only capable of monitoring this simple, high-contrast motion cue in peripheral vision, but in fact do so. Similar cues from the edge of the sidewalk are probably at play when staying on a path while walking (Bardy et al., 1999; Cinelli et al., 2009; Patla, 1998) and when monitoring posture (Berencsi et al., 2005).

Together, these examples show that walkers use peripheral vision to guide their feet, drivers to stay in their lane, and pilots to localize and operate controls in the visual periphery (Yu et al., 2014). In all of these examples, the actor chooses a fixation location at some distance from the to-be-controlled movement, and the fact that they react to object changes without looking at them clearly indicates the use of peripheral vision. The open questions here are: in which situations can we (or even should we) rely on peripheral vision, and when should we initiate an eye movement and rely on foveal vision? (For all references linking peripheral vision directly to actions, see Table 5, “Action planning.”)

Peripheral vision and eye movements

A particular case where peripheral vision’s role has long been acknowledged is in planning eye movements. While patients with retinitis pigmentosa will learn to plan eye movements beyond the range of retinal input (Luo et al., 2008; Vargas-Martín & Peli, 2006), in the absence of this retinal degradation, peripheral vision is critical to planning saccades. But, what can we learn from the applied literature about what information is available to plan saccades?

Some tasks require fixation, and others do not

There is a range of tasks in the world that require foveation, that is, looking at a specific object or location because the task demands more detailed information than peripheral vision can provide (cf., grasping; Hayhoe et al., 2003). While peripheral vision can tell a participant in a driving simulator experiment that a sign has changed (say, from a stop sign to a yield sign), correctly identifying the sign requires it to be fixated (Shinoda et al., 2001; see also Tong et al., 2017, for similar results). The gap here between localization and identification speaks to the respective capabilities of peripheral and foveal vision. Peripheral vision is sufficient for drivers to notice that something has changed and to tell them where that change occurred, which is sufficient to plan a saccade, but fixating the changed object is often necessary to determine identity (Clayden et al., 2020; David et al., 2021; Motter & Simoni, 2008; Nuthmann, 2014).

People might use peripheral vision to avoid fixation of irrelevant information

One cannot interpret a failure to fixate a given object in the world as evidence that an observer is unaware of it. A particularly telling example here is that distracted drivers fail to look at roadside billboards, and fail to recognize them later; meanwhile distraction has less impact on their ability to operate their vehicle (Strayer et al., 2003). The information available from fixating the billboards is irrelevant to the core driving task, and the lack of fixation may indicate that the drivers recognized them as billboards and chose to ignore them. To our knowledge, this has yet to be tested empirically, but, among other approaches, an EEG study could reveal if billboards are suppressed in cortical areas when irrelevant to the driver’s task.

Fixation is not always needed for action

Assuming that a given object needs to be fixated in order to plan an action in response to it can be problematic, since any motor action in the world takes time to plan and execute. For example, if the car ahead of you suddenly stops, would you fixate it first, and only then step on your own brake pedal? A recent study shows that drivers respond prior to fixating the hazard (Huestegge & Bröcker, 2016), relying on peripheral vision to tell them where the hazard is, and prioritizing the response. We can see a similar reliance on peripheral information in detecting a motorcycle overtaking another vehicle, where drivers use information from their side mirror in the periphery to gain a general sense of their environment and to time their response (Shahar et al., 2012). The tendency for some actions to precede shifts in gaze in real-world tasks is counterintuitive and often at odds with our introspections about where we look and when (Luoma, 1984). Besides peripheral information processing, it is also important to use depth information (Greenwald & Knill), optic flow information (Warren & Hannon, 1988), and flow parsing (Fajen & Matthis, 2013; Matthis & Fajen, 2014) to successfully navigate through an environment (all references that found actions before fixating a target received the superscript note “AF” in Table 5).

The tradeoffs of initiating a saccade (or not)

It takes time to saccade back to important information after looking away, which means there are tradeoffs in deciding whether or not to saccade. On the road, for example, quick responses to hazardous situations are required. In such scenarios, participants seem to take the costs of saccades into account, detecting a hazard 200–400 ms before fixating it (Huestegge & Bröcker, 2016). If drivers are forced to look away from and back towards the road, for example when the costs of saccades are artificially raised, their ability to drive safely suffers (Lehtonen et al., 2018). This effect scales with the amplitude of the necessary saccade: nearer objects require shorter saccades and have a smaller impact on the driver’s overall understanding of their environment (Danno et al., 2011).

Saccade tradeoffs depend on expertise and situational awareness

Expertise and situational awareness influence how well we can use peripheral vision. Our ability to use peripheral vision instead of saccades is almost certainly a function of our expertise with a given situation (Lamble et al., 1999; Summala et al., 1996; Underwood et al., 2003) and of our situational awareness of the situation as a whole (Hasanzadeh et al., 2018). We review effects of expertise, load, and distraction in the next section (all references on eye movements and their costs can be found in Table 5, “Saccade/eccentricity costs”; all references on situation awareness received the superscript note “SA”).

What impacts how we can use peripheral vision in real-world tasks?

Since we are not born knowing how to fly a plane, drive a car, or even walk, there is a vast amount of expertise we develop, and the literature shows that a component of our expertise is the ability to use peripheral vision when it is advantageous to do so. After discussing how expertise affects use of peripheral vision, we discuss how cognitive load, distraction and even certain emotional states reduce our ability to use peripheral vision, and what the consequences are.

The role of expertise

Becoming skilled at a real-world task like driving or flying a plane, or even a task as seemingly simple as walking, means developing perceptual expertise that supports our ability to complete these tasks. Our expertise affects how we use peripheral vision. Expert drivers look primarily at the road ahead (Summala et al., 1996), while novice drivers gaze about much more widely (Crundall et al., 1999; Mourant & Rockwell, 1970), suggesting that experts are better able to use peripheral information (Alberti et al., 2014). For that matter, novice drivers are slower, on the whole, to notice peripheral changes (Zhao et al., 2014), which implies that while the input is available to them, they have not yet learned to make sense of it (Patten et al., 2006). Alternatively, expert drivers might simply be better able to use executive control to maintain sustained attention to the more important information – the road ahead (Alberti et al., 2014). Particularly in the case of highway driving, most safety-critical information is in the road ahead, and choosing to focus on that area of the scene might provide all, or nearly all, the information the driver truly needs.

This pattern, in which the impact of expertise is revealed by changes in gaze behavior, can be seen beyond driving. When trainee and expert helicopter pilots are compared, trainees use a broad search strategy similar to that of novice drivers (Robinski & Stein, 2013). Skilled drivers, pilots, and pedestrians must learn to use optic flow cues to maintain heading and position, and novice pilots, even if they have been taught to look at the vanishing point, must learn to use the available cues (Kim et al., 2010). In a similar vein, expert drivers fixate further ahead than novice drivers do, allowing them to anticipate, for example, turns in the road (Lehtonen et al., 2014; see also Mars & Navarro, 2012).

Another reason for the change in gaze patterns may be that experts can better make use of the imperfect information available in peripheral vision. Skilled pedestrians, with knowledge of how other walkers tend to move, often need only a glance, if that, at an oncoming pedestrian to avoid a collision (Jovancevic et al., 2006). A pilot or driver’s ability to push a button or use a control without looking away from the windshield reflects a deep understanding and detailed mental model of their proximate environment, that is, the cockpit of the plane or cab of the vehicle (Yu et al., 2014). The predictability of a control panel affords this understanding, since buttons and gauges can be expected to stay in the same location, but pilots must additionally develop the perceptual expertise to interact with controls without looking at them directly (Yu et al., 2014).

Expertise, of course, interacts with age. Across our lifespan, we walk, drive, and fly for decades, but age might diminish our capacity to benefit from our expertise. Furthermore, the ability to acquire peripheral information likely declines with age (Owsley, 2011; Scialfa et al., 2013). Older drivers are not, however, always worse than younger drivers; often, they can detect as many road hazards as their younger counterparts (Underwood et al., 2005), and they can detect transients in their peripheral field of view while driving (Ward et al., 2018), but they are prone to more perceptual and motor errors, such as steering less carefully (Edquist et al., 2011) or showing greater steering variability when following a lead vehicle (Ward et al., 2018). Experience could almost be said to breed a certain contempt for foveal vision; in a locomotion study where participants would need to grab a handrail, older participants were less likely to fixate it on entering the space and less likely to grab it when they needed to (King et al., 2009). This, then, illustrates just how tricky the question of expertise is in the context of peripheral vision, and why it is worth considering expertise as an evolution across the lifespan, rather than simple progress towards a peak (all references that link peripheral-vision usage to expertise effects received the superscript note “E” in Table 5, and those on age an “A”).

Multitasking and distraction

Experts may see driving as one complex task – driving itself becomes quite automatic for them – while novices might understand driving as a number of linked tasks requiring focus and attention, like steering the car while monitoring the environment for pedestrians, other vehicles, and road signs. Multitasking is inherent in these situations; for example, while driving, one must maintain awareness of the environment, control the brake and accelerator pedals, and maintain steering input. Visual perception studies, on the other hand, typically introduce multitasking only explicitly, in order to study the effects of attention. The impacts of multitasking are often described in terms of the dangers of distraction (Strayer et al., 2019), and while these dangers are very real, our question here is what happens to someone’s ability to use and benefit from peripheral vision when they are multitasking, rather than the perils of distraction itself.

Distraction sometimes causes drivers to take their eyes off the road; this inherently makes them less aware of their operating environment, as it puts driving-relevant information into the periphery or outside the field of view. However, distraction can cause problems even when the distracting task does not take the driver’s eyes off the road. In fact, auditory monitoring and driving-irrelevant visual detection tasks can produce similar effects: distracted drivers appear to rely more on peripheral vision for lane-keeping, but are less able to process and react to the information that peripheral vision provides (Gaspar et al., 2016; Lin & Hsu, 2010). On the other hand, Kountouriotis and Merat (2016) found that visual distractions caused more deviations in vehicle position than non-visual ones, though performance improved if there was a lead vehicle to follow. Distraction can also impair drivers’ ability to maintain fine control (Strayer & Johnston, 2001). Drivers performing an audioverbal arithmetic task gaze more at the road ahead, but are slower to react to changes in the environment than without the additional cognitive load (Harbluk et al., 2007; Tsai et al., 2007; Victor et al., 2005).

On the other hand, drivers appear to some extent to compensate for slower reaction times by changing their following distance or reducing their speed (Haigney et al., 2000). Similarly, in studies of distracted walking, for example due to texting, pedestrians slow down and remain able to navigate safely (Timmis et al., 2017). Even when climbing stairs while texting, participants are only moderately slower (20%), yet they can walk up the stairs without incident (Ioannidou et al., 2017). In walking and driving, we can see evidence for participants using peripheral information at a diminished but useful level even when distracted and looking away. Any difference between the safety of distracted walking and that of distracted driving may simply arise from the difference in how quickly one must react in order to be safe (largely due to differences in the speed of travel), rather than due to a fundamental difference in visual processing under high load conditions.
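To see why slowing down or increasing headway can offset a distraction-related increase in reaction time, consider a back-of-the-envelope stopping-distance calculation. The sketch below uses standard constant-deceleration kinematics; the particular speeds, reaction times, and braking deceleration are illustrative assumptions, not values drawn from the studies cited above.

```python
# Stopping distance = distance covered during the reaction time
# plus braking distance under constant deceleration (standard kinematics).

def stopping_distance(speed_ms: float, reaction_s: float, decel_ms2: float = 7.0) -> float:
    """Total distance (m) needed to stop from speed_ms (m/s)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

v = 30.0  # ~108 km/h, an illustrative highway speed
alert = stopping_distance(v, reaction_s=1.0)
distracted = stopping_distance(v, reaction_s=1.5)  # +0.5 s from distraction
print(f"Extra distance from +0.5 s reaction time: {distracted - alert:.1f} m")

# The same 0.5-s penalty after slowing to 25 m/s:
slower = stopping_distance(25.0, reaction_s=1.5)
print(f"Distracted at 25 m/s: {slower:.1f} m vs. alert at 30 m/s: {alert:.1f} m")
```

At these illustrative values, the added 0.5 s of reaction time costs 15 m of travel before braking even begins, but slowing from 30 to 25 m/s more than recovers that margin, consistent with the compensatory slowing described above.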

The impact of distraction depends greatly on the task, and in particular participants seem to distinguish between driving-relevant tasks and more irrelevant distractions. Cognitive load can greatly affect the detection of driving-irrelevant events (like a driving-irrelevant light flashing on the dashboard), and does so more in the upper than in the lower visual field (Seya et al., 2013). However, it is unclear whether this represents a degradation of peripheral vision under load, or a rational tradeoff between critical driving tasks and other tasks (as shown in studies where load has been imposed by such a driving-orthogonal task; Crundall et al., 2002; see also Bian et al., 2010; Gaspar et al., 2016; Mayeur et al., 2008). In driving, particularly, distraction impairs the ability to report irrelevant stimuli, suggesting that distraction might lead to tradeoffs in effort between two driving-irrelevant tasks: performing the nominally distracting task (e.g., using a cell phone) versus processing a less-relevant light on the dashboard or billboards on the roadside. Additional cognitive load can certainly impact observers’ performance, but the story may be complex because of compensatory behavior or tradeoffs between tasks.

The pattern of eye movements can also be affected by distraction or multitasking. Distraction causes increased reliance on peripheral vision not only because drivers fixate the distracting task, for example the texting app on their phone (Harbluk et al., 2007; Strayer et al., 2003), but also because cognitive load can cause them to move their eyes differently (Briggs et al., 2016; Summala et al., 1996; Victor et al., 2005). Cognitive load can lead drivers to limit their fixations to a smaller region of the visual field (Miura, 1986; Recarte & Nunes, 2003; Reimer et al., 2012), and this change creates ambiguity about whether distraction directly causes poorer performance, or does so indirectly by changing fixations. One can explicitly test the effects of gaze patterns, as opposed to cognitive load per se, by forcing drivers to maintain a particular fixation pattern while separately varying cognitive load. Using such an approach, Cooper et al. (2013) demonstrated that making eye movements over a narrow versus a wide range on the forward roadway had no effect on performance, but that increasing cognitive load paradoxically led to better performance on the lane-keeping task, pointing to the complexities here.

One might think of a person’s internal state as a different sort of distraction. Angry drivers, for example, behave much like distracted drivers, and are less aware of their surroundings (Zhang et al., 2016). Anxious drivers, like those in a new driving environment or who are simply predisposed to worrying about their safety and that of everyone around them, also have difficulty using peripheral information (Janelle et al., 1999). Overall stress has similar effects; when stressed, drivers do not look at objects in the periphery even when they need to, and are slower to respond to hazards (Danno et al., 2011). The results of these manipulations could be interpreted as tunnel vision, in which drivers are unable to perceive beyond a certain spatial extent around fixation. Tunnel vision is often observed if the task requires a speeded response and includes foveal load (Ringer et al., 2016; L. J. Williams, 1985, 1988, 1989), which is the case in many of the included studies. However, given results questioning whether high cognitive load really leads to tunnel vision (Gaspar et al., 2016; B. Wolfe et al., 2019), a better hypothesis may be that certain emotional states (and other factors, like increased cognitive load) make it more difficult, rather than impossible, to perceive peripheral information. Even something as seemingly mundane as loud music can have similar impacts on how drivers can use peripheral vision; it diminishes their ability to report peripheral events in a timely manner while, counterintuitively, facilitating detection of central targets (Beh & Hirst, 1999). However, using a different form of auditory distraction, Briggs et al. (2016) showed worse hazard detection with greater cognitive load, independent of the eccentricity of the hazard (see Table 5 for all references that link peripheral vision usage to multitasking and distraction, superscript "MD"; to cognitive load, superscript "CL"; and to emotions and stress, superscript "ES").

General discussion

This review has shown that natural tasks, with their time constraints, more complex stimuli, and richer measures of performance, reveal new insights about how we use peripheral vision. For example, we use it during multitasking (many real-world tasks require at least dual tasking) and to guide our actions and eye movements. We identified tasks that people can solve without fixating task-relevant information – and our ability to do this clearly points to the use of peripheral vision to perform the task. Nonetheless, when using peripheral vision, performance can be affected by factors like our knowledge of the task, age, distraction, or the relative importance of multiple tasks. Therefore, it is essential to remember that there are always tradeoffs in deciding whether to use peripheral vision or eye movements (foveal vision). To better understand these tradeoffs and to point to where research might go in the future, we will now integrate our review of these applied literatures with what is known in sport and vision science. In addition, we will suggest three new approaches to research, drawn from this work, that might help further illuminate our understanding of peripheral vision more generally.

Integrating peripheral vision findings across disciplines

Peripheral vision is used for monitoring the environment, a functionality reported in driving, walking, and aviation as well as in sport and vision science. The forms of monitoring, however, may be subtly different, particularly when comparing vision science to the more applied fields. A pilot using peripheral vision to monitor a peripheral gauge, or a driver navigating the road while noticing a motorcycle in the side mirror, may be doing a gist-like scene-perception task (Larson & Loschky, 2009; Loschky et al., 2007; Oliva, 2005; Rousselet et al., 2005), for which they draw information from a sizeable region centered on their point of gaze (Mackworth & Morandi, 1967), while simultaneously monitoring known peripheral locations. Peripheral vision may rely upon simple low-level saliency to detect hazards or obstacles (Crundall et al., 2003), but it remains an open question whether this wide-field-of-view monitoring additionally relies on more complex recognition processes like gist or event identification. In sports, athletes need to use their wide field of view, for example to monitor opponents and teammates (Vater et al., 2020).

A major difference between applied and basic science seems to be that, in the applied domains, monitoring is necessary but often not treated as an explicit, conscious task, in contrast to the explicit tasks common in vision science (as discussed in Vater et al., 2017a, 2017b). In sum, the diverse cases of monitoring that exist outside the laboratory suggest that we are almost always doing multiple tasks at once, without being aware of doing some of them, because the world is too complex and dynamic to do otherwise.

While peripheral vision is, of course, essential in many cases to plan saccades (Deubel & Schneider, 1996; Kowler et al., 1995), saccade planning is merely one special case of what we use peripheral vision for more broadly. We can, to some degree, detect hazards, road signs, and obstacles with peripheral vision, and use this information to guide a saccade if needed. However, a number of factors make it difficult to assess saccade planning. From vision science, we know that reducing information in the periphery (e.g., by filtering the image or adding noise) may reduce the likelihood of saccades to these less informative locations (Cajar et al., 2016; Nuthmann, 2014). In our review, we note that it is also a question of task demands, what the observer knows about the environment, and the tradeoffs involved in making or withholding a saccade. Making a saccade always puts previously foveated information in the periphery, which can have its costs. For example, looking away from the road ahead can result in a collision when the car ahead brakes, but the driver fails to perceive that braking in time (foveating the car would have been better). In sports, looking away from the opponent in martial arts can result in losing a bout when the punch or kick is seen too late (Hausegger et al., 2019). In both examples, the task must be solved under time pressure, and under these circumstances, the observer must account for the potential information they might acquire by moving their eyes, but also the information they would lose while the saccade was in progress. If researchers do not properly address factors like time pressure and situational context, one could, for instance, erroneously conclude that expert drivers know less about peripheral hazards than novice drivers, because experts rarely fixate hazards.
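To make the degradation manipulations mentioned above concrete, the sketch below applies eccentricity-dependent blur to an image around a (simulated) gaze point, so that information farther from fixation is progressively low-pass filtered. This is a minimal illustration, not the procedure of any study cited here; the gaze position, ring boundaries, and blur strengths are assumptions chosen for demonstration.

```python
import cv2
import numpy as np

def peripheral_blur(img: np.ndarray, gaze_xy: tuple, ring_px=(100, 250, 450)) -> np.ndarray:
    """Blur an image progressively with distance from a gaze point.

    Pixels within ring_px[0] of gaze stay sharp; successive rings get
    increasingly strong Gaussian blur, crudely mimicking the loss of
    high spatial frequencies in peripheral vision.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])

    out = img.copy()
    # Precompute blurred versions at increasing strengths (odd kernel sizes).
    blurred = [cv2.GaussianBlur(img, (k, k), 0) for k in (9, 25, 51)]
    for level, radius in enumerate(ring_px):
        mask = dist > radius
        out[mask] = blurred[level][mask]
    return out

# Synthetic stand-in for a road-scene photograph (replace with cv2.imread(...)).
rng = np.random.default_rng(0)
img = (rng.random((480, 640, 3)) * 255).astype(np.uint8)
display = peripheral_blur(img, gaze_xy=(320, 240))
cv2.imwrite("peripheral_blur_demo.png", display)
```

In a true gaze-contingent display, this filtering would be recomputed (or approximated on the GPU) on every frame, driven by the latest eye-tracker sample.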

Our review additionally provides key insights into the factors that impact our ability to use peripheral information, including knowledge, aging, distraction, and emotional state. That greater knowledge or expertise leads to better visual performance is, of course, an accepted fact. However, at least in basic vision science, the prior knowledge often takes the form of reducing the number of likely target locations in a search task, or reducing the set of possible objects in an object recognition task. For example, prior knowledge aids monitoring not only in driving studies but also in sports (M. Williams & Davids, 1995) as well as in basic vision science (Castelhano & Heaven, 2011; Draschkow & Võ, 2017; Tsai et al., 2007). In vision science, it is understood that our knowledge about scene context helps to identify peripheral objects, at least in part by narrowing down the possible objects to those likely to occur in the scene (Wijntjes & Rosenholtz, 2018). In sport science, experts are better able to monitor the movements of other players (Vater et al., 2019), which may be due to additional knowledge about the likelihood of certain movements. However, while applied vision shows similar effects, like the ability to use peripheral vision to interact with buttons or monitor alerts at known locations, knowledge can also impact the use of peripheral vision in a somewhat different way. In some real-life situations, people can quickly acquire enough information from a single glance at an object to then rely on peripheral vision alone. For example, after a single glance at a pedestrian, a driver can thereafter monitor that pedestrian well enough to avoid a collision (cf. Eckstein et al., 2006; Torralba et al., 2006). With a glance to gather knowledge about the stairs, one can continue up them without further need to fixate each riser. Route familiarity induces drivers to use peripheral vision more than they would on an unfamiliar route (Mourant & Rockwell, 1970). It is as if one can become an “expert” about a particular location or situation, sometimes from a mere glance, and then, as needed, fill in the information not available to peripheral vision. If so, one might expect to observe more effects of expertise and knowledge in peripheral tasks than in more foveal tasks. To put it simply, knowledge may improve the utility of limited peripheral information.

However, age is closely intertwined with expertise, because the older participants are, the more knowledge they have (theoretically) acquired. Yet, from fundamental research, we know that contrast sensitivity and acuity decline with age (Owsley, 2011). That means, especially for applied research, that declines in visual capability and expertise effects need to be separated, rendering the question of expertise more complicated.

Distraction has long been known to have adverse perceptual impacts, as shown in inattentional blindness (Mack & Rock, 1998; Wood & Simons, 2019) and dual-task experiments (Rosenholtz et al., 2012; VanRullen et al., 2004). In real-world tasks, as in basic science research on tunnel vision, distraction changes fixation patterns, both when there is a secondary visual task and, indirectly, due to load alone (Gaspar et al., 2016; Ringer et al., 2016; Ward et al., 2018). How distraction affects the use of peripheral vision in sports has yet to be examined. It can, however, be expected that distraction is a factor: when a basketball player is preparing a free throw, for example, members of the crowd supporting the opposing team might intentionally move and make noise to try to distract them. Furthermore, emotions and their impact on perception are well studied in sports; stress and anxiety are known to impact decision times and gaze behavior (Vater, Roca, & Williams, 2016b), and especially the processing efficiency of foveated information (Vine et al., 2013). Vision science has examined questions of valence, i.e., the impact of stimulus attractiveness or aversiveness on the performance of visual tasks (e.g., happy, sad, or scary stimuli impact reaction times or lead to distraction), rather than the impact of emotional states (Bugg, 2015). The finding that emotions cause people to miss peripheral targets, particularly when they are task-irrelevant, may suggest a tradeoff between relevant and irrelevant information under “load” from one’s emotional state, analogous to the impact of more general cognitive load (Engström et al., 2017).

Three recommendations for future peripheral vision research

Our goal here is to propose three potential avenues for future research, drawing from this review: first, probing the contribution of various portions of the visual field to determine their role in particular tasks, and to confirm or refute our view of peripheral vision’s role in these tasks; second, using eye tracking in a new way, asking not where participants look but where they do not, since the absence of a gaze to a certain location does not mean the participant has no information; finally, examining cases where participants are or are not permitted to look at particular locations, to determine whether their informational needs can only be met by saccades and subsequent fixation, or whether peripheral vision can serve their needs. These approaches draw from techniques (e.g., gaze-contingency paradigms, as pioneered by McConkie & Rayner, 1975) used across basic and applied research, but will provide answers to key questions at the intersection of real-world tasks, peripheral vision, and saccades.

The first line of research focuses on occluding portions of the field of view, or vision entirely (it may be that vision is not needed at all), to investigate changes in performance and see whether the occluded region of the visual field made a meaningful contribution to the task at hand (for an early study on walking with very low vision and limited peripheral vision, see Pelli, 1987; all references that used occlusion methods in the set of included studies are marked with superscript “O” in Table 5). That it is possible to drive a car even without vision – at least for some seconds – has been shown in self-paced occlusion studies on real roads (cf. Senders et al., 1967). This research shows that experienced drivers, especially, sometimes do not need vision at all times to steer a car (for a recent review, see Kujala et al., 2021). Therefore, it is important to figure out when and how peripheral vision is used. One way to do this is with gaze-contingent paradigms, which are common in vision science and in some applied laboratory studies (e.g., Ryu et al., 2013; Ryu et al., 2015) and allow stimuli to be manipulated based on where the observer is looking at any given time. This paradigm could also be used to investigate peripheral preview capabilities (all references discussing the preview functionality can be found in Table 5, “Presaccadic preview”). One could, for example, manipulate the usefulness of information at the saccade target during or immediately prior to the saccade. If peripheral information is used, then the fixation duration on the target should be shorter when the information remains the same, and longer when it is changed (as it then needs to be updated with foveal vision). In on-road studies, where such stimulus control is impractical, participants could be instructed where to look (with compliance verified by eye tracking or by tasks that require fixation; e.g., Wolfe et al., 2019), as has been done in some driving studies (Lehtonen et al., 2014; Lehtonen et al., 2018). By doing so, it becomes possible to control the eccentricity of events and to determine how task performance and reaction time change accordingly; that is, the penalties that occur when an observer must rely on peripheral vision.
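One way to test the preview hypothesis just described is to compare fixation durations on the saccade target between trials where target information remained the same across the saccade and trials where it was changed. The sketch below assumes a per-trial table with hypothetical columns `condition` ("same" vs. "changed") and `fixation_ms`; it is an illustrative analysis, not one taken from the studies cited.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-trial data: 'condition' marks whether the saccade target
# was unchanged ("same") or altered ("changed") during the saccade;
# 'fixation_ms' is the duration of the first fixation on the target.
trials = pd.read_csv("preview_trials.csv")  # assumed file and format

same = trials.loc[trials["condition"] == "same", "fixation_ms"]
changed = trials.loc[trials["condition"] == "changed", "fixation_ms"]

# If peripheral preview information is used, first fixations should be
# shorter on unchanged targets than on changed ones.
t, p = stats.ttest_ind(same, changed)
print(f"same: {same.mean():.0f} ms, changed: {changed.mean():.0f} ms, "
      f"t = {t:.2f}, p = {p:.3f}")
```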

The other two lines of research use precise eye tracking in conjunction with motor responses that reveal what information the participant requires for a particular task. One variant would be to ask whether participants are using information they are not fixating, suggesting a reliance on peripheral vision, to complete specific actions. This might be done relatively easily, since applied research often lets participants freely view their environment while monitoring gaze position (cf. Peißl et al., 2018, for a review on eye tracking in aviation; also Ziv, 2017). For example, if participants do not fixate an obstacle but step over it, they must have used peripheral vision to do so (Marigold et al., 2007; see Marigold, 2008, for a review). Similarly, in driving, if a driver began to steer away from an obstacle before fixating it, this could only have been based on peripheral information. It should, however, be noted that it is easy to misuse and misinterpret eye-tracking data (B. Wolfe et al., 2020), since it is impossible to be certain that participants are actually using the information they are fixating (e.g., looked-but-failed-to-see errors; Herslund & Jørgensen, 2003).
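To make this logic concrete: given synchronized gaze and event data, one can flag trials in which an action (a step over an obstacle, a steering correction) began before the obstacle was ever fixated, implying that peripheral vision guided the action. The sketch below is illustrative; the data layout (gaze samples with angular distance to the obstacle, an action-onset timestamp) and the 2° fixation criterion are assumptions, not a standard from the cited studies.

```python
import numpy as np

FIXATION_RADIUS_DEG = 2.0  # assumed criterion for "fixating" the obstacle

def acted_before_fixating(gaze_t: np.ndarray,
                          gaze_obstacle_deg: np.ndarray,
                          action_onset_t: float) -> bool:
    """Return True if the action started before gaze ever came within
    FIXATION_RADIUS_DEG of the obstacle (i.e., peripherally guided).

    gaze_t: sample timestamps (s); gaze_obstacle_deg: angular distance (deg)
    between gaze and obstacle at each sample; action_onset_t: time (s) the
    avoidance action began.
    """
    on_obstacle = gaze_obstacle_deg < FIXATION_RADIUS_DEG
    if not on_obstacle.any():
        return True  # the obstacle was never fixated at all
    first_fixation_t = gaze_t[on_obstacle][0]
    return action_onset_t < first_fixation_t

# Toy trial: gaze stays >5 deg from the obstacle until 1.2 s,
# but the avoidance action begins at 0.9 s.
t = np.arange(0.0, 2.0, 0.01)
dist = np.where(t < 1.2, 6.0, 1.0)
print(acted_before_fixating(t, dist, action_onset_t=0.9))  # True
```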

Finally, one can reason about what peripheral vision might be used for by making use of models of the information available across the field of view and across a saccade. Vision science has made considerable progress on modeling and visualizing the information preserved in peripheral vision (Balas et al., 2009; Deza et al., 2018; Doerig et al., 2019; Freeman & Simoncelli, 2011; Rosenholtz, Huang, Raj, et al., 2012). These models, and work inspired by them, may help experimenters identify relevant information that does or does not survive peripherally as a function of eccentricity, helping us understand why we may or may not saccade to and fixate an object.
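As a toy illustration of the flavor of such models (not an implementation of the cited ones, which pool rich sets of texture statistics over carefully shaped regions), the sketch below summarizes an image by local luminance mean and contrast within pooling windows whose size grows roughly linearly with eccentricity, in loose accord with Bouma-style scaling. The grid spacing, the scaling factor, and the choice of statistics are simplifying assumptions.

```python
import numpy as np

def pooled_statistics(img: np.ndarray, fovea_xy: tuple, px_per_deg: float = 40.0):
    """Summarize a grayscale image by local mean and std within square
    pooling windows that grow ~linearly with eccentricity.

    Returns a list of (center_xy, eccentricity_deg, mean, std) tuples.
    A toy stand-in for summary-statistic models of peripheral vision,
    not a reimplementation of them.
    """
    h, w = img.shape
    results = []
    step = 32  # pooling centers on a coarse grid (assumption)
    for cy in range(0, h, step):
        for cx in range(0, w, step):
            ecc_px = np.hypot(cx - fovea_xy[0], cy - fovea_xy[1])
            ecc_deg = ecc_px / px_per_deg
            # Window half-size: at least 4 px, growing with eccentricity.
            half = int(max(4, 0.25 * ecc_px))
            y0, y1 = max(0, cy - half), min(h, cy + half)
            x0, x1 = max(0, cx - half), min(w, cx + half)
            patch = img[y0:y1, x0:x1]
            results.append(((cx, cy), ecc_deg, float(patch.mean()), float(patch.std())))
    return results

# Toy usage with random "image" data; real use would pass a grayscale photo.
rng = np.random.default_rng(0)
img = rng.random((256, 256))
stats = pooled_statistics(img, fovea_xy=(128, 128))
print(f"{len(stats)} pooling regions; farthest at {max(s[1] for s in stats):.1f} deg")
```

The key property this captures is that the same image detail is represented ever more coarsely with increasing eccentricity, which is exactly the information loss the models above characterize far more precisely.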

Summary

Using peripheral vision is intrinsic to many real-life tasks, like driving, walking, and aviation, and its role is acknowledged in sport science and well investigated in vision science, but no review has previously tried to draw these very different threads together. Here, we have done so, showing commonalities across a range of tasks in very different settings. These reflect a global functionality for peripheral vision, anchored in monitoring and saccade planning, yet one that defies simple classification, since it is susceptible to interference from distraction, multitasking, and other factors. We then drew on these elements to propose avenues for future research, including manipulating what visual information is available, investigating assumptions about which tasks require foveal information, and examining when and why we look where we do in real-world tasks, based on our informational needs. Peripheral vision is the sea we all swim in, from basic research in the laboratory to practitioners solving problems in the field, and by understanding how and why we use it, and when and why we do not, we can better understand its capabilities and limitations, and better explain human behavior.

Funding

Open access funding provided by University of Bern

Data availability

Table 3 is provided as an Excel document on the Open Science Framework (https://osf.io/vea5r/?view_only=ba8597fef6514be68082d9e878fff5d2). The review was not preregistered.

Declarations

Conflicts of interest

We have no conflicts of interest to disclose.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Abramov I, Gordon J. Color vision in the peripheral retina. I. Spectral sensitivity. Journal of the Optical Society of America. 1977;67(2):195–202. doi: 10.1364/JOSA.67.000195. [DOI] [PubMed] [Google Scholar]
  2. Alberti CF, Shahar A, Crundall D. Are experienced drivers more likely than novice drivers to benefit from driving simulations with a wide field of view? Transportation Research Part F: Traffic Psychology and Behaviour. 2014;27:124–132. doi: 10.1016/j.trf.2014.09.011. [DOI] [Google Scholar]
  3. Anstis SM. A chart demonstrating variations in acuity with retinal position. Vision Research. 1974;14(7):589–592. doi: 10.1016/0042-6989(74)90049-2. [DOI] [PubMed] [Google Scholar]
  4. Ariely D. Seeing sets: Representation by statistical properties. Psychological Science. 2001;12(2):157–162. doi: 10.1111/1467-9280.00327. [DOI] [PubMed] [Google Scholar]
  5. Balas, B., Nakano, L., & Rosenholtz, R. (2009). A summary-statistic representation in peripheral vision explains visual crowding. Journal of Vision, 9(12), 13.1–18. 10.1167/9.12.13 [DOI] [PMC free article] [PubMed]
  6. Ball KK, Beard BL, Roenker DL, Miller RL, Griggs DS. Age and visual search: Expanding the useful field of view. Journal of the Optical Society of America a: Optics and Image Science, and Vision. 1988;5(12):2210–2219. doi: 10.1364/JOSAA.5.002210. [DOI] [PubMed] [Google Scholar]
  7. Banks WP, White H. Lateral interference and perceptual grouping in visual detection. Perception & Psychophysics. 1984;36(3):285–295. doi: 10.3758/BF03206370. [DOI] [PubMed] [Google Scholar]
  8. Bardy BG, Warren WH, Kay BA. The role of central and peripheral vision in postural control during walking. Perception & Psychophysics. 1999;61(7):1356–1368. doi: 10.3758/BF03206186. [DOI] [PubMed] [Google Scholar]
  9. Beh HC, Hirst R. Performance on driving-related tasks during music. Ergonomics. 1999;42(8):1087–1098. doi: 10.1080/001401399185153. [DOI] [Google Scholar]
  10. Berencsi A, Ishihara M, Imanaka K. The functional role of central and peripheral vision in the control of posture. Human Movement Science. 2005;24(5-6):689–709. doi: 10.1016/j.humov.2005.10.014. [DOI] [PubMed] [Google Scholar]
  11. Bernard J-B, Chung STL. The dependence of crowding on flanker complexity and target-flanker similarity. Journal of Vision. 2011;11(8):1. doi: 10.1167/11.8.1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Bian Z, Kang JJ, Andersen GJ. Changes in Extent of Spatial Attention with Increased Workload in Dual-Task Driving. Transportation Research Record: Journal of the Transportation Research Board. 2010;2185(1):8–14. doi: 10.3141/2185-02. [DOI] [Google Scholar]
  13. Boucart M, Moroni C, Thibaut M, Szaffarczyk S, Greene M. Scene categorization at large visual eccentricities. Vision Research. 2013;86:35–42. doi: 10.1016/j.visres.2013.04.006. [DOI] [PubMed] [Google Scholar]
  14. Bouma H. Interaction effects in parafoveal letter recognition. Nature. 1970;226(5241):177–178. doi: 10.1038/226177a0. [DOI] [PubMed] [Google Scholar]
  15. Brams S, Hooge ITC, Ziv G, Dauwe S, Evens K, De Wolf T, Levin O, Wagemans J, Helsen WF. Does effective gaze behavior lead to enhanced performance in a complex error-detection cockpit task? PLoS One. 2018;13(11):e0207439. doi: 10.1371/journal.pone.0207439. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Briggs GF, Hole GJ, Land MF. Imagery-inducing distraction leads to cognitive tunnelling and deteriorated driving performance. Transportation Research Part F: Traffic Psychology and Behaviour. 2016;38:106–117. doi: 10.1016/j.trf.2016.01.007. [DOI] [Google Scholar]
  17. Bugg JM. The relative attractiveness of distractors and targets affects the coming and going of item-specific control: Evidence from flanker tasks. Attention, Perception, & Psychophysics. 2015;77(2):373–389. doi: 10.3758/s13414-014-0752-x. [DOI] [PubMed] [Google Scholar]
  18. Cajar A, Schneeweiß P, Engbert R, Laubrock J. Coupling of attention and saccades when viewing scenes with central and peripheral degradation. Journal of Vision. 2016;16(2):8. doi: 10.1167/16.2.8. [DOI] [PubMed] [Google Scholar]
  19. Cameron E, Tai JC, Carrasco M. Covert attention affects the psychometric function of contrast sensitivity. Vision Research. 2002;42(8):949–967. doi: 10.1016/S0042-6989(02)00039-1. [DOI] [PubMed] [Google Scholar]
  20. Carrasco M. Visual attention: The past 25 years. Vision Research. 2011;51(13):1484–1525. doi: 10.1016/j.visres.2011.04.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Carrasco M, Barbot A. Spatial attention alters visual appearance. Current Opinion in Psychology. 2019;29:56–64. doi: 10.1016/j.copsyc.2018.10.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Carrasco M, Giordano AM, McElree B. Attention speeds processing across eccentricity: Feature and conjunction searches. Vision Research. 2006;46(13):2028–2040. doi: 10.1016/j.visres.2005.12.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Castelhano MS, Heaven C. Scene context influences without scene gist: Eye movements guided by spatial associations in visual search. Psychonomic Bulletin and Review. 2011;18(5):890–896. doi: 10.3758/s13423-011-0107-8. [DOI] [PubMed] [Google Scholar]
  24. Cinelli ME, Patla AE, Allard F. Behaviour and gaze analyses during a goal-directed locomotor task. Quarterly Journal of Experimental Psychology. 2009;62(3):483–499. doi: 10.1080/17470210802168583. [DOI] [PubMed] [Google Scholar]
  25. Clayden AC, Fisher RB, Nuthmann A. On the relative (un)importance of foveal vision during letter search in naturalistic scenes. Vision Research. 2020;177:41–55. doi: 10.1016/j.visres.2020.07.005. [DOI] [PubMed] [Google Scholar]
  26. Cooper JM, Medeiros-Ward N, Strayer DL. The impact of eye movements and cognitive workload on lateral position variability in driving. Human Factors. 2013;55(5):1001–1014. doi: 10.1177/0018720813480177. [DOI] [PubMed] [Google Scholar]
  27. Crone, R. A. (1977). Die Physiologie der Netzhautperipherie [The physiology of the peripheral retina]. In W. Jaeger (Ed.), Deutsche Ophthalmologische Gesellschaft, Bericht über die Zusammenkunft in Essen 1975: Periphere Retina, 74 (pp. 17–21). J.F. Bergmann-Verlag. [PubMed]
  28. Crundall D, Chapman P, Phelps N, Underwood G. Eye movements and hazard perception in police pursuit and emergency response driving. Journal of Experimental Psychology. Applied. 2003;9(3):163–174. doi: 10.1037/1076-898X.9.3.163. [DOI] [PubMed] [Google Scholar]
  29. Crundall D, Shenton C, Underwood G. Eye movements during intentional car following. Perception. 2004;33(8):975–986. doi: 10.1068/p5105. [DOI] [PubMed] [Google Scholar]
  30. Crundall D, Underwood G, Chapman P. Driving experience and the functional field of view. Perception. 1999;28(9):1075–1087. doi: 10.1068/p281075. [DOI] [PubMed] [Google Scholar]
  31. Crundall D, Underwood G, Chapman P. Attending to the peripheral world while driving. Applied Cognitive Psychology. 2002;16(4):459–475. doi: 10.1002/acp.806. [DOI] [Google Scholar]
  32. Curcio CA, Sloan KR, Kalina RE, Hendrickson AE. Human photoreceptor topography. The Journal of Comparative Neurology. 1990;292(4):497–523. doi: 10.1002/cne.902920402. [DOI] [PubMed] [Google Scholar]
  33. Dakin SC, Watt RJ. The computation of orientation statistics from visual texture. Vision Research. 1997;37(22):3181–3192. doi: 10.1016/S0042-6989(97)00133-8. [DOI] [PubMed] [Google Scholar]
  34. Danno M, Kutila M, Kortelainen JM. Measurement of driver's visual attention capabilities using real-time UFOV method. International Journal of Intelligent Transportation Systems Research. 2011;9(3):115–127. doi: 10.1007/s13177-011-0033-1. [DOI] [Google Scholar]
  35. David EJ, Beitner J, Võ ML-H. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. Journal of Vision. 2021;21(7):3. doi: 10.1167/jov.21.7.3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Deubel H, Schneider WX. Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research. 1996;36(12):1827–1837. doi: 10.1016/0042-6989(95)00294-4. [DOI] [PubMed] [Google Scholar]
  37. Deza, A., Jonnalagadda, A., & Eckstein, M. P. (2018). Towards Metamerism via Foveated Style Transfer. International Conference on Learning Representations. https://openreview.net/forum?id=BJzbG20cFQ
  38. Doerig, A., Bornet, A., Rosenholtz, R., Francis, G., Clarke, A. M., & Herzog, M. H. (2019). Beyond Bouma's window: How to explain global aspects of crowding? PLoS Computational Biology, e1006580. 10.1371/journal.pcbi.1006580 [DOI] [PMC free article] [PubMed]
  39. Doshi A, Trivedi MM. Head and eye gaze dynamics during visual attention shifts in complex environments. Journal of Vision. 2012;12(2):9. doi: 10.1167/12.2.9. [DOI] [PubMed] [Google Scholar]
  40. Draschkow D, Võ ML-H. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search. Scientific Reports. 2017;7(1):16471. doi: 10.1038/s41598-017-16739-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Eckstein MP, Drescher BA, Shimozaki SS. Attentional cues in real scenes, saccadic targeting, and Bayesian priors. Psychological Science. 2006;17(11):973–980. doi: 10.1111/j.1467-9280.2006.01815.x. [DOI] [PubMed] [Google Scholar]
  42. Edquist J, Horberry T, Hosking S, Johnston I. Effects of advertising billboards during simulated driving. Applied Ergonomics. 2011;42(4):619–626. doi: 10.1016/j.apergo.2010.08.013. [DOI] [PubMed] [Google Scholar]
  43. Ehinger KA, Rosenholtz R. A general account of peripheral encoding also predicts scene perception performance. Journal of Vision. 2016;16(2):13. doi: 10.1167/16.2.13. [DOI] [PubMed] [Google Scholar]
  44. Engström J, Markkula G, Victor T, Merat N. Effects of Cognitive Load on Driving Performance: The Cognitive Control Hypothesis. Human Factors. 2017;59(5):734–764. doi: 10.1177/0018720817690639. [DOI] [PubMed] [Google Scholar]
  45. Fajen BR, Matthis JS. Visual and non-visual contributions to the perception of object motion during self-motion. PloS One. 2013;8(2):e55446. doi: 10.1371/journal.pone.0055446. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Fehd HM, Seiffert AE. Eye movements during multiple object tracking: Where do participants look? Cognition. 2008;108(1):201–209. doi: 10.1016/j.cognition.2007.11.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Fehd HM, Seiffert AE. Looking at the center of the targets helps multiple object tracking. Journal of Vision. 2010;10(4):19. doi: 10.1167/10.4.19. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Feld JA, Plummer P. Visual scanning behavior during distracted walking in healthy young adults. Gait and Posture. 2019;67:219–223. doi: 10.1016/j.gaitpost.2018.10.017. [DOI] [PubMed] [Google Scholar]
  49. Freeman, J., & Simoncelli, E. P. (2011). Metamers of the ventral stream. Nature Neuroscience, 14(9), 1195–1201. 10.1038/nn.2889 [DOI] [PMC free article] [PubMed]
  50. Gaspar JG, Ward N, Neider MB, Crowell J, Carbonari R, Kaczmarski H, Ringer RV, Johnson AP, Kramer AF, Loschky LC. Measuring the Useful Field of View during simulated driving with gaze-contingent displays. Human Factors. 2016;58(4):630–641. doi: 10.1177/0018720816642092. [DOI] [PubMed] [Google Scholar]
  51. Gegenfurtner KR. The interaction between vision and eye movements. Perception. 2016;45(12):1333–1357. doi: 10.1177/0301006616657097. [DOI] [PubMed] [Google Scholar]
  52. Geuzebroek AC, van den Berg AV. Eccentricity scale independence for scene perception in the first tens of milliseconds. Journal of Vision. 2018;18(9):9. doi: 10.1167/18.9.9. [DOI] [PubMed] [Google Scholar]
  53. Godley ST, Triggs TJ, Fildes BN. Driving simulator validation for speed research. Accident Analysis & Prevention. 2002;34(5):589–600. doi: 10.1016/S0001-4575(01)00056-2. [DOI] [PubMed] [Google Scholar]
  54. Golomb JD, Nguyen-Phuc AY, Mazer JA, McCarthy G, Chun MM. Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements. Journal of Neuroscience. 2010;30(31):10493–10506. doi: 10.1523/JNEUROSCI.1546-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Gordon J, Abramov I. Color vision in the peripheral retina. Ii. Hue and saturation. Journal of the Optical Society of America. 1977;67(2):202–207. doi: 10.1364/JOSA.67.000202. [DOI] [PubMed] [Google Scholar]
  56. Greenwald HS, Knill DC. Cue integration outside central fixation: A study of grasping in depth. Journal of Vision. 2009;9(2):11.1–16. doi: 10.1167/9.2.11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Haberman J, Whitney D. Ensemble Perception. In: Wolfe JM, Robertson L, editors. From Perception to Consciousness. Oxford University Press; 2012. pp. 339–349. [Google Scholar]
  58. Haigney D, Taylor R, Westerman S. Concurrent mobile (cellular) phone use and driving performance: task demand characteristics and compensatory processes. Transportation Research Part F: Traffic Psychology and Behaviour. 2000;3(3):113–121. doi: 10.1016/S1369-8478(00)00020-6. [DOI] [Google Scholar]
  59. Hansen T, Pracejus L, Gegenfurtner KR. Color perception in the intermediate periphery of the visual field. Journal of Vision. 2009;9(4):26.1-12. doi: 10.1167/9.4.26. [DOI] [PubMed] [Google Scholar]
  60. Harbluk JL, Noy YI, Trbovich PL, Eizenman M. An on-road assessment of cognitive distraction: Impacts on drivers' visual behavior and braking performance. Accident; Analysis and Prevention. 2007;39(2):372–379. doi: 10.1016/j.aap.2006.08.013. [DOI] [PubMed] [Google Scholar]
  61. Harrison WJ, Mattingley JB, Remington RW. Eye movement targets are released from visual crowding. Journal of Neuroscience. 2013;33(7):2927–2933. doi: 10.1523/JNEUROSCI.4172-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Harrison WJ, Retell JD, Remington RW, Mattingley JB. Visual crowding at a distance during predictive remapping. Current Biology. 2013;23(9):793–798. doi: 10.1016/j.cub.2013.03.050. [DOI] [PubMed] [Google Scholar]
  63. Hasanzadeh S, Esmaeili B, Dodd MD. Examining the relationship between construction workers' visual attention and situation awareness under fall and tripping hazard conditions: Using mobile eye tracking. Journal of Construction Engineering and Management. 2018;144(7):1–18. doi: 10.1061/(ASCE)CO.1943-7862.0001516. [DOI] [Google Scholar]
  64. Hausegger T, Vater C, Hossner E-J. Peripheral vision in martial arts experts: The cost-dependent anchoring of gaze. Journal of Sport and Exercise Psychology. 2019;41(3):137–146. doi: 10.1123/jsep.2018-0091. [DOI] [PubMed] [Google Scholar]
  65. Hayhoe MM, Shrivastava A, Mruczek R, Pelz JB. Visual memory and motor planning in a natural task. Journal of Vision. 2003;3(1):49–63. doi: 10.1167/3.1.6. [DOI] [PubMed] [Google Scholar]
  66. Herslund M-B, Jørgensen NO. Looked-but-failed-to-see-errors in traffic. Accident Analysis & Prevention. 2003;35(6):885–891. doi: 10.1016/S0001-4575(02)00095-7. [DOI] [PubMed] [Google Scholar]
  67. Huestegge L, Bröcker A. Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes. Journal of Vision. 2016;16(2):11. doi: 10.1167/16.2.11. [DOI] [PubMed] [Google Scholar]
  68. Hulleman J, Olivers CNL. The impending demise of the item in visual search. The Behavioral and Brain Sciences. 2017;40:e132. doi: 10.1017/S0140525X15002794. [DOI] [PubMed] [Google Scholar]
  69. Imbert J-P, Hodgetts HM, Parise R, Vachon F, Dehais F, Tremblay S. Attentional costs and failures in air traffic control notifications. Ergonomics. 2014;57(12):1817–1832. doi: 10.1080/00140139.2014.952680. [DOI] [PubMed] [Google Scholar]
  70. Ioannidou F, Hermens F, Hodgson TL. Mind your step: the effects of mobile phone use on gaze behavior in stair climbing. Journal of Technology in Behavioral Science. 2017;2(3-4):109–120. doi: 10.1007/s41347-017-0022-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Itti L, Koch C. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research. 2000;40(10-12):1489–1506. doi: 10.1016/S0042-6989(99)00163-7. [DOI] [PubMed] [Google Scholar]
  72. Janelle CM, Singer RN, Williams AM. External distraction and attentional narrowing: Visual search evidence. Journal of Sport and Exercise Psychology. 1999;21(1):70–91. doi: 10.1123/jsep.21.1.70. [DOI] [Google Scholar]
  73. Jovancevic J, Sullivan B, Hayhoe MM. Control of attention and gaze in complex environments. Journal of Vision. 2006;6(12):1431–1450. doi: 10.1167/6.12.9. [DOI] [PubMed] [Google Scholar]
  74. Kim J, Palmisano SA, Ash A, Allison RS. Pilot gaze and glideslope control. ACM Transactions on Applied Perception. 2010;7(3):18:1–18:18. [Google Scholar]
  75. King EC, McKay SM, Lee TA, Scovil CY, Peters AL, Maki BE. Gaze behavior of older adults in responding to unexpected loss of balance while walking in an unfamiliar environment: A pilot study. Journal of Optometry. 2009;2(3):119–126. doi: 10.3921/joptom.2009.119. [DOI] [Google Scholar]
  76. Kountouriotis GK, Merat N. Leading to distraction: Driver distraction, lead car, and road environment. Accident; Analysis and Prevention. 2016;89:22–30. doi: 10.1016/j.aap.2015.12.027. [DOI] [PubMed] [Google Scholar]
  77. Kountouriotis GK, Merat N, Floyd RC, Gardner PH, Wilkie RM. The role of gaze and road edge information during high-speed locomotion. Journal of Experimental Psychology. Human Perception and Performance. 2011;38(3):687–702. doi: 10.1037/a0026123. [DOI] [PubMed] [Google Scholar]
  78. Kowler E, Anderson E, Dosher B, Blaser E. The role of attention in the programming of saccades. Vision Research. 1995;35(13):1897–1916. doi: 10.1016/0042-6989(94)00279-U. [DOI] [PubMed] [Google Scholar]
  79. Kujala, T., Kircher, K., & Ahlström, C. (2021). A review of occlusion as a tool to assess attentional demand in driving. Human Factors: The Journal of the Human Factors and Ergonomics Society. Advance online publication. 10.1177/00187208211010953 [DOI] [PMC free article] [PubMed]
  80. Lamble D, Laakso M, Summala H. Detection thresholds in car following situations and peripheral vision: implications for positioning of visually demanding in-car displays. Ergonomics. 1999;42(6):807–815. doi: 10.1080/001401399185306. [DOI] [Google Scholar]
  81. Larson AM, Freeman TE, Ringer RV, Loschky LC. The spatiotemporal dynamics of scene gist recognition. Journal of Experimental Psychology. Human Perception and Performance. 2014;40(2):471–487. doi: 10.1037/a0034986. [DOI] [PubMed] [Google Scholar]
  82. Larson AM, Loschky LC. The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision. 2009;9(10):6.1-16. doi: 10.1167/9.10.6. [DOI] [PubMed] [Google Scholar]
  83. Lehtonen E, Lappi O, Koirikivi I, Summala H. Effect of driving experience on anticipatory look-ahead fixations in real curve driving. Accident; Analysis and Prevention. 2014;70:195–208. doi: 10.1016/j.aap.2014.04.002. [DOI] [PubMed] [Google Scholar]
  84. Lehtonen E, Lappi O, Koskiahde N, Mansikka T, Hietamäki J, Summala H. Gaze doesn't always lead steering. Accident; Analysis and Prevention. 2018;121:268–278. doi: 10.1016/j.aap.2018.09.026. [DOI] [PubMed] [Google Scholar]
  85. Lin CY, Hsu CC. Measurement of auditory cues in drivers distraction. Perceptual & Motor Skills. 2010;111(2):503–516. doi: 10.2466/03.13.20.24.26.PMS.111.5.503-516. [DOI] [PubMed] [Google Scholar]
  86. Livne T, Sagi D. Configuration influence on crowding. Journal of Vision. 2007;7(2):4.1-12. doi: 10.1167/7.2.4. [DOI] [PubMed] [Google Scholar]
  87. Loschky LC, Sethi A, Simons DJ, Pydimarri TN, Ochs D, Corbeille JL. The importance of information localization in scene gist recognition. Journal of Experimental Psychology. Human Perception and Performance. 2007;33(6):1431–1450. doi: 10.1037/0096-1523.33.6.1431. [DOI] [PubMed] [Google Scholar]
  88. Loschky LC, Szaffarczyk S, Beugnet C, Young ME, Boucart M. The contributions of central and peripheral vision to scene-gist recognition with a 180° visual field. Journal of Vision. 2019;19(5):15. doi: 10.1167/19.5.15. [DOI] [PubMed] [Google Scholar]
  89. Luo G, Vargas-Martin F, Peli E. The role of peripheral vision in saccade planning: Learning from people with tunnel vision. Journal of Vision. 2008;8(14):25. doi: 10.1167/8.14.25. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Luoma J. Perception and eye movements in simulated traffic situations. Acta Ophthalmologica. 1984;62(Suppl. 161):128–134. doi: 10.1111/j.1755-3768.1984.tb06794.x. [DOI] [PubMed] [Google Scholar]
  91. Mack A, Rock I. Inattentional blindness. MIT press; 1998. [Google Scholar]
  92. Mackworth NH, Morandi AJ. The gaze selects informative details within pictures. Perception & Psychophysics. 1967;2(11):547–552. doi: 10.3758/BF03210264. [DOI] [Google Scholar]
  93. Manassi M, Sayim B, Herzog MH. Grouping, pooling, and when bigger is better in visual crowding. Journal of Vision. 2012;12(10):13. doi: 10.1167/12.10.13. [DOI] [PubMed] [Google Scholar]
  94. Marigold DS. Role of peripheral visual cues in online visual guidance of locomotion. Exercise and Sport Sciences Reviews. 2008;36(3):145–151. doi: 10.1097/JES.0b013e31817bff72. [DOI] [PubMed] [Google Scholar]
  95. Marigold DS, Patla AE. Visual information from the lower visual field is important for walking across multi-surface terrain. Experimental Brain Research. 2008;188(1):23–31. doi: 10.1007/s00221-008-1335-7. [DOI] [PubMed] [Google Scholar]
  96. Marigold DS, Weerdesteyn V, Patla AE, Duysens J. Keep looking ahead? Re-direction of visual fixation does not always occur during an unpredictable obstacle avoidance task. Experimental Brain Research. 2007;176(1):32–42. doi: 10.1007/s00221-006-0598-0. [DOI] [PubMed] [Google Scholar]
  97. Mars, F., & Navarro, J. (2012). Where we look when we drive with or without active steering wheel control. PLoS One, 7(8). [DOI] [PMC free article] [PubMed]
  98. Matthis JS, Fajen BR. Visual control of foot placement when walking over complex terrain. Journal of Experimental Psychology. Human Perception and Performance. 2014;40(1):106–115. doi: 10.1037/a0033101. [DOI] [PubMed] [Google Scholar]
  99. Matthis JS, Yates JL, Hayhoe MM. Gaze and the Control of Foot Placement When Walking in Natural Terrain. Current Biology. 2018;28(8):1224–1233.e5. doi: 10.1016/j.cub.2018.03.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Mayeur A, Bremond R, Bastien JMC. Effect of task and eccentricity of the target on detection thresholds in mesopic vision: Implications for road lighting. Human Factors. 2008;50(4):712–721. doi: 10.1518/001872008X312260. [DOI] [PubMed] [Google Scholar]
  101. McConkie GW, Rayner K. The span of the effective stimulus during a fixation in reading. Perception & Psychophysics. 1975;17(6):578–586. doi: 10.3758/BF03203972. [DOI] [Google Scholar]
  102. Miura T. Coping with situational demands: A study of eye movements and peripheral vision performance. In: Gale AG, Brown ID, Taylor SP, Haslegrave CM, editors. Vision in Vehicles. Elsevier Science Publishers B.V; 1986. pp. 205–221. [Google Scholar]
  103. Miyasike-daSilva V, Allard F, McIlroy WE. Where do we look when we walk on stairs? Gaze behaviour on stairs, transitions, and handrails. Experimental Brain Research. 2011;209(1):73–83. doi: 10.1007/s00221-010-2520-z. [DOI] [PubMed] [Google Scholar]
  104. Miyasike-daSilva V, McIlroy WE. Gaze shifts during dual-tasking stair descent. Experimental Brain Research. 2016;234(11):3233–3243. doi: 10.1007/s00221-016-4721-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Miyasike-daSilva V, Singer JC, McIlroy WE. A role for the lower visual field information in stair climbing. Gait and Posture. 2019;70:162–167. doi: 10.1016/j.gaitpost.2019.02.033. [DOI] [PubMed] [Google Scholar]
  106. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine. 2009;6(7):e1000097. doi: 10.1371/journal.pmed.1000097. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Motter BC, Simoni DA. Changes in the functional visual field during search with and without eye movements. Vision Research. 2008;48(22):2382–2393. doi: 10.1016/j.visres.2008.07.020. [DOI] [PubMed] [Google Scholar]
  108. Mourant RR, Rockwell TH. Mapping eye-movement patterns to the visual scene in driving: An exploratory study. Human Factors. 1970;12(1):81–87. doi: 10.1177/001872087001200112. [DOI] [PubMed] [Google Scholar]
  109. Murray, N. G., Leon, M. P. de, Ambati, V. N. P., Saucedo, F., Kennedy, E., & Reed-Jones, R. J. (2014). Simulated visual field loss does not alter turning coordination in healthy young adults. Journal of Motor Behavior, 46(6), 423–431. [DOI] [PubMed]
  110. Nuthmann A. How do the regions of the visual field contribute to object search in real-world scenes? Evidence from eye movements. Journal of Experimental Psychology. Human Perception and Performance. 2014;40(1):342–360. doi: 10.1037/a0033854. [DOI] [PubMed] [Google Scholar]
  111. Oliva A. Gist of the scene. In: Itti L, Rees G, Tsotsos JK, editors. Neurobiology of Attention. Elsevier; 2005. pp. 251–256. [Google Scholar]
  112. Owsley C. Aging and vision. Vision Research. 2011;51(13):1610–1622. doi: 10.1016/j.visres.2010.10.020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  113. Pagon, R. A. (1988). Retinitis pigmentosa. Survey of Ophthalmology, 33(3), 137–177. 10.1016/0039-6257(88)90085-9 [DOI] [PubMed]
  114. Patla AE. How Is Human Gait Controlled by Vision. Ecological Psychology. 1998;10(3-4):287–302. doi: 10.1080/10407413.1998.9652686. [DOI] [Google Scholar]
  115. Patten CJD, Kircher A, Ostlund J, Nilsson L, Svenson O. Driver experience and cognitive workload in different traffic environments. Accident; Analysis and Prevention. 2006;38(5):887–894. doi: 10.1016/j.aap.2006.02.014. [DOI] [PubMed] [Google Scholar]
  116. Peißl S, Wickens CD, Baruah R. Eye-Tracking Measures in Aviation: A Selective Literature Review. International Journal of Aerospace Psychology. 2018;28(3-4):98–112. doi: 10.1080/24721840.2018.1514978. [DOI] [Google Scholar]
  117. Pelli, D. G. (1987). The visual requirements of mobility. In G. C. Woo (Ed.), Low Vision: Principles and Applications. Proceedings of the International Symposium on Low Vision, University of Waterloo, June 25-27 1986 (pp. 134–146). Springer New York.
  118. Pylyshyn ZW, Storm RW. Tracking multiple independent targets: Evidence for a parallel tracking mechanism. Spatial Vision. 1988;3(3):179–197. doi: 10.1163/156856888X00122. [DOI] [PubMed] [Google Scholar]
  119. Recarte MA, Nunes LM. Mental Workload While Driving: Effects on Visual Search, Discrimination, and Decision Making. Journal of Experimental Psychology. Applied. 2003;9(2):119–137. doi: 10.1037/1076-898X.9.2.119. [DOI] [PubMed] [Google Scholar]
  120. Reimer B, Mehler B, Wang Y, Coughlin JF. A field study on the impact of variations in shortterm memory demands on drivers' visual attention and driving performance across three age groups. Human Factors: The Journal of the Human Factors and Ergonomics Society. 2012;54(3):454–468. doi: 10.1177/0018720812437274. [DOI] [PubMed] [Google Scholar]
  121. Ringer RV, Coy AM, Larson AM, Loschky LC. Investigating Visual Crowding of Objects in Complex Real-World Scenes. I-Perception. 2021;12(2):204166952199415. doi: 10.1177/2041669521994150. [DOI] [PMC free article] [PubMed] [Google Scholar]
  122. Ringer RV, Throneburg Z, Johnson AP, Kramer AF, Loschky LC. Impairing the useful field of view in natural scenes: Tunnel vision versus general interference. Journal of Vision. 2016;16(2):7. doi: 10.1167/16.2.7. [DOI] [PubMed] [Google Scholar]
  123. Robertshaw KD, Wilkie RM. Does gaze influence steering around a bend? Journal of Vision. 2008;8(4):18.1-13. doi: 10.1167/8.4.18. [DOI] [PubMed] [Google Scholar]
  124. Robinski M, Stein M. Scanning techniques of helicopter pilots. Journal of Eye Movement Research. 2013;6(2):1–17. [Google Scholar]
  125. Rosenholtz R. Capabilities and limitations of peripheral vision. Annual Review of Vision Science. 2016;2:437–457. doi: 10.1146/annurev-vision-082114-035733. [DOI] [PubMed] [Google Scholar]
  126. Rosenholtz R. Demystifying visual awareness: Peripheral encoding plus limited decision complexity resolve the paradox of rich visual experience and curious perceptual failures. Attention, Perception, & Psychophysics. 2020;82(3):901–925. doi: 10.3758/s13414-019-01968-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  127. Rosenholtz R, Huang J, Ehinger KA. Rethinking the role of top-down attention in vision: Effects attributable to a lossy representation in peripheral vision. Frontiers in Psychology. 2012;3:13. doi: 10.3389/fpsyg.2012.00013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  128. Rousselet G, Joubert O, Fabre-Thorpe M. How long to get to the “gist” of real-world natural scenes? Visual Cognition. 2005;12(6):852–877. doi: 10.1080/13506280444000553. [DOI] [Google Scholar]
  129. Ryu D, Abernethy B, Mann DL, Poolton JM. The contributions of central and peripheral vision to expertise in basketball: How blur helps to provide a clearer picture. Journal of Experimental Psychology. Human Perception and Performance. 2015;41(1):167–185. doi: 10.1037/a0038306. [DOI] [PubMed] [Google Scholar]
130. Ryu D, Abernethy B, Mann DL, Poolton JM, Gorman AD. The role of central and peripheral vision in expert decision making. Perception. 2013;42(6):591–607. doi: 10.1068/p7487.
131. Sanocki T, Islam M, Doyon JK, Lee C. Rapid scene perception with tragic consequences: Observers miss perceiving vulnerable road users, especially in crowded traffic scenes. Attention, Perception, & Psychophysics. 2015;77(4):1252–1262. doi: 10.3758/s13414-015-0850-4.
132. Schaudt WA, Caufield KJ, Dyre BP. Effects of a virtual air speed error indicator on guidance accuracy and eye movement control during simulated flight. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2002;46(17):1594–1598. doi: 10.1177/154193120204601714.
133. Scialfa CT, Cordazzo S, Bubric K, Lyon J. Aging and visual crowding. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences. 2013;68(4):522–528. doi: 10.1093/geronb/gbs086.
134. Senders JW, Kristofferson AB, Levison WH, Dietrich CW. The attentional demand of automobile driving. Highway Research Record. 1967;195:9–17.
135. Seya Y, Nakayasu H, Yagi T. Useful field of view in simulated driving: Reaction times and eye movements of drivers. i-Perception. 2013;4:285–298. doi: 10.1068/i0512.
136. Shahar A, van Loon E, Clarke D, Crundall D. Attending overtaking cars and motorcycles through the mirrors before changing lanes. Accident Analysis & Prevention. 2012;44(1):104–110. doi: 10.1016/j.aap.2011.01.001.
137. Shinoda H, Hayhoe MM, Shrivastava A. What controls attention in natural environments? Vision Research. 2001;41(25–26):3535–3545. doi: 10.1016/S0042-6989(01)00199-7.
138. Stewart EEM, Valsecchi M, Schütz AC. A review of interactions between peripheral and foveal vision. Journal of Vision. 2020;20(12):1–35. doi: 10.1167/jov.20.12.2.
139. Strasburger H, Rentschler I, Jüttner M. Peripheral vision and pattern recognition: A review. Journal of Vision. 2011;11(5):13. doi: 10.1167/11.5.13.
140. Strayer DL, Johnston WA. Driven to distraction: Dual-task studies of simulated driving and conversing on a cellular telephone. Psychological Science. 2001;12(6):462–466. doi: 10.1111/1467-9280.00386.
141. Strayer DL, Cooper JM, McCarty MM, Getty DJ, Wheatley CL, Motzkus CJ, Goethe RM, Biondi F, Horrey WJ. Visual and cognitive demands of CarPlay, Android Auto, and five native infotainment systems. Human Factors. 2019;61(8):1371–1386. doi: 10.1177/0018720819836575.
142. Strayer DL, Drews FA, Johnston WA. Cell phone-induced failures of visual attention during simulated driving. Journal of Experimental Psychology: Applied. 2003;9(1):23–32. doi: 10.1037/1076-898X.9.1.23.
143. Summala H, Nieminen T, Punto M. Maintaining lane position with peripheral vision during in-vehicle tasks. Human Factors. 1996;38(3):442–451. doi: 10.1518/001872096778701944.
144. Sweeny TD, Haroz S, Whitney D. Perceiving group behavior: Sensitive ensemble coding mechanisms for biological motion of human crowds. Journal of Experimental Psychology: Human Perception and Performance. 2013;39(2):329–337. doi: 10.1037/a0028712.
145. Timmis MA, Bijl H, Turner K, Basevitch I, Taylor MJD, van Paridon KN. The impact of mobile phone use on where we look and how we walk when negotiating floor based obstacles. PLoS One. 2017;12(6):e0179802. doi: 10.1371/journal.pone.0179802.
146. Tong MH, Zohar O, Hayhoe MM. Control of gaze while walking: Task structure, reward, and uncertainty. Journal of Vision. 2017;17(1):28. doi: 10.1167/17.1.28.
147. Tootell RB, Silverman MS, Switkes E, de Valois RL. Deoxyglucose analysis of retinotopic organization in primate striate cortex. Science. 1982;218(4575):902–904. doi: 10.1126/science.7134981.
148. Torralba A, Oliva A, Castelhano MS, Henderson JM. Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review. 2006;113(4):766–786. doi: 10.1037/0033-295X.113.4.766.
149. Trouilloud A, Kauffmann L, Roux-Sibilon A, Rossel P, Boucart M, Mermillod M, Peyrin C. Rapid scene categorization: From coarse peripheral vision to fine central vision. Vision Research. 2020;170:60–72. doi: 10.1016/j.visres.2020.02.008.
150. Tsai Y, Viirre E, Strychacz C, Chase B, Jung T-P. Task performance and eye activity: Predicting behavior relating to cognitive workload. Aviation, Space, and Environmental Medicine. 2007;78:B176–B185.
151. Underwood G, Chapman P, Brocklehurst N, Underwood J, Crundall D. Visual attention while driving: Sequences of eye fixations made by experienced and novice drivers. Ergonomics. 2003;46(6):629–646. doi: 10.1080/0014013031000090116.
152. Underwood G, Phelps N, Wright C, van Loon E, Galpin A. Eye fixation scanpaths of younger and older drivers in a hazard perception task. Ophthalmic and Physiological Optics. 2005;25(4):346–356. doi: 10.1111/j.1475-1313.2005.00290.x.
153. Vaeyens R, Lenoir M, Williams AM, Philippaerts RM. Mechanisms underpinning successful decision making in skilled youth soccer players: An analysis of visual search behaviors. Journal of Motor Behavior. 2007;39(5):395–408. doi: 10.3200/JMBR.39.5.395-408.
154. VanRullen R, Reddy L, Koch C. Visual search and dual tasks reveal two distinct attentional resources. Journal of Cognitive Neuroscience. 2004;16(1):4–14. doi: 10.1162/089892904322755502.
155. Vargas-Martín F, Peli E. Eye movements of patients with tunnel vision while walking. Investigative Ophthalmology & Visual Science. 2006;47(12):5295–5302. doi: 10.1167/iovs.05-1043.
156. Vater C. How selective attention affects the detection of motion changes with peripheral vision in MOT. Heliyon. 2019;5(8):e02282. doi: 10.1016/j.heliyon.2019.e02282.
157. Vater C, Kredel R, Hossner E-J. Detecting single-target changes in multiple object tracking: The case of peripheral vision. Attention, Perception, & Psychophysics. 2016;78(4):1004–1019. doi: 10.3758/s13414-016-1078-7.
158. Vater C, Kredel R, Hossner E-J. Detecting target changes in multiple object tracking with peripheral vision: More pronounced eccentricity effects for changes in form than in motion. Journal of Experimental Psychology: Human Perception and Performance. 2017;43(5):903–913. doi: 10.1037/xhp0000376.
159. Vater C, Kredel R, Hossner E-J. Disentangling vision and attention in multiple-object tracking: How crowding and collisions affect gaze anchoring and dual-task performance. Journal of Vision. 2017;17(5):1–13. doi: 10.1167/17.5.21.
160. Vater C, Luginbühl SP, Magnaguagno L. Testing the functionality of peripheral vision in a mixed-methods football field study. Journal of Sports Sciences. 2019;37(24):2789–2797. doi: 10.1080/02640414.2019.1664100.
161. Vater C, Roca A, Williams AM. Effects of anxiety on anticipation and visual search in dynamic, time-constrained situations. Sport, Exercise, and Performance Psychology. 2016;5(3):179–192. doi: 10.1037/spy0000056.
162. Vater C, Williams AM, Hossner E-J. What do we see out of the corner of our eye? The role of visual pivots and gaze anchors in sport. International Review of Sport and Exercise Psychology. 2020;13(1):81–103. doi: 10.1080/1750984X.2019.1582082.
163. Victor TW, Harbluk JL, Engstrom JA. Sensitivity of eye-movement measures to in-vehicle task difficulty. Transportation Research Part F: Traffic Psychology and Behaviour. 2005;8(2):167–190. doi: 10.1016/j.trf.2005.04.014.
164. Vine SJ, Lee D, Moore LJ, Wilson MR. Quiet eye and choking: Online control breaks down at the point of performance failure. Medicine and Science in Sports and Exercise. 2013;45(10):1988–1994. doi: 10.1249/MSS.0b013e31829406c7.
165. Wang P, Cottrell GW. Central and peripheral vision for scene recognition: A neurocomputational modeling exploration. Journal of Vision. 2017;17(4):9. doi: 10.1167/17.4.9.
166. Ward N, Gaspar JG, Neider MB, Crowell J, Carbonari R, Kaczmarski H, Ringer RV, Johnson AP, Loschky LC, Kramer AF. Older adult multitasking performance using a gaze-contingent useful field of view. Human Factors. 2018;60(2):236–247. doi: 10.1177/0018720817745894.
167. Warren WH, Hannon DJ. Direction of self-motion is perceived from optical flow. Nature. 1988;336(6195):162–163. doi: 10.1038/336162a0.
168. Weigelt M, Güldenpenning I, Steggemann-Weinrich Y, Alhaj Ahmad Alaboud M, Kunde W. Control over the processing of the opponent's gaze direction in basketball experts. Psychonomic Bulletin & Review. 2017;24(3):828–834. doi: 10.3758/s13423-016-1140-4.
169. Wijntjes MWA, Rosenholtz R. Context mitigates crowding: Peripheral object recognition in real-world images. Cognition. 2018;180:158–164. doi: 10.1016/j.cognition.2018.06.015.
170. Williams LJ. Tunnel vision induced by a foveal load manipulation. Human Factors. 1985;27(2):221–227. doi: 10.1177/001872088502700209.
171. Williams LJ. Tunnel vision or general interference? Cognitive load and attentional bias are both important. The American Journal of Psychology. 1988;101(2):171. doi: 10.2307/1422833.
172. Williams LJ. Foveal load affects the functional field of view. Human Performance. 1989;2(1):1–28. doi: 10.1207/s15327043hup0201_1.
173. Williams M, Davids K. Declarative knowledge in sport: A by-product of experience or a characteristic of expertise? Journal of Sport and Exercise Psychology. 1995;17(3):259–275. doi: 10.1123/jsep.17.3.259.
174. Wolfe JM. Guided Search 6.0: An updated model of visual search. Psychonomic Bulletin & Review. 2021:1–33. doi: 10.3758/s13423-020-01859-9.
175. Wolfe B, Dobres J, Rosenholtz R, Reimer B. More than the Useful Field: Considering peripheral vision in driving. Applied Ergonomics. 2017;65:316–325. doi: 10.1016/j.apergo.2017.07.009.
176. Wolfe B, Kosovicheva A, Leib AY, Wood K, Whitney D. Foveal input is not required for perception of crowd facial expression. Journal of Vision. 2015;15(4):11. doi: 10.1167/15.4.11.
177. Wolfe B, Sawyer BD, Kosovicheva A, Reimer B, Rosenholtz R. Detection of brake lights while distracted: Separating peripheral vision from cognitive load. Attention, Perception, & Psychophysics. 2019;81(8):2798–2813. doi: 10.3758/s13414-019-01795-4.
178. Wolfe B, Sawyer BD, Rosenholtz R. Toward a theory of visual information acquisition in driving. Human Factors. 2020. Advance online publication. doi: 10.1177/0018720820939693.
179. Wolfe B, Whitney D. Facilitating recognition of crowded faces with presaccadic attention. Frontiers in Human Neuroscience. 2014;8:103. doi: 10.3389/fnhum.2014.00103.
180. Wolfe B, Whitney D. Saccadic remapping of object-selective information. Attention, Perception, & Psychophysics. 2015;77(7):2260–2269. doi: 10.3758/s13414-015-0944-z.
181. Wood K, Simons DJ. Processing without noticing in inattentional blindness: A replication of Moore and Egeth (1997) and Mack and Rock (1998). Attention, Perception, & Psychophysics. 2019;81(1):1–11. doi: 10.3758/s13414-018-1629-1.
182. Wu CC, Wolfe JM. Comparing eye movements during position tracking and identity tracking: No evidence for separate systems. Attention, Perception, & Psychophysics. 2018;80(2):453–460. doi: 10.3758/s13414-017-1447-x.
183. Yamanashi Leib A, Fischer J, Liu Y, Qiu S, Robertson L, Whitney D. Ensemble crowd perception: A viewpoint-invariant mechanism to represent average crowd identity. Journal of Vision. 2014;14(8):26. doi: 10.1167/14.8.26.
184. Yu C-S, Wang EM, Li W-C, Braithwaite G. Pilots' visual scan patterns and situation awareness in flight operations. Aviation, Space, and Environmental Medicine. 2014;85(7):708–714. doi: 10.3357/ASEM.3847.2014.
185. Zhang T, Chan AHS, Ba Y, Zhang W. Situational driving anger, driving performance and allocation of visual attention. Transportation Research Part F: Traffic Psychology and Behaviour. 2016;42:376–388. doi: 10.1016/j.trf.2015.05.008.
186. Zhao N, Chen W, Xuan Y, Mehler B, Reimer B, Fu X. Drivers' and non-drivers' performance in a change detection task with static driving scenes: Is there a benefit of experience? Ergonomics. 2014;57(7):998–1007. doi: 10.1080/00140139.2014.909952.
187. Ziv G. Gaze behavior and visual attention: A review of eye tracking studies in aviation. International Journal of Aviation Psychology. 2017;26(3–4):75–104. doi: 10.1080/10508414.2017.1313096.
188. Zwahlen HT. Conspicuity of suprathreshold reflective targets in a driver's peripheral visual field at night. Transportation Research Record. 1989;1213:35–46.


Data Availability Statement

Table 3 is provided as an Excel document on the Open Science Framework (https://osf.io/vea5r/?view_only=ba8597fef6514be68082d9e878fff5d2). The review was not preregistered.

