PLOS ONE. 2019 Mar 25;14(3):e0214281. doi: 10.1371/journal.pone.0214281

Traffic symbol recognition modulates bodily actions

Mayuko Iriguchi 1,*, Rumi Fujimura 1,#, Hiroki Koda 1,*,#, Nobuo Masataka 1
Editor: Hisao Nishijo
PMCID: PMC6433245  PMID: 30908546

Abstract

Traffic signals, i.e., iconic symbols conveying traffic rules, generally represent spatial or movement meanings, e.g., “Stop”, “Go”, “Bend warning”, or “No entry”; we visually perceive these symbols and produce appropriate bodily actions. Traffic signals are clearly thought to assist in producing bodily actions such as going forward or stopping, and the combination of symbolic recognition through visual perception and production of bodily actions could be one example of embodied cognition. However, to what extent our bodily actions are associated with the symbolic representations of commonly used traffic signals remains unknown. Here we experimentally investigated how traffic symbol recognition cognitively affects bodily action patterns, employing a simple stimulus-response task in which participants responded to traffic signs by either sliding or pushing down on a gamepad joystick. We found that when operating the joystick, participants’ slide reaction in response to the “Go” traffic symbol was significantly faster than their push reaction, while their response times to the “Stop” signal did not differ between sliding and pushing actions. These results suggest a possible association between certain action patterns and traffic symbol recognition; in particular, the “Go” symbol was congruent with a sliding action as a bodily response. Our findings may thus reveal an example of embodied cognition in the visual perception of traffic signals.

Introduction

Embodied cognition has been discussed widely during the last few decades in several fields, such as psychology, linguistics and philosophy. The concepts of embodied cognition appear to oppose traditional views of human cognition, which posit that the mind, as an information processor, does not depend on the physical body [1,2]. These concepts focus on the embodiment of sensory and motor functions in cognition, and on how the body modulates and shapes mental processing [3,4].

So far, many theoretical studies have discussed embodied cognition from various points of view. Cognitive linguistics theories, for example, argue that abstract concepts are metaphorically based on embodied knowledge [5,6]. Metaphors such as “good is up” and “life is a gamble” are grounded in spatial orientation and in structured experiences and activities, respectively, and such metaphors fundamentally shape how people think and understand [7]. Mental processing needs to be understood as involving the interaction between physical bodily activities and the environment [8]. From another perspective, Barsalou [9,10] argued that perceptual symbols are often modal and represented in the same way as they are perceived, and proposed a perceptual symbol system that integrates traditional theories with theories of embodied cognition.

Empirically, embodied cognition has been tested by examining how interactions between actions and cognition occur. For instance, we use gestures to describe specific meanings such as shape, placement and motion in communication, and these gestural body movements occur more frequently particularly when we express spatial concepts including directions, locations and motion in space (e.g., [11–13]). Some experiments showed that participants produced gestures more than twice as frequently when they spoke about spatial topics as when they spoke about verbal or non-spatial topics [11]. When participants were prohibited from using gestures, their speech became less fluent, and when gestures and the meanings of speech were not congruent, participants tended to produce more errors in conversation [14–17].

Finger counting, which clearly involves bodily actions, also conveys numerical concepts, and it supports people’s understanding of numbers, including arithmetical calculation [18,19]. In finger counting, the bodily actions are often spatially oriented along the horizontal (left/right) and vertical (top/bottom) axes, and these spatially grounded numerical axes are likely advantageous for numerical recognition (e.g., [20,21]). More fundamentally, visual perception is likely coupled with bodily action, and humans process the meanings of visual symbols or objects by responding with appropriate body movements [22–24]. A number of psychological experiments have found associations between visual stimuli, such as images of objects (e.g., vegetables and clothes) or colour patches, and the bodily actions made in response to them. The specific positions or symbolic concepts represented by visual stimuli can determine appropriate sensory modalities and promote bodily responses: congruent conditions between visual stimuli and response modality produce faster reaction times or fewer errors, while incongruent conditions produce the opposite [24–26]. In those experiments, participants reported concepts about an object (e.g., whether it was upright or inverted) or the colour of a patch (e.g., red or green) by pressing either a left or right button, and they showed significantly delayed reaction times or more errors when the concepts or colours of the visual stimuli were not spatially congruent with the buttons to which they had to react with their hands [23,24,27–29].

Pedestrian traffic signals, which we encounter daily, have underlying symbolic meanings, “Go” and “Stop”, and we perceive these visual symbols and produce relevant bodily responses. We must perceive the meanings of the symbols and respond to them bodily as precisely and rapidly as possible: this means that the symbols’ designs should function to promote appropriate bodily actions, making traffic symbols an ideal set of visual symbols for empirically testing embodied cognition. However, the possible cognition-action interactions invoked by traffic signals remain unclear.

Here we experimentally investigated the association between symbolic recognition and bodily action in light of the concepts of embodied cognition, examining the action patterns participants executed in response to traffic signals using a gamepad joystick. A joystick is a common interface between a computer device and a player, and it is used to simulate bodily actions, particularly in role-playing games in which a player becomes a character and produces actions corresponding to the recognition of symbols, conditions and environments in the game. We hypothesised, based on the concepts of embodied cognition, that congruence or incongruence between traffic signals and joystick reactions would influence reaction time.

Materials and methods

Ethics

All experiments were carried out in accordance with the Guidelines for Research in Human Participants, issued by the Human Research Ethics Committee of the Primate Research Institute, Kyoto University, and the experimental protocol was approved by the Committee (Permit No.2017-04). Before the experiments, we obtained written informed consent from all participants.

Participants

Twenty-six adult participants (age: mean ± SD, 31.0 ± 9.98 years; range, 22–55 years) took part in the experiments: 10 males (age: 26.9 ± 7.88 years; range, 22–47) and 16 females (age: 33.6 ± 10.51 years; range, 23–55). All participants were Japanese and right-handed, and none showed limited intellectual skills as tested by the Raven Coloured Progressive Matrices.

Stimuli

We used Japanese pedestrian traffic signals, which consisted of two types of symbol shape: “a man walking from the right to the left side” as a “Go” symbol, and “a standing man” as a “Stop” symbol, converted to black (Fig 1). In Japan, the Go signal always appears unidirectionally (walking from right to left), but we prepared Go signals in both directions, i.e., right-to-left and left-to-right, in order to examine whether an enhancement/interference effect of perception on horizontal action appears bidirectionally. All stimuli were converted to a size that fit within 250 × 250 pixels.
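The size conversion above amounts to fitting each image within a 250 × 250 pixel bounding box. The sketch below illustrates that resizing arithmetic under the assumption that aspect ratio is preserved; the function name is ours, and in practice an image library (e.g., Pillow's Image.thumbnail) would perform this step.

```python
# Minimal sketch of fitting image dimensions inside a 250 x 250 pixel box
# while preserving aspect ratio (an assumption about the conversion; the
# original paper does not describe the exact procedure).
def fit_in_box(width, height, box=250):
    scale = box / max(width, height)           # shrink the longer side to `box`
    return round(width * scale), round(height * scale)

print(fit_in_box(600, 300))  # -> (250, 125)
```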

Fig 1. Type of stimuli.


Stimuli in the experiments included three types: (a) Go-left, (b) Go-right and (c) Stop.

Apparatus

All experiments were controlled using a custom-made program written using OpenSesame software ver. 3.1.6 (Mathôt, 2010–2016) on a laptop computer (HP Pavilion dv6, Tokyo, Japan) connected to a USB-gamepad (Elecom, JC-FU2912FBK, Japan). During the experiments, we recorded the reactions of participants on the gamepad in response to the traffic signal types “Go” or “Stop”.

Procedure

Participants sat in front of the 16-inch screen of a laptop computer (resolution: 1366 × 768 pixels) and held the gamepad with both hands. The screen was approximately 70 cm away from the participant, and the estimated visual angle of the stimuli was 20 degrees. In a single trial, a fixation dot with an 8-pixel radius first appeared at the centre of the screen. After 0.5–1.5 seconds, the fixation dot was replaced with either a “Go” or “Stop” stimulus at the centre of the screen. Participants were required to respond to the stimulus by forward-sliding or pushing down on the right joystick of the gamepad with the right hand. We examined two action conditions: “Go-Slide-Stop-Push (GSSP)” and “Go-Push-Stop-Slide (GPSS)”. In the GSSP condition, the “Go” signal was to be answered by forward-sliding the joystick and the “Stop” signal by pushing down on the stick (Fig 2); in the GPSS condition, conversely, “Go” was to be answered by pushing down on the stick and “Stop” by sliding forward (Fig 2). After each reaction, the next trial began. The experiment started with a practice phase of 16 trials (2 stimulus types, Go or Stop, × 8 repetitions) for each of the GSSP and GPSS conditions, followed by the main phases, which comprised 144 trials in total. These trials were divided between the two action conditions, GSSP and GPSS, whose order was counterbalanced across participants; all participants completed both conditions. Each condition consisted of 72 trials arranged in 2 patterns, namely combinations of the following stimuli: left-to-right-walking Go (Go-left stimulus) and Stop (Stop stimulus), and right-to-left-walking Go (Go-right stimulus) and Stop. In each pattern, the Go (Go-left or Go-right) and Stop stimuli each appeared 18 times in random order. Reaction times were recorded throughout the experiments.
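The stimulus-response mapping and trial structure above can be summarised in code. The sketch below is our own illustration of the trial logic, not the authors' OpenSesame program; names such as `make_block` and `is_correct` are hypothetical.

```python
import random

# Mapping from (action condition, stimulus class) to the required joystick
# action: under GSSP, "Go" -> forward slide and "Stop" -> push down;
# under GPSS the mapping is reversed.
REQUIRED_ACTION = {
    ("GSSP", "Go"): "slide", ("GSSP", "Stop"): "push",
    ("GPSS", "Go"): "push",  ("GPSS", "Stop"): "slide",
}

def make_block(go_variant, n_each=18, seed=None):
    """One 36-trial pattern: one Go variant and Stop, 18 times each, shuffled."""
    trials = [go_variant] * n_each + ["Stop"] * n_each
    random.Random(seed).shuffle(trials)
    return trials

def is_correct(condition, stimulus, response):
    stim_class = "Stop" if stimulus == "Stop" else "Go"   # Go-left/Go-right both count as "Go"
    return REQUIRED_ACTION[(condition, stim_class)] == response

# The jittered fixation interval would be drawn as random.uniform(0.5, 1.5).
```

With two patterns per condition and two conditions, this yields the reported 2 × 18 × 2 × 2 = 144 main-phase trials per participant.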

Fig 2. Experimental procedure.


A fixation dot appeared, and after a 0.5–1.5 second interval it was replaced with one of two types of stimulus: either one of the pair Go-left and Stop, or Go-right and Stop. Each Go and Stop stimulus appeared 18 times in random order, and participants responded to each stimulus type with gamepad actions under either the GSSP (Go-Slide-Stop-Push) or GPSS (Go-Push-Stop-Slide) action condition. Each participant performed 144 trials: two stimulus types (Go or Stop) × 18 repetitions × two combinations (Go-left and Stop, or Go-right and Stop) × two action conditions (GSSP or GPSS).

Analysis

Reaction times were analysed with analysis of variance (ANOVA) based on linear mixed models in SPSS ver. 20. In the models, we set action type (GSSP or GPSS) and stimulus shape (Go-left, Go-right or Stop), together with their interaction term (action type × stimulus shape), as fixed effects, and participant ID as a random effect. When a significant interaction effect was observed, we examined the simple main effects as post-hoc comparisons, using the “Estimated Marginal Means” option in SPSS with Bonferroni correction. We computed the 95% confidence intervals of the estimated marginal mean differences between two comparison levels at each level of the other factor, and examined whether the two levels differed significantly.
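The analysis was run in SPSS, but an analogous linear mixed model can be sketched in Python with statsmodels (our substitution, not the authors' tool). The synthetic data frame below is a hypothetical stand-in for the S1 Dataset; with the real data, one would load the CSV into `df` instead.

```python
# Hedged sketch: a linear mixed model with fixed effects for action type,
# stimulus shape and their interaction, and a random intercept per participant,
# analogous to the SPSS analysis described above. Data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
participants = np.repeat(np.arange(26), 12)               # 26 participants, 12 rows each
action = np.tile(np.repeat(["GSSP", "GPSS"], 6), 26)      # two action conditions
stimulus = np.tile(["Go-left", "Go-right", "Stop"], 26 * 4)  # three stimulus shapes
rt = rng.normal(500, 60, size=participants.size)          # fake reaction times (ms)
df = pd.DataFrame({"rt": rt, "action": action,
                   "stimulus": stimulus, "participant": participants})

model = smf.mixedlm("rt ~ action * stimulus", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```

The fixed-effect part of this model has six coefficients: an intercept, one action contrast, two stimulus contrasts, and two interaction terms, matching the 2 × 3 factorial design.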

Results

In a two-way 2 × 3 ANOVA (action type × stimulus shape), we found a significant interaction effect between action type and stimulus shape (F(2, 120.937) = 46.469, p < 0.001, partial η2 = 0.090, Fig 3). Next, we performed post-hoc comparisons by computing the estimated marginal means. The reaction times for the Go-left and Go-right stimuli were significantly shorter for the sliding reaction than for the pushing reaction, whereas those for the Stop stimulus did not differ significantly between the sliding and pushing reactions (Fig 3; see Table 1 for statistical details). Likewise, when comparing the reaction times across stimulus types within each action condition, we found that the reaction times for the Stop stimulus were significantly longer than those for the Go-left and Go-right stimuli under GSSP, whereas the opposite held under GPSS (Fig 3; Table 1). We also calculated the accuracy rates for each stimulus (Go-left, Go-right and Stop): 98.9%, 99.4% and 97.3%, respectively, in GSSP, and 97.9%, 95.1% and 98.8%, respectively, in GPSS.

Fig 3. Average reaction time of participants (N = 26) according to the action performed on a gamepad joystick and stimulus shapes.


The graph (left half) shows the average reaction time for different stimuli (Go-left, Go-right and Stop) with the action of sliding for Go and pushing for Stop (GSSP), and the graph (right half) shows the average reaction time for the same stimuli with the action of pushing for Go and sliding for Stop (GPSS).

Table 1. Statistical results for post-hoc comparisons using the estimated marginal means.

| Stimulus type | Comparison (action type) | Mean difference | SE | p-value | 95% CI lower bound | 95% CI upper bound |
|---|---|---|---|---|---|---|
| Go-left | GSSP vs GPSS | -140.272 | 7.836 | < .001 | -155.635 | -124.908 |
| Go-left | GPSS vs GSSP | 140.272 | 7.836 | < .001 | 124.908 | 155.635 |
| Go-right | GSSP vs GPSS | -144.969 | 7.895 | < .001 | -160.448 | -129.489 |
| Go-right | GPSS vs GSSP | 144.969 | 7.895 | < .001 | 129.489 | 160.448 |
| Stop | GSSP vs GPSS | 3.318 | 5.582 | .552 | -7.626 | 14.262 |
| Stop | GPSS vs GSSP | -3.318 | 5.582 | .552 | -14.262 | 7.626 |

Post-hoc comparisons between stimulus type (Go-left, Go-right or Stop) and action type (GSSP or GPSS).
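As a consistency check, the confidence intervals in Table 1 can be approximately reproduced from the mean differences and standard errors, assuming a near-normal critical value of about 1.96 (the exact multiplier depends on the degrees of freedom SPSS used):

```python
# Reconstructing a 95% CI from a mean difference and its standard error.
# Approximate: the tabled bounds imply a critical value very close to 1.96.
def ci95(diff, se, crit=1.959964):
    return diff - crit * se, diff + crit * se

lower, upper = ci95(-140.272, 7.836)  # Go-left row, GSSP minus GPSS
print(round(lower, 3), round(upper, 3))  # close to the tabled -155.635 and -124.908
```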

Discussion

The experiment reported here clearly showed a faster reaction, particularly for the “Go” stimulus, in the GSSP condition, which we assume to be a “congruent” condition, and a slower reaction for the “Go” stimulus in the GPSS condition, which we assume to be an “incongruent” condition. That is, participants easily slid the stick for “Go”, whereas they had difficulty operating the stick in the reverse way, i.e., pushing down for the “Go” decision. For the “Stop” stimulus, neither a delay nor a shortening of reaction time was observed in either the GSSP or GPSS condition. These results suggest that cognitive-motor engagement occurs when recognising traffic signs, particularly those representing motion such as pedestrian walking, providing evidence of embodied cognition.

Traffic signals are familiar icons representing traffic rules; historically, they began in London in the late 19th century as actual physical movements of police officers’ arms and hands and the blowing of whistles [30], conveying the rules in order to avoid serious traffic accidents. To achieve their purpose, traffic signals require us to perceive their symbolic meanings quickly and react with appropriate body operations, and thus they should be designed to be as quickly and easily understandable as possible. In fact, misunderstanding of a signal’s meaning often leads to critical traffic accidents [31,32], underlining the importance of signal design. Recently, the influence of signal design on symbol recognition has been reported. Memory tasks with traffic signals revealed that some symbolic features enhance memorization of signal meanings, e.g., an advantage of signals made from Chinese characters for Chinese people [33–35]. Likewise, signal designs (e.g., arrow direction) influence drivers’ abilities when they recognise the signal meaning [36]. Some recent studies have revealed strong relationships between recognition of traffic signals and bodily actions. For example, traffic signals representing obligatory and prohibited actions effectively influence decision making and responses through appropriate mental processing of the signals’ meanings [37–39]. When recognizing traffic signs combining a direction (represented by an arrow) and an airport (represented by an airplane), participants detected the meaning most rapidly and precisely when the airplane and arrow were directionally congruent [36]. How we visually perceive and process the meanings of traffic signals in our cognition thus possibly affects actual bodily actions.

Gamepads and similar devices are familiar tools for computer games, and they are also used in many experiments, such as driving simulations examining drivers’ behaviour [36,40,41]. In computer games in Japan, the joystick of a gamepad is commonly used and easy for anyone to operate. A player acts as a character on the screen and moves the character’s body with the joystick, and forward sliding is often associated with forward movement of the character. Similarly, gamepads and similar devices have been used experimentally to examine traffic symbol recognition and drivers’ behaviour: recognition of airport direction signs combining arrow and airplane symbols was examined using gamepad actions, and that study revealed that congruence between signs and bodily actions can improve traffic signage [36].

As shown in our study, congruence between the perception of symbols and bodily action is fundamental for producing responses. In particular, recognition of a visual “Go” symbol elicited faster reaction times for a sliding action than for a pushing-down action. This is consistent with congruence effects found for other actions, e.g., gestures and finger counting. Gestures convey specific meanings in conversation, often associated with direction, location and motion in space, and the body movements used as gestures should match those meanings to facilitate understanding and communication [11–13]. Finger counting clearly reflects the spatial association of number magnitude (the SNARC effect), and these spatial concepts underlie the bodily action of finger counting, combined with eye gaze or changes in body direction during counting [42–44]. Bodily movements such as gestures and finger counting enable us to transmit information more effectively and enhance understanding when meanings and movements are congruent in our mind; these congruency effects have been observed in studies of both gestures and finger counting, and incongruent conditions can cause errors and decreases in fluency and understanding [14–18,45]. The results of our study indicate that the symbolic meaning of “Go” may be more associated with a sliding action than with a pushing action, that “Stop” may possibly be more associated with a pushing action than with a sliding action, and that congruency or incongruency between the recognition of a symbol and bodily actions can affect reaction times.

As with gestures and finger counting, visual perception, such as perception of symbols, is strongly associated with bodily actions, and incongruence between symbolic meanings and bodily responses can negatively affect responses, causing errors or delays in reaction time [24–26]. We observed that participants responded to a “Go” stimulus faster by a sliding action than by a pushing action, so the “Go” symbol might be congruent with a sliding action. At this stage, however, we cannot conclude whether the symbols enhanced the bodily action of sliding the joystick, inhibited that of pushing it, or both, because our experiments lacked a neutral condition. Previous studies have examined whether an enhancement effect or an inhibition effect accounts for variation in reaction time or accuracy in experimental tasks such as word translation and spatial orienting paradigms, asking which effect determines subjects’ responses [46–50]. For instance, in spatial attention tasks, a cue first appeared on the screen to fixate the participant’s gaze at the centre before the target stimulus appeared, after which the participant moved their eyes to the target [50]; such tasks used neutral-condition stimuli to separate the stimulus influence of “enhancement” from that of “inhibition”. Thus, participants in our experiments might have responded to symbols according to enhancement effects, inhibition effects, or both. In the near future, we should test which effect has more influence on bodily responses by introducing a neutral condition, enabling a deeper understanding of stimulus-action interaction.

Conclusions

Our results clearly suggested a common embodied cognition in visual perception shown as a cognition-action association between symbolic recognition of traffic signals, “Go” and “Stop”, and bodily actions of sliding and pushing of a joystick. Congruence between symbolic recognition and bodily action may enhance participants’ responses, and incongruence may produce the opposite effects. This is a fundamental idea for understanding embodied cognitive features of recognition of symbolic meanings and for future practices of traffic signal design.

Supporting information

S1 Dataset. Data of reaction times of participants.

(CSV)

Acknowledgments

We would like to thank all participants who joined the experiments. We also thank Elizabeth Nakajima for English proofreading.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

This research was funded by Japan Society for the Promotion of Science (JSPS) and Ministry of Education, Culture, Sports, Science and Technology (MEXT) KAKENHI (#18H03503 and #4903, JP17H06380, both awarded to HK). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Block N. The mind as the software of the brain In: Smith EE, Osherson DN, editors. Thinking. MA: MIT Press; 1995. pp. 377–425. [Google Scholar]
  • 2.Fodor JA, Pylyshyn ZW. Connectionism and cognitive architecture: a critical analysis. Cognition. 1988;28: 3–71. [DOI] [PubMed] [Google Scholar]
  • 3.Niedenthal PM, Barsalou LW, Winkielman P, Krauth-Gruber S, Ric F. Embodiment in attitudes, social perception, and emotion. Personal Soc Psychol Rev. 2005;9: 184–211. 10.1207/s15327957pspr0903_1 [DOI] [PubMed] [Google Scholar]
  • 4.Foglia L, Wilson RA. Embodied cognition. Embodied Cogn. 2013;4: 319–325. 10.4324/9780203850664 [DOI] [PubMed] [Google Scholar]
  • 5.Lakoff G, Johnson M. Metaphors we live by. Chicago: University of Chicago Press; 1980. [Google Scholar]
  • 6.Lakoff G, Johnson M. Philosophy in the Flesh: the embodied mind and its challenge to Western Thought. New York: Basic Books; 1999. [Google Scholar]
  • 7.Lakoff G, Johnson M. The metaphorical structure of the human conceptual system. Cogn Sci. 1980;4: 195–208. 10.1016/S0364-0213(80)80017-6 [DOI] [Google Scholar]
  • 8.Wilson M. Six views of embodied cognition. Psychon Bull Rev. 2002;9: 625–636. 10.3758/BF03196322 [DOI] [PubMed] [Google Scholar]
  • 9.Barsalou LW. Perceptual symbol systems. Behav Brain Sci. 1999;22: 577–660. 10.1017/S0140525X99532147 [DOI] [PubMed] [Google Scholar]
  • 10.Barsalou LW. Grounded Cognition. Annu Rev Psychol. 2008;59: 617–645. 10.1146/annurev.psych.59.103006.093639 [DOI] [PubMed] [Google Scholar]
  • 11.Alibali MW. Gesture in spatial cognition: expressing, communicating, and thinking about spatial information. Spat Cogn Comput. 2005;5: 307–331. [Google Scholar]
  • 12.Emmorey K, Tversky B, Taylor H. Using space to describe space: Perspective in speech, sign and gesture. Spat Cogn Comput. 2000;2: 157–180. [Google Scholar]
  • 13.Kita S, Özyürek A. What does cross-linguistic variation in semantic coordination of speech and gesture reveal?: Evidence for an interface representation of spatial thinking and speaking. J Mem Lang. 2003;48: 16–32. [Google Scholar]
  • 14.Emmorey K, Casey S. A comparison of spatial language in English and American Sign Language. Sign Lang Stud. 1995;88: 255–288. [Google Scholar]
  • 15.Emmorey K, Casey S. Gesture, thought, and spatial language. Gesture. 2001;1: 35–50. [Google Scholar]
  • 16.McNeill D. Hand and Mind: What gestures reveal about thought. Chicago: University of Chicago Press; 1992. [Google Scholar]
  • 17.Rauscher FH, Krauss RM, Chen Y. Gesture, speech, and lexical access: the role of lexical movements in speech production. Psychol Sci. 1996;7: 226–231. [Google Scholar]
  • 18.Di Luca S, Pesenti M. Finger numeral representations: More than just another symbolic code. Front Psychol. 2011;2: 1–3. 10.3389/fpsyg.2011.00001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Andres M, Olivier E, Badets A. Actions, Words, and Numbers: A Motor Contribution to Semantic Processing? Curr Dir Psychol Sci. 2008;17: 313–317. 10.1111/j.1467-8721.2008.00597.x [DOI] [Google Scholar]
  • 20.Dehaene S, Bossini S, Giraux P. The mental representation of parity and number magnitude. J Exp Psychol Gen. 1993;122: 371–396. [Google Scholar]
  • 21.Ito Y, Hatta T. Spatial structure of quantitative representation of numbers: Evidence from the SNARC effect. Mem Cogn. 2004;32: 662–673. 10.3758/BF03195857 [DOI] [PubMed] [Google Scholar]
  • 22.Gibson JJ. The Ecological Approach to Visual Perception. New York: Houghton Mifflin; 1979. [Google Scholar]
  • 23.Tucker M, Ellis R. On the relations between seen objects and components of potential actions. J Exp Psychol Hum Percept Perform. 1998;24: 830–846. [DOI] [PubMed] [Google Scholar]
  • 24.Tucker M, Ellis R. The potentiation of grasp types during visual object categorization. Vis cogn. 2001;8: 769–800. 10.1080/13506280042000144 [DOI] [Google Scholar]
  • 25.Pecher D, Zeelenberg R, Barsalou LW. Verifying different-modality properties for concepts produces switching costs. Psychol Sci. 2003;14: 119–124. 10.1111/1467-9280.t01-1-01429 [DOI] [PubMed] [Google Scholar]
  • 26.Pecher D, Zeelenberg R, Barsalou LW. Sensorimotor simulations underlie conceptual representations: Modality-specific effects of prior activation. Psychon Bull Rev. 2004;11: 164–167. 10.3758/BF03206477 [DOI] [PubMed] [Google Scholar]
  • 27.Proctor RW, Miles JD, Baroni G. Reaction time distribution analysis of spatial correspondence effects. Psychon Bull Rev. 2011;18: 242–266. 10.3758/s13423-011-0053-5 [DOI] [PubMed] [Google Scholar]
  • 28.Kubo-Kawai N, Kawai N. Elimination of the enhanced Simon effect for older adults in a three-choice situation: Ageing and the Simon effect in a go/no-go Simon task. Q J Exp Psychol. 2010;63: 452–464. 10.1080/17470210902990829 [DOI] [PubMed] [Google Scholar]
  • 29.Simon JR. Choice reaction time as a function of auditory S-R correspondence, age, and sex. Ergonomics. 1967;10: 659–664. [DOI] [PubMed] [Google Scholar]
  • 30.Helmer JR, Meth G, Young SD. Sustainable traffic signal development. ITE J (Institute of Transportation Engineers). 2015;85: 14–19. [Google Scholar]
  • 31.Massie DL, Campbell KL, Blower DF. Development of a collision typology for evaluation of collision avoidance strategies. Accid Anal Prev. 1993;25: 241–257. 10.1016/0001-4575(93)90019-S [DOI] [PubMed] [Google Scholar]
  • 32.Retting RA, Weinstein HB, Solomon MG. Analysis of motor-vehicle crashes at stop signs in four U.S. cities. J Safety Res. 2003;34: 485–489. 10.1016/j.jsr.2003.05.001 [DOI] [PubMed] [Google Scholar]
  • 33.Ng AWY, Chan AHS. The effects of driver factors and sign design features on the comprehensibility of traffic signs. J Safety Res. 2008;39: 321–328. 10.1016/j.jsr.2008.02.031 [DOI] [PubMed] [Google Scholar]
  • 34.Ng AWY, Chan AHS. Investigation of the effectiveness of traffic sign training in terms of training methods and sign characteristics. Traffic Inj Prev. 2011;12: 283–295. 10.1080/15389588.2011.556171 [DOI] [PubMed] [Google Scholar]
  • 35.Ou YK, Liu YC. Effects of sign design features and training on comprehension of traffic signs in Taiwanese and Vietnamese user groups. Int J Ind Ergon. Elsevier Ltd; 2012;42: 1–7. 10.1016/j.ergon.2011.08.009 [DOI] [Google Scholar]
  • 36.Di Stasi LL, Megías A, Cándido A, Maldonado A, Catena A. Congruent visual information improves traffic signage. Transp Res Part F Traffic Psychol Behav. Elsevier Ltd; 2012;15: 438–444. 10.1016/j.trf.2012.03.006 [DOI] [Google Scholar]
  • 37.Castro C, Moreno-Ríos S, Tornay F, Vargas C. Mental representations of obligatory and prohibitory traffic signs. Acta Psychol (Amst). 2008;129: 8–17. 10.1016/j.actpsy.2008.03.016 [DOI] [PubMed] [Google Scholar]
  • 38.Roca J, Castro C, Bueno M, Moreno-Ríos S. A driving-emulation task to study the integration of goals with obligatory and prohibitory traffic signs. Appl Ergon. Elsevier Ltd; 2012;43: 81–88. 10.1016/j.apergo.2011.03.010 [DOI] [PubMed] [Google Scholar]
  • 39.Hössinger R, Berger WJ. Comprehension of new instructions for car drivers in merging areas. Transp Res Part F Traffic Psychol Behav. 2012;15: 152–161. 10.1016/j.trf.2011.12.009 [DOI] [Google Scholar]
  • 40.Lenné MG, Rudin-Brown CM, Navarro J, Edquist J, Trotter M, Tomasevic N. Driver behaviour at rail level crossings: Responses to flashing lights, traffic signals and stop signs in simulated rural driving. Appl Ergon. Elsevier Ltd; 2011;42: 548–554. 10.1016/j.apergo.2010.08.011 [DOI] [PubMed] [Google Scholar]
  • 41.Casucci M, Marchitto M, Cacciabue PC. A numerical tool for reproducing driver behaviour: Experiments and predictive simulations. Appl Ergon. Elsevier Ltd; 2010;41: 198–210. 10.1016/j.apergo.2009.01.008 [DOI] [PubMed] [Google Scholar]
  • 42.Hartmann M, Mast FW, Fischer MH. Counting is a spatial process: evidence from eye movements. Psychol Res. 2016;80: 399–409. 10.1007/s00426-015-0722-5 [DOI] [PubMed] [Google Scholar]
  • 43.Anelli F, Lugli L, Baroni G, Borghi AM, Nicoletti R. Walking boosts your performance in making additions and subtractions. Front Psychol. 2014;5: 1–5. 10.3389/fpsyg.2014.00001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Shaki S, Fischer MH. Random walks on the mental number line. Exp Brain Res. 2014;232: 43–49. 10.1007/s00221-013-3718-7 [DOI] [PubMed] [Google Scholar]
  • 45.Fischer MH. Finger counting habits modulate spatial-numerical associations. Cortex. 2008;44: 386–392. 10.1016/j.cortex.2007.08.004 [DOI] [PubMed] [Google Scholar]
  • 46.Bloem I, La Heij W. Semantic facilitation and semantic interference in word translation: Implications for models of lexical access in language production. J Mem Lang. 2003;48: 468–488. 10.1016/S0749-596X(02)00503-X [DOI] [Google Scholar]
  • 47.Bourgeois A, Chica AB, Migliaccio R, de Schotten MT, Bartolomeo P. Cortical control of inhibition of return: Evidence from patients with inferior parietal damage and visual neglect. Neuropsychologia. Elsevier Ltd; 2012;50: 800–809. 10.1016/j.neuropsychologia.2012.01.014 [DOI] [PubMed] [Google Scholar]
  • 48.Bourgeois A, Chica AB, Valero-Cabré A, Bartolomeo P. Cortical control of inhibition of return: Causal evidence for task-dependent modulations by dorsal and ventral parietal regions. Cortex. 2013;49: 2229–2238. 10.1016/j.cortex.2012.10.017 [DOI] [PubMed] [Google Scholar]
  • 49.Chica AB, Martín-Arévalo E, Botta F, Lupiáñez J. The Spatial Orienting paradigm: How to design and interpret spatial attention experiments. Neurosci Biobehav Rev. 2014;40: 35–51. 10.1016/j.neubiorev.2014.01.002 [DOI] [PubMed] [Google Scholar]
  • 50.Taylor TL, Klein R. Visual and motor effects in inhibition of return. J Exp Psychol Hum Percept Perform. 2000;26: 1639–1656. [DOI] [PubMed] [Google Scholar]
