Abstract
In comparison with the high level of knowledge about vehicle dynamics which exists nowadays, the role of the driver in the driver–vehicle system is still relatively poorly understood. A large variety of driver models exist for various applications; however, few of them take account of the driver’s sensory dynamics, and those that do are limited in their scope and accuracy. A review of the literature has been carried out to consolidate information from previous studies which may be useful when incorporating human sensory systems into the design of a driver model. This includes information on sensory dynamics, delays, thresholds and integration of multiple sensory stimuli. This review should provide a basis for further study into sensory perception during driving.
Keywords: Sensory dynamics, Driver modelling, Perception thresholds, Sensory integration, Driver–vehicle dynamics
Introduction
The continued development of advanced driver assistance systems (ADAS) in road vehicles is resulting in increasingly complex interactions between driver and vehicle (Gordon and Lidberg 2015). However, the role of the human driver in controlling the vehicle is still poorly understood. Consequently the vehicle development process still relies heavily on subjective evaluation of prototype vehicles by test drivers, which is expensive and time consuming. By building a deeper understanding of the interactions between driver and vehicle, models can be developed to assist with the design and evaluation of vehicle components and systems. One feature of driver–vehicle control that has been neglected to date is the sensory perception of the driver. The aim of this paper is to review the role of human sensory systems in the driving task, with a view to improving the capability of mathematical models of the driver.
Driving a vehicle involves a wide range of information processing levels, from the high-level navigation task to the low-level control of vehicle speed and direction. The focus of this review is on the role of human sensory dynamics in the low-level control task. Donges (1978) considered the steering control task as the superposition of a target following task (feedforward control) and a disturbance rejection task (feedback control). Disturbances may act on the vehicle from sources such as wind gusts, uneven road surfaces and nonlinearities in the vehicle dynamics, or they may originate from the driver due to physiological noise sources, constraints and nonlinearities.
A simplified block diagram of the feedforward and feedback control of vehicle direction and speed is shown in Fig. 1. The driver previews the future road path using their visual system and then, using an internal model of the vehicle dynamics, determines target path and speed profiles and corresponding feedforward control actions (Timings and Cole 2013, 2014). Simultaneously, the driver senses the motion of the vehicle in relation to the target profiles and generates feedback control actions to reduce the effect of disturbances. The hypothesis presented in Fig. 1 assumes that feedback of vehicle motion is not used directly for generating the feedforward control action; however, the feedback loop is able to correct for any discrepancies introduced by imperfections in the driver’s feedforward control. It has been found that without visual feedback during lane change or obstacle avoidance manoeuvres drivers do not always initiate the return phase of the manoeuvre, failing to steer back towards the target path (Wallis et al. 2002; Cloete and Wallis 2009).
Modelling the driver mathematically has been the subject of research for many decades. Comprehensive reviews are provided by Macadam (2003) and Plöchl and Edelmann (2007). Recent research has focussed on the application of optimal control theory, using model predictive or linear quadratic controllers that are able to preview the future road path, as shown in Fig. 2, and calculate an optimal sequence of control actions (Macadam 1981; Sharp and Valtetsiotis 2001; Peng 2002; Cole et al. 2006). This approach has been extended to include neuromuscular dynamics (Pick and Cole 2007, 2008; Odhams and Cole 2009; Abbink et al. 2011; Cole 2012) and to the control of nonlinear vehicle dynamics (Ungoren and Peng 2005; Thommyppillai et al. 2009; Keen and Cole 2011). Feedforward and feedback control are usually assumed to share a common objective function. Timings and Cole (2014) synthesised independent feedforward and feedback controllers to examine in more detail the robustness of the driver’s control strategy to disturbances.
While driver steering control has a fairly well-defined objective, to follow a target line and stay within road boundaries, the motivation for the driver’s speed choice depends on the situation. In a normal driving situation drivers will balance factors such as safety, comfort, journey time and control effort (Prokop 2001; Odhams and Cole 2004). Drivers have been found to decrease their speed in corners to limit their lateral acceleration (Ritchie et al. 1968; Herrin and Neuhardt 1974; Reymond et al. 2001). Road width has also been found to affect speed choice, with drivers adjusting their speed to remain within lane boundaries (Bottoms 1983; Defazio et al. 1992). In contrast, racing drivers aim to maximise their lateral acceleration within the limits of the tyres in order to minimise lap time (Timings and Cole 2014; Lot and Dal Bianco 2015). In situations with heavy traffic, driver speed choice may also be dictated by the speed of other vehicles, with the driver aiming to maintain a safe distance behind the car in front (Boer 1999; Kondoh et al. 2008).
Despite these developments, most models assume the driver has full knowledge of the vehicle states, and no existing driver models appear to take full advantage of current understanding of human sensory dynamics. While this review is primarily focussed on driving of road vehicles, clear parallels can be drawn with research into pilots in the aerospace industry. Indeed, sensory dynamics have been considered in greater detail in this area, and many of the studies cited in this review have come from work carried out by aerospace engineers to investigate human perception during control tasks. In particular, models of sensory dynamics have been used in studies carried out in flight simulators to understand how sensory information is used during real and simulated flight (Pool et al. 2008; Ellerbroek et al. 2008; Nieuwenhuizen et al. 2013; Drop et al. 2013; Zaal et al. 2009a, c, 2010, 2012, 2013).
Driving is just one of many human sensorimotor tasks that involve perceiving stimuli in the surrounding environment and responding with a physical action. The neurophysiological processes involved in such tasks are shown in Fig. 3. A stimulus may excite various senses, which produce chemical signals characterised by the dynamics of the sensory receptors (explored in Sect. 2). Sensory signals are then transmitted through the nerves as electrical impulses caused by firing neurons, with the firing rate encoding a frequency-modulated signal (Carpenter 1984). Certain stimuli can elicit reflexive responses which bypass the brain by activating motor neurons emerging from the spinal cord (Carpenter 1984).
There are physical and biochemical limitations to the speed with which each of the processes shown in boxes in Fig. 3 can be carried out; therefore, time delays are introduced into the sensorimotor system. These delays are discussed further in Sect. 3. In addition, noise is introduced due to nonlinearities in the receptor and neuromuscular dynamics, errors in the brain’s internal models and spontaneous firing of neurons (Fernandez and Goldberg 1971). This means that humans are unable to measure stimuli with perfect accuracy or plan and execute an ideal response. It also results in thresholds below which stimuli cannot be perceived, as discussed in Sect. 4.
Once the sensory signals are received in the brain, they are processed in the sensory cortex in order to extract the information from the encoded signals transmitted through the nerves (Kandel et al. 2000). The information from the different senses is then integrated to form a single representation of the surrounding environment, as explained further in Sect. 5. Based on this, the physical response to the perceived stimuli is planned using internal models of the human body and the surrounding world (Wolpert and Ghahramani 2000). The signals required to activate the muscles are generated in the motor cortex and fine-tuned in the cerebellum using feedback from the sensory measurements (Kandel et al. 2000). Signals are then transmitted along motor neurons which activate muscle fibres, causing them to contract. The physical response is shaped by the dynamic properties of the activated muscles. In the context of driving, earlier studies have measured and modelled the neuromuscular dynamics of drivers’ arms holding a steering wheel (Pick and Cole 2007, 2008; Odhams and Cole 2009; Cole 2012) and legs actuating a gas pedal (Abbink et al. 2011).
An important feature of perception during driving tasks is that the stimuli perceived by the driver’s sensory systems arise from the motion of the vehicle, which is controlled by the driver. This means that the driver is involved in an active closed-loop perception and control task, as opposed to a passenger who is a passive observer (Flach 1990). The driver is able to anticipate future motion of the vehicle, allowing more accurate sensory integration as discussed in Sect. 5. Driving also involves many sensory stimuli being presented simultaneously in different axes and stimulating different sensors (multimodal) compared with sensory measurements which have been carried out in one axis to stimulate one sensor (unimodal). Care must be taken when relating results from investigations carried out in passive, unimodal conditions to models of active, multimodal control and perception. This is discussed in relation to time delays in Sect. 3 and sensory thresholds in Sect. 4.
The scope of this review is broad, and thus it is not possible to review every topic in great detail; each section could be extended significantly. However, the aim of the review is to give an overview of the key results from the literature, with particular focus on motivating and informing further development of driver models incorporating human sensory system dynamics. Both steering and speed control are considered concurrently, since in many cases the sensory mechanisms discussed are relevant for both control tasks. The main findings of the review are summarised and discussed in Sect. 6. The review extends considerably an earlier review by Bigler and Cole (2011).
Sensory dynamics
Various sensory systems are used by the driver to infer the state of the vehicle and its surroundings. The main sensory systems used in the control of vehicle speed and direction are:
Visual: The visual system is the only means the driver has of detecting the upcoming road geometry. The visual system can also sense the motion of the vehicle relative to the surrounding environment.
Vestibular: The vestibular organs are located within the inner ear, and they sense rotations and translations of the driver’s head.
Somatosensory: Somatosensors include a wide range of sensory organs which detect various states of the body, such as contact pressure, temperature, limb position and pain. They include proprioceptors which detect joint angles, muscle lengths and tensions and their derivatives.
The following subsections give an overview of the published literature on these three sensory systems. Other senses such as hearing may also play a role but will not be discussed in detail.
Visual system
Visual perception is the subject of significant research activity in psychology, neuroscience and biology. There is still much to understand about how a human interprets the neural signals received by the retina from a potentially complex three-dimensional visual scene containing objects that might be familiar or unfamiliar, and moving or stationary, with a moving or stationary observer. The various processes involved in visual perception are discussed in detail by Gibson (1950), Johansson (1975), Ullman (1979), Nakayama (1985), Lappe et al. (1999) and Raudies and Neumann (2012). Human visual perception is a complex, multi-layered process, and for the purpose of driver modelling it is not necessary or feasible to model all aspects. Therefore, the focus of this review is on the most relevant results towards modelling visual perception in a driving environment.
In the two-level model of vehicle control (Donges 1978), the visual system is used in both the feedback task and the feedforward task. The feedback task involves using the visual system in combination with the vestibular and somatosensory systems to perceive the motion of the driver and thus of the vehicle, which in turn is used to perform feedback control of the vehicle. In the feedforward task, the visual system views the geometry of the road ahead of the vehicle so that feedforward control inputs to the vehicle can be generated. Higher levels of the driving task, not considered in this review, involve the visual system in perceiving additional information such as motion of other vehicles and pedestrians.
Perception of self-motion (feedback)
Visually induced motion perception is typically caused by motion of the eyes relative to fixed surroundings, although illusory self-motion perception known as vection can also be induced by moving surroundings (Dichgans and Brandt 1978). Since vehicle motion is primarily planar, the role of the driver’s visual system in perceiving self-motion is mainly concerned with three axes: longitudinal and lateral translations, and yaw (heading) rotations.
Various mechanisms have been suggested for visual motion perception, such as ‘optic flow’ (Gibson 1950; Koenderink 1986; Lappe et al. 1999). This is the velocity field created as points in the visual scene ‘flow’ over the retina, along lines known as streamers. Optic flow patterns while driving on straight and curved roads are shown by the dashed lines in Fig. 4. For straight motion, the streamers all originate from a point directly in front of the observer known as the ‘focus of radial outflow’ (FRO). This can be used as a visual cue to control the vehicle’s heading direction (Gibson 1950), for example by aligning with the ‘vanishing point’ at the end of a straight road. For rotational motion, the streamers are curved and the FRO does not exist, although the point on the horizon directly in front of the observer may still be used as a visual cue to heading direction (Grunwald and Merhav 1976). However, Riemersma (1981) suggested that the FRO and heading direction are too crude to play a role in car driving. Multi-level models of perception of motion from optic flow have been developed (Grossberg et al. 2001; Mingolla 2003; Browning et al. 2009); however, these descriptions do not lead easily to a simple relationship between vehicle motion and visually perceived motion, as they are dependent on the characteristics of the surroundings.
Alternatively, it has been proposed that humans measure the rates of change of vectors between themselves and specific objects in the visual field (Gordon 1965; Zacharias et al. 1985). This allows drivers to calculate their ‘time-to-collision’ with objects, which can be particularly useful when following a leading vehicle (Kondoh et al. 2008). The distance and relative velocity of the objects can only be inferred with prior knowledge of the object’s size or by comparison between two visually similar environments (Gordon 1965; Bremmer and Lappe 1999). Road edges and centre line have also been identified as key visual features used by drivers (Gordon 1966; Riemersma 1981).
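As a simple illustration, the time-to-collision with a leading vehicle reduces to the gap divided by the closing speed. The minimal sketch below uses illustrative values, not data from the cited studies.

```python
# Minimal sketch: kinematic time-to-collision (TTC) with a leading vehicle,
# of the kind used in car-following studies (e.g. Kondoh et al. 2008).
# The numerical values are illustrative only.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """TTC = gap / closing speed; infinite if the gap is opening."""
    if closing_speed_mps <= 0.0:      # not closing in on the lead vehicle
        return float("inf")
    return gap_m / closing_speed_mps

# Example: 30 m behind a lead car, closing at 5 m/s -> 6 s to collision.
print(time_to_collision(30.0, 5.0))
```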
Because of the variety of mechanisms involved in visual perception, it is difficult to say what constitutes the ‘input’ to the visual system. Optic flow models would suggest that velocities are measured, although the FRO can be used to measure heading direction (yaw angle), and it is clearly possible to discriminate translational displacements with reference to stationary features such as road markers. Gordon (1965) used the unnatural appearance of the acceleration field to argue that accelerations and higher derivatives are not directly sensed by the visual system. The most appropriate inputs to the feedback component of the driver’s visual system therefore appear to be translational and angular velocities. Since displacements and angles can only be measured with respect to references such as road markers, they can be included within models of drivers’ feedforward visual perception.
It is not clear from the mechanisms involved in visual perception whether the perceived rotational and translational velocities depend on the frequency of the stimulus. One possible approach is simply to assume unity gains between the actual and perceived velocities. An alternative estimate of the frequency response of the visual system may be obtained from sensory threshold measurements (Soyka et al. 2011, 2012; see Sect. 4 for more information). Riemersma (1981) and Bigler (2013) both measured thresholds of visual perception of lateral and yaw velocities, superimposed on a constant longitudinal velocity. Both studies presented subjects with a typical driving scene, with Riemersma (1981) displaying edge lines for a straight road and Bigler (2013) displaying a more realistic rendering of a straight road bordered by trees. Riemersma (1981) found that lateral and yaw thresholds were independent of longitudinal speed. Bigler (2013) found thresholds for stimuli of different frequencies, and reanalysing these results using the model of Soyka et al. (2011, 2012) gives visual dynamics that can be described by a first-order low-pass filter:

$$H_{\mathrm{vis}}(s) = \frac{1}{1 + s/\omega_c} \tag{1}$$

taking lateral velocity and yaw angular velocity as inputs. The same cutoff frequency $\omega_c$ was found to fit the results for both sway and yaw motion. This low-pass characteristic was also seen by Riemersma (1981). In the absence of direct measurements of nervous responses to sensory stimulation, this model inferred from sensory threshold data can be used to give some insight into the function of the visual system. However, further research is needed to validate this approach.
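As an illustration, the minimal sketch below builds the low-pass filter of Eq. 1 and evaluates its gain at a few frequencies. The cutoff frequency `w_c = 5.0` rad/s is an assumed placeholder, not the value identified by Bigler (2013).

```python
# Minimal sketch of the first-order low-pass model of visual motion
# perception in Eq. 1. The cutoff frequency below is an illustrative
# assumption, not a figure from the cited studies.
import numpy as np
from scipy import signal

w_c = 5.0  # rad/s, assumed cutoff frequency

# H_vis(s) = 1 / (1 + s/w_c) = w_c / (s + w_c)
H_vis = signal.TransferFunction([w_c], [1.0, w_c])

w = np.array([0.1, 1.0, 10.0, 100.0])        # rad/s
_, mag_db, _ = signal.bode(H_vis, w)

# Gain is ~1 well below w_c and rolls off at -20 dB/decade above it, so
# perceived velocity tracks actual velocity only at low frequencies.
for wi, m in zip(w, 10 ** (mag_db / 20)):
    print(f"|H_vis| at {wi:5.1f} rad/s = {m:.3f}")
```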
Perception of road path geometry (feedforward)
One of the key characteristics of driving tasks is the ability of the driver to use their visual system to ‘preview’ the road ahead in order to carry out feedforward control. Studies have investigated the key features of road geometry which are perceived while driving, often using eye tracking instrumentation to investigate where the drivers look. Shinar et al. (1977) found a difference between straight roads, where drivers tend to focus near the FRO, and curved roads, where drivers scan the geometry of the curve. Many studies have found that drivers focus on the ‘tangent point’ on the inside of a bend, as shown in Fig. 4 (Land and Lee 1994; Boer 1996; Kandil et al. 2009, 2010). The angle between the current vehicle heading vector and the tangent point can be used to estimate the road curvature (Land and Lee 1994) and required steering angle (Kandil et al. 2009). Other studies have suggested drivers may look at a point on the predicted vehicle path, the ‘future path point’ (Land 1998) as shown in Fig. 4. There is no overwhelming evidence in favour of the tangent point over the future path point or other nearby points as a fixation point during driving (Mars 2008; Robertshaw and Wilkie 2008; Lappi et al. 2013).
Eye tracking studies have found that drivers tend to focus on a point around 1–2 s ahead of the vehicle on straight roads (Land and Lee 1994; Donges 1978), and that their gaze tends to move to an upcoming curve around 1 s before they steer in that direction (Chattington et al. 2007; Land and Tatler 2001). Drivers have also been found to make short ‘look-ahead fixations’, looking further along the road for short periods of time (Lehtonen et al. 2013). While eye tracking instrumentation is useful for determining the gaze direction of a driver, Land and Lee (1994) noted that it does not necessarily indicate where the driver is directing their attention, because the driver may be using their peripheral vision to gather information about road geometry away from the gaze point. Grunwald and Merhav (1976) and Land and Horwood (1995) both measured driver performance with only certain parts of the road visible and found that the full visual control task can be represented by two viewing points, one near to the driver and one further down the road. Land and Horwood (1995) found that performance was not degraded from the full visibility condition if drivers could see a near point 0.53 s ahead and a distant point 0.93 s ahead.
Steen et al. (2011) reviewed many studies which proposed one, two or multi-point preview models and concluded that a two-point preview model was the most realistic, with one point close to the driver and one more distant point. However, Sharp and Valtetsiotis (2001) used a shift register to formulate a multi-point preview controller using visual information taken from a single preview point, suggesting that a human driver in a moving vehicle could use memory to construct a multi-point image of the road geometry from data sensed at just one or two discrete points. The use of linear quadratic optimal control theory to calculate the gains on multi-point road path geometry ahead of the vehicle shows that the gains eventually tend to zero as the time ahead of the vehicle increases. This indicates that looking beyond a certain point might result in diminishing returns (Sharp and Valtetsiotis 2001; Cole et al. 2006), with the time ahead of the vehicle at which this occurs dependent on the dynamic properties of the vehicle and the driver, and the amount of control effort applied by the driver.
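To illustrate how preview gains can be computed and why they tend towards zero, the sketch below follows the shift-register formulation in spirit: road lateral-position samples ahead of the vehicle are appended to the state vector and a discrete-time LQR yields one gain per preview sample. The point-mass vehicle model, time step and weights are illustrative assumptions, not the models used by Sharp and Valtetsiotis (2001) or Cole et al. (2006).

```python
# Sketch of a shift-register preview LQR in the spirit of Sharp and
# Valtetsiotis (2001). Vehicle model and weights are illustrative only.
import numpy as np
from scipy.linalg import solve_discrete_are

dt, N = 0.05, 60                          # time step (s), preview samples

# Vehicle: lateral position y and velocity v; control u = lateral acceleration
Av = np.array([[1.0, dt], [0.0, 1.0]])
Bv = np.array([[0.5 * dt**2], [dt]])

# Road shift register: sample i at step k+1 takes the value of sample i+1 at k
D = np.eye(N + 1, k=1)

A = np.block([[Av, np.zeros((2, N + 1))],
              [np.zeros((N + 1, 2)), D]])
B = np.vstack([Bv, np.zeros((N + 1, 1))])

# Quadratic cost on path-following error (y - r0) and on control effort
c = np.zeros((1, N + 3))
c[0, 0], c[0, 2] = 1.0, -1.0
Q, R = c.T @ c, np.array([[0.1]])

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

gains = K[0, 2:]                          # one gain per previewed road sample
print(gains[:3])                          # gains on the nearest samples
print(gains[-3:])                         # gains tend to zero far ahead
```

Increasing `N` beyond the closed-loop settling horizon adds preview samples whose gains are negligible, consistent with the diminishing returns noted above.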
Vestibular system
There is some disagreement in the literature as to the relative importance of the vestibular system in nonvisual motion perception. Studies measuring thresholds of human motion perception in the dark often assume that the influence of the vestibular system is much larger than that of the somatosensors (Benson et al. 1986, 1989; Grabherr et al. 2008; Soyka et al. 2012, 2009, 2011; Kingma 2005). However, Gianna et al. (1996) found that perception thresholds for subjects with vestibular deficiencies were not significantly higher than for normal subjects, and Bronstein and Hood (1986) found that neck proprioception largely replaced vestibular function in vestibular deficient subjects for head rotations relative to the body. In contrast, Mallery et al. (2010) found that a subject with vestibular deficiencies had rotational velocity thresholds an order of magnitude higher than those of normal subjects and Valko et al. (2012) found that vestibular deficient subjects had significantly higher perception thresholds in four different motion axes. The relative importance of the vestibular and somatosensory systems may depend on the precise nature of the stimuli; however, it does appear that the vestibular system is an important source of information for drivers.
The vestibular system consists of two sets of organs located in the inner ear: the semicircular canals (SCCs) which sense rotational motion and the otoliths which sense translational motion (Kandel et al. 2000). Many studies have investigated the function of the vestibular system in primates and humans, either directly by measuring electrical signals in the brain or indirectly by measuring the vestibulo-ocular reflex (VOR), a reflexive eye movement which uses vestibular information to compensate for head movements.
Otoliths
The otoliths are formed from small granular particles contained in a gelatinous membrane which is in turn connected to sensory cells via hairs called cilia. When subjected to translational acceleration, the inertia forces on the otoliths deflect the cilia and excite the sensory cells (Kandel et al. 2000). Most mathematical models are based on empirical data from experiments carried out on humans and animals.
It is a natural extension of Einstein’s equivalence principle (Einstein 1907) that humans cannot distinguish between a translational acceleration and a change in orientation of the gravity vector. Young and Meiry (1968) developed a model for the otoliths relating the perceived specific force (the combination of inertial and gravitational accelerations) to the actual specific force. They proposed the transfer function:

$$\frac{\hat{f}(s)}{f(s)} = \frac{K\,(1 + \tau_a s)}{(1 + \tau_1 s)(1 + \tau_2 s)} \tag{2}$$

and identified values for its parameters, given in the first row of Table 1. With these values, the transfer function is essentially low pass but with a constant reduction in gain at very low frequencies.
Table 1. Parameters identified in various studies for the otolith transfer function (Eq. 2)
Fernandez and Goldberg (1976) measured the afferent firing rate (AFR) in the brains of squirrel monkeys subjected to accelerations at various frequencies and magnitudes. They developed a model of the otoliths containing a fractional exponent, which is difficult to implement practically. Therefore, Hosman (1996) proposed a simplified version in the same form as Eq. 2. Based on this and other research, Telban and Cardullo (2005) suggested parameters for a transfer function in the form of Eq. 2, relating the specific force input to a perceived specific force output. Soyka et al. (2011) used a signal-in-noise model to find a transfer function for the otoliths which optimised the fit to sensory threshold measurements (see Sect. 4 for more information). Suggested otolith parameters from these studies are summarised in the remaining rows of Table 1. The gains have been adjusted to give comparable outputs, since the scaling of the output signal is arbitrary. Bode plots of the otolith transfer function using the different parameters are compared in Fig. 5. For a driving task, the mid-range frequencies are the most important, and in this range the otoliths exhibit a roughly proportional response to acceleration. There are differences in the details of the frequency responses measured in different studies, which highlights the difficulty in achieving repeatable results when using different subjects, equipment and methodologies.
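As an illustration, the minimal sketch below builds the otolith transfer function of Eq. 2 and evaluates its gain. The parameter values used ($\tau_a = 13.2$ s, $\tau_1 = 5.33$ s, $\tau_2 = 0.66$ s) are those commonly cited for the Young and Meiry (1968) model; the unity gain $K$ is an arbitrary scaling, as discussed in the text.

```python
# Minimal sketch of the otolith model in Eq. 2 with parameter values
# commonly cited for Young and Meiry (1968); K = 1 is an arbitrary scaling.
import numpy as np
from scipy import signal

K, tau_a, tau_1, tau_2 = 1.0, 13.2, 5.33, 0.66

# H(s) = K (1 + tau_a s) / ((1 + tau_1 s)(1 + tau_2 s))
num = K * np.array([tau_a, 1.0])
den = np.convolve([tau_1, 1.0], [tau_2, 1.0])
H_oto = signal.TransferFunction(num, den)

w = np.array([0.001, 1.0, 100.0])            # rad/s
_, mag_db, _ = signal.bode(H_oto, w)
for wi, m in zip(w, 10 ** (mag_db / 20)):
    print(f"|H_oto| at {wi:g} rad/s = {m:.3f}")

# Mid-range gain is roughly constant and higher than the very-low-frequency
# gain (the 'constant reduction in gain at very low frequencies' in the
# text), with roll-off at high frequencies.
```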
Semicircular canals
The semicircular canals consist of sets of three fluid-filled elliptical cavities (Kandel et al. 2000). Angular motion about any axis causes the fluid to move within these cavities, causing deflections of small hair cells which excite sensory cells. Early models of the SCCs were based on considerations of the physical dynamics of the organs. Steinhausen (1933) used observations of the motion within the SCCs of fish to develop the ‘torsion-pendulum’ model. Young and Oman (1969) adapted this model to include additional ‘adaptation’ terms to match trends seen in experimental results. Fernandez and Goldberg (1971) added a lead term $(1 + \tau_L s)$, giving the transfer function:

$$\frac{\mathrm{AFR}(s)}{\alpha(s)} = \frac{K\,\tau_a s\,(1 + \tau_L s)}{(1 + \tau_a s)(1 + \tau_1 s)(1 + \tau_2 s)} \tag{3}$$

which relates the AFR to the angular acceleration $\alpha$ of the stimulus.
Fernandez and Goldberg (1971) measured the AFR of squirrel monkeys in response to angular accelerations of various amplitudes and frequencies. Hosman (1996) suggested alternative parameter values based on results from the literature, neglecting the adaptation time constant $\tau_a$ since it lies outside the bandwidth of interest for driving tasks. Telban and Cardullo (2005) reviewed several relevant studies and suggested slight modifications to the parameters of Eq. 3. They also proposed a simplified transfer function for modelling purposes, which links angular velocity inputs (hence the $s^2$ term in the numerator) to perceived angular velocity outputs:

$$\frac{\hat{\omega}(s)}{\omega(s)} = \frac{5.73 \times 80\, s^2}{(1 + 5.73 s)(1 + 80 s)} \tag{4}$$

(there is a typographical error in Telban and Cardullo (2005), with $s$ in the numerator instead of $s^2$). This transfer function neglects the short time constants $\tau_L$ and $\tau_2$, which affect frequencies well above the range of normal head movements. The key feature of the transfer function is the roll-off at low frequencies (below about $1/\tau_1 \approx 0.17$ rad/s), which means that there is zero steady-state response to rotation at constant angular velocity. In the same way as for the otoliths (Soyka et al. 2011), Soyka et al. (2012) chose time constants to optimise the fit to sensory threshold measurements using a signal-in-noise model. Similarly to Hosman (1996), they neglected the adaptation time constant $\tau_a$. SCC parameters found from various studies are summarised in Table 2. As with the otoliths, the gains have been adjusted to give comparable outputs. Bode plots of the SCC transfer function using the different parameters are compared in Fig. 6. At mid-range frequencies the transfer functions respond to angular acceleration as an integrator would, which is why Telban and Cardullo (2005) suggested that the SCCs measure angular velocity rather than acceleration. In contrast to the otolith dynamics, the agreement between the different studies is much higher. This may be because these studies based their work on similar models of the physical dynamics of the SCCs, although the transfer function found from sensory thresholds (Soyka et al. 2012) also agrees well with the others at mid-range frequencies.
Table 2. Parameters identified in various studies for the SCC transfer function (Eq. 3)
| Study | $K$ | $\tau_a$ (s) | $\tau_L$ (s) | $\tau_1$ (s) | $\tau_2$ (s) |
| --- | --- | --- | --- | --- | --- |
| Fernandez and Goldberg (1971) | 5.73 | 80 | 0.049 | 5.70 | 0.005 |
| Hosman (1996) | 5.73 | (80) | 0.110 | 5.90 | 0.005 |
| Telban and Cardullo (2005) | 5.73 | 80 | (0.060) | 5.73 | (0.005) |
| Soyka et al. (2012) | 2.2 | (–) | 0.014 | 2.16 | 0.005 |
Parameters which the authors have suggested may be neglected are given in brackets
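The mid-range integrator characteristic can be checked numerically. The sketch below builds Eq. 3 (multiplied by $s$ to give a velocity-to-velocity response) and the simplified Eq. 4, using the Telban and Cardullo (2005) row of Table 2, and confirms that both have near-unity gain at mid-range frequencies.

```python
# Sketch comparing the full SCC model of Eq. 3 with the simplified Eq. 4,
# using the Telban and Cardullo (2005) parameters from Table 2. Eq. 3 maps
# angular acceleration to AFR, so it is multiplied by s (one differentiation
# of the input) to compare with the velocity-to-velocity form of Eq. 4.
import numpy as np
from scipy import signal

K, tau_a, tau_L, tau_1, tau_2 = 5.73, 80.0, 0.060, 5.73, 0.005

# Eq. 3 times s: K tau_a s^2 (1 + tau_L s) /
#                ((1 + tau_a s)(1 + tau_1 s)(1 + tau_2 s))
num3 = K * tau_a * np.polymul([tau_L, 1.0], [1.0, 0.0, 0.0])
den3 = np.polymul(np.polymul([tau_a, 1.0], [tau_1, 1.0]), [tau_2, 1.0])

# Eq. 4: K tau_a s^2 / ((1 + tau_a s)(1 + tau_1 s))
num4 = K * tau_a * np.array([1.0, 0.0, 0.0])
den4 = np.polymul([tau_a, 1.0], [tau_1, 1.0])

w = np.logspace(-3, 3, 400)
_, m3, _ = signal.bode(signal.TransferFunction(num3, den3), w)
_, m4, _ = signal.bode(signal.TransferFunction(num4, den4), w)

i = np.argmin(abs(w - 1.0))      # a mid-range frequency, ~1 rad/s
print(f"Eq.3 x s gain at 1 rad/s: {10**(m3[i]/20):.3f}")   # ~1
print(f"Eq.4     gain at 1 rad/s: {10**(m4[i]/20):.3f}")   # ~1, models agree
```

Note that with $K \approx \tau_1$, as in every row of Table 2, the mid-range velocity gain $K/\tau_1$ is approximately unity, which is how the gains have been adjusted to give comparable outputs.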
Somatosensors
During driving, the information provided by the visual and vestibular systems is complemented by the response of various receptors of the somatosensory system (Kandel et al. 2000). A particular group of receptors provide proprioception, which is the sensing of joint angles and movements and muscle displacements and forces. These receptors are particularly important in allowing the driver to sense the angle and torque of the steering wheel, which can be used by experienced drivers to sense the characteristics of the contact between the tyre and the road. Proprioceptors are also used to sense the displacements and forces of the foot pedals. The following subsections discuss the properties of the muscle spindles, which measure muscle displacement, and the Golgi tendon organs, which measure muscle force. Other somatosensors which may play a role are skin receptors and joint receptors which give information on touch and joint angle (Collins et al. 2005; Proske and Gandevia 2009), and graviceptors which respond to the motion of fluid within the body (Vaitl et al. 2002). While these somatosensors may give the driver useful information, such as the contact forces between the body and the seat, the nature of these stimuli means they are difficult to measure and quantify, and as such the existing literature does not lend itself to application within driver models.
Muscle spindles
Muscle spindles are sensors which detect the length and rate of change of length of the muscles. They produce two separate signals, one dependent on muscle velocity and length (type Ia afferent) and one dependent on muscle length only (type II afferent) (Kandel et al. 2000). An empirical linear model of the muscle spindle response, based on measurements taken in cats, was formulated by Poppele and Bowman (1970), with the Ia and II afferent responses to muscle displacements given by:

$$H_{\mathrm{Ia}}(s) = K_{\mathrm{Ia}}\,\frac{(s + 0.44)(s + 11.3)(s + 44)}{(s + 0.04)(s + 0.816)} \tag{5}$$

$$H_{\mathrm{II}}(s) = K_{\mathrm{II}}\,\frac{(s + 0.44)(s + 11.3)}{(s + 0.04)(s + 0.816)} \tag{6}$$
More complicated nonlinear models have also been developed which can predict the afferent responses accurately under a wide variety of conditions (Maltenfort and Burke 2003; Mileusnic et al. 2006).
Golgi tendon organs
Golgi tendon organs (GTOs) respond to the forces in the muscles. They share a nerve with the Ia afferent response of the muscle spindles, giving a response known as a type Ib afferent (Kandel et al. 2000). A linear model of the GTOs was first proposed by Houk and Simon (1967), again based on measurements in cats. Their model was stated as a transfer function between muscle force and Ib afferent response by Prochazka (1999):

$$H_{\mathrm{Ib}}(s) = K_{\mathrm{Ib}}\,\frac{(s + 0.15)(s + 1.5)(s + 16)}{(s + 0.2)(s + 2)(s + 37)} \tag{7}$$
A nonlinear model of the GTOs has also been developed (Mileusnic and Loeb 2006) and has been found to describe the static and dynamic properties of the GTOs accurately.
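The three linear somatosensor models of Eqs. 5–7 can be compared directly in the frequency domain, as in the minimal sketch below; the unity gains are arbitrary scalings of the afferent firing rates.

```python
# Sketch of the linear somatosensor models in Eqs. 5-7: muscle spindle Ia
# and II afferents (Poppele and Bowman 1970) and the Golgi tendon organ Ib
# afferent (Houk and Simon 1967, as stated by Prochazka 1999). Unity gains
# are arbitrary scalings.
import numpy as np
from scipy import signal

H_Ia = signal.ZerosPolesGain([-0.44, -11.3, -44.0], [-0.04, -0.816], 1.0)
H_II = signal.ZerosPolesGain([-0.44, -11.3], [-0.04, -0.816], 1.0)
H_Ib = signal.ZerosPolesGain([-0.15, -1.5, -16.0], [-0.2, -2.0, -37.0], 1.0)

w = np.logspace(-1, 2, 4)            # 0.1, 1, 10, 100 rad/s
for name, H in (("Ia", H_Ia), ("II", H_II), ("Ib", H_Ib)):
    _, mag_db, _ = signal.bode(H, w)
    print(name, np.round(10 ** (mag_db / 20), 2))

# Above ~10 rad/s the Ia gain grows with frequency, reflecting sensitivity
# to the rate of change of muscle length, while the II and Ib responses tend
# to constant gains at high frequency.
```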
Time delays
As shown in Fig. 3, there are various ways in which delays are introduced between sensory stimuli being applied to a driver and the driver’s control response being measured. Delay sources include receptor dynamics, nerve conduction, neural processing and neuromuscular dynamics. Various techniques have been used in the literature to measure delays in human response to sensory stimulation. The simplest of these is to apply a stimulus and measure the time taken for a physical response (such as pressing a button) to be recorded. Some studies have used more sophisticated methods of applying stimuli, such as galvanic vestibular stimulation (GVS) which bypasses the vestibular organs by applying an electrical stimulus directly to the nerves (Fitzpatrick and Day 2004). Other methods have been used to detect responses at other points in the process, such as measuring the VOR to identify the reflexive delay, using magnetoencephalography (MEG, Hämäläinen et al. 1993) or electroencephalography (EEG) to measure electrical impulses within the brain or using electromyography (EMG) to record electrical activity in the muscles.
When interpreting sensory time delays measured in different studies using different techniques, it is important to consider which of the delay components shown in Fig. 3 are included in the measurement in each case. The aim of this section is to use results from the literature to estimate the total delay between stimulus and response for each sensory system. However, it can be difficult to separate the effects of pure time delays from lags due to the dynamics of the sensors and muscles and the time taken for signals to rise above noise levels (Soyka et al. 2013). Nevertheless, results from the literature can be used to find an approximate estimate of the order of magnitude of time delays in human sensory systems.
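The difficulty of separating pure delays from lags can be illustrated numerically: over the low-frequency range relevant to driving, a pure delay and a first-order lag with the same time constant produce similar phase loss. The 0.1 s value in the sketch below is illustrative only, not a measurement from any cited study.

```python
# Sketch of why pure time delays are hard to separate from sensory lags
# (cf. Soyka et al. 2013): over a limited bandwidth, a pure delay and a
# first-order lag produce similar phase loss. Values are illustrative.
import numpy as np

tau = 0.1                              # seconds, assumed delay / lag constant
w = np.array([0.5, 1.0, 2.0, 5.0])     # rad/s, driving-relevant frequencies

phase_delay = -w * tau                 # pure delay: e^(-s tau)
phase_lag = -np.arctan(w * tau)        # first-order lag: 1/(1 + tau s)

for wi, pd, pl in zip(w, phase_delay, phase_lag):
    print(f"{wi:4.1f} rad/s: delay {np.degrees(pd):6.1f} deg, "
          f"lag {np.degrees(pl):6.1f} deg")

# The two only separate well above 1/tau rad/s, where measurement noise
# often dominates.
```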
EMG has been used to measure the response of the muscle spindles to applied muscle stretches, finding delays of 25–30 ms for the Ia afferent and 40 ms for the II afferent (Matthews 1984). Bigler (2013) combined these with measured nerve conduction delays (Trojaborg and Sindrup 1969; Kandel et al. 2000) to give delays of 34 ms and 48 ms for the Ia and II afferents. As the Ib afferent response of the GTOs shares the same nerve as the Ia muscle spindle response, the time delay for the Ib afferent may be the same as the Ia muscle spindle response. However, these values do not include any neural processing time, so the actual sensor delays are likely to be larger.
Reaction times for drivers’ responses to simulated wind gusts have been measured in a driving simulator (Wierwille et al. 1983). Mean delays of 0.56 s without motion feedback and 0.44 s with motion feedback were found. These measurements encompass the complete process between stimulus application and physical response shown in Fig. 3, including all delays, lags and noise. Therefore, they can be considered as upper bounds for the delays in the visual system and combined visual–vestibular systems during driving. MEG has been used to record neural responses to visual stimuli and delays of 140–190 ms have been found (Kawakami et al. 2002; Lam et al. 2000), although it is unclear how much neural processing is carried out before and after this response is measured. Vestibular reflex delays have been measured by actively stimulating vestibular nerves using GVS and measuring the latency to the onset of the VOR (Aw et al. 2006; Tabak et al. 1997). Delays of 5–9 ms have been found, showing that the conduction of vestibular reflex signals is very fast.
There is a growing body of evidence, reviewed by Barnett-Cowan (2013), that despite the very fast conduction of vestibular reflex signals, vestibular processing can take much longer than the processing of other sensory signals. Vestibular delays have been found to be significantly longer than visual delays when measuring brain responses using EEG (Barnett-Cowan et al. 2010) and when measuring overall reaction times (Barnett-Cowan and Harris 2009). Barnett-Cowan et al. (2010) measured impulses in the brain 100 ms and 200 ms after visual and vestibular stimuli, respectively, with a further 135 ms until a button was pressed in both cases. This gives visual and vestibular delays of 235 ms and 335 ms; however, Barnett-Cowan (2013) suggested that these delays may include the time taken for the stimuli to rise above threshold levels (as modelled by Soyka et al. 2013) so they may be overestimates.
The visual and vestibular delays measured by Barnett-Cowan et al. (2010) are significantly lower than those found in a driving simulator by Wierwille et al. (1983). Furthermore, Barnett-Cowan et al. (2010) measured larger vestibular delays than visual delays, whereas Wierwille et al. (1983) found that adding vestibular stimuli significantly reduced the overall delay. This may indicate that sensory delays are dependent on the conditions in which the stimuli are applied. Delays due to nerve conduction and sensory and neuromuscular dynamics are a result of biochemical processes which are unlikely to depend significantly on the precise nature of the task carried out. However, it is likely that neural processing time is affected by the complexity of the task and the presence of distracting information and stimuli. Studies have investigated the intermittent nature of cognitive processing (Gawthrop et al. 2011; Johns and Cole 2015), which may play a part in increasing reaction times with increased mental load.
Rather than passively responding to stimuli as in many of these studies, drivers actively control the motion of the vehicle. It is difficult to measure time delays during an active control task, as response times are affected by the closed-loop dynamics. Some insight can be gained by looking at studies which have identified visual and vestibular delays during closed-loop pilot control tasks (Ellerbroek et al. 2008; Nieuwenhuizen et al. 2013; Zaal et al. 2009c, a, 2012, 2013). In general, vestibular delays have been found to be lower than visual delays, with vestibular delays of around 0.05–0.23 s and visual delays of around 0.18–0.32 s. These values seem consistent with the values measured in passive conditions; however, due to the large variability in measurements it is difficult to say whether delays are longer in active or passive conditions. Delays have been found to increase in the presence of additional stimuli (Zaal et al. 2009a) and in real flight compared with a simulator (Zaal et al. 2012). This indicates that perceptual delays are higher during multimodal conditions.
Perception thresholds
Due to limits of human sensory organs and noise caused by spontaneous neuron firing, sensory systems have thresholds below which stimuli cannot be perceived. Perception thresholds are defined as the smallest stimulus which can be detected, and these are commonly measured by asking subjects to distinguish something about the stimulus, such as its direction. In reality, these thresholds are generally not precise, but a smooth transition from 0 to 100 % probability of detection over a range of values. This cumulative probability distribution is known as a ‘psychometric function’ (Boring 1917) and is often modelled as a cumulative normal distribution. Variations on the ‘up–down’ method (Levitt 1970) are commonly used to measure perception thresholds, and depending on the method used the thresholds measured correspond to different probabilities of detection, generally between 65 and 80 %.
The ‘just noticeable difference’ (JND) is defined as the smallest change in amplitude from a reference stimulus which is required before the difference between the two stimuli is noticed. From experiments on the perception of lifted masses, Weber (1834) found that the JND in mass was proportional to the reference mass. This result has been found to be applicable for many perceptual systems and has become known as ‘Weber’s law’ with the constant of proportionality known as the ‘Weber fraction’. Figure 7 shows how the JND varies with stimulus intensity for a stimulus following Weber’s law.
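The sketch below illustrates these two ideas: a cumulative-normal psychometric function with a 75 % detection point, and JNDs growing in proportion to the reference amplitude. The parameter values are illustrative, not taken from the cited studies.

```python
# Minimal sketch of a cumulative-normal psychometric function and Weber's
# law. The 75% detection point and 0.1 Weber fraction are illustrative.
import numpy as np
from scipy.stats import norm

def detection_probability(stimulus, mu, sigma):
    """Psychometric function: probability of detecting a given amplitude."""
    return norm.cdf(stimulus, loc=mu, scale=sigma)

mu, sigma = 0.05, 0.015              # assumed threshold parameters
p75 = norm.ppf(0.75, loc=mu, scale=sigma)
print(f"75% detection threshold: {p75:.3f}")

# Weber's law: the JND grows in proportion to the reference amplitude.
weber_fraction = 0.10
for reference in (0.5, 1.0, 2.0):
    print(f"reference {reference:.1f} -> JND {weber_fraction * reference:.2f}")
```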
Many of the published measurements of perception thresholds were carried out under passive, unimodal conditions, meaning that the test subjects were exposed only to the one stimulus of interest and they did not perform any task other than perceiving the stimulus. However, during driving multiple senses are being stimulated simultaneously in different axes, and the driver is carrying out an active control task. Groen et al. (2006) defined the ‘indifference threshold’ as the threshold for perception of a stimulus in the presence of other congruent or incongruent stimuli. JNDs are a special case of indifference thresholds, when the background stimulus is in the same axis and modality as the stimulus which is being detected. Another special case of the indifference threshold is for congruent stimuli from two different sensory modalities (e.g. visual and vestibular systems), where the indifference threshold marks out a ‘coherence zone’ of stimuli which are perceived as consistent with each other.
Threshold models
The simplest model of sensory thresholds is a ‘dead zone’ where the perceived amplitude is zero. There are two possible methods for modelling this, as shown in Fig. 8. Method 2 is the most applicable of these, as method 1 implies that the perceived amplitude would be smaller than the actual amplitude, even above the perception threshold. The dead zone model is useful for simplicity; however, it assumes that the psychometric function is a step function, and it cannot be used directly to model JNDs.
Recent studies have suggested that sensory thresholds arise primarily as a result of noise in the sensory channels and the brain. Soyka et al. (2011, 2012) developed models of translational and rotational motion perception thresholds based on additive noise (AN) applied to the outputs of the otolith and SCC transfer functions. The perception thresholds were found as the minimum stimulus amplitude required for the output to exceed the noise level. Both studies found good fits to experimental results, although the transfer functions had to be adjusted slightly from those found in the literature (see Sect. 2). This model predicts the frequency dependence of perception thresholds and is valid for arbitrary motion inputs rather than solely sinusoidal motion. A similar principle was used by Bigler (2013) to model JNDs as well as perception thresholds, by adding signal-dependent noise (SDN) as well as AN to the output of the sensor transfer function (Todorov 2005). This sensor model is shown in Fig. 9.
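In the sinusoidal steady state, this signal-in-noise idea reduces to a simple calculation: the predicted threshold at each frequency is the noise level divided by the gain of the sensory transfer function, so thresholds rise where the sensor gain falls. The sketch below illustrates this with an assumed first-order sensor and noise level; it is a simplification of the arbitrary-motion formulation of Soyka et al. (2011, 2012), not their full model.

```python
# Sketch of the signal-in-noise threshold model (after Soyka et al. 2011,
# 2012; Fig. 9): a sinusoid is detected once the peak output of the sensory
# transfer function exceeds the additive noise level. The transfer function
# and noise amplitude are illustrative assumptions.
import numpy as np
from scipy import signal

H = signal.TransferFunction([1.0], [0.2, 1.0])   # assumed sensor: 1/(1+0.2s)
noise_level = 0.03                               # in filtered-output units

w = np.logspace(-1, 2, 5)                        # stimulus frequencies, rad/s
_, mag_db, _ = signal.bode(H, w)
gain = 10 ** (mag_db / 20)

# Predicted threshold: smallest input amplitude whose filtered peak output
# just reaches the noise level.
threshold = noise_level / gain
for wi, t in zip(w, threshold):
    print(f"{wi:7.2f} rad/s -> predicted threshold {t:.4f}")
```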
Passive threshold measurements
Thresholds and JNDs have been measured in passive conditions for a variety of stimuli. Soyka et al. (2011, 2012) showed that sensory thresholds could be predicted by finding when the output of the sensory transfer function rises above a specific noise amplitude; therefore, this model can be used in reverse to infer noise amplitudes from sensory threshold measurements. In the following subsections, noise amplitudes are found in this way for the different senses using sensory threshold measurements from the literature. These measurements have all been taken under passive unimodal conditions; therefore, since thresholds have been found to increase under active or multimodal conditions (see Sect. 4.3) the noise amplitudes found in this section can be considered to be lower bounds. For each sensory system, the signal-in-noise model of Soyka et al. (2011, 2012) has been used to identify the additive noise amplitudes using two different transfer functions: (i) a published sensor transfer function from considerations of the sensory dynamics and (ii) a sensor transfer function optimised to fit threshold data. It is unclear which of the two transfer functions is more appropriate for driver modelling. The parameters derived from sensory threshold measurements may describe the behaviour at low amplitudes better; however, they may not completely match the dynamic behaviour of the sensory system. Noise amplitudes are given in units with a * symbol at the end, to indicate that the noise is added to the stimuli filtered by the sensory transfer functions.
Visual thresholds
Various studies have measured perception thresholds and JNDs for the visual perception of self-motion. A difficulty in interpreting these results with any certainty is that they may well be dependent on the characteristics of the visual scene, such as the relative motion of stationary reference objects in the visual field, so it is not clear how generally applicable the results are. However, it may still be possible to find some useful information about the performance limits of the visual system.
A driving simulator display was used by Bigler (2013) to measure yaw angle and lateral displacement thresholds. The display was not calibrated to give full-scale visual feedback so the absolute values of the measured thresholds may not be at the correct scale; however, the frequency response should not depend on the display scaling. The results are shown in Fig. 10. The visual transfer function given in Eq. 1 was used with the model of Soyka et al. (2011, 2012) to give predicted thresholds, shown by the solid lines in Fig. 10. The model fits the thresholds very well, which is not surprising considering that the visual transfer function was found by fitting parameters to these results. The additive noise levels found are 0.0011 rad/s* for the yaw angular velocity and 0.032 m/s* for the lateral velocity.
A fit to the data was also found for a simple model of the visual system dynamics, with unity transfer functions between actual and perceived yaw and sway velocities. The fit using this model is shown by the dotted lines in Fig. 10, and the noise values found were 0.0013 rad/s* for the yaw angular velocity and 0.035 m/s* for the sway velocity. Visual JNDs have been measured for a range of yaw velocities, and Weber fractions of 7 % (de Bruyn and Orban 1988), 10 % (dos Santos Buinhas et al. 2013) and 11 % (Nesti et al. 2014b) have been found. No studies have been found which measure visual JNDs for lateral motion.
A few studies have investigated the limits of visual perception of motion in the longitudinal direction. Reinterpretation of the data collected by Bremmer and Lappe (1999) gives a JND in displacement in the longitudinal direction of 450 mm, with a reference displacement of 4 m. This gives a Weber fraction of 10 %; however, extrapolating from measurements taken for this relatively short displacement of 4 m may be inaccurate. Monen and Brenner (1994) determined the smallest step increase in forward velocity necessary for the difference to be perceived within half a second and found a large Weber fraction of around 50 %.
Thresholds of visual perception involved in feedforward control have not been measured explicitly. Authié and Mestre (2012) measured JNDs in path curvature, finding a Weber fraction of approximately 11 %. Bigler (2013) used the results of Legge and Campbell (1981), who found the angular resolution of the retina to be around 1.5 arc min, to calculate additive and multiplicative noise variances for visual perception of road path geometry ahead of the vehicle. However, these results were found by asking subjects to indicate when they could detect a change in position of a small dot, which is likely to be significantly easier than picking out the full road geometry from a complicated visual scene.
Otolith thresholds
Perception thresholds have been measured extensively for translational accelerations in the horizontal plane. Measurements have been carried out in the longitudinal (X) and lateral (Y) directions, and the thresholds have been seen to be similar in both directions (Benson et al. 1986); therefore, they are considered together. Thresholds have also been measured in the vertical (Z) direction (Nesti et al. 2014a); however, this is not so relevant for the car driver’s control task.
The ‘up–down’ method (Levitt 1970) was used to measure thresholds in several studies, with participants being subjected to sinusoidal stimuli with amplitudes which changed for each trial (Benson et al. 1986; Kingma 2005; Soyka et al. 2009, 2011; Hosman and Van Der Vaart 1978; Heerspink et al. 2005). Other studies used gradually increasing or decreasing motion amplitudes and asked subjects to indicate when they started or stopped perceiving motion (Hosman and Van Der Vaart 1978; Heerspink et al. 2005). The thresholds for decreasing amplitudes were found to be lower than the thresholds for increasing amplitudes. It was thought that this was because the subjects were already ‘tuned in’ to the signal so were able to pick it out from the noise more easily. In all of these studies, the subjects were moved in only one axis at a time while seated in the dark, so they were focused on the acceleration stimulus without any other distractions.
Thresholds for the discrimination of the direction of sinusoidal accelerations in the horizontal plane from the studies using the up–down method are shown in Fig. 11. It is clear that there is a large variability in results between different studies and even within each study, indicating that perception thresholds are sensitive to differences in experimental methods and participants.
Predicted thresholds are also shown in Fig. 11, found using the signal-in-noise model of Soyka et al. (2011). The transfer function given in Eq. 2 was used with two different sets of parameters from Table 1. The dotted line shows the results using parameters found by Telban and Cardullo (2005) from the dynamics of the otoliths and measurements of brain responses, and the solid line shows the results using parameters optimised by Soyka et al. (2011) to fit the measured thresholds. The threshold model was found using the results of Soyka et al. (2011) only, whereas the noise level for the ‘dynamics’ transfer function was optimised to fit the whole data set. The noise levels found were 0.038 m/s* for the ‘dynamics’ transfer function and 0.015 m/s* for the ‘thresholds’ transfer function. The transfer function found from the threshold measurements fits the results much better than the transfer function found from the sensory dynamics, as the corner frequency at which the thresholds plateau is too low for the ‘dynamics’ transfer function.
Naseri and Grant (2012) measured JND values for sinusoidal accelerations at 0.4 and 0.6 Hz with varying amplitudes. The results were found to fit Weber’s law well, although a dependence on frequency was also seen. A Weber fraction of 5 % was found for the measurements taken at 0.4 Hz, whereas a value of 2 % was found for the measurements taken at 0.6 Hz.
In interpreting the results of experiments which measure thresholds of whole body motion, the possibility of multimodal stimuli should be considered. For example, in the case of sinusoidal angular velocity imposed on the test subject, the semicircular canals and various somatosensors may be stimulated simultaneously. Multimodal thresholds and sensory integration are discussed in Sects. 4.3 and 5.
Semicircular canal thresholds
Various studies have measured thresholds for perception of angular velocity, using either the up–down method (Benson et al. 1989; Grabherr et al. 2008; Soyka et al. 2012) or by gradually increasing or decreasing amplitudes (Hosman and Van Der Vaart 1978; Heerspink et al. 2005), in a similar way to the otolith measurements. Measured thresholds from studies using the up–down method are shown in Fig. 12. The data all follow a similar trend, with a fairly low amount of scatter compared to the otolith results. Predicted thresholds are also shown using the signal-in-noise model of Soyka et al. (2012), based on the transfer function given in Eq. 3. The solid line was found using parameters optimised by Soyka et al. (2012) to fit the threshold measurements, and the dotted line was found using the parameters suggested by Telban and Cardullo (2005) for the SCCs, choosing the noise level to fit the measured threshold parameters as well as possible. Both sets of SCC parameters are given in Table 2. The noise levels found were 0.025 rad/s* for the ‘thresholds’ transfer function and 0.023 rad/s* for the ‘dynamics’ transfer function. Both models fit the results well, although the model which was optimised to fit the threshold results matches more closely as expected.
JNDs for angular velocity perception have been measured by Mallery et al. (2010) and dos Santos Buinhas et al. (2013), finding Weber fractions of 3 and 13 %, respectively. The difference between these values may be a result of the fact that Mallery et al. (2010) measured JNDs at larger amplitudes than dos Santos Buinhas et al. (2013). Mallery et al. (2010) also found that the gradient (JND/amplitude) was higher at low amplitudes, and suggested a power law should be used rather than Weber’s law. However, it is debatable whether JNDs for the SCCs should follow a power law, when most other sensory systems have been found to follow Weber’s law.
Somatosensor thresholds
Various studies have measured perception thresholds for the displacements of different limbs; however, Bigler (2013) is thought to be the first to directly measure thresholds for the perception of steering wheel angle, finding the results shown in Fig. 13. These results cannot be used to find noise levels for the somatosensors without making some assumptions about the relationship between steering wheel displacement and the displacements, velocities and forces of the muscles, and further assumptions about the method used to integrate information from the Ia, Ib and II afferents. Further work is necessary to determine appropriate noise levels for the somatosensors.
Newberry et al. (2007) measured JNDs in steering wheel angle and reported a Weber fraction of 14 %. However, this was achieved by fitting a line with zero perception threshold, and a better fit to the data can be achieved by including the effect of a nonzero perception threshold. This gives a good linear fit to the measurements, with a Weber fraction of 9.6 % and a perception threshold of 0.006 rad. The stimulus profile and frequency were not reported by Newberry et al. (2007); however, the extrapolated perception threshold is similar to that measured by Bigler (2013) for stimuli at 1 Hz (see Fig. 13).
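The extrapolation procedure can be illustrated with a least-squares fit of JND = threshold + Weber fraction × amplitude. The data points in the sketch below are synthetic, chosen only to illustrate the method; they are not the measurements of Newberry et al. (2007).

```python
# Sketch of extrapolating a perception threshold from JND data by fitting
# JND = threshold + weber_fraction * amplitude. Synthetic data, for
# illustration only.
import numpy as np

amplitude = np.array([0.02, 0.05, 0.10, 0.20, 0.40])     # reference stimuli
jnd = np.array([0.008, 0.011, 0.016, 0.025, 0.044])      # synthetic JNDs

# Least-squares fit of a straight line with a nonzero intercept
A = np.vstack([np.ones_like(amplitude), amplitude]).T
(intercept, slope), *_ = np.linalg.lstsq(A, jnd, rcond=None)

print(f"extrapolated perception threshold: {intercept:.4f}")
print(f"Weber fraction: {slope:.3f}")
```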
To date, no studies appear to have directly measured perception thresholds for steering wheel force or torque. Steering wheel force JNDs were measured by Newberry et al. (2007), and extrapolating from these measurements gives a perception threshold of 0.45 N and a Weber fraction of 9.6 %. It is interesting to note that the Weber fraction found for the GTOs using this method matches the Weber fraction found for the muscle spindles almost exactly, suggesting that there may be a perceptual link between the two sensors. The GTO afferent and the primary muscle spindle afferent share the same nerve conduction path (Kandel et al. 2000), so since JNDs are related to noise along the transmission path this may provide an explanation for the similarity in Weber fraction values.
Active and multimodal thresholds
The studies summarised in Sect. 4.2 were all targeted at measuring thresholds of a single stimulus in isolation, during passive conditions where the subject was concentrating on the stimulus. However, sensory stimuli which occur during driving are very different to the stimuli applied in these controlled studies, so these results may not be directly applicable to driving tasks. Stimuli in driving tasks are perceived under active rather than passive conditions, and there are several stimuli being perceived at once. Therefore, the indifference threshold [threshold in the presence of other stimuli (Groen et al. 2006)] should determine the limits of perception during driving.
By asking subjects to perform a secondary control task in a separate motion axis, it has been found that increasing the mental load on subjects causes an increase in perception thresholds (Hosman and Van Der Vaart 1978; Samji and Reid 1992). It should be noted that in both of these studies the subjects were still actively concentrating on perceiving the motion cues as well as completing the secondary task. Due to the equivalence of translational accelerations and shifts in the gravity vector, the brain can easily be fooled into misinterpreting the two types of motion. Groen and Bles (2004) and Pretto et al. (2014) found that presenting subjects with visual cues simulating a translational acceleration while they were undergoing rotational motion caused the threshold of perception of the rotation to increase by factors of 5–6. Pretto et al. (2014) also measured thresholds during an active control task and found that they increased by factors of up to 4 for some subjects, but did not change at all for others. The participants whose thresholds did not increase during the active driving task reported higher levels of ‘immersion’ in the simulation, indicating that the sense of realism of the simulation was linked to participants’ ability to perceive the motion cues accurately.
Pitch and roll thresholds have been measured in the presence of masking vertical motion cues, and a significant linear increase in pitch and roll thresholds with vertical amplitude was found (Zaichik et al. 1999; Rodchenko et al. 2000). In contrast to these studies, Valente Pais et al. (2006) found no significant effect of vertical motion amplitude on pitch rate thresholds. In their study, the pitch and vertical motion were applied at the same frequency, which may have caused the motion cues to be perceived as coherent, making it easier to detect the pitch cues.
Groen et al. (2006) analysed the data of Groen and Bles (2004) and showed that indifference thresholds for pitch rotation in the presence of visual longitudinal cues follow the same frequency response as the perception thresholds measured in passive conditions, but are increased by a constant gain. They used this result to hypothesise that the presence of additional sensory stimuli scales perception thresholds by a constant gain, without affecting the frequency response. This is consistent with the models of Soyka et al. (2011, 2012) and Bigler (2013) (shown in Fig. 9), where the threshold is placed after the sensory transfer function and the additional stimuli cause an increase in the noise level. Groen et al. (2006) suggested that the increase in noise level is linearly dependent on the amplitude of the additional stimulus, which is equivalent to Weber’s Law in the special case of the additional stimulus being in the same axis and modality as the measured stimulus.
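A minimal sketch of this hypothesis is given below: the passive threshold frequency response is scaled by a constant gain whose size grows linearly with the amplitude of the additional stimulus. The function name, slope parameter and numerical values are illustrative assumptions, not quantities from Groen et al. (2006).

```python
import numpy as np

# Indifference threshold hypothesis: an additional stimulus scales the
# passive threshold by a constant gain at all frequencies, with the noise
# (and hence the gain) increasing linearly with the additional amplitude.
def indifference_threshold(passive_thresholds, extra_amplitude, slope=1.0):
    gain = 1.0 + slope * extra_amplitude   # frequency-independent scaling
    return gain * passive_thresholds

passive = np.array([0.020, 0.015, 0.012])  # illustrative thresholds (rad/s) at 0.5, 1, 2 Hz
print(indifference_threshold(passive, extra_amplitude=2.0))
```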
Recent studies have used parameter identification methods to estimate threshold values during an active control task in the same axis, and thresholds in active conditions have been found to be around 1.6 times larger than thresholds measured in passive conditions (Pool et al. 2012; Valente Pais et al. 2012).
It is evident from the literature that various factors can cause thresholds to increase from values measured in passive conditions, including mental load, the presence of other stimuli and carrying out an active control task. It may therefore not be appropriate to rely on passive threshold measurements to model sensory dynamics during an active driving task.
Coherence zones
The term ‘coherence zone’ was coined by van der Steen (1998) to describe the range of amplitudes of inputs to two sensory systems (such as visual and vestibular systems) which are perceived as consistent with each other, as shown in Fig. 14. The coherence zone can be defined in terms of the point of mean coherence (PMC), coherence zone width (CZW) and gain of mean coherence (GMC) as shown.
Coherence zones between the visual and vestibular systems have been measured at various amplitudes and frequencies (van der Steen 1998; Valente Pais et al. 2010a, b). The GMC was found to decrease with increasing stimulus amplitude, with subjects preferring vestibular motion larger than the visual motion at low amplitudes and the opposite at larger amplitudes. Significant differences were found between the values measured in different studies, highlighting the fact that coherence zones are highly dependent on the experimental conditions. Contrary to the results found for perception thresholds, coherence zones were found not to change significantly during an active control task (Valente Pais et al. 2011). This indicates that the perceptual mechanisms behind perception thresholds and coherence zones may not be directly linked, and suggests that coherence zones measured in passive conditions may be applied to conditions where active control tasks are being carried out.
The concept of a coherence zone has been extended to the detection of heading direction (de Winkel et al. 2010) and phase differences (Grant and Lee 2007; Jonik et al. 2011). Jonik et al. (2011) found that inertial motion can lead visual motion by a phase angle of up to 22° without the difference being detected. This result was independent of the stimulus frequency, suggesting that humans act as phase-error detectors rather than time-delay detectors.
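The distinction between phase-error and time-delay detection can be made concrete with a short calculation: a fixed 22° phase threshold corresponds to very different time leads at different frequencies, so a frequency-independent phase threshold is incompatible with a fixed detectable time delay. The sketch below simply evaluates this conversion.

```python
# A fixed detectable phase lead of 22 deg corresponds to a time lead of
# (22/360)/f seconds at frequency f, so the equivalent time lead shrinks
# as the stimulus frequency increases.
for f in (0.5, 1.0, 2.0):  # stimulus frequency (Hz)
    dt_ms = 1000.0 * (22.0 / 360.0) / f
    print(f"{f:.1f} Hz: 22 deg phase lead = {dt_ms:.0f} ms time lead")
```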
Research has shown that, when asked to tune inertial motion to match visual motion, subjects pick higher amplitudes when tuning downwards from high-amplitude motion than when tuning upwards from low-amplitude motion (Correia Grácio et al. 2010). Correia Grácio et al. (2013) defined the 'optimal zone' as the area between these 'upper' and 'lower' optima and found that it lay within the coherence zone. Similar to the PMC, GMC and CZW for coherence zones, the optimal zone was defined in terms of the 'point of mean optimal gain' (PMO), 'gain of mean optimal' (GMO) and 'optimal zone width' (OZW). The GMO was found to decrease at higher amplitudes and at higher frequencies. In contrast to coherence zone measurements, the OZW was found not to vary with amplitude or frequency. By varying the field of view, resolution and depth of the visual scene, Correia Grácio et al. (2014) found that the optimal gain is strongly affected by the 'quality' of the visual cues, with more realistic visual scenes giving GMOs closer to 1.
Two approaches to modelling CZWs were compared by dos Santos Buinhas et al. (2013), one matching the perceived intensity of the two stimuli and applying this to averaged JNDs, and one summing the JNDs for the two individual stimuli. Comparison of model predictions with experimental data showed that summing JNDs provides the best fit to the measured data, explaining the results particularly well at lower amplitudes. dos Santos Buinhas et al. (2013) suggested that PMCs could be modelled using Stevens’ power functions of perceived stimulus intensity; however, this method was not experimentally verified.
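As a minimal sketch of the JND-summing approach, the coherence zone width at a given amplitude can be computed as the sum of two single-modality JNDs. The Weber-law JND models and all parameter values below are illustrative assumptions, not parameters from dos Santos Buinhas et al. (2013).

```python
def jnd(amplitude, threshold, weber_fraction):
    # Illustrative Weber-law JND model for a single modality
    return threshold + weber_fraction * amplitude

def coherence_zone_width(amplitude):
    # CZW modelled as the sum of the JNDs of the two individual stimuli
    jnd_visual = jnd(amplitude, threshold=0.01, weber_fraction=0.08)
    jnd_inertial = jnd(amplitude, threshold=0.02, weber_fraction=0.04)
    return jnd_visual + jnd_inertial

print(coherence_zone_width(0.5))
```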
Sensory integration
The sensory systems described in Sect. 2 provide the central nervous system (CNS) with measurements (or sensory ‘cues’) which can be used to estimate vehicle states while driving. However, these measurements are shaped by the sensor dynamics and also contain additive and signal-dependent noise (as described in Sect. 4). The CNS must therefore carry out sensory integration to give a single estimate of the vehicle states from the noisy, filtered information received from each of the sensors.
In a real-world driving scenario, the driver will be presented with coherent sensory information. Any discrepancies between information from the different sensors are due to sensory noise, or incomplete information available to a particular sensor. However, in some situations the information presented to the different senses may be incoherent or biased, in which case the driver may use a different integration strategy. This is particularly relevant for motion in virtual environments, where the visual, vestibular and somatosensory information presented to the driver may not all accurately reflect the real-world stimuli. An overview of methods and results from investigations of sensory integration in a variety of virtual environments (not specific to driving) is given by Campos and Bülthoff (2012). The following subsections build on this, focusing in more depth on results which suggest how information from the sensory systems summarised in Sect. 2 may be integrated during driving.
Integration of coherent sensory measurements
The simplest model of sensory integration is a linear weighting of the estimates from different sensory systems (Hosman and Stassen 1999). Appropriate weightings can be found using sensory experiments; however, the scope of models with fixed weightings is likely to be limited. For many sensory systems, the CNS has been found to integrate measurements using statistically optimal methods (Ernst and Banks 2002; Oruç et al. 2003; Butler et al. 2010; Seilheimer et al. 2014). These methods are based on Bayes' theorem (Bayes 1763), which relates the a posteriori probability $P(c \mid o)$ of condition $c$ given observation $o$ to the probability $P(o \mid c)$ of the observation given the condition, the a priori probability $P(c)$, and the observation probability $P(o)$ (which is usually assumed uniform):
$$
P(c \mid o) = \frac{P(o \mid c)\,P(c)}{P(o)} \qquad (8)
$$
Optimal integration of sensory cues involves choosing, from the set of all possible conditions, the condition $c$ which has the highest probability $P(c \mid o)$ based on the set of observations $o$ from the different sensory channels. For a continuous set of possible conditions, a probability density function of $P(c \mid o)$ can be plotted. Equation 8 shows that $P(c \mid o)$ depends on an assumption about the probability distribution $P(c)$ before the measurements are made, known as a 'prior'.
There are various ways in which the optimal value of $c$ can be chosen, such as the 'maximum a posteriori' (MAP) estimate, the 'minimum mean square error' (MMSE) estimate and the 'maximum likelihood estimate' (MLE) (Clark and Yuille 1990; Vaseghi 2005). However, if the priors $P(c)$ and $P(o)$ are uniform and the probability distributions are symmetric, these estimates will be identical and can be found by maximising the 'likelihood' function $P(o \mid c)$.
If the probability distributions of the sensory estimates are all Gaussian, the MLE $\hat{x}$ of a property $x$ is found by weighting each sensory estimate $\hat{x}_i$ in proportion to the inverse of its variance $\sigma_i^2$ (Yuille and Bülthoff 1996):
$$
\hat{x} = \frac{\sum_i \hat{x}_i / \sigma_i^2}{\sum_i 1 / \sigma_i^2} \qquad (9)
$$
The variance $\sigma^2$ of the combined estimate is found from Eq. 10 to be lower than the variances of the individual estimates from the different sensory systems:
$$
\sigma^2 = \frac{1}{\sum_i 1 / \sigma_i^2} \qquad (10)
$$
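The inverse-variance weighting of Eqs. 9 and 10 can be written as a few lines of Python; this is a minimal sketch assuming unbiased Gaussian estimates, with illustrative numbers rather than measured values.

```python
import numpy as np

def fuse_mle(estimates, variances):
    # Eq. 9: weight each estimate by the inverse of its variance
    w = 1.0 / np.asarray(variances, dtype=float)
    x_hat = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    # Eq. 10: combined variance is lower than any individual variance
    var = 1.0 / np.sum(w)
    return x_hat, var

# e.g. visual and vestibular estimates of yaw velocity (rad/s), illustrative
print(fuse_mle([0.10, 0.14], [0.0004, 0.0009]))
```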
Oruç et al. (2003) showed that a Gaussian prior can be included in the MLE analysis as an additional input, weighted by the inverse of its variance as usual. MacNeilage et al. (2007) used this result to model the integration of visual and inertial cues to disambiguate between an acceleration and a shift in the gravity vector, incorporating priors to model the assumptions that humans are normally in an upright position and that smaller accelerations are more likely than larger ones. Soyka et al. (2015) measured off-centre yaw rotation thresholds and found that SCC and otolith signals were integrated, although the results suggested information from additional sensory systems may also have been used.
Near-optimal Bayesian integration of visual and vestibular information has been measured in several studies (Gu et al. 2008; Butler et al. 2010; Fetsch et al. 2009; Prsa et al. 2012; Drugowitsch et al. 2014; Fetsch et al. 2010). In contrast, de Winkel et al. (2013) found results that fit the MLE model for only 3 out of 8 participants, and Nesti et al. (2015) found that combined visual–inertial thresholds were higher than predicted by an MLE model. Butler et al. (2011) found that participants exhibited optimal visual–vestibular integration 90 % of the time with a stereoscopic visual display compared with 60 % of the time with a binocular display. This suggests that the realism of the visual scene may affect whether or not visual and vestibular information is integrated optimally. Some studies have found a slight over-weighting of one sense with respect to the other, with some finding that vestibular cues are weighted more highly (Fetsch et al. 2009; Butler et al. 2010) while others have found that visual cues are weighted more highly (Prsa et al. 2012). Prsa et al. (2012) suggested that over-weighting of otolith signals and under-weighting of SCC signals may occur when vestibular cues are integrated with visual cues.
In order to develop effective and efficient control strategies for interacting with their surroundings, humans use their experience to develop internal models of themselves and the world around them (Wolpert and Ghahramani 2000). They are able to use learning methods to adapt these models to changes in the environment (Wolpert et al. 2011) such as astronauts entering microgravity (Carriot et al. 2015). Using an internal model, a recursive state estimator can be used to provide new a priori estimates at each time step to give improved estimates of the system states. A common implementation of this method is the Kalman filter (Kalman 1960; Grewal and Andrews 2001). It is assumed that the observer has an internal model of the system given in state-space form:
$$
x_{k+1} = A x_k + B u_k + w_k, \qquad y_k = C x_k + v_k \qquad (11)
$$
The main difference between a driver and a passenger is that the driver has perfect knowledge of the inputs $u_k$, although these are perturbed by the process noise $w_k$. Both driver and passenger measure the outputs $y_k$, which are perturbed by the measurement noise $v_k$. The new estimate $\hat{x}_{k+1}$ of the states is predicted by propagating the current input and state estimate through the internal model of the system. A correction is then added based on the error between the previous estimated output $\hat{y}_k$ and measured output $y_k$, weighted by the 'Kalman gain' $K_k$:
$$
\hat{x}_{k+1} = A \hat{x}_k + B u_k + K_k \left( y_k - \hat{y}_k \right) \qquad (12)
$$
The time-varying Kalman gain $K_k$ is calculated to give a statistically optimal estimate using MLE, weighting the estimates based on the covariances of the Gaussian noise signals $w_k$ and $v_k$. If the covariances are time invariant, a steady-state linear filter can be found which gives the optimal state estimates for the system. Various studies have proposed models of visual–vestibular integration based on Kalman filters (Borah et al. 1988; Zupan et al. 2002; Young 2011), and Kalman filters have also been used to model estimation of vehicle states for pilots (Onur 2014) and drivers (Bigler 2013).
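A minimal sketch of Eqs. 11 and 12 as a discrete-time Kalman filter is given below. The system matrices and noise covariances are illustrative assumptions (a simple position/velocity internal model), not parameters identified from drivers.

```python
import numpy as np

# Illustrative internal model (Eq. 11): x = [position, velocity]
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2)      # covariance of process noise w
R = np.array([[1e-2]])    # covariance of measurement noise v

def kalman_step(x_hat, P, u, y):
    # Predict by propagating the input and state estimate through the model
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    # Correct using the output error, weighted by the Kalman gain (Eq. 12)
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

x_hat, P = np.zeros((2, 1)), np.eye(2)
x_hat, P = kalman_step(x_hat, P, u=np.array([[0.5]]), y=np.array([[0.1]]))
```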
One implication of MLE models of human sensory integration is that the observer must have access to estimates of the noise variance for each sensory channel. Ernst and Bülthoff (2004) suggested that the variance may be determined by looking at the responses over a population of independent neurons. Several studies have attempted to build realistic neural models to describe this behaviour (Deneve et al. 1999; Pouget et al. 2000; Barber et al. 2003), and they have found that a close approximation to MLE can be achieved in this way. Fetsch et al. (2009) studied the integration of visual and vestibular cues to heading angle in humans and monkeys while varying the reliability of the visual cues. They found that both humans and monkeys were able to dynamically re-weight the cues between trials, indicating that they were able to obtain a measure of the reliability of each cue.
Integration of biased sensory measurements
While MLE is an optimal method of combining measurements from noisy sensory channels with the same mean, if the signals are biased such that their means are no longer coherent, using MLE will cause the bias to carry through into the 'optimal' sensory estimate, as seen in Fig. 15 (Ernst and Di Luca 2011). There will always be differences between the measurements from different noisy sensory channels; however, without prior knowledge of the biases it is impossible to separate these differences into those arising from stochastic variations about the mean and those resulting from biases in the sensory channels. It has been found that the CNS may ignore the discrepancies and integrate the biased sensory measurements using the MLE method if conflicts are small (Scarfe and Hibbard 2011; Butler et al. 2010, 2014) or if the conflicting information is presented in different motion axes (Kaliuzhna et al. 2015). de Winkel et al. (2015) found that over half of their subjects integrated visual and inertial heading information regardless of the size of the bias. However, other studies have found evidence of various strategies for reducing bias in perceived signals (Körding et al. 2007; Landy et al. 1995; Burge et al. 2010; Zaidel et al. 2011).
When presented with two different sensory cues, the CNS must decide whether or not they are coherent (originating from the same source). If they are coherent, the difference between them can be assumed to be a result of stochastic variations and the cues can be combined using MLE. If not, the cues should be treated separately, treating the situation as a 'cue conflict'. Körding et al. (2007) proposed a model using Bayes' rule to decide whether or not two cues are coherent, based on a prior describing the likelihood of the cues coming from the same source. They validated the model using experimental results; however, Seilheimer et al. (2014) noted that Körding et al. (2007) did not vary the reliability of the cues, so it remains uncertain whether their model is valid in all cases. A similar Bayesian model incorporating priors was proposed by Knill (2007), who found that the weight applied to a cue shrank as the size of the conflict increased, but did not decrease to zero. However, other studies have found that under some circumstances humans will 'veto' a cue that does not fit with the other sensory measurements (Girshick and Banks 2009; Landy et al. 1995).
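The sketch below illustrates the structure of such a causal inference model: Gaussian noise on each cue and a Gaussian prior over the source give closed-form likelihoods for the 'common cause' and 'separate causes' hypotheses. The parameter values are illustrative, and the model is a simplified sketch in the spirit of Körding et al. (2007), not their exact implementation.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def p_common(x1, x2, s1, s2, s_prior=1.0, prior_common=0.5):
    # Likelihood of the cue pair if both arise from one common source
    cov = [[s1**2 + s_prior**2, s_prior**2],
           [s_prior**2, s2**2 + s_prior**2]]
    like_one = multivariate_normal.pdf([x1, x2], mean=[0.0, 0.0], cov=cov)
    # Likelihood if the cues arise from two independent sources
    like_two = (norm.pdf(x1, 0.0, np.hypot(s1, s_prior)) *
                norm.pdf(x2, 0.0, np.hypot(s2, s_prior)))
    num = like_one * prior_common
    return num / (num + like_two * (1.0 - prior_common))

# Small conflicts favour a common cause (fuse with MLE); large conflicts
# favour separate causes (treat as a cue conflict).
print(p_common(0.10, 0.12, s1=0.05, s2=0.05))  # close cues: likely coherent
print(p_common(0.10, 0.60, s1=0.05, s2=0.05))  # large conflict: likely not
```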
Ghahramani et al. (1997) proposed an additional stage of 'cue calibration' before cues are fully integrated, during which the difference between the estimates is reduced. The values of the estimates $\hat{x}_1$ and $\hat{x}_2$ are calibrated by adding corrections $\Delta_1$ and $\Delta_2$, given by:
$$
\Delta_1 = k_1 \left( \hat{x}_2 - \hat{x}_1 \right), \qquad \Delta_2 = k_2 \left( \hat{x}_1 - \hat{x}_2 \right) \qquad (13)
$$
This improves 'internal consistency' (Burge et al. 2010), ensuring that the estimates from different sensory systems agree with each other, although it does not necessarily improve 'external accuracy' (the overall accuracy of the CNS's combined estimate). If $k_1 + k_2 = 1$, calibration can achieve full internal consistency by adjusting both estimates to the same value; otherwise, a smaller reduction in the difference between the estimates is achieved.
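A numerical sketch of Eq. 13 is given below; the calibration constants are illustrative. With $k_1 + k_2 = 1$ the two estimates converge to the same value, while smaller sums leave a residual difference.

```python
def calibrate(x1, x2, k1, k2):
    # Eq. 13: shift each estimate towards the other by a fixed fraction
    return x1 + k1 * (x2 - x1), x2 + k2 * (x1 - x2)

print(calibrate(0.10, 0.20, 0.3, 0.7))   # k1 + k2 = 1: both become 0.13
print(calibrate(0.10, 0.20, 0.0, 1.0))   # 'visual capture': x2 snaps to x1
print(calibrate(0.10, 0.20, 0.2, 0.4))   # k1 + k2 < 1: residual difference
```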
Several studies have shown that vision dominates the other senses under certain conditions (Rock and Victor 1964; Ernst and Banks 2002), so a model of 'visual capture' has been proposed in which vision completely dominates the combined estimate (Ernst and Banks 2002; Ernst and Bülthoff 2004). In such a case, the visual estimate does not change and the other estimate adapts to match the visual estimate, giving $k = 0$ for the visual channel and $k = 1$ for the other channel. Alternatively, Ghahramani et al. (1997) proposed that the calibration stage follows a similar reliability-based weighting structure to the integration process, hypothesising that each calibration constant $k_i$ is proportional to the variance $\sigma_i^2$ of the corresponding sensory estimate $\hat{x}_i$. Burge et al. (2010) tested this model in an experiment on visual–haptic estimation of slant and reported strong evidence in favour of this reliability-based calibration.
Reliability-based calibration does not make physical sense as a method for reducing sensory bias, however, as the reliability of a cue is independent of its bias (Ernst and Di Luca 2011). A sensory estimate could have low variance (high reliability) and high bias, or high variance (low reliability) and low bias. For example, in Fig. 15, the cue with the higher variance has the lower bias. Testing a 'cue veto' model of integration of biased sensory estimates, Girshick and Banks (2009) found that the vetoed cue was not necessarily the cue with the highest variance. Zaidel et al. (2011) compared reliability-based calibration with fixed-ratio calibration, in which the calibration constants are assumed to be learned from past experience. Fixed-ratio calibration fitted their results better than reliability-based calibration, with higher weighting placed on the visual estimate. The sum of the calibration constants $k_1$ and $k_2$ was found to be less than 1, so full internal consistency was not achieved. Zaidel et al. (2011) also explained how the presence of fixed-ratio calibration could cause erroneous indications of reliability-based calibration to appear using the methods of Burge et al. (2010). It therefore seems that with biased sensory information humans may use fixed calibration constants based on past experience, rather than changing the weightings based on cue reliability.
Linear cue calibration for visual–vestibular integration has been observed in several studies although, as with coherent measurements, there is disagreement about which sense is more highly weighted. Visual dominance was found by Rader et al. (2011), whereas the vestibular system was found to dominate by Harris et al. (2000). Ohmi (1996) found that visual cues dominated when conflicts were small, but vestibular cues dominated when conflicts were large. Zacharias and Young (1981) found that vestibular cues dominated visual cues at higher frequencies, so the dominant sensory system may depend on the frequency content of the tasks carried out.
Experimental studies have shown that when a consistent conflict is observed between the visual and vestibular systems, the perceived motion will eventually drift towards the visual estimate (Ishida et al. 2008). van der Steen (1998) proposed a model of the ‘optokinetic influence’, where the visual estimate ‘attracts’ the vestibular estimate over a transient period, modelling the onset of visual self-motion (‘vection’). This is modelled by passing the difference between the visual and vestibular estimates through a low-pass filter given by:
$$
H_{\mathrm{ok}}(s) = \frac{1}{\tau s + 1} \qquad (14)
$$
giving the optokinetic influence, which is then added to the vestibular output as shown in Fig. 16. An implication of this model is that pre-filtering the vestibular cues by the inverse of the vestibular dynamics, or conversely pre-filtering the visual cues by the vestibular dynamics, should cause the visual and vestibular cues to be perceived as coherent even though they differ substantially. Wentink et al. (2009) tested this hypothesis using subjective feedback from experiments in a simulator and found that pre-filtering the vestibular cues by the inverse of the vestibular dynamics did indeed result in coherent perception. However, pre-filtering the visual cues by the vestibular dynamics produced cues which were perceived as coherent for only half of the motion conditions.
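A minimal discrete-time sketch of this structure is shown below: the visual–vestibular difference is passed through a first-order low-pass filter (an assumed form of Eq. 14) and added to the vestibular output, so the perceived motion drifts towards the visual estimate after a step in visual motion. The time constant and signals are illustrative.

```python
import numpy as np

def optokinetic_influence(visual, vestibular, dt=0.01, tau=2.0):
    # Low-pass filter the visual-vestibular difference (forward Euler step
    # of 1 / (tau*s + 1)) and add the result to the vestibular output.
    ok = np.zeros_like(visual)
    for k in range(1, len(visual)):
        diff = visual[k - 1] - vestibular[k - 1]
        ok[k] = ok[k - 1] + (dt / tau) * (diff - ok[k - 1])
    return vestibular + ok

t = np.arange(0.0, 10.0, 0.01)
visual = 0.1 * np.ones_like(t)   # step in visual self-motion (vection onset)
vestibular = np.zeros_like(t)    # no corresponding inertial motion
perceived = optokinetic_influence(visual, vestibular)  # drifts towards 0.1
```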
Zacharias (1977) and Zacharias and Young (1981) developed a detailed empirical model of visual–vestibular integration under cue-conflict conditions, shown in Fig. 17. Borah et al. (1988) developed an adaptive version of the model of Zacharias (1977), using a slightly modified weighting function, then multiplying the visual estimate by a gain K and combining it with the vestibular estimate using a Kalman filter. Telban and Cardullo (2005) adapted the model of Zacharias (1977), using some of the modifications suggested by Borah et al. (1988) and including the optokinetic influence modelled by van der Steen (1998). They ran simulations to find the response to velocity step inputs to the visual system with and without corresponding vestibular cues, and found that it was possible to reproduce latencies measured in previous studies on humans. However, further validation work is needed to determine whether this model is more generally applicable.
Wright et al. (2005) subjected participants to vertical motion with conflicting visual and vestibular information, playing back recordings of the visual surroundings of the apparatus to give a realistic scene. Visual and vestibular cues were presented with different amplitudes and in some cases out of phase with one another. The results were found to be incompatible with linear weighting conflict models and the more complicated model of Zacharias (1977), as for high visual amplitudes the visual perception was found to dominate, independent of the vestibular amplitude. More research is clearly needed to develop a model which can describe integration of biased sensory estimates under a wide range of conditions.
Discussion
Key results from the literature on human sensory dynamics have been presented in Sects. 2 to 5. In this section, these results are summarised and discussed with a view to understanding and modelling driver steering and speed control.
Results for the human sensory systems which are most relevant to driver modelling are summarised in Table 3. Transfer functions are presented which have either been found from models of the sensory dynamics and measurements of brain activity or inferred from sensory threshold measurements. Using the transfer functions found from sensory threshold data may give more accurate results near the limits of perception; however, they may not capture all of the dynamic behaviour of the sensory system.
Table 3.

| System | Input | Transfer function (sensory dynamics) | Noise (sensory dynamics) | Transfer function (perception thresholds) | Noise (perception thresholds) | Weber fraction (%) | Sensor delay (ms) |
|---|---|---|---|---|---|---|---|
| Visual feedback | Yaw angular velocity | 1 | 0.0013 (rad/s*) | – | 0.0011 (rad/s*) | 7–11 | 100–560 |
| Visual feedback | Lateral velocity | 1 | 0.035 (m/s*) | – | 0.032 (m/s*) | 7–11 | 100–560 |
| Visual feedback | Longitudinal velocity | 1 | – | – | – | 10–50 | 100–560 |
| Visual feedforward | Target path | Preview model | – | – | – | 11 | 100–560 |
| Otoliths | Acceleration | – | 0.038 (m/s*) | – | 0.015 (m/s*) | 2–5 | 5–440 |
| SCCs | Angular velocity | – | 0.023 (rad/s*) | – | 0.025 (rad/s*) | 3–13 | 5–440 |
| Muscle spindles (Type Ia) | Arm muscle displacement | – | – | – | – | 10 | >34 |
| Muscle spindles (Type II) | Arm muscle displacement | – | – | – | – | 10 | >48 |
| GTOs | Arm muscle force | – | – | – | – | 10 | >34 |
For the key sensory systems involved in driving, transfer functions between the input stimulus and the sensory response are given where available, either from considerations of the sensory dynamics or from perception threshold measurements. Noise levels have been calculated from sensory threshold measurements, as well as Weber fractions showing how thresholds increase with stimulus amplitude. Estimates of sensory delays are also included.
Noise magnitudes have been inferred from sensory threshold measurements using the signal-in-noise model of Soyka et al. (2011, 2012). These were found from passive threshold measurements taken for one sensory stimulus at a time; however, thresholds have been found to increase in active conditions and in the presence of other sensory stimuli by factors between 1.5 and 6 (Hosman and Van Der Vaart 1978; Samji and Reid 1992; Zaichik et al. 1999; Rodchenko et al. 2000; Groen and Bles 2004; Valente Pais et al. 2012). This means that the noise values shown in Table 3 should be considered as lower bounds. Most sensory systems have been found to approximate Weber’s law, with JNDs increasing with stimulus amplitude; therefore, Weber fractions have been included in Table 3. This increase in sensory noise with stimulus amplitude can be modelled by including signal dependent as well as additive noise (Todorov 2005; Bigler 2013).
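One simple way of combining the two noise components, sketched below under the assumption that the additive and signal-dependent sources are independent, is to add their variances; the parameter values are illustrative.

```python
import numpy as np

def sensory_noise_std(signal, additive_std, weber_fraction):
    # Independent additive and signal-dependent components add in variance,
    # so the modelled JND grows with stimulus amplitude (Weber-like).
    return np.sqrt(additive_std**2 + (weber_fraction * np.abs(signal))**2)

print(sensory_noise_std(np.array([0.0, 0.1, 1.0]),
                        additive_std=0.02, weber_fraction=0.10))
```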
Estimates of sensor delays are also given in Table 3 for each system, comprising all components of the time delay between stimulus application and physical response. However, there is still some uncertainty about the precise values, as it is thought that delays in neural processing may depend on the exact nature of the stimuli and the task being carried out. It is unclear whether delays increase or decrease during active conditions; however, they have been found to increase with additional stimuli in multimodal conditions.
For many types of stimuli, coherent sensory information has been found to be integrated in a statistically optimal fashion (Ernst and Banks 2002; Oruç et al. 2003; Butler et al. 2010, 2011; Seilheimer et al. 2014; Gu et al. 2008; Prsa et al. 2012; Drugowitsch et al. 2014). Humans build up internal models of themselves and their surroundings (Wolpert and Ghahramani 2000), and a Kalman filter can be used to model optimal sensory integration using internal models (Kalman 1960; Grewal and Andrews 2001; Borah et al. 1988; Zupan et al. 2002; Young 2011; Onur 2014). Sensory integration is less well understood for incoherent information, where conflicting cues are presented to the different sensory channels. A variety of models have been proposed; however, no overwhelming evidence has been found in favour of any of them. Consideration of how humans integrate incoherent or biased sensory measurements may be important when studying drivers in virtual environments; however, in normal driving the senses should be in agreement.
Since sensory parameters have been found to change under active or multimodal conditions, it may not be appropriate to apply the results shown in Table 3 directly to a driver model. It is very difficult to measure sensory parameters directly during realistic driving conditions. However, by developing a model of driver control behaviour incorporating sensory dynamics, parametric identification methods could be used to gain some insight into the performance of sensory systems while driving. Parametric identification procedures have been described by Ljung (1999) and Zaal et al. (2009b) and applied to driver models by Keen and Cole (2012) and Odhams and Cole (2014). A number of studies have been carried out at Delft University of Technology to identify pilot control strategies under different conditions (Pool et al. 2008; Zaal et al. 2013, 2009c, 2010, 2009a, 2012; Nieuwenhuizen et al. 2013; Ellerbroek et al. 2008; Drop et al. 2013). Since the pilot control task is similar to a driver steering control task, the results of Zaal et al. (2009c) have been used to validate an initial concept for a driver model incorporating sensory dynamics (Nash and Cole 2015).
It is hoped that the information presented in this literature review will inform and motivate future researchers to consider the influence of sensory dynamics in driving tasks. The differences highlighted here between active and passive measurements mean that we do not advocate direct incorporation of results from the sensory perception literature into driver models. Rather, identification methods used in recent aerospace studies seem to show more promise, and there is clear scope for applying similar techniques to studying drivers. The danger in such an approach is that identification of a large number of sensory parameters may become infeasible. Therefore, care must be taken to increase the complexity of sensory models slowly and use carefully designed experiments to isolate different features of sensory perception during driving. This review should serve as a guide for potential areas of investigation and a reference to compare with new results.
Conclusion
The results summarised in this literature review give an insight into various different sensory systems and how they can be used to model driver control behaviour. Sensory transfer functions have been studied extensively, and there is little disagreement between studies. Sensory integration is reasonably well understood under normal conditions; however, there is little agreement on how humans cope with conflicting sensory information. Studies have shown that sensory thresholds increase under active and multimodal conditions, but further research is necessary to determine how and why this happens. Time delays also increase during multimodal conditions; however, it is not clear whether they vary during active control tasks. There is a great deal of scope for improvement in the available knowledge of human sensory perception during active control tasks, so future research should focus on this area. It is hoped that the information in this review will prove useful in developing more sophisticated driver steering and speed control models which take account of the driver's sensory dynamics.
Acknowledgments
The authors wish to thank the anonymous reviewer who provided many useful suggestions for improving the paper, particularly regarding visual perception.
Footnotes
This work was supported by the UK Engineering and Physical Sciences Research Council (EP/P505445/1) (studentship for Nash).
Contributor Information
Christopher J. Nash, Email: cn320@cam.ac.uk
David J. Cole, Email: djc13@cam.ac.uk
References
- Abbink Da, Mulder M, Van Der Helm FCT, Mulder M, Boer ER. Measuring neuromuscular control dynamics during car following with continuous haptic feedback. IEEE Trans Syst Man Cybern Part B Cybern. 2011;41(5):1239–1249. doi: 10.1109/TSMCB.2011.2120606. [DOI] [PubMed] [Google Scholar]
- Authié CN, Mestre DR. Path curvature discrimination: dependence on gaze direction and optical flow speed. PloS One. 2012;7(2):e31,479. doi: 10.1371/journal.pone.0031479. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aw ST, Todd MJ, Halmagyi GM. Latency and initiation of the human vestibuloocular reflex to pulsed galvanic stimulation. J Neurophysiol. 2006;96(2):925–930. doi: 10.1152/jn.01250.2005. [DOI] [PubMed] [Google Scholar]
- Barber MJ, Clark JW, Anderson CH. Neural representation of probabilistic information. Neural comput. 2003;15(8):1843–1864. doi: 10.1162/08997660360675062. [DOI] [PubMed] [Google Scholar]
- Barnett-Cowan M, Nolan H, Butler JS, Foxe JJ, Reilly RB, Bülthoff HH. Reaction time and event-related potentials to visual, auditory and vestibular stimuli. J Vis. 2010;10(7):e1400. doi: 10.1167/10.7.1400. [DOI] [Google Scholar]
- Barnett-Cowan M. Vestibular perception is slow: a review. Multisens Res. 2013;26(4):387–403. [PubMed] [Google Scholar]
- Barnett-Cowan M, Harris LR. Perceived timing of vestibular stimulation relative to touch, light and sound. Exp Brain Res. 2009;198(2–3):221–231. doi: 10.1007/s00221-009-1779-4. [DOI] [PubMed] [Google Scholar]
- Bayes T. An essay towards solving a problem in the doctrine of chances. Philos Trans R Soc Lond. 1763;53:370–418. doi: 10.1098/rstl.1763.0053. [DOI] [PubMed] [Google Scholar]
- Benson AJ, Spencer MB, Stott JR. Thresholds for the detection of the direction of whole-body, linear movement in the horizontal plane. Aviat Space Environ Med. 1986;57:1088–1096. [PubMed] [Google Scholar]
- Benson AJ, Hutt EC, Brown SF. Thresholds for the perception of whole body angular movement about a vertical axis. Aviat Space Environ Med. 1989;60:205–213. [PubMed] [Google Scholar]
- Bigler RS, Cole DJ (2011) A review of mathematical models of human sensory dynamics relevant to the steering task. In: Iwnicki S, Goodall R, Mei TX (eds) The International Association for Vehicle System Dynamics, Manchester Metropolitan University, Manchester, UK
- Bigler RS (2013) Automobile driver sensory system modeling. Ph.d. thesis, Cambridge University
- Boer ER (1996) Tangent point oriented curve negotiation. In: Proceedings of conference on intelligent vehicles, IEEE 617:7–12. doi:10.1109/IVS.1996.566341
- Boer ER. Car following from the driver’s perspective. Transp Res Part F Traffic Psychol Behav. 1999;2(4):201–206. doi: 10.1016/S1369-8478(00)00007-3. [DOI] [Google Scholar]
- Borah J, Young LR, Curry RE. Optimal estimator model for human spatial orientation. Ann N Y Acad Sci. 1988;545(1):51–73. doi: 10.1111/j.1749-6632.1988.tb19555.x. [DOI] [PubMed] [Google Scholar]
- Boring E. A chart of the psychometric function. Am J Psychol. 1917;28(4):465–470. doi: 10.2307/1413891. [DOI] [Google Scholar]
- Bottoms DJ. The interaction of driving speed, steering difficulty and lateral tolerance with particular reference to agriculture. Ergonomics. 1983;26(2):123–139. doi: 10.1080/00140138308963324. [DOI] [Google Scholar]
- Bremmer F, Lappe M. The use of optical velocities for distance discrimination and reproduction during visually simulated self motion. Exp Brain Res. 1999;127(1):33–42. doi: 10.1007/s002210050771. [DOI] [PubMed] [Google Scholar]
- Bronstein AM, Hood JD. The cervico-ocular reflex in normal subjects and patients with absent vestibular function. Brain Res. 1986;373(1–2):399–408. doi: 10.1016/0006-8993(86)90355-0. [DOI] [PubMed] [Google Scholar]
- Browning NA, Grossberg S, Mingolla E. A neural model of how the brain computes heading from optic flow in realistic scenes. Cogn Psychol. 2009;59(4):320–356. doi: 10.1016/j.cogpsych.2009.07.002. [DOI] [PubMed] [Google Scholar]
- Burge J, Girshick AR, Banks MS. Visual-haptic adaptation is determined by relative reliability. J Neurosci Off J Soc Neurosci. 2010;30(22):7714–7721. doi: 10.1523/JNEUROSCI.6427-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Butler JS, Smith ST, Campos JL, Bülthoff HH. Bayesian integration of visual and vestibular signals for heading. J Vis. 2010;10(11):23. doi: 10.1167/10.11.23. [DOI] [PubMed] [Google Scholar]
- Butler JS, Campos JL, Bülthoff HH, Smith ST. The role of stereo vision in visual-vestibular integration. Seeing Perceiving. 2011;24(5):453–470. doi: 10.1163/187847511X588070. [DOI] [PubMed] [Google Scholar]
- Butler JS, Campos JL, Bülthoff HH. Optimal visual-vestibular integration under conditions of conflicting intersensory motion profiles. Exp Brain Res. 2014;233(2):587–597. doi: 10.1007/s00221-014-4136-1. [DOI] [PubMed] [Google Scholar]
- Campos JL, Bülthoff HH. Multimodal integration during self-motion in virtual reality. In: Murray MM, Wallace MT, editors. The neural bases of multisensory processes, chap 30. Boca Raton: CRC Press; 2012. [PubMed] [Google Scholar]
- Carpenter RHS. Neurophysiology. 4. London: Arnold; 1984. [Google Scholar]
- Carriot J, Jamali M, Cullen KE. Rapid adaptation of multisensory integration in vestibular pathways. Front Syst Neurosci. 2015;9(59):1–5. doi: 10.3389/fnsys.2015.00059. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chattington M, Wilson M, Ashford D, Marple-Horvat DE. Eye-steering coordination in natural driving. Exp Brain Res. 2007;180(1):1–14. doi: 10.1007/s00221-006-0839-2. [DOI] [PubMed] [Google Scholar]
- Clark JJ, Yuille AL. Data fusion for sensory information processing systems. Norwell: Kluwer Academic Publishers; 1990. [Google Scholar]
- Cloete SR, Wallis G. Limitations of feedforward control in multiple-phase steering movements. Exp Brain Res. 2009;195(3):481–487. doi: 10.1007/s00221-009-1813-6. [DOI] [PubMed] [Google Scholar]
- Cole DJ, Pick AJ, Odhams A. Predictive and linear quadratic methods for potential application to modelling driver steering control. Veh Syst Dyn. 2006;44(3):259–284. doi: 10.1080/00423110500260159. [DOI] [Google Scholar]
- Cole DJ. A path-following driver-vehicle model with neuromuscular dynamics, including measured and simulated responses to a step in steering angle overlay. Veh Syst Dyn. 2012;50(4):573–596. doi: 10.1080/00423114.2011.606370. [DOI] [Google Scholar]
- Collins DF, Refshauge KM, Todd G, Gandevia SC. Cutaneous receptors contribute to kinesthesia at the index finger, elbow, and knee. J Neurophysiol. 2005;94(3):1699–1706. doi: 10.1152/jn.00191.2005. [DOI] [PubMed] [Google Scholar]
- Correia Grácio BJ, van Paasen MM, Mulder M (2010) Tuning of the lateral specific force gain based on human motion perception in the Desdemona simulator . In: AIAA modeling and simulation technologies conference, August, p e8094. doi:10.2514/6.2010-8094
- Correia Grácio BJ, Valente Pais AR, van Paassen MM, Mulder M, Kelly LC, Houck JA. Optimal and coherence zone comparison within and between flight simulators. J Aircr. 2013;50(2):493–507. doi: 10.2514/1.C031870. [DOI] [Google Scholar]
- Correia Grácio BJ, Bos JE, van Paassen MM, Mulder M. Perceptual scaling of visual and inertial cues: effects of field of view, image size, depth cues, and degree of freedom. Exp Brain Res. 2014;232(2):637–646. doi: 10.1007/s00221-013-3772-1. [DOI] [PubMed] [Google Scholar]
- de Bruyn B, Orban GA. Human velocity and direction discrimination measured with random dot patterns. Vis Res. 1988;28(12):1323–1335. doi: 10.1016/0042-6989(88)90064-8. [DOI] [PubMed] [Google Scholar]
- de Winkel K, Soyka F, Barnett-Cowan M, Bülthoff HH, Groen EL, Werkhoven P. Integration of visual and inertial cues in the perception of angular self-motion. Exp Brain Res. 2013;231(2):209–218. doi: 10.1007/s00221-013-3683-1. [DOI] [PubMed] [Google Scholar]
- de Winkel KN, Katliar M, Bülthoff HH. Forced fusion in multisensory heading estimation. Plos One. 2015;10(5):e0127104. doi: 10.1371/journal.pone.0127104. [DOI] [PMC free article] [PubMed] [Google Scholar]
- de Winkel K, Correia Grácio BJ, Groen EL, Werkhoven P (2010) Visual inertial coherence zone in the perception of heading. In: AIAA modeling and simulation technologies conference, American Institute of Aeronautics and Astronautics, Reston, Virigina, August, p e7916. doi:10.2514/6.2010-7916
- Defazio K, Wittman D, Drury CG. Effective vehicle width in self-paced tracking. Appl Ergon. 1992;23(6):382–386. doi: 10.1016/0003-6870(92)90369-7. [DOI] [PubMed] [Google Scholar]
- Deneve S, Latham PE, Pouget A. Reading population codes: a neural implementation of ideal observers. Nat Neurosci. 1999;2(8):740–745. doi: 10.1038/11205. [DOI] [PubMed] [Google Scholar]
- Dichgans J, Brandt T (1978) Visual–vestibular interaction: effects on self-motion perception and postural control. In: Held R, Leibowitz HW, Teuber H (eds) Perception. Springer Berlin Heidelberg, Berlin, Heidelberg, pp 755–804. doi:10.1007/978-3-642-46354-9_25
- Donges E. A two-level model of driver steering behavior. Hum Factors J Hum Factors Ergon Soc. 1978;20(6):691–707. [Google Scholar]
- dos Santos Buinhas L, Correia Grácio BJ, Valente Pais AR, van Paassen MM, Mulder M (2013) Modeling coherence zones in flight simulation during yaw motion. In: AIAA modeling and simulation technologies (MST) conference, American Institute of Aeronautics and Astronautics, Reston, Virginia. doi:10.2514/6.2013-5223
- Drop FM, Pool DM, Damveld HJ, van Paassen MM, Mulder M. Identification of the feedforward component in manual control with predictable target signals. IEEE Trans Cybern. 2013;43(6):1936–1949. doi: 10.1109/TSMCB.2012.2235829. [DOI] [PubMed] [Google Scholar]
- Drugowitsch J, DeAngelis GC, Klier EM, Angelaki DE, Pouget A. Optimal multisensory decision-making in a reaction-time task. ELife. 2014;3:e03005. doi: 10.7554/eLife.03005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Einstein A. Über die Möglichkeit einer neuen Prüfung des Relativitätsprinzips. Ann der Phys. 1907;328(6):197–198. doi: 10.1002/andp.19073280613. [DOI] [Google Scholar]
- Ellerbroek J, Stroosma O, Mulder M, van Paassen MM. Role identification of yaw and sway motion in helicopter yaw control tasks. J Aircr. 2008;45(4):1275–1289. doi: 10.2514/1.34513. [DOI] [Google Scholar]
- Ernst MO, Banks MS. Humans integrate visual and haptic information in a statistically optimal fashion. Nature. 2002;415(6870):429–433. doi: 10.1038/415429a. [DOI] [PubMed] [Google Scholar]
- Ernst MO, Bülthoff HH. Merging the senses into a robust percept. Trends Cogn Sci. 2004;8(4):162–169. doi: 10.1016/j.tics.2004.02.002. [DOI] [PubMed] [Google Scholar]
- Ernst MO, Di Luca M. Multisensory perception: from integration to remapping. Sensory cue integration. Oxford: Oxford University Press; 2011. pp. 224–250. [Google Scholar]
- Fernandez C, Goldberg JM. Physiology of peripheral neurons innervating semicircular canals of the squirrel monkey. Parts I to III. J Neurophysiol. 1971;34(4):661–675. doi: 10.1152/jn.1971.34.4.661. [DOI] [PubMed] [Google Scholar]
- Fernandez C, Goldberg JM. Physiology of peripheral neurons innervating otolith organs of the squirrel monkey. Parts I to III. J Neurophysiol. 1976;39(5):970–984. doi: 10.1152/jn.1976.39.5.970. [DOI] [PubMed] [Google Scholar]
- Fetsch CR, Turner AH, DeAngelis GC, Angelaki DE. Dynamic reweighting of visual and vestibular cues during self-motion perception. J Neurosci Off J Soc Neurosci. 2009;29(49):15601–15612. doi: 10.1523/JNEUROSCI.2574-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fetsch CR, Deangelis GC, Angelaki DE. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory. Eur J Neurosci. 2010;31(10):1721–1729. doi: 10.1111/j.1460-9568.2010.07207.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fitzpatrick RC, Day BL. Probing the human vestibular system with galvanic stimulation. J Appl Physiol (Bethesda, Md : 1985) 2004;96(6):2301–2316. doi: 10.1152/japplphysiol.00008.2004. [DOI] [PubMed] [Google Scholar]
- Flach JM. Control with an eye for perception: precursors to an active psychophysics. Ecol Psychol. 1990;2(2):83–111. doi: 10.1207/s15326969eco0202_1. [DOI] [Google Scholar]
- Gawthrop P, Loram I, Lakie M, Gollee H. Intermittent control: a computational theory of human control. Biol Cybern. 2011;104(1–2):31–51. doi: 10.1007/s00422-010-0416-4. [DOI] [PubMed] [Google Scholar]
- Ghahramani Z, Wolpert D, Jordan MI. Computational models of sensorimotor integration. Adv Psychol. 1997;119:117–147. doi: 10.1016/S0166-4115(97)80006-4. [DOI] [Google Scholar]
- Gianna C, Heimbrand S, Gresty M. Thresholds for detection of motion direction during passive lateral whole-body acceleration in normal subjects and patients with bilateral loss of labyrinthine function. Brain Res Bull. 1996;40(5–6):443–449. doi: 10.1016/0361-9230(96)00140-2. [DOI] [PubMed] [Google Scholar]
- Gibson JJ. Percept Vis World. Boston: Houghton Mifflin; 1950. [Google Scholar]
- Girshick AR, Banks MS. Probabilistic combination of slant information: weighted averaging and robustness as optimal percepts. J Vis. 2009;9(9):e8. doi: 10.1167/9.9.8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gordon DA. Static and dynamic visual fields in human space perception. Josa. 1965;372(10):1296–1303. doi: 10.1364/JOSA.55.001296. [DOI] [PubMed] [Google Scholar]
- Gordon DA. Experimental isolation of the driver’s visual input. Hum Factors J Hum Factors Ergon Soc. 1966;8(2):129–138. doi: 10.1177/001872086600800203. [DOI] [PubMed] [Google Scholar]
- Gordon T, Lidberg M. Automated driving and autonomous functions on road vehicles. Veh Syst Dyn. 2015;53(7):958–994. doi: 10.1080/00423114.2015.1037774. [DOI] [Google Scholar]
- Grabherr L, Nicoucar K, Mast FW, Merfeld DM. Vestibular thresholds for yaw rotation about an earth-vertical axis as a function of frequency. Exp Brain Res. 2008;186(4):677–681. doi: 10.1007/s00221-008-1350-8. [DOI] [PubMed] [Google Scholar]
- Grant PR, Lee PT. Motion-visual phase-error detection in a flight simulator. J Aircr. 2007;44(3):927–935. doi: 10.2514/1.25807. [DOI] [Google Scholar]
- Grewal MS, Andrews AP. Kalman filtering: theory and practice using MATLAB. 2. New York: Wiley; 2001. [Google Scholar]
- Groen EL, Wentink M, Valente Pais AR, Mulder M, van Paassen MM (2006) Motion perception thresholds in flight simulation. In: AIAA modeling and simulation technologies conference and exhibit, American Institute of Aeronautics and Astronautics, Reston, Virigina, p e6254. doi:10.2514/6.2006-6254
- Groen EL, Bles W. How to use body tilt for the simulation of linear self motion. J Vestib Res Equilib Orientat. 2004;14(5):375–385. [PubMed] [Google Scholar]
- Grossberg S, Mingolla E, Viswanathan L. Neural dynamics of motion integration and segmentation within and across apertures. Vis Res. 2001;41(19):2521–2553. doi: 10.1016/S0042-6989(01)00131-6. [DOI] [PubMed] [Google Scholar]
- Grunwald AJ, Merhav SJ. Vehicular control by visual field cues-analytical model and experimental validation. IEEE Trans Syst Man Cybern. 1976;6(12):835–845. doi: 10.1109/TSMC.1976.4309480. [DOI] [Google Scholar]
- Gu Y, Angelaki DE, Deangelis GC (2008) Neural correlates of multisensory cue integration in macaque MSTd. Nat Neurosci 11(10):1201–1210. doi:10.1038/nn.2191 [DOI] [PMC free article] [PubMed]
- Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV. Magnetoencephalography-theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys. 1993;65(2):413–497. doi: 10.1103/RevModPhys.65.413. [DOI] [Google Scholar]
- Harris LR, Jenkin M, Zikovitz DC. Visual and non-visual cues in the perception of linear self-motion. Exp Brain Res. 2000;135(1):12–21. doi: 10.1007/s002210000504. [DOI] [PubMed] [Google Scholar]
- Heerspink H, Berkouwer W, Stroosma O, van Paassen MM, Mulder M, Mulder B (2005) Evaluation of vestibular thresholds for motion detection in the SIMONA research simulator. In: AIAA modeling and simulation technologies conference and exhibit, American Institute of Aeronautics and Astronautics, Reston, Virigina, p e6502. doi:10.2514/6.2005-6502
- Herrin GD, Neuhardt JB. An empirical model for automobile driver horizontal curve negotiation. Hum Factors. 1974;16(2):129–133. [Google Scholar]
- Hosman RJ (1996) Pilot’s perception and control of aircraft motions. Ph.d. Thesis, Delft University of Technology
- Hosman RJ, Stassen H. Pilot’s perception in the control of aircraft motions. Control Eng Pract. 1999;7(11):1421–1428. doi: 10.1016/S0967-0661(99)00111-2. [DOI] [PubMed] [Google Scholar]
- Hosman RJ, Van Der Vaart JC. Vestibular models and thresholds of motion perception: results of tests in a flight simulator. Tech. rep. Delft: Delft University of Technology; 1978. [Google Scholar]
- Houck JA, Simon W. Responses of Golgi tendon organs to forces applied to muscle tendon. J Neurophysiol. 1967;30(6):1466–1481. doi: 10.1152/jn.1967.30.6.1466. [DOI] [PubMed] [Google Scholar]
- Ishida M, Fushiki H, Nishida H, Watanabe Y. Self-motion perception during conflicting visual–vestibular acceleration. J Vestib Res Equilib orientat. 2008;18:267–272. [PubMed] [Google Scholar]
- Johansson G. Visual motion perception. Sci Am. 1975;232(6):76–88. doi: 10.1038/scientificamerican0675-76. [DOI] [PubMed] [Google Scholar]
- Johns TA, Cole DJ (2015) Measurement and mathematical model of a driver’s intermittent compensatory steering control. Vehicle system dynamics (In press) NVSD110,074. doi:10.1080/02699931.2011.628301
- Jonik P, Valente Pais AR, van Paassen MM, Mulder M (2011) Phase coherence zones in flight simulation. In: AIAA modeling and simulation technologies conference, American Institute of Aeronautics and Astronautics, Reston, Virigina, August, p e6555. doi:10.2514/6.2011-6555
- Kaliuzhna M, Prsa M, Gale S, Lee SJ, Blanke O. Learning to integrate contradictory multisensory self-motion cue pairings. J Vis. 2015;15(1):e10. doi: 10.1167/15.1.10. [DOI] [PubMed] [Google Scholar]
- Kalman RE. A new approach to linear filtering and prediction problems. J Basic Eng. 1960;82(1):35–45. doi: 10.1115/1.3662552. [DOI] [Google Scholar]
- Kandel E, Schwartz J, Jessell T. Principles of neural science. 4. New York: McGraw-Hill; 2000. [Google Scholar]
- Kandil FI, Rotter A, Lappe M. Driving is smoother and more stable when using the tangent point. J Vis. 2009;9(2009):11.1–11.11. doi: 10.1167/9.1.11. [DOI] [PubMed] [Google Scholar]
- Kandil FI, Rotter A, Lappe M. Car drivers attend to different gaze targets when negotiating closed vs. open bends. J Vis. 2010;10(2010):24.1–24.11. doi: 10.1167/10.4.24. [DOI] [PubMed] [Google Scholar]
- Kawakami O, Kaneoke Y, Maruyama K, Kakigi R, Okada T, Sadato N, Yonekura Y. Visual detection of motion speed in humans: spatiotemporal analysis by fMRI and MEG. Hum Brain Mapp. 2002;118:104–118. doi: 10.1002/hbm.10033. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Keen SD, Cole DJ. Application of time-variant predictive control to modelling driver steering skill. Veh Syst Dyn. 2011;49(4):527–559. doi: 10.1080/00423110903551626. [DOI] [Google Scholar]
- Keen SD, Cole DJ. Bias-free identification of a linear model-predictive steering controller from measured driver steering behavior. IEEE Trans Syst Man Cybern Part B Cybern. 2012;42(2):434–443. doi: 10.1109/TSMCB.2011.2167509. [DOI] [PubMed] [Google Scholar]
- Kingma H. Thresholds for perception of direction of linear acceleration as a possible evaluation of the otolith function. BMC Ear Nose Throat Disord. 2005 doi: 10.1186/1472-6815-5-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Knill DC. Robust cue integration: a Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. J Vis. 2007;7(7):e5. doi: 10.1167/7.7.5. [DOI] [PubMed] [Google Scholar]
- Koenderink JJ. Optic flow. Vis Res. 1986;26(1):161–179. doi: 10.1016/0042-6989(86)90078-7. [DOI] [PubMed] [Google Scholar]
- Kondoh T, Yamamura T, Kitazaki S, Kuge N, Boer ER. Identification of visual cues and quantification of drivers’ perception of proximity risk to the lead vehicle in car-following situations. J Mech Syst Transp Logist. 2008;1(2):170–180. doi: 10.1299/jmtl.1.170. [DOI] [Google Scholar]
- Körding KP, Beierholm U, Ma WJ, Quartz S, Tenenbaum JB, Shams L. Causal inference in multisensory perception. PloS One. 2007;2(9):e943. doi: 10.1371/journal.pone.0000943. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lam K, Kaneoke Y, Gunji A, Yamasaki H, Matsumoto E, Naito T, Kakigi R. Magnetic response of human extrastriate cortex in the detection of coherent and incoherent motion. Neuroscience. 2000;97(1):1–10. doi: 10.1016/S0306-4522(00)00037-3. [DOI] [PubMed] [Google Scholar]
- Land MF (1998) The visual control of steering. In: Harris LR, Jenkin M (eds) Vision and action, Cambridge University Press, Cambridge, UK, pp 163–180
- Land MF, Horwood J. Which parts of the road guide steering? Nature. 1995;377(6547):339–430. doi: 10.1038/377339a0. [DOI] [PubMed] [Google Scholar]
- Land MF, Lee DN. Where we look when we steer. Nature. 1994;369(6483):742–744. doi: 10.1038/369742a0. [DOI] [PubMed] [Google Scholar]
- Land MF, Tatler BW. Steering with the head. The visual strategy of a racing driver. Current Biol CB. 2001;11(15):1215–1220. doi: 10.1016/S0960-9822(01)00351-7. [DOI] [PubMed] [Google Scholar]
- Landy MS, Maloney LT, Johnston EB, Young M. Measurement and modeling of depth cue combination: in defense of weak fusion. Vis Res. 1995;35(3):389–412. doi: 10.1016/0042-6989(94)00176-M. [DOI] [PubMed] [Google Scholar]
- Lappe M, Bremmer F, Van Den Berg aV. Perception of self-motion from visual flow. Trends Cogn Sci. 1999;3(9):329–336. doi: 10.1016/S1364-6613(99)01364-9. [DOI] [PubMed] [Google Scholar]
- Lappi O, Lehtonen E, Pekkanen J, Itkonen T. Beyond the tangent point: gaze targets in naturalistic driving. J Vis. 2013;13(13):11. doi: 10.1167/13.13.11. [DOI] [PubMed] [Google Scholar]
- Legge GE, Campbell F. Displacement detection in human vision. Vis Res. 1981;21(2):205–213. doi: 10.1016/0042-6989(81)90114-0. [DOI] [PubMed] [Google Scholar]
- Lehtonen E, Lappi O, Kotkanen H, Summala H. Look-ahead fixations in curve driving. Ergonomics. 2013;56(1):34–44. doi: 10.1080/00140139.2012.739205. [DOI] [PubMed] [Google Scholar]
- Levitt H. Transformed up–down method in psychoacoustics. J Acoust Soc Am. 1970;49(2b):467–477. doi: 10.1121/1.1912375. [DOI] [PubMed] [Google Scholar]
- Ljung L. System identification: theory for the user. 2. Upper Saddle River: Prentice Hall; 1999. [Google Scholar]
- Lot R, Dal Bianco N. Lap time optimisation of a racing go-kart. Veh Syst Dyn. 2015;3114(January):1–21. [Google Scholar]
- Macadam CC. Application of an optimal preview control for simulation of closed-loop automobile driving. IEEE Trans Syst Man CybernSMC. 1981;11(6):393–399. doi: 10.1109/TSMC.1981.4308705. [DOI] [Google Scholar]
- Macadam CC. Understanding and modeling the human driver. Veh Syst Dyn. 2003;40(1–3):101–134. doi: 10.1076/vesd.40.1.101.15875. [DOI] [Google Scholar]
- MacNeilage PR, Banks MS, Berger DR, Bülthoff HH. A Bayesian model of the disambiguation of gravitoinertial force by visual cues. Exp Brain Res. 2007;179(2):263–290. doi: 10.1007/s00221-006-0792-0. [DOI] [PubMed] [Google Scholar]
- Mallery RM, Olomu OU, Uchanski RM, Militchin VA, Hullar TE. Human discrimination of rotational velocities. Exp Brain Res. 2010;204(1):11–20. doi: 10.1007/s00221-010-2288-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Maltenfort MG, Burke RE. Spindle model responsive to mixed fusimotor inputs and testable predictions of beta feedback effects. J Neurophysiol. 2003;89(5):2797–2809. doi: 10.1152/jn.00942.2002. [DOI] [PubMed] [Google Scholar]
- Mars F. Driving around bends with manipulated eye-steering coordination. J Vis. 2008;8(11):10.1–10.11. doi: 10.1167/8.11.10. [DOI] [PubMed] [Google Scholar]
- Matthews PB. Evidence from the use of vibration that the human long-latency stretch reflex depends upon spindle secondary afferents. J Physiol. 1984;348:383–415. doi: 10.1113/jphysiol.1984.sp015116. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mileusnic MP, Brown IE, Lan N, Loeb GE. Mathematical models of proprioceptors. I. Control and transduction in the muscle spindle. J Neurophysiol. 2006;96(4):1772–1788. doi: 10.1152/jn.00868.2005. [DOI] [PubMed] [Google Scholar]
- Mileusnic MP, Loeb GE. Mathematical models of proprioceptors. II. Structure and function of the Golgi tendon organ. J Neurophysiol. 2006;96(4):1789–1802. doi: 10.1152/jn.00869.2005. [DOI] [PubMed] [Google Scholar]
- Mingolla E. Neural models of motion integration and segmentation. Neural Netw Off J Int Neural Netw Soc. 2003;16(5–6):939–945. doi: 10.1016/S0893-6080(03)00099-6. [DOI] [PubMed] [Google Scholar]
- Monen J, Brenner E. Detecting changes in one’s own velocity from the optic flow. Perception. 1994;23:681–690. doi: 10.1068/p230681. [DOI] [PubMed] [Google Scholar]
- Nakayama K. Biological image motion processing: a review. Vis Res. 1985;25(5):625–660. doi: 10.1016/0042-6989(85)90171-3. [DOI] [PubMed] [Google Scholar]
- Naseri AR, Grant PR. Human discrimination of translational accelerations. Exp Brain Res. 2012;218(3):455–464. doi: 10.1007/s00221-012-3035-6. [DOI] [PubMed] [Google Scholar]
- Nash CJ, Cole DJ (2015) Development of a novel model of driver-vehicle steering control incorporating sensory dynamics. In: 24th International symposium on dynamics of vehicles on roads and tracks, Taylor & Francis, Graz, Austria
- Nesti A, Barnett-Cowan M, MacNeilage PR, Bülthoff HH. Human sensitivity to vertical self-motion. Exp Brain Res. 2014;232(1):303–314. doi: 10.1007/s00221-013-3741-8.
- Nesti A, Beykirch KA, Pretto P, Bülthoff HH. Self-motion sensitivity to visual yaw rotations in humans. Exp Brain Res. 2014;233(3):861–869. doi: 10.1007/s00221-014-4161-0.
- Nesti A, Beykirch KA, Pretto P, Bülthoff HH. Human discrimination of head-centred visual-inertial yaw rotations. Exp Brain Res. 2015. doi: 10.1007/s00221-015-4426-2.
- Newberry AC, Griffin MJ, Dowson M. Driver perception of steering feel. Proc Inst Mech Eng Part D J Automob Eng. 2007;221(4):405–415. doi: 10.1243/09544070JAUTO415.
- Nieuwenhuizen FM, Mulder M, van Paassen MM, Bülthoff HH. Influences of simulator motion system characteristics on pilot control behavior. J Guid Control Dyn. 2013;36(3):667–676. doi: 10.2514/1.59257.
- Odhams AMC, Cole DJ (2004) Models of driver speed choice in curves. In: 7th international symposium on advanced vehicle control (AVEC 04), pp 1–6
- Odhams AMC, Cole DJ. Application of linear preview control to modelling human steering control. Proc Inst Mech Eng Part D J Automob Eng. 2009;223(7):835–853. doi: 10.1243/09544070JAUTO1039.
- Odhams AMC, Cole DJ. Identification of the steering control behaviour of five test subjects following a randomly curving path in a driving simulator. Int J Veh Auton Syst. 2014;12(1):44. doi: 10.1504/IJVAS.2014.057863.
- Ohmi M. Egocentric perception through interaction among many sensory systems. Cogn Brain Res. 1996;5(1–2):87–96. doi: 10.1016/S0926-6410(96)00044-4.
- Onur C (2014) Developing a computational model of the pilot’s best possible expectation of aircraft state given vestibular and visual cues. Master’s thesis, Georgia Institute of Technology
- Oruç I, Maloney LT, Landy MS. Weighted linear cue combination with possibly correlated error. Vis Res. 2003;43(23):2451–2468. doi: 10.1016/S0042-6989(03)00435-8.
- Peng H (2002) Evaluation of driver assistance systems—a human centered approach. In: Proceedings of 6th symposium on advanced vehicle control
- Pick AJ, Cole DJ. Dynamic properties of a driver’s arms holding a steering wheel. Proc Inst Mech Eng Part D J Automob Eng. 2007;221(12):1475–1486. doi: 10.1243/09544070JAUTO460.
- Pick AJ, Cole DJ. A mathematical model of driver steering control including neuromuscular dynamics. J Dyn Syst Meas Control. 2008;130(3):031004. doi: 10.1115/1.2837452.
- Plöchl M, Edelmann J. Driver models in automobile dynamics application. Veh Syst Dyn. 2007;45(7–8):699–741. doi: 10.1080/00423110701432482.
- Pool DM, Mulder M, van Paassen MM, van der Vaart JC. Effects of peripheral visual and physical motion cues in roll-axis tracking tasks. J Guid Control Dyn. 2008;31(6):1608–1622. doi: 10.2514/1.36334.
- Pool DM, Valente Pais AR, de Vroome AM, van Paassen MM, Mulder M. Identification of nonlinear motion perception dynamics using time-domain pilot modeling. J Guid Control Dyn. 2012;35(3):749–763. doi: 10.2514/1.56236.
- Poppele RE, Bowman RJ. Quantitative description of linear behavior of mammalian muscle spindles. J Neurophysiol. 1970;33(1):59–72. doi: 10.1152/jn.1970.33.1.59.
- Pouget A, Dayan P, Zemel R. Information processing with population codes. Nat Rev Neurosci. 2000;1(2):125–132. doi: 10.1038/35039062.
- Pretto P, Nesti A, Nooij S, Losert M, Bülthoff HH (2014) Variable roll-rate perception in driving simulation. In: Driving simulation conference, vol 3(1), pp 1–7
- Prochazka A. Quantifying proprioception. Prog Brain Res. 1999;123:133–142. doi: 10.1016/S0079-6123(08)62850-2.
- Prokop G. Modeling human vehicle driving by model predictive online optimization. Veh Syst Dyn. 2001;35(1):19–53.
- Proske U, Gandevia SC. The kinaesthetic senses. J Physiol. 2009;587(17):4139–4146. doi: 10.1113/jphysiol.2009.175372.
- Prsa M, Gale S, Blanke O. Self-motion leads to mandatory cue fusion across sensory modalities. J Neurophysiol. 2012;108(8):2282–2291. doi: 10.1152/jn.00439.2012.
- Rader AA, Oman CM, Merfeld DM. Perceived tilt and translation during variable-radius swing motion with congruent or conflicting visual and vestibular cues. Exp Brain Res. 2011;210(2):173–184. doi: 10.1007/s00221-011-2612-4.
- Raudies F, Neumann H. A review and evaluation of methods estimating ego-motion. Comput Vis Image Underst. 2012;116(5):606–633. doi: 10.1016/j.cviu.2011.04.004.
- Reymond G, Kemeny A, Droulez J, Berthoz A. Role of lateral acceleration in curve driving: driver model and experiments on a real vehicle and a driving simulator. Hum Factors J Hum Factors Ergon Soc. 2001;43(3):483–495. doi: 10.1518/001872001775898188.
- Riemersma JB. Visual control during straight road driving. Acta Psychol (Amst). 1981;48(1–3):215–225. doi: 10.1016/0001-6918(81)90063-9.
- Ritchie ML, McCoy WK, Welde WL. A study of the relation between forward velocity and lateral acceleration in curves during normal driving. Hum Factors J Hum Factors Ergon Soc. 1968;10(3):255–258. doi: 10.1177/001872086801000307.
- Robertshaw KD, Wilkie RM. Does gaze influence steering around a bend? J Vis. 2008;8(4):18.1–18.13. doi: 10.1167/8.4.18.
- Rock I, Victor J. Vision and touch: an experimentally created conflict between the two senses. Science. 1964;143(3606):594–596. doi: 10.1126/science.143.3606.594.
- Rodchenko V, Boris S, White A (2000) In-flight estimation of pilots’ acceleration sensitivity thresholds. In: Modeling and simulation technologies conference, American Institute of Aeronautics and Astronautics, Reston, Virginia, p e4292. doi:10.2514/6.2000-4292
- Samji A, Reid LD. The detection of low-amplitude yawing motion transients in a flight simulator. IEEE Trans Syst Man Cybern. 1992;22(2):300–306. doi: 10.1109/21.148432.
- Scarfe P, Hibbard PB. Statistically optimal integration of biased sensory estimates. J Vis. 2011;11(7):e12. doi: 10.1167/11.7.12.
- Seilheimer RL, Rosenberg A, Angelaki DE. Models and processes of multisensory cue combination. Curr Opin Neurobiol. 2014;25:38–46. doi: 10.1016/j.conb.2013.11.008.
- Sharp RS, Valtetsiotis V. Optimal preview car steering control. Veh Syst Dyn Suppl. 2001;35:101–117.
- Shinar D, McDowell ED, Rockwell TH. Eye movements in curve negotiation. Hum Factors J Hum Factors Ergon Soc. 1977;19(1):63–71. doi: 10.1177/001872087701900107.
- Soyka F, Robuffo Giordano P, Beykirch KA, Bülthoff HH. Predicting direction detection thresholds for arbitrary translational acceleration profiles in the horizontal plane. Exp Brain Res. 2011;209(1):95–107. doi: 10.1007/s00221-010-2523-9.
- Soyka F, Giordano PR, Barnett-Cowan M, Bülthoff HH. Modeling direction discrimination thresholds for yaw rotations around an earth-vertical axis for arbitrary motion profiles. Exp Brain Res. 2012;220(1):89–99. doi: 10.1007/s00221-012-3120-x.
- Soyka F, Bülthoff HH, Barnett-Cowan M. Temporal processing of self-motion: modeling reaction times for rotations and translations. Exp Brain Res. 2013;228(1):51–62. doi: 10.1007/s00221-013-3536-y.
- Soyka F, Bülthoff HH, Barnett-Cowan M. Integration of semi-circular canal and otolith cues for direction discrimination during eccentric rotations. PLoS One. 2015;10(8):e0136925. doi: 10.1371/journal.pone.0136925.
- Soyka F, Teufel H, Beykirch KA, Robuffo Giordano P, Butler JS, Nieuwenhuizen FM, Bülthoff HH (2009) Does jerk have to be considered in linear motion simulation? In: Proceedings of the AIAA modeling and simulation technologies conference, pp 1381–1388. doi:10.2514/6.2009-6245
- Steen J, Damveld HJ, Happee R, van Paassen MM, Mulder M (2011) A review of visual driver models for system identification purposes. In: 2011 IEEE international conference on systems, man, and cybernetics, pp 2093–2100. doi:10.1109/ICSMC.2011.6083981
- Steinhausen W. Über die Beobachtung der Cupula in den Bogengangsampullen des Labyrinths des lebenden Hechts [On observing the cupula in the semicircular canal ampullae of the labyrinth of the living pike]. Pflügers Archiv für die Gesamte Physiologie des Menschen und der Tiere. 1933;232(1):500–512. doi: 10.1007/BF01754806.
- Tabak S, Collewijn H, Boumans LM, van der Steen J. Gain and delay of human vestibulo-ocular reflexes to oscillation and steps of the head by a reactive torque helmet. Acta Oto-laryngol. 1997;117(6):785–795. doi: 10.3109/00016489709114203.
- Telban RJ, Cardullo F (2005) Motion cueing algorithm development: human-centered linear and nonlinear approaches. NASA technical report. NASA Langley Research Center, Hampton, VA, USA
- Thommyppillai M, Evangelou S, Sharp RS. Car driving at the limit by adaptive linear optimal preview control. Veh Syst Dyn. 2009;47(12):1535–1550. doi: 10.1080/00423110802673109.
- Timings JP, Cole DJ. Minimum maneuver time calculation using convex optimization. J Dyn Syst Meas Control. 2013;135(3):031015. doi: 10.1115/1.4023400.
- Timings JP, Cole DJ. Robust lap-time simulation. Proc Inst Mech Eng Part D J Automob Eng. 2014;228(10):1200–1216. doi: 10.1177/0954407013516102.
- Todorov E. Stochastic optimal control and estimation methods adapted to the noise characteristics of the sensorimotor system. Neural Comput. 2005;17(5):1084–1108. doi: 10.1162/0899766053491887.
- Trojaborg W, Sindrup EH. Motor and sensory conduction in different segments of the radial nerve in normal subjects. J Neurol Neurosurg Psychiatry. 1969;32(4):354–359. doi: 10.1136/jnnp.32.4.354.
- Ullman S. The interpretation of visual motion. Cambridge: MIT Press; 1979.
- Ungoren A, Peng H. An adaptive lateral preview driver model. Veh Syst Dyn. 2005;43(4):245–259. doi: 10.1080/00423110412331290419.
- Vaitl D, Mittelstaedt H, Saborowski R, Stark R, Baisch F. Shifts in blood volume alter the perception of posture: further evidence for somatic graviception. Int J Psychophysiol. 2002;44(1):1–11. doi: 10.1016/S0167-8760(01)00184-2.
- Valente Pais AR, Mulder M, van Paassen MM, Wentink M, Groen EL (2006) Modeling human perceptual thresholds in self-motion perception. In: AIAA modeling and simulation technologies conference and exhibit, American Institute of Aeronautics and Astronautics, Reston, Virginia, p e6626. doi:10.2514/6.2006-6626
- Valente Pais AR, van Paassen MM, Mulder M, Wentink M (2010b) Perception of combined visual and inertial low-frequency yaw motion. In: AIAA modeling and simulation technologies conference, American Institute of Aeronautics and Astronautics, Reston, Virginia, August, pp 1–10. doi:10.2514/6.2010-8093
- Valente Pais AR, van Paassen MM, Mulder M, Wentink M (2011) Effect of performing a boundary-avoidance tracking task on the perception of coherence between visual and inertial cues. In: AIAA modeling and simulation technologies conference, American Institute of Aeronautics and Astronautics, Reston, Virginia, August, p e6324. doi:10.2514/6.2011-6324
- Valente Pais AR, van Paassen MM, Mulder M, Wentink M. Perception coherence zones in flight simulation. J Aircr. 2010;47(6):2039–2048. doi: 10.2514/1.C000281.
- Valente Pais AR, Pool DM, de Vroome AM, van Paassen MM, Mulder M. Pitch motion perception thresholds during passive and active tasks. J Guid Control Dyn. 2012;35(3):904–918. doi: 10.2514/1.54987.
- Valko Y, Lewis RF, Priesol AJ, Merfeld DM. Vestibular labyrinth contributions to human whole-body motion discrimination. J Neurosci. 2012;32(39):13537–13542. doi: 10.1523/JNEUROSCI.2157-12.2012.
- van der Steen HFAM (1998) Self-motion perception. PhD thesis, Delft University of Technology, The Netherlands
- Vaseghi S. Advanced digital signal processing and noise reduction. 3rd ed. Chichester: Wiley; 2005.
- Wallis G, Chatziastros A, Bülthoff HH. An unexpected role for visual feedback in vehicle steering control. Curr Biol. 2002;12(4):295–299. doi: 10.1016/S0960-9822(02)00685-1.
- Weber E. Annotationes anatomicae et physiologicae [Anatomical and physiological annotations]. Leipzig: CF Koehler; 1834.
- Wentink M, Correia Grácio BJ, Bles W (2009) Frequency dependence of allowable differences in visual and vestibular motion cues in a simulator. In: AIAA modeling and simulation technologies conference, American Institute of Aeronautics and Astronautics, Reston, Virginia, August, p e6248. doi:10.2514/6.2009-6248
- Wierwille WW, Casali JG, Repa BS. Driver steering reaction time to abrupt-onset crosswinds, as measured in a moving-base driving simulator. Hum Factors. 1983;25(1):103–116. doi: 10.1177/001872088302500110.
- Wolpert DM, Diedrichsen J, Flanagan JR. Principles of sensorimotor learning. Nat Rev Neurosci. 2011;12:739–751. doi: 10.1038/nrn3112.
- Wolpert DM, Ghahramani Z. Computational principles of movement neuroscience. Nat Neurosci Suppl. 2000;3:1212–1217. doi: 10.1038/81497.
- Wright WG, DiZio P, Lackner JR. Vertical linear self-motion perception during visual and inertial motion: more than weighted summation of sensory inputs. J Vestib Res Equilib Orientat. 2005;15(4):185–195.
- Young LR. Optimal estimator models for spatial orientation and vestibular nystagmus. Exp Brain Res. 2011;210(3–4):465–476. doi: 10.1007/s00221-011-2595-1.
- Young LR, Meiry JL. A revised dynamic otolith model. Aerosp Med. 1968;39:606–608.
- Young LR, Oman CM. Model for vestibular adaptation to horizontal rotation. Aerosp Med. 1969;40(10):1076–1080.
- Yuille AL, Bülthoff HH. Bayesian decision theory and psychophysics. In: Knill DC, Richards W, editors. Perception as Bayesian inference. Cambridge: Cambridge University Press; 1996. pp. 123–162.
- Zaal PMT, Pool DM, de Bruin J, Mulder M, van Paassen MM. Use of pitch and heave motion cues in a pitch control task. J Guid Control Dyn. 2009;32(2):366–377. doi: 10.2514/1.39953.
- Zaal PMT, Pool DM, Chu QP, Mulder M, van Paassen MM, Mulder JA. Modeling human multimodal perception and control using genetic maximum likelihood estimation. J Guid Control Dyn. 2009;32(4):1089–1099. doi: 10.2514/1.42843.
- Zaal PMT, Pool DM, Mulder M, van Paassen MM. Multimodal pilot control behavior in combined target-following disturbance-rejection tasks. J Guid Control Dyn. 2009;32(5):1418–1428. doi: 10.2514/1.44648.
- Zaal PMT, Pool DM, Mulder M, van Paassen MM, Mulder JA. Identification of multimodal pilot control behavior in real flight. J Guid Control Dyn. 2010;33(5):1527–1538. doi: 10.2514/1.47908.
- Zaal PMT, Pool DM, van Paassen MM, Mulder M. Comparing multimodal pilot pitch control behavior between simulated and real flight. J Guid Control Dyn. 2012;35(5):1456–1471. doi: 10.2514/1.56268.
- Zaal PMT, Nieuwenhuizen FM, van Paassen MM, Mulder M. Modeling human control of self-motion direction with optic flow and vestibular motion. IEEE Trans Syst Man Cybern. 2013;43(2):544–556. doi: 10.1109/TSMCB.2012.2212188.
- Zacharias GL (1977) Motion sensation dependence on visual and vestibular cues. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA
- Zacharias GL, Caglayan AK, Sinacori JB. A model for visual flow-field cueing and self-motion estimation. IEEE Trans Syst Man Cybern SMC. 1985;15(3):385–389. doi: 10.1109/TSMC.1985.6313373.
- Zacharias GL, Young LR. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation. Exp Brain Res. 1981;41(4):159–171. doi: 10.1007/BF00236605.
- Zaichik L, Rodchenko V, Rufov I, Yashin Y, White A (1999) Acceleration perception. In: Modeling and simulation technologies conference and exhibit, American Institute of Aeronautics and Astronautics, Reston, Virginia, p e4334. doi:10.2514/6.1999-4334
- Zaidel A, Turner AH, Angelaki DE. Multisensory calibration is independent of cue reliability. J Neurosci. 2011;31(39):13949–13962. doi: 10.1523/JNEUROSCI.2732-11.2011.
- Zupan LH, Merfeld DM, Darlot C. Using sensory weighting to model the influence of canal, otolith and visual cues on spatial orientation and eye movements. Biol Cybern. 2002;86(3):209–230. doi: 10.1007/s00422-001-0290-1.