PLoS Comput Biol. 2011 Nov 3;7(11):e1002253. doi: 10.1371/journal.pcbi.1002253

Learning the Optimal Control of Coordinated Eye and Head Movements

Sohrab Saeb 1,*, Cornelius Weber 2, Jochen Triesch 1
Editor: Jörn Diedrichsen3
PMCID: PMC3207939  PMID: 22072953

Abstract

Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in the head-restrained condition, and the relative contribution of eye and head to the total gaze shift in the head-free condition. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.

Author Summary

Human beings and many other species redirect their gaze towards targets of interest through rapid gaze shifts known as saccades. These are made approximately three to four times every second, and larger saccades result from fast and concurrent movements of the animal's eyes and head. Experimental studies have revealed that during saccades the motor system follows certain principles, such as respecting a specific relationship between the relative contributions of the eye and head motor systems to the total gaze shift. Various researchers have hypothesized that these principles are consequences of optimality criteria implemented in the brain, but it remains unclear how the brain can learn such optimal behavior. We propose a new model that uses a plausible learning mechanism to satisfy an optimality criterion. We show that after learning, the model is able to reproduce motor behavior with biologically plausible properties. In addition, it predicts the nature of the learning signals. Further experimental research is necessary to test the validity of our model.

Introduction

Active perception of the visual world necessitates frequent redirection of our gaze. Such visual orienting behavior comprises the coordinated control of several motor systems, i.e. the coordinated movement of several parts of the body including the eyes, the head, and the torso. The coordinated movements of the eyes and the head during fast gaze shifts are called saccadic eye and head movements and are usually investigated in two conditions: head-restrained and head-free.

In the head-restrained condition, head movement is prevented so that gaze shifts rely only on eye movements. These eye movements, also known as eye-only saccades, possess certain physical properties. The relationship between the duration, peak velocity and amplitude of saccades is known as the main sequence [1]. This relationship is stereotyped: the duration increases linearly with the saccadic amplitude, while the peak velocity increases linearly for low amplitudes and undergoes a soft saturation for larger amplitudes [2]–[4]. The velocity profiles of saccadic eye movements are smooth and symmetric for small amplitudes, while they become skewed for larger amplitudes [5], [6].

In the head-free condition, the head is allowed to accompany the eye in visual orienting. These movements are usually composed of two phases: in the first phase, the gaze is rapidly shifted to the target using both the eyes and the head. Once the gaze reaches the target, the second phase starts. In the second phase, the head continues moving in the same direction as in the first phase, but the eyes move backwards with the same velocity as the head. As a result, the gaze remains stabilized on the target. The general belief is that the vestibulo-ocular reflex (VOR) has a fundamental role in generating the coordination of eye and head during the second phase (see [7] for a review).

When the head is free to move, the kinematic characteristics of saccadic eye movements change dramatically compared to the head-restrained condition. As the gaze shift amplitude increases, the eye movement amplitude approaches its limits, and the head contribution becomes more prominent. Therefore, the eye's position and velocity are not determined solely by the current gaze error, but also depend on the concurrent head position and velocity. Furthermore, in the head-free condition the eye's peak velocity declines, its movement duration increases, and its velocity profiles change [8]–[14].

Previous Studies

Previous computational studies on saccadic eye and head movements revolve around two questions. The first question concerns the optimality principles underlying the kinematic characteristics, and the second concerns the neural architecture that generates appropriate control signals for driving the eye and head muscles. Studies dealing with both questions consider linear eye and head plants, i.e. linear differential equations describing the mechanical properties of the eye and head motor systems. Such models are considered sufficient for modeling the dynamics of the oculomotor and head motor systems [15], [16].

Optimality principles

During saccadic eye and head movements, visual information is not properly transmitted to the brain, either due to motion blur or because of neural suppression induced by higher regions [17]. Therefore, saccadic gaze shifts should be as fast as possible in order to increase the amount of time the image is stabilized on the retina. This has been a fundamental assumption of many studies aiming to find the optimality principles underlying the kinematic characteristics of saccades.

Early studies proposed that the saccade trajectories are optimized in such a way that they minimize the time to reach the target [18]. This assumption, known as the minimum-time principle, leads to a bang-bang control solution [19], for which the resulting velocity profiles are not biologically plausible [20]. Therefore, additional assumptions are necessary.

A key assumption suggested by Harris and Wolpert was that there exists additive white noise in the neural command whose instantaneous power (variance) is proportional to that of the command signal [21]. Under this assumption, the variance of the final eye position increases as one tries to decrease the saccadic duration by recruiting larger command signals. Therefore, in addition to the saccadic duration, the variance of the eye position should also be minimized; because of this property, this principle is also called the minimum-variance principle. As a result of these two assumptions, a trade-off emerges between the speed and the accuracy of saccades, and the optimal solution to this trade-off is a trajectory that is biologically realistic [22].

Kardamakis and Moschovakis suggested another optimality principle based on the minimum-effort rule and optimal control theory [23]. This principle was used to obtain optimal control signals for both the oculomotor and the head motor system in coordinated eye and head movements. The minimum-effort rule requires that the squared eye and head torque signals, summed and integrated over the movement period, be minimized in order to obtain the optimal control signal. The optimization process uses boundary conditions for the gaze position and is only applied to the first phase of coordinated eye and head movements. In the head-restrained condition, this method achieves unimodal velocity profiles with shorter acceleration and longer deceleration phases, compatible with many experimental findings [5], [6]. For the head-free condition, the contribution of eye and head to the total gaze shift obtained by this method is biologically realistic, and the eye velocity profiles become double-peaked as in the experiments [24].

Architectures

The functional architectures suggested for the control of saccadic eye and head movements can be categorized into two groups: architectures for the head-restrained and for the head-free control problem. Since the goal of our study is to solve these two control problems using a single architecture, we review the existing models based on their structure and the optimization method they use, rather than the control problem they are supposed to solve. From this perspective, models can be categorized into feedback models, which use gaze feedback to control saccades, and independent control schemes, which do not require gaze feedback.

The first gaze feedback model, suggested by Laurutis and Robinson [25], was an extension of position control models of the saccadic system [26], [27] that incorporated gaze feedback signals. This model was thereafter used and extended by others [9], [28], [29]. The gaze error signal used in feedback models is internally estimated, such that no visual feedback is necessary. This is mainly for two reasons: first, vision is impaired during fast gaze shifts, and second, there is a retinal processing delay of about 40–50 ms, which can make the controller unstable [30]. Therefore, the gaze feedback signal can be regarded as an internal feedback. A more recent model by Chen-Harris and colleagues estimates the gaze feedback in a more elaborate way [31]. The internal feedback in this model consists of two forward models: a forward model of the oculomotor plant that predicts the state of the eye, and a forward model of the target motion that predicts the state of the target. This feedback, used together with the absolute target position, provides a signal to drive an optimized feedback controller that is based on the minimum-variance principle of Harris and Wolpert [22] and requires re-optimization for each saccadic duration.

The independent eye and head control models rely on the dynamics of their burst generator (BG) units rather than on gaze feedback to generate the control signals [12], [32], [33]. These BG units are themselves closed-loop controllers that use efference copies of the eye and head motor command signals. Although the eye and the head control circuits have independent dynamics, there are ways through which they can influence each other. Independent control models usually assume that the relative contribution of the eye and head components to the gaze shift is known beforehand. However, a recent neural model suggested by Kardamakis and colleagues [33] is able to reproduce realistic contributions of eye and head through the communication between the two circuits, without such an assumption. The parameters of this model are either set according to experimental findings or optimized using a genetic algorithm.

Our Contribution

The optimality principle studies by Harris and Wolpert [21] and Kardamakis and Moschovakis [23] have not provided any incremental learning mechanism for their optimization procedures. In fact, the optimization procedures used in these studies are based on Pontryagin's maximum principle [34], which requires boundary conditions at the initial and final times of the saccadic movement and provides a global analytical solution rather than a local adaptation mechanism. In the model suggested by Kardamakis and Moschovakis, the cost was evaluated for several values of the gaze shift duration, and the model parameters were eventually set to satisfy the trade-off between effort and duration. It may be speculated that such a solution is a result of evolution; nevertheless, numerous experimental results indicate that saccadic eye and head movements are constantly adapted [35]–[37].

The neural control architectures that have been proposed to generate the eye and head control signals do not use any optimality criteria to tune their parameters. The parameters of such models are usually hand-tuned, or adjusted by a global optimization algorithm, such that the model's response fits the experimental data. The only exception we found is the model by Chen-Harris and colleagues [31], which relies on its internal feedback process to generate neural command signals.

As an alternative control scheme, we introduce an open-loop neural architecture. We try to obtain an adaptation mechanism that on the one hand can be implemented by the brain circuitry (see Discussion), and on the other hand minimizes a cost function. To this end, we suggest a cost function that does not directly depend on the saccadic duration, and therefore allows for a gradient descent based solution without any need to define boundary conditions. The control pathway of our model is feedforward, and is constantly calibrated by an adaptation mechanism that implicitly evaluates the optimality of the controller with respect to the cost function and induces parameter changes via a local learning rule. Therefore, our model can be regarded as a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme. It extends our previous model of eye-only saccade generation [38] to coordinated eye and head movements.

Methods

The model architecture consists of two pathways: feedforward control and adaptation. The feedforward control pathway comprises a spatiotemporal map that performs spatial-to-temporal transformation as shown in Figure 1. The adaptation pathway is based on the learning rules derived from a cost function (see Adaptation).

Figure 1. Model Architecture.

The input, left, consists of one column of delay units per oculocentric position of the target. The read-out neurons (gray units; one for eye and one for head control) are linear, and each weight parameter $w_{ij}$ is adapted locally by the corresponding adaptation unit. The solid lines indicate the control signal pathway and the dashed lines represent the adaptation signal pathway.

Spatial-to-Temporal Transformation

Saccades are produced by a precisely timed pattern of activity within the motor neurons innervating the eye and the head muscle systems. However, the desired gaze shift is represented spatially in areas such as the superior colliculus [39], [40]. This is called the spatial-to-temporal transformation problem (STTP) [41].

Here, we suggest a spatiotemporal map to perform such a transformation. This map comprises several columns (delay lines), each one including a number of neurons as shown in Figure 1. There is one column per oculocentric position of the target, i.e. the desired gaze shift amplitude. Only one visual dimension (e.g. the horizontal position) is modeled. The activity of the neurons in the columns is only dependent on the desired gaze shift amplitude and the progress of time. The spatial-to-temporal transformation is accomplished when these activities are integrated by two read-out neurons (gray units in Figure 1) to create the neural control signals that drive the eye and the head plants.

When an object triggers the initiation of a saccade, a single column corresponding to the desired gaze shift amplitude is activated. The activation of a column means that a wave of activity propagates through the neurons of that column, starting from the first neuron. The firing rate of each neuron changes as a Gaussian function. Given that column $j$ is activated at time $t_0$, this propagation can be formulated as:

$$x_{ij}(t) = K \exp\left(-\frac{(t - t_0 - i\,\Delta t)^2}{2\sigma^2}\right) \qquad (1)$$

where $x_{ij}(t)$ represents the instantaneous firing rate of neuron $i$ in column $j$, $\Delta t$ is the sampling period, $\sigma^2$ is the variance, and $K$ scales the height of the activity peak.

The two linear read-out neurons integrate the activity of the spatiotemporal map by means of weighted connections. This linear combination forms the neural command signals, $u_e(t)$ and $u_h(t)$, needed to drive the eye and head plants:

$$u_e(t) = \sum_{j=1}^{M} \sum_{i=1}^{N} w^{e}_{ij}\, x_{ij}(t) \qquad (2)$$

$$u_h(t) = \sum_{j=1}^{M} \sum_{i=1}^{N} w^{h}_{ij}\, x_{ij}(t) \qquad (3)$$

Here, $w^{e}_{ij}$ and $w^{h}_{ij}$ represent the weighted connections between neuron $i$ in column $j$ and the eye and head read-out neurons, respectively, $M$ is the total number of columns, and $N$ is the number of neurons in each column. Since we allow the neural command signals $u_e(t)$ and $u_h(t)$ to become negative, we consider them as the difference between the firing rates of the agonist and the antagonist motoneurons [42] driving each plant.

The response of the eye plant is the eye position in head coordinates, $\theta_e(t)$, and the response of the head plant is the head position in body coordinates, $\theta_h(t)$. The details of these plant models as well as their corresponding responses are given in Text S1.
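To make the feedforward pathway concrete, the following is a minimal numerical sketch in Python. All parameter values, and the first-order plants standing in for the plant models of Text S1, are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

# Minimal sketch of the feedforward pathway in Figure 1. All numbers here,
# and the first-order plants used in place of the plant models of Text S1,
# are illustrative assumptions, not the paper's actual settings.
dt = 0.001                        # simulation time step (1 ms)
T_steps = 1200                    # samples per trial (1.2 s)
N = 50                            # neurons per column (delay line)
t = np.arange(T_steps) * dt

def column_activity(t0, delta=0.02, sigma=0.02, K=1.0):
    """Equation 1: Gaussian activity wave of the single active column.
    Row i holds x_i(t), the firing rate of neuron i, peaking at t0 + i*delta;
    the delay line spans the whole trial so that tonic fixation activity
    can be represented."""
    peaks = t0 + np.arange(N) * delta
    return K * np.exp(-((t[None, :] - peaks[:, None]) ** 2) / (2 * sigma ** 2))

def plant_response(u, tau):
    """Position produced by a command u(t): convolution with a first-order
    impulse response h(t) = exp(-t/tau)/tau (unit DC gain)."""
    h = np.exp(-t / tau) / tau
    return np.convolve(u, h)[:T_steps] * dt

x = column_activity(t0=0.05)      # the column selected by the target position
w_eye = np.zeros(N)               # weights to the eye read-out neuron
w_head = np.zeros(N)              # weights to the head read-out neuron
u_eye, u_head = w_eye @ x, w_head @ x        # Equations 2 and 3 (one column)
theta_e = plant_response(u_eye, tau=0.15)    # eye position in head coordinates
theta_h = plant_response(u_head, tau=0.40)   # head position in body coordinates
```

Because only one column is active per saccade, the double sum of Equations 2 and 3 reduces here to a single weighted sum over the neurons of that column.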

Adaptation

The adaptation mechanism modifies the connection weights of the neural controller through several trials, such that it approaches an optimal behavior. Since the optimal behavior is determined by a cost function, adaptation implies the minimization of that cost function.

Before introducing the cost function, let us define the gaze error as:

$$e(t) = \theta_T - \theta_e(t) - \theta_h(t) \qquad (4)$$

where $\theta_T$ is the target object position in body coordinates, and $\theta_e(t)$ and $\theta_h(t)$ are as defined before. For simplicity, we have assumed that the axes of eye and head rotation are perfectly aligned.

We define a cost function that addresses the following objectives:

  1. The gaze should reach the target as soon as possible and then stand still on the target position. Therefore, the cost function should depend on the absolute value of the gaze error, $|e(t)|$. This dependency can be established via many different functions of the gaze error; three examples are shown in Figure 2. Convex functions such as the quadratic do not seem to be a good choice, since they barely penalize small gaze errors. We proceed with the absolute value function because it results in more compatibility with neurophysiological observations, as we will see in the Discussion.

  2. The power of the neural control signal should be constrained. This assumption may be viewed as a regularization [43]. It also addresses the problem of signal-dependent noise [21], as it reduces the variability of the neural control signal by preventing its power from becoming too large. Since the neural control signal is linearly dependent on the weight values, the cost function should depend on the absolute values of the weight parameters. Thus, large values of these parameters will be penalized regardless of their sign.

Figure 2. The first term of the cost function as a function of the gaze error $e$.

Three example functions (quadratic, absolute value, and square root) are shown.

Accounting for these objectives, we formulate the cost function as:

$$C = \int_{t_0}^{t_0+T} |e(t)|\, dt \;+\; \lambda_e \sum_{i,j} \big|w^{e}_{ij}\big| \;+\; \lambda_h \sum_{i,j} \big|w^{h}_{ij}\big| \qquad (5)$$

The time integral starts at saccade onset $t_0$, and $T$ has a sufficiently large value so that the integral covers the whole movement duration. $\lambda_e$ and $\lambda_h$ are positive coefficients determining the contribution of the eye and head weight-limiting terms to the total cost, respectively. We set these coefficients to the values that lead to results with the most similarity to the experimental data (see Results).

It is worth noting that the integration time Inline graphic also covers part of the fixation period. This property of the proposed cost function facilitates the derivation of weight adaptation rules in case of delayed visual error, as studied on humans [44] and on macaque monkeys [45]. These studies show that a delayed visual error signal, up to several hundred milliseconds, is still able to induce saccadic adaptation.
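Numerically, the cost of a single simulated trial reduces to a sum over sampled gaze errors plus L1 penalties on the weights; a minimal sketch (the coefficient values are placeholders, not the fitted ones):

```python
import numpy as np

def cost(e, w_eye, w_head, lam_e=0.01, lam_h=0.01, dt=0.001):
    """Equation 5: time-integrated absolute gaze error plus L1 penalties on
    the eye and head weights (the coefficient values are placeholders)."""
    return (np.sum(np.abs(e)) * dt
            + lam_e * np.sum(np.abs(w_eye))
            + lam_h * np.sum(np.abs(w_head)))
```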

The adaptable parameters of our model are the weights projecting from the spatiotemporal map to the two read-out neurons. We use a gradient descent method for minimizing the cost function. Using this method, the weight update rules are obtained as (see Text S1):

$$\Delta w^{e}_{ij} = \eta_e \left[ \int_{t_0}^{t_0+T} \operatorname{sgn}\!\big(e(t)\big)\, y^{e}_{ij}(t)\, dt \;-\; \lambda_e \operatorname{sgn}\!\big(w^{e}_{ij}\big) \right] \qquad (6)$$

$$\Delta w^{h}_{ij} = \eta_h \left[ \int_{t_0}^{t_0+T} \operatorname{sgn}\!\big(e(t)\big)\, y^{h}_{ij}(t)\, dt \;-\; \lambda_h \operatorname{sgn}\!\big(w^{h}_{ij}\big) \right] \qquad (7)$$

where $\eta_e$ and $\eta_h$ are adaptation rates, $\operatorname{sgn}(\cdot)$ is the signum function, and:

$$y^{e}_{ij}(t) = \int_{t_0}^{t} h_e(t - t')\, x_{ij}(t')\, dt' \qquad (8)$$

$$y^{h}_{ij}(t) = \int_{t_0}^{t} h_h(t - t')\, x_{ij}(t')\, dt' \qquad (9)$$

The functions $h_e(t)$ and $h_h(t)$ represent the impulse responses of the eye and head plants, respectively.
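The structure of these rules follows from differentiating Equation 5; a brief sketch of the step that Text S1 carries out in full, using the notation defined above:

```latex
% Since \theta_e(t) = (h_e * u_e)(t) and u_e(t) is linear in the weights (Eq. 2),
%   \partial\theta_e(t) / \partial w^{e}_{ij} = (h_e * x_{ij})(t) = y^{e}_{ij}(t),
% and since e(t) = \theta_T - \theta_e(t) - \theta_h(t) (Eq. 4),
%   \partial|e(t)| / \partial w^{e}_{ij} = -\operatorname{sgn}(e(t))\, y^{e}_{ij}(t).
% Hence
\frac{\partial C}{\partial w^{e}_{ij}}
  = -\int_{t_0}^{t_0+T} \operatorname{sgn}\!\big(e(t)\big)\, y^{e}_{ij}(t)\, dt
    + \lambda_e \operatorname{sgn}\!\big(w^{e}_{ij}\big),
\qquad
\Delta w^{e}_{ij} = -\eta_e\, \frac{\partial C}{\partial w^{e}_{ij}},
% which is Equation 6; the head rule (Eq. 7) follows identically.
```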

The block diagram representation of the adaptation mechanism is shown in Figure 1 (gray area). This representation is inspired by Equations 6–9 in the following way: the signals $y^{e}_{ij}(t)$ and $y^{h}_{ij}(t)$ can be regarded as the responses of forward models of the eye and the head plants, respectively. These forward models have the same impulse responses as the eye and head plants while receiving a copy of the neural activity in the columns as input. The responses of these forward models are multiplied by the sign of the gaze error and then integrated over $T$ (see Equations 6 and 7). The resulting signals adjust the connection weight of the very neuron that stimulated the adaptation unit. This influence is shown by a dashed arrow in Figure 1.
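A minimal per-trial implementation of Equations 6–9, reusing the helpers from the earlier sketch; the target position, adaptation rates and regularization coefficients are illustrative values, and in practice an adaptive learning rate (as in Text S1) would accelerate convergence:

```python
import numpy as np

def update_weights(w, y, e, eta, lam, dt=0.001):
    """One gradient-descent step on the cost (Equations 6 and 7). Row k of
    `y` is the forward-model response y_k(t) of Equations 8 and 9 for the
    k-th weight; `e` is the gaze error signal of the same trial."""
    drive = (y * np.sign(e)[None, :]).sum(axis=1) * dt  # integral over [t0, t0+T]
    return w + eta * (drive - lam * np.sign(w))

# Per-trial learning loop, reusing x, w_eye, w_head, N, dt and plant_response
# from the earlier sketch.
theta_target = 20.0                                     # desired gaze shift (deg)
y_eye = np.stack([plant_response(x[k], tau=0.15) for k in range(N)])   # Eq. 8
y_head = np.stack([plant_response(x[k], tau=0.40) for k in range(N)])  # Eq. 9
for trial in range(5000):
    e = (theta_target
         - plant_response(w_eye @ x, tau=0.15)          # theta_e(t)
         - plant_response(w_head @ x, tau=0.40))        # theta_h(t); Equation 4
    w_eye = update_weights(w_eye, y_eye, e, eta=0.5, lam=0.01)
    w_head = update_weights(w_head, y_head, e, eta=0.5, lam=0.01)
```

Note that the forward-model responses can be precomputed outside the loop, since they depend only on the column activity and the plant impulse responses, not on the weights.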

Results

We consider two conditions: the head-restrained condition, where we set the head plant gain (see Text S1) to zero; and the head-free condition, where we set this gain to its normal value. In biology, it is hypothesized that a neural gate prevents a common gaze shift command from reaching the neck circuitry when head-restrained saccades are desired [46].

For each condition, the learning procedure continued until the model reached a stable response. The simulation time step was 1 ms, and the adaptation rate was set to 0.002.

We used $\lambda_e$ and $\lambda_h$ as the free parameters of our model to find the best match between the model behavior and experimental data. To this end, we used a genetic algorithm (GA) as described in Text S1. For the head-restrained condition, the GA fitness function was defined as the sum of squared errors (SSE) between the main sequence plots of the model and of the experiments [4], and the highest fitness value determined the choice of $\lambda_e$. One should note that the value of $\lambda_h$ has no effect in the head-restrained condition, since the head plant gain is zero in this case. For the head-free condition, the fitness function was set as the SSE between the relative eye/head contributions of the simulated and of the experimental results, with the eye position initialized at zero; the best parameters found in this case were separate values of $\lambda_e$ and $\lambda_h$.
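As a sketch of how this fitting could look numerically: `simulate_main_sequence` below is a hypothetical helper (not from the paper or Text S1) that would run the learning to convergence over a set of amplitudes and return main-sequence statistics; the GA of Text S1 would add selection, crossover and mutation on top of such a fitness function.

```python
import numpy as np

def fitness(lam_e, simulate_main_sequence, experimental):
    """Negated SSE between the model's and the experimental main-sequence
    plots. `simulate_main_sequence(lam_e)` is a hypothetical helper returning
    (peak_velocities, durations) over a set of amplitudes; `experimental`
    holds the corresponding data of [4] in the same format."""
    model = simulate_main_sequence(lam_e)
    sse = sum(np.sum((m - d) ** 2) for m, d in zip(model, experimental))
    return -sse  # the GA maximizes fitness

# A crude stand-in for the GA's search over candidate values, e.g.:
# candidates = 10.0 ** np.random.uniform(-3, 1, size=50)
# best = max(candidates, key=lambda c: fitness(c, simulate_main_sequence, data))
```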

Head-Restrained Condition

With the best model parameters found by the GA, we simulated the learning procedure (Equation 6) for different target object positions. The integration time $T$ was set to a value large enough for learning saccadic eye movements of all amplitudes. We compared the simulation results to the experimental data obtained by Harwood and colleagues on human subjects performing horizontal eye movements [4]. This comparison was made between the main sequence plots, as shown in Figure 3.
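For reference, the two main-sequence quantities can be read off a simulated eye position trace; a minimal sketch, with an illustrative velocity threshold for detecting the movement (not a criterion taken from [4]):

```python
import numpy as np

def main_sequence(theta_e, dt=0.001, v_thresh=20.0):
    """Peak velocity and duration of one simulated saccade, using a simple
    velocity-threshold criterion (deg/s). The threshold is an illustrative
    choice, not a criterion taken from [4]."""
    v = np.gradient(theta_e, dt)
    moving = np.abs(v) > v_thresh
    duration = moving.sum() * dt          # total suprathreshold time
    return np.max(np.abs(v)), duration
```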

Figure 3. Comparison of the main sequence plots of the proposed model with experimental data in the head-restrained condition.

(A) Peak velocity and (B) duration of saccades versus their amplitudes. The solid lines represent the model results after learning, and the crosses are experimental data taken from an experiment on human subjects [4].

The resulting neural control signals and their corresponding plant responses for three example target object positions are depicted in Figure 4. These signals comprise two main phases: the saccadic phase, during which the control signal is strong, and the fixation phase, when it has a roughly constant but slightly oscillating positive value. The mean value of the neural control signal in the fixation period is proportional to the target position, and the small oscillations lead to slight eye drifts that are negligible because of their low contribution to the cost function. In fact, the eye plant filters out high-frequency inputs, so the eyes do not follow these oscillations. The decrease of the firing rate at the end of the plot is a boundary effect: no matter how long the integration time $T$ is, this effect is always observed at the final time.

Figure 4. Model behavior after learning for saccades to three different target positions in the head-restrained condition.

(A) Optimized neural command signals of the eye, defined as the difference between agonist and antagonist neural commands. (B) Eye position (eccentricity) in head coordinates. Target positions are shown by dashed lines.

The general form of the optimized neural control signals shown in Figure 4 resembles the firing patterns of abducens nucleus motoneurons responsible for saccadic eye movements in the monkey, shown in Figure 5: a fast increase in the firing rate is followed by a slow decrease (the burst phase), followed by an oscillatory steady state that maintains the fixation (the tonic phase). During fixation, in both model and experimental data, the sustained tonic firing rate is proportional to the eye position. However, one should note that the firing rate patterns shown in Figure 4 are differential, i.e. they correspond to the difference between the activity of agonist and antagonist motor neurons, while the ones shown in Figure 5 are not.

Figure 5. Experimental data from concurrent recordings of motor neuron activity and eye position during head-restrained gaze shifts.

(A) Firing pattern of an abducens nucleus (ABN) motor neuron during saccades of different amplitudes, coded by different colors. (B) The resulting change in the eye position. Both neural activity and eye position signals are vertically shifted such that they have zero initial values. Dashed lines show target (final) eye positions. Data are from an experiment on rhesus monkeys [70], provided by M. Van Horn and K. Cullen.

Without changing the model parameters, we tested whether the model is capable of reproducing realistic velocity profiles. For this, we simulated the learning process for a wide range of amplitudes. The velocity profiles corresponding to different saccadic amplitudes are shown in Figure 6. For small amplitudes, the profiles are smooth and almost symmetric, while for larger amplitudes they become skewed. The main reason for the former symmetry is that the effect of the weight updating mechanisms (Equations 6 and 7) on the saccadic velocity is symmetric when the second term of the cost function (Equation 5) is small enough. This effect becomes biased against large weights when the weight regularization term grows as a result of an increase in target eccentricity. The same trend is observed in experimental results; an example is presented in a study by Collewijn and colleagues [6] (see Figure 2 of that paper). It is worth noting that, to make these large eye-only saccades possible in such experiments, for each saccadic amplitude A the saccade is made from −A/2 to +A/2 relative to the central fixation point on the horizontal meridian. For instance, a 40° saccade is made by moving the eyes from −20° to +20° in head coordinates.

Figure 6. Adapted eye velocity profiles during the head-restrained condition for a range of target positions.

For comparison to experimental results, see for example Figure 2 of [6].

Head-Free Condition

For the head-free condition, we again used the parameter values obtained by the GA and compared our results to experimental data from a study on rhesus monkeys [10]. The integration time $T$ was set to 2 seconds to allow the model to learn slow head movements. We let the model learn gaze shifts for object positions over a wide range, sampled with a fixed step size, and for different initial eye positions.

Experimental studies have revealed that the relative contribution of eye and head to the total gaze shift varies depending on the gaze shift amplitude [10]. To see whether our model is able to reproduce these observations, we have defined two quantities in compliance with the mentioned studies: first, the eye contribution to the gaze shift, defined as the amplitude of the eye movement that occurs between eye movement onset and gaze movement end; second, the head contribution to the gaze shift, which is the head movement amplitude within the same period. These two quantities are sketched in Figure 7 for the simulated range of object positions, with the initial eye position equal to zero. In both model and experimental data, the head contribution keeps increasing while the eye contribution undergoes a soft saturation as a function of gaze shift amplitude. This behavior is also evident in the eye and head velocity profiles in Figure 8: while the head peak velocity increases proportionally with the gaze shift amplitude, the eye peak velocity saturates for very large gaze shifts.
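These two quantities can be computed from simulated eye and head position traces; a minimal sketch (the velocity threshold used to detect movement onset and gaze end is an illustrative choice):

```python
import numpy as np

def contributions(theta_e, theta_h, dt=0.001, v_thresh=20.0):
    """Eye and head contributions: displacement of each segment between eye
    movement onset and gaze movement end (velocity-threshold detection, in
    deg/s; the threshold value is an illustrative choice)."""
    gaze = theta_e + theta_h
    v_eye = np.gradient(theta_e, dt)
    v_gaze = np.gradient(gaze, dt)
    onset = int(np.argmax(np.abs(v_eye) > v_thresh))   # first suprathreshold sample
    moving = np.where(np.abs(v_gaze) > v_thresh)[0]
    end = int(moving[-1]) if moving.size else onset    # last suprathreshold sample
    return (theta_e[end] - theta_e[onset],             # eye contribution
            theta_h[end] - theta_h[onset])             # head contribution
```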

Figure 7. Relative contribution of eye and head to the total gaze shift for different gaze shift amplitudes.

(A) Eye contribution, calculated as the relative displacement of the eye from the beginning until the end of the gaze shift. (B) Head contribution, calculated in the same way. Dots are experimental data from a study on rhesus monkeys making horizontal gaze shifts [10]; green circles are model simulation results.

Figure 8. Velocity profiles generated by the proposed model for different gaze shift amplitudes in the head-free condition.

(A) Eye, (B) head, and (C) gaze velocity profiles. Colors code the different gaze shift amplitudes in degrees (see the legends).

The two phases of coordinated eye and head movements, i.e. the rapid gaze shift phase and the VOR-like behavior, are evident in the position plots shown in Figure 9. These two phases can also be observed in the eye velocity profiles (Figure 8A), where the eye velocity is positive during the first phase and negative during the second.

Figure 9. Eye, head and gaze positions for a large gaze shift.

The two phases of coordinated eye and head movements, the rapid gaze shift and the VOR-like behavior, are evident.

According to our model, the main reason for the observed increase of the head contribution relative to that of the eye is the existence of slower poles in the head plant, which require more time to produce a considerable response. For very low gaze shift amplitudes, the eyes rapidly catch the target before the head plant accelerates; therefore the head contribution is almost zero. For larger gaze shift amplitudes, the head has enough time to accelerate, since the eye plant saturates due to the cost on its neural command signal. Thus, the increase in the head contribution gradually dominates the increase of the eye contribution, leading to the results shown in Figure 7. Compared to the head-restrained condition, the gaze shift duration is longer for the same gaze shift amplitudes. This difference increases almost linearly with increasing amplitude (Figure 10; also compare Figure 8C with Figure 6), which is compatible with experimental results [47].

Figure 10. Gaze shift duration in the proposed model, for the head-restrained compared to the head-free condition.

The dashed line shows the head-restrained and the solid line the head-free condition. The duration is in general longer in the head-free condition, and the difference increases with the gaze shift amplitude. Circles show sampled data, and the lines are fitted by linear regression using the least squares approach.

Experimental studies have also shown that the relative contribution of eye and head to the total gaze shift depends on the initial eye position in head coordinates [10], [29], [30], [48]. In fact, gaze shifts with identical amplitudes can be composed of eye and head movements having a variety of amplitudes. To check the ability of our model to reproduce this behavior with the same set of free parameter values, we ran a second set of simulations, in which three gaze shift amplitudes were learned for four different initial eye positions. We compare the results of these simulations to experimental data obtained from a study on rhesus monkeys [10] in Figure 11. In both model and experimental results, when the eyes are initially deviated away from the movement direction (negative initial eye positions), the head contributes less, and consequently the eyes contribute more, compared to the situation where the initial eye position is deviated in the direction of the gaze shift (positive initial eye positions). In terms of our model, this behavior can be explained by looking at the neural command signals that are necessary in each situation: if we consider no contribution from the head, the final eye position will only depend on the initial eye position, such that eye movements starting from more positive initial positions end up at higher final positions. This requires an overall larger neural command signal compared to negative initial eye positions, and according to the proposed cost function, larger command signals impose higher costs. To decrease this cost, the head should contribute more when the initial eye position is more positive.

Figure 11. Eye and head contribution to the gaze shift as a function of initial eye position.

(A) Eye and (B) head contributions obtained for three gaze shift amplitudes (triangles, circles and squares). Main plots show model results after learning, and insets illustrate the mean values of experimental data extracted from a study on rhesus monkeys [10]. The linear fits, conducted both on the model and on the experimental data, are obtained by linear regression using the least squares approach.

Discussion

Using the architecture shown in Figure 1 and considering the simple cost function defined by Equation 5, we were able to reproduce the fundamental characteristics of coordinated eye and head movements in both head-restrained and head-free conditions. The proposed optimality principle has some similarities with, as well as differences from, existing principles [22], [23]. A point-by-point comparison between our model and the other principles, including minimum-time, minimum-variance, and minimum-effort, is given in Table 1.

Table 1. Comparison of the proposed model to other models in various aspects.

The models compared are minimum-time [18], minimum-variance [21], minimum-effort [23], and our model, with respect to the following features: main sequence; realistic velocity profiles in the head-fixed condition; eye fixation in the head-fixed condition; eye–head coordination; neural implementation; VOR-like behavior in the head-free condition; no boundary conditions; incremental learning; generalization to other tasks; and double-peaked eye velocity profiles in the head-free condition.

A substantial difference between the new cost function and previous ones is that it does not directly penalize the gaze shift duration. Instead, it penalizes the total gaze error integrated over an arbitrary time interval $T$ that is large enough to encompass the gaze shift period. This has two benefits. First, it allows for the application of the gradient descent method, since the total gaze error can be expressed directly in terms of the unknown neural command signal (see Text S1). This is not possible for the other principles, because there exists no closed-form expression of the gaze shift duration in terms of the neural command. The incorporation of gradient descent turns the optimization into an incremental learning process, which can be regarded as a step forward in the direction of a biologically realistic implementation.

The second advantage of the arbitrary integration time in Equation 5 is that it also covers part of the post-saccadic response. This implies that our model is also able to generate the motor commands needed immediately after the gaze shift, whereas previous models have only attempted to explain the gaze shift phase. In the head-restrained condition, this keeps the eye position still on the target just before the visual feedback from the target is re-established. For the head-free condition, the model is able to reproduce a VOR-like behavior, in which the eyes move back toward their central position in the head while the head continues moving, such that the gaze remains stabilized on the target.

As pointed out in Table 1, the proposed optimality principle, along with the minimum-effort principle, is able to reproduce not only the main sequence behavior in the head-restrained condition but also the coordination of eye and head during head-free gaze shifts, whereas the minimum-variance principle is not. Nevertheless, the minimum-variance principle has been successfully generalized to other motor control tasks such as arm movements [21]. Apart from minimum-time, all of the models are able to generate biologically realistic velocity profiles for eye-only saccades, but only the minimum-effort model is capable of reproducing the double-peaked eye velocity profiles [23] that are observed experimentally during head-free gaze shifts [24].

In the simulation results, the value of $\lambda_e$ in the head-free condition is considerably larger than in the head-restrained condition. This implies that the resulting eye controller weights $w^{e}_{ij}$, and consequently the average amplitude of the eye command signal $u_e(t)$, should be smaller in the head-free condition. This effect might be related to the hypothesis that the head velocity signal inhibits the gain of the saccadic BG units [12].

The learning mechanism introduced by Equations 6 and 7 necessitates the existence of eye and head internal models that provide $y^{e}_{ij}(t)$ and $y^{h}_{ij}(t)$. These forward models respond to the activity of individual neurons in the spatiotemporal map. In addition, since vision is impaired during saccades, there should exist another internal forward model that provides the sign of the gaze error to the adaptation mechanism (see the adaptation unit in Figure 1), using efference copies of the current neural control signals, $u_e(t)$ and $u_h(t)$, as input.

The cerebellum is widely regarded as a neural substrate where internal models of the motor system are located (see [49] for a review), and the most convincing neurophysiological evidence for internal models has been obtained for eye movements [50]. Bastian suggests that the cerebellum performs a feedforward correction of the movement based on the error assigned to the previous movement [51]. Interestingly, an experimental study by Soetedjo and Fuchs indicates that the complex spike activity of Purkinje cells (P-cells) in the vermis of the oculomotor cerebellum signals the sign (direction) but not the magnitude of the gaze error during saccade adaptation [52], a finding which is consistent with the adaptation mechanism of our model. Furthermore, several studies have revealed that cerebellar lesions permanently annihilate the adaptive capabilities of saccadic eye movements [36], [53]–[55], which suggests that the saccadic system is constantly calibrated by the cerebellum. Specifically, the study of Buettner and Straube showed that bilateral lesions in the cerebellar vermis lead to hypometric saccades [54]. This effect can be reproduced in our model by eliminating the adaptation signal (the first term in Equation 6). In such a situation, the weight decay term will decrease the weight values, leading to saccades that are smaller than the desired gaze shift.

Based on the mentioned studies about the cerebellum, we speculate that the adaptation signals affecting the feedforward controller are likely to be produced by the cerebellar vermis. This assumption, however, requires several parallel implementations of the eye and head forward models for each weighted connection in the feedforward control pathway. The existence of several parallel microzones in the cerebellum that receive inputs via different sets of mossy fibers and project their outputs via distinct P-cells [56] offers a possible neural basis for that, but further investigations on the exact functionality of these microzones are necessary.

So far we have speculated on possible neural substrates responsible for the adaptation. Now we look for possible neural substrates maintaining the open-loop (feedforward) control of saccadic gaze shifts. Takemura and colleagues analyzed the relationship between the firing patterns of P-cells in the ventral paraflocculus (VPFL) area of the cerebellum and ocular following responses [57]. They used a second-order linear regression method to reconstruct these firing patterns from three aspects of the eye movement: position, velocity and acceleration. This method was able to reproduce the temporal firing patterns of VPFL neurons, and when a single set of coefficients was used across different visual stimuli, the best fits were found for the P-cells in this area. This observation implies that there is a linear relationship between the firing pattern of P-cells in the cerebellar VPFL and the eye kinematics. Hence, the cerebellar VPFL is a possible candidate for the neural controller of our model. More specifically, we can consider the neural delay line structure as a model of the granular layer and the read-out neuron as a P-cell, in accordance with cerebellar models which assume the granular layer to provide a basis for the spatiotemporal representation of the input signals and the P-cell layer to receive weighted projections from the granular layer [58], [59].

Another possible candidate for the open-loop neural controller is the superior colliculus (SC). It has been assumed that the caudo-rostral spread of activation emerging among the build-up cells of the SC is caused by an internal feedback signal during saccadic eye movements [28], [60], [61]. One of the most important predictions of these models is that interrupting this spread should delay the arrival of the activity at the rostral SC, so that the eye reaches the target with a delay. However, a lesion experiment performed on the SC does not support this idea: Aizawa and Wurtz observed that, instead of delaying the arrival time, the lesion results in a curved trajectory that does not end at the target position [62]. Motivated by this observation, Nakahara and colleagues suggested a computational model of the SC in which the spread of activity is a mere epiphenomenon of the asymmetric connections within the SC [63]. This suggestion supports our assumption that the neural activity propagation in the delay lines is a self-reliant process that does not depend on any external feedback, and makes the SC a strong candidate for the spatiotemporal map in our model. Furthermore, experiments have revealed that, within the projections from the SC to BG neurons, stronger connections are correlated with larger saccade amplitudes [64]. This supports the assumption that the spatiotemporal transformation in the saccadic system relies on the SC–BG projections. Nevertheless, in a model of the saccade generating system suggested by Optican and Quaia, the neuronal activity does not spread across the SC; instead, together with a saccade velocity feedback signal, it causes a wave of activity in the fastigial oculomotor region (FOR) of the cerebellum that drives the BG neurons [65], [66].

When comparing the model simulation results to experimental data, the reader should note that part of the data was obtained from monkeys while the other part was captured from human subjects (see the figure captions). This discrepancy is primarily due to the fact that appropriate data were not available from either monkeys or human subjects alone to make precise comparisons between the model and primate behavior in all conditions. In fact, the gaze shift behavior of monkeys is very similar to that of humans, and there are only slight differences, which are due to the different mechanical properties of the oculomotor system in the two species [67]. These differences, however, are negligible in our study, as they do not have a severe impact on the proposed computational principles.

The aim of this study was to keep the proposed computational model as simple as possible, and to extend it only if some aspect could not be addressed by the simple model. Therefore, the brainstem BGs and the motoneurons are not distinguished in our model, and the read-out neurons in Figure 1 are a simplified representation of the brainstem–motoneuron circuitry. Figure 12 summarizes our speculation on the possible neural substrates responsible for the control and the learning of saccades. Indeed, more experimental investigations are needed to clarify the contribution of the cerebellum and the SC to the open-loop control of saccadic eye movements. As for adaptation, the signals produced by the adaptation unit of our model should be compared to the signals that are transmitted from the cerebellar vermis to the brainstem when saccades are executed.

Figure 12. A possible simplified biological interpretation of the model architecture.

The cerebellar VPFL and the motor layer of the superior colliculus (SC) are candidates for the open-loop control of saccades, while the cerebellar vermis is possibly responsible for providing the adaptation signals.

Future Work

The present model only addresses the generation of saccadic gaze shifts along one spatial axis, requiring one column (delay line) for every target position along this axis. A naïve approach to generalizing the model would be to introduce one column for every oculocentric position; however, this would require a very large number of neurons. As an alternative, one can introduce two separate 1-D controllers for the horizontal and vertical components of a gaze shift. Such an approach has been successfully implemented in [68]. Another open issue is the neural implementation of the forward models used by the neural controller, and a model that describes how the parameters of such forward models are adapted. To this end, one could use the temporal sequence learning approach [69] to perform forward model learning. Finally, the proposed open-loop controller could be generalized to other ballistic motor control tasks beyond coordinated eye and head movements by finding appropriate cost functions that underlie those tasks.

Supporting Information

Text S1

Text S1 supplies detailed information on Eye and Head Plant Models, linear models of the eye and head dynamics used throughout this study; Gradient Descent Optimization, the optimization method used to derive the learning rules (Equations 6–9); Adaptive Learning Rate Method, a method for accelerating the learning procedure; and Implementation, describing the genetic algorithm method used for optimizing the free parameters of the proposed model.

(PDF)

Acknowledgments

The authors would like to thank Mark R. Harwood from the City University of New York, and also Marion Van Horn and Kathleen Cullen from McGill University, for providing experimental data in Figures 3 and 5, respectively. The authors also thank the reviewers for their valuable and constructive comments.

Footnotes

The authors have declared that no competing interests exist.

This work was supported by the German Federal Ministry of Education and Research within the ‘Bernstein Focus: Neurotechnology’ through research grant 01GQ0840, by EU projects ‘Plasticity and Learning in Cortical Networks’ (PLICON) and ‘Intrinsically Motivated Cumulative Learning Robots’ (IM-CLeVeR), and by the Hertie Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Bahill AT, Clark MR, Stark L. The main sequence, a tool for studying human eye movements. Math Biosci. 1975;24:191–204. [Google Scholar]
  • 2.Smit AC, Van Gisbergen JAM, Cools AR. A parametric analysis of human saccades in different experimental paradigms. Vision Res. 1987;27:1745–1762. doi: 10.1016/0042-6989(87)90104-0. [DOI] [PubMed] [Google Scholar]
  • 3.Becker W. The Neurobiology of Saccadic Eye Movements. Amsterdam: Elsevier; 1989. Metrics. pp. 13–67. [PubMed] [Google Scholar]
  • 4.Harwood MR, Mezey LE, Harris CM. The spectral main sequence of human saccades. J Neurosci. 1999;19:9098–9106. doi: 10.1523/JNEUROSCI.19-20-09098.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Van Opstal AJ, Van Gisbergen JAM. Skewness of saccadic velocity profiles: A unifying parameter for normal and slow saccades. Vision Res. 1987;27:731–745. doi: 10.1016/0042-6989(87)90071-x. [DOI] [PubMed] [Google Scholar]
  • 6.Collewijn H, Erkelens CJ, Steinman RM. Binocular coordination of human horizontal saccadic eye movements. J Physiol. 1988;404:157–182. doi: 10.1113/jphysiol.1988.sp017284. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Proudlock FA, Gottlob I. Physiology and pathology of eye-head coordination. Prog Retin Eye Res. 2007;26:486–515. doi: 10.1016/j.preteyeres.2007.03.004. [DOI] [PubMed] [Google Scholar]
  • 8.Bizzi E, Kalil RE, Tagliasco V. Eye-head coordination in monkeys: evidence for centrally patterned organization. Science. 1971;173:452–454. doi: 10.1126/science.173.3995.452. [DOI] [PubMed] [Google Scholar]
  • 9.Guitton D, Volle M. Gaze control in humans: eye-head coordination during orienting movements to targets within and beyond the oculomotor range. J Neurophysiol. 1987;58:427–459. doi: 10.1152/jn.1987.58.3.427. [DOI] [PubMed] [Google Scholar]
  • 10.Freedman EG, Sparks DL. Eye-head coordination during head-unrestrained gaze shifts in rhesus monkey. J Neurophysiol. 1997;77:2328–2348. doi: 10.1152/jn.1997.77.5.2328. [DOI] [PubMed] [Google Scholar]
  • 11.Zambarbieri D, Schmid R, Versino M, Beltrami G. Eye-head coordination toward auditory and visual targets in humans. J Vestib Res. 1997;7:251–263. [PubMed] [Google Scholar]
  • 12.Freedman EG. Interactions between eye and head control signals can account for movement kinematics. Biol Cybern. 2001;84:453–462. doi: 10.1007/PL00007989. [DOI] [PubMed] [Google Scholar]
  • 13.Einhäuser W, Schumann F, Bardins S, Bartl K, Böning G, et al. Human eye-head co-ordination in natural exploration. Network. 2007;18:267–297. doi: 10.1080/09548980701671094. [DOI] [PubMed] [Google Scholar]
  • 14.Hardiess G, Gillner S, Mallot HA. Head and eye movements and the role of memory limitations in a visual search paradigm. J Vision. 2008;8:1–13. doi: 10.1167/8.1.7. [DOI] [PubMed] [Google Scholar]
  • 15.Van Opstal AJ, Van Gisbergen JAM, Eggermont JJ. Reconstruction of neural control signals for saccades based on an inverse method. Vision Res. 1985;25:789–801. doi: 10.1016/0042-6989(85)90187-7. [DOI] [PubMed] [Google Scholar]
  • 16.Bizzi E, Dev P, Morasso P, Polit A. Effect of load disturbances during centrally initiated movements. J Neurophysiol. 1978;41:542–556. doi: 10.1152/jn.1978.41.3.542. [DOI] [PubMed] [Google Scholar]
  • 17.Land MF, Tatler BW. Looking and Acting: Vision and Eye Movements in Natural Behaviour. New York: Oxford University Press; 2009. 320 [Google Scholar]
  • 18.Enderle JD, Wolfe JW. Time-optimal control of saccadic eye movements. IEEE Trans Biomed Eng. 1987;34:43–55. doi: 10.1109/tbme.1987.326014. [DOI] [PubMed] [Google Scholar]
  • 19.Sonneborn LM, Van Vleck FS. The bang-bang principle for linear control systems. J Soc Ind Appl Math A. 1964;2:151–159. [Google Scholar]
  • 20.Harris CM. On the optimal control of behaviour: A stochastic perspective. J Neurosci Meth. 1998;83:73–88. doi: 10.1016/s0165-0270(98)00063-6. [DOI] [PubMed] [Google Scholar]
  • 21.Harris CM, Wolpert DM. Signal-dependent noise determines motor planning. Nature. 1998;394:780–784. doi: 10.1038/29528. [DOI] [PubMed] [Google Scholar]
  • 22.Harris CM, Wolpert DM. The main sequence of saccades optimizes speed-accuracy trade-off. Biol Cybern. 2006;95:21–29. doi: 10.1007/s00422-006-0064-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Kardamakis AA, Moschovakis AK. Optimal control of gaze shifts. J Neurosci. 2009;29:7723–7730. doi: 10.1523/JNEUROSCI.5518-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Freedman EG, Sparks DL. Coordination of the eyes and head: Movement kinematics. Exp Brain Res. 2000;131:22–32. doi: 10.1007/s002219900296. [DOI] [PubMed] [Google Scholar]
  • 25.Laurutis VP, Robinson DA. The vestibulo-ocular reflex during human saccadic eye movements. J Physiol. 1986;373:209–233. doi: 10.1113/jphysiol.1986.sp016043. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Zee DS, Optican LM, Cook JD, Robinson DA, Engel WK. Slow saccades in spinocerebellar degeneration. Arch Neurol. 1976;33:243–251. doi: 10.1001/archneur.1976.00500040027004. [DOI] [PubMed] [Google Scholar]
  • 27.Van Gisbergen JAM, Robinson DA, Gielen S. A quantitative analysis of generation of saccadic eye movements by burst neurons. J Neurophysiol. 1981;45:417–442. doi: 10.1152/jn.1981.45.3.417. [DOI] [PubMed] [Google Scholar]
  • 28.Guitton D, Munoz DP, Galiana HL. Gaze control in the cat: studies and modeling of the coupling between orienting eye and head movements in different behavioral tasks. J Neurophysiol. 1990;64:509–531. doi: 10.1152/jn.1990.64.2.509. [DOI] [PubMed] [Google Scholar]
  • 29.Goossens HHLM, Van Opstal AJ. Human eye-head coordination in two dimensions under different sensorimotor conditions. Exp Brain Res. 1997;114:542–560. doi: 10.1007/pl00005663. [DOI] [PubMed] [Google Scholar]
  • 30.Freedman EG. Coordination of the eyes and head during visual orienting. Exp Brain Res. 2008;190:369–387. doi: 10.1007/s00221-008-1504-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Chen-Harris H, Joiner WM, Ethier V, Zee DS, Shadmehr R. Adaptive control of saccades via internal feedback. J Neurosci. 2008;28:2804–2813. doi: 10.1523/JNEUROSCI.5300-07.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Phillips JO, Ling L, Fuchs AF, Siebold C, Plorde JJ. Rapid horizontal gaze movement in the monkey. J Neurophysiol. 1995;73:1632–1652. doi: 10.1152/jn.1995.73.4.1632. [DOI] [PubMed] [Google Scholar]
  • 33.Kardamakis AA, Grantyn A, Moschovakis AK. Neural network simulations of the primate oculomotor system. V. Eye-head gaze shifts. Biol Cybern. 2010;102:209–225. doi: 10.1007/s00422-010-0363-0. [DOI] [PubMed] [Google Scholar]
  • 34.Pontryagin LS, Boltyanskii VG, Gamkrelidze RV, Mishchenko EF. The Mathematical Theory of Optimal Processes. New York: John Wiley and Sons; 1962. The Maximum Principle. [Google Scholar]
  • 35.Dichgans J, Bizzi E, Morasso P, Tagliasco V. Mechanisms underlying recovery of eye-head coordination following bilateral labyrinthectomy in monkeys. Exp Brain Res. 1973;18:548–562. doi: 10.1007/BF00234137. [DOI] [PubMed] [Google Scholar]
  • 36.Barash S, Melikyan A, Sivakov A, Zhang M, Glickstein M, et al. Saccadic dysmetria and adaptation after lesions of the cerebellar cortex. J Neurosci. 1999;19:10931–10939. doi: 10.1523/JNEUROSCI.19-24-10931.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Hopp J, Fuchs A. The characteristics and neuronal substrate of saccadic eye movement plasticity. Prog Neurobiol. 2004;72:27–53. doi: 10.1016/j.pneurobio.2003.12.002. [DOI] [PubMed] [Google Scholar]
  • 38.Saeb S, Weber C, Triesch J. A neural model for the adaptive control of saccadic eye movements. 2009. In: Proceedings of International Joint Conference on Neural Networks; 14–19 June 2009; Atlanta, Georgia, United States.
  • 39.Freedman EG, Sparks DL. Activity of cells in the deeper layers of the superior colliculus of the rhesus monkey: Evidence for a gaze displacement command. J Neurophysiol. 1997;78:1669–1690. doi: 10.1152/jn.1997.78.3.1669. [DOI] [PubMed] [Google Scholar]
  • 40.Klier EM, Wang H, Crawford JD. The superior colliculus encodes gaze commands in retinal coordinates. Nat Neurosci. 2001;4:627–632. doi: 10.1038/88450. [DOI] [PubMed] [Google Scholar]
  • 41.Kalesnykas RP, Sparks DL. The primate superior colliculus and the control of saccadic eye movements. Neuroscientist. 1996;2:284–292. [Google Scholar]
  • 42.Patestas MA, Gartner LP. A Textbook of Neuroanatomy. Malden, MA: Blackwell Publishing; 2006. pp. 282–303. [Google Scholar]
  • 43.Wang L, Gordon MD, Zhu J. Regularized least absolute deviations regression and an efficient algorithm for parameter tuning. 2006. pp. 690–700. In: Sixth IEEE International Conference on Data Mining; 18–22 December 2006; Hong Kong, China.
  • 44.Fujita M, Amagai A, Minakawa F, Aoki M. Selective and delay adaptation of human saccades. Cognitive Brain Res. 2002;13:41–52. doi: 10.1016/s0926-6410(01)00088-x. [DOI] [PubMed] [Google Scholar]
  • 45.Shafer JL, Noto CT, Fuchs AF. Temporal characteristics of error signals driving saccadic gain adaptation in the macaque monkey. J Neurophysiol. 2000;84:88–95. doi: 10.1152/jn.2000.84.1.88. [DOI] [PubMed] [Google Scholar]
  • 46.Oommen BS, Stahl JS. Overlapping gaze shifts reveal timing of an eye-head gate. Exp Brain Res. 2005;167:276–286. doi: 10.1007/s00221-005-0036-8. [DOI] [PubMed] [Google Scholar]
  • 47.Tomlinson RD, Bahra PS. Combined eye-head gaze shifts in the primate. I. Metrics. J Neurophysiol. 1986;56:1542–1557. doi: 10.1152/jn.1986.56.6.1542. [DOI] [PubMed] [Google Scholar]
  • 48.Populin LC, Tollin DJ, Weinstein JM. Human gaze shifts to acoustic and visual targets. Ann NY Acad Sci. 2002;956:468473. doi: 10.1111/j.1749-6632.2002.tb02857.x. [DOI] [PubMed] [Google Scholar]
  • 49.Wolpert DM, Miall RC, Kawato M. Internal models in the cerebellum. Trends Cogn Sci. 1998;2:338–347. doi: 10.1016/s1364-6613(98)01221-2. [DOI] [PubMed] [Google Scholar]
  • 50.Kawato M. Internal models for motor control and trajectory planning. Curr Opin Neurobiol. 1999;9:718–727. doi: 10.1016/s0959-4388(99)00028-8. [DOI] [PubMed] [Google Scholar]
  • 51.Bastian AJ. Learning to predict the future: the cerebellum adapts feedforward movement control. Curr Opin Neurobiol. 2008;16:645–649. doi: 10.1016/j.conb.2006.08.016. [DOI] [PubMed] [Google Scholar]
  • 52.Soetedjo R, Fuchs AF. Complex spike activity of Purkinje cells in the oculomotor vermis during behavioral adaptation of monkey saccades. J Neurosci. 2006;26:7741–7755. doi: 10.1523/JNEUROSCI.4658-05.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Optican LM, Robinson DA. Cerebellar-dependent adaptive control of primate saccadic system. J Neurophysiol. 1980;44:1058–1076. doi: 10.1152/jn.1980.44.6.1058. [DOI] [PubMed] [Google Scholar]
  • 54.Buettner U, Straube A. The effect of cerebellar midline lesions on eye movements. Neuroophthalmology. 1995;15:7582. [Google Scholar]
  • 55.Winograd-Gurvich CT, Georgiou-Karistianis N, Evans A, Millist L, Bradshaw JL, et al. Hypometric primary saccades and increased variability in visually-guided saccades in huntingtons disease. Neuropsychologia. 2003;41:1683–1692. doi: 10.1016/s0028-3932(03)00096-4. [DOI] [PubMed] [Google Scholar]
  • 56.Dean P, Porrill J, Ekerot CF, Jrntell H. The cerebellar microcircuit as an adaptive _lter: experimental and computational evidence. Nat Rev Neurosci. 2010;11:30–43. doi: 10.1038/nrn2756. [DOI] [PubMed] [Google Scholar]
  • 57.Takemura A, Inoue Y, Gomi H, Kawato M, Kawano K. Change in neuronal _ring patterns in the process of motor command generation for the ocular following response. J Neurophysiol. 2001;86:1750–1763. doi: 10.1152/jn.2001.86.4.1750. [DOI] [PubMed] [Google Scholar]
  • 58.Medina JF, Garcia KS, Nores WL, Taylor NM, Mauk MD. Timing mechanisms in the cerebellum: testing predictions of a large-scale computer simulation. J Neurosci. 2000;20:5516–5525. doi: 10.1523/JNEUROSCI.20-14-05516.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Yamazaki T, Tanaka S. A spiking network model for passage-of-time representation in the cerebellum. Eur J Neurosci. 2007;26:2279–2292. doi: 10.1111/j.1460-9568.2007.05837.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Wurtz RH, Optican LM. Superior colliculus cell types and models of saccade generation. Curr Opin Neurobiol. 1994;4:857–861. doi: 10.1016/0959-4388(94)90134-1. [DOI] [PubMed] [Google Scholar]
  • 61.Grossberg S, Roberts K, Aguilar M, Bullock D. A neural model of multimodal adaptive saccadic eye movement control by superior colliculus. J Neurosci. 1997;17:9706–9725. doi: 10.1523/JNEUROSCI.17-24-09706.1997. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Aizawa H, Wurtz RH. Reversible inactivation of monkey superior colliculus. I. Curvature of saccadic trajectory. J Neurophysiol. 1998;79:2082–2096. doi: 10.1152/jn.1998.79.4.2082. [DOI] [PubMed] [Google Scholar]
  • 63.Nakahara H, Morita K, Wurtz RH, Optican LM. Saccade-related spread of activity across superior colliculus may arise from asymmetry of internal connections. J Neurophysiol. 2006;96:765–774. doi: 10.1152/jn.01372.2005. [DOI] [PubMed] [Google Scholar]
  • 64.Moschovakis AK, Kitama T, Dalezios Y, Petit J, Brandi AM, et al. An anatomical substrate for the spatiotemporal transformation. J Neurosci. 1998;18:10219–10229. doi: 10.1523/JNEUROSCI.18-23-10219.1998. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Lefvre P, Quaia C, Optican LM. Distributed model of control of saccades by superior colliculus and cerebellum. Neural Netw. 1998;11:1175–1190. doi: 10.1016/s0893-6080(98)00071-9. [DOI] [PubMed] [Google Scholar]
  • 66.Optican LM, Quaia C. Distributed model of collicular and cerebellar function during saccades. Ann NY Acad Sci. 2002;956:164–177. doi: 10.1111/j.1749-6632.2002.tb02817.x. [DOI] [PubMed] [Google Scholar]
  • 67.Shadmehr R, Orban de Xivry JJ, Xu-Wilson M, Shih TY. Temporal discounting of reward and the cost of time in motor control. J Neurosci. 2010;30:10507–10516. doi: 10.1523/JNEUROSCI.1343-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Kuniharu A, Keller EL. A model of the saccade-generating system that accounts for trajectory variations produced by competing visual stimuli. Biol Cybern. 2005;92:21–37. doi: 10.1007/s00422-004-0526-y. [DOI] [PubMed] [Google Scholar]
  • 69.Porr B, Von Ferber C, Wörgötter F. ISO learning approximates a solution to the inversecontroller problem in an unsupervised behavioral paradigm. Neural Comput. 2003;15:865–884. doi: 10.1162/08997660360581930. [DOI] [PubMed] [Google Scholar]
  • 70.Sylvestre PA, Cullen KE. Quantitative analysis of abducens neuron discharge dynamics during saccadic and slow eye movements. J Neurophysiol. 1999;82:2612–2632. doi: 10.1152/jn.1999.82.5.2612. [DOI] [PubMed] [Google Scholar]


Supplementary Materials

Text S1

Text S1 provides detailed information on: Eye and Head Plant Models, the linear models of eye and head dynamics used throughout this study; Gradient Descent Optimization, the method used to derive the learning rules (Equations 6–9); Adaptive Learning Rate Method, a technique for accelerating the learning procedure (illustrated in the sketch following this entry); and Implementation, the genetic algorithm used to optimize the free parameters of the proposed model.

(PDF)
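
As a concrete illustration of the adaptive learning rate idea mentioned above, the following minimal Python sketch applies a simple "bold driver" heuristic to gradient descent on a toy quadratic cost: the step size grows after a successful update and shrinks (with the update rejected) otherwise. The functions cost and grad and the constants grow and shrink are illustrative placeholders, not the model's cost function or the exact procedure specified in Text S1.

    # A minimal, illustrative sketch of gradient descent with a "bold driver"
    # style adaptive learning rate. The cost function, its gradient, and the
    # adaptation constants are placeholders, not the procedure from Text S1.
    import numpy as np

    def cost(w):
        # Hypothetical quadratic cost with its minimum at w = [1, 1, ...].
        return float(np.sum((w - 1.0) ** 2))

    def grad(w):
        # Analytic gradient of the hypothetical cost above.
        return 2.0 * (w - 1.0)

    def adaptive_gradient_descent(w, eta=0.1, grow=1.1, shrink=0.5, steps=100):
        """Gradient descent that grows the learning rate after a successful
        step and shrinks it (rejecting the step) when the cost increases."""
        c_prev = cost(w)
        for _ in range(steps):
            w_new = w - eta * grad(w)
            c_new = cost(w_new)
            if c_new < c_prev:   # cost decreased: accept the step, speed up
                w, c_prev = w_new, c_new
                eta *= grow
            else:                # cost increased: reject the step, slow down
                eta *= shrink
        return w

    w_final = adaptive_gradient_descent(np.array([5.0, -3.0]))
    print(w_final)  # converges towards the minimum at [1, 1]

The accept/reject logic is independent of the particular cost: replacing cost and grad with any differentiable objective and its gradient leaves the rate-adaptation scheme unchanged.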

