Physiol Rep. 2016 May 26;4(10):e12775. doi: 10.14814/phy2.12775

Information fusion control with time delay for smooth pursuit eye movement

Menghua Zhang 1, Xin Ma 1, Bin Qin 1, Guangmao Wang 1, Yanan Guo 1, Zhigang Xu 2, Yafang Wang 3, Yibin Li 1
PMCID: PMC4886162  PMID: 27230904

Abstract

Smooth pursuit eye movement depends on prediction and learning, and is subject to time delays in the visual pathways. In this paper, an information fusion control method with time delay is presented that implements smooth pursuit eye movement with prediction and learning while compensating for the time delays in the visual pathways. By fusing the soft constraint information (the desired eye trajectory and the ideal control strategy) with the hard constraint information (the state and output equations of the eye system), optimal estimates of the co‐state sequence and the control variable are obtained. The proposed control method can track not only constant‐velocity and sinusoidal target motion, but also arbitrarily moving targets. Moreover, the absolute value of the retinal slip reaches steady state after 0.1 sec. The information fusion control method describes in a functional manner how the brain may deal with arbitrary target velocities and how it may implement smooth pursuit eye movement with prediction, learning, and time delays. These principles allowed us to accurately describe visually guided, predictive, and learning smooth pursuit dynamics observed in a wide variety of tasks within a single theoretical framework. The tracking performance of the proposed information fusion control with time delays is verified by numerical simulation results.

Keywords: Eye movement, information fusion control, learning, prediction, smooth pursuit, time delays

Introduction

Primates have to move their eyes to acquire accurate information about small moving targets because of their narrow foveal vision (Shibata et al. 2005). Smooth pursuit eye movements ensure that image velocity stays within a range that is best for visual acuity and visibility (Kowler 2011; Adams et al. 2015; Ono 2015). The main purpose of smooth pursuit eye movements is to minimize the retinal slip, the difference between the eye velocity and the target velocity (Shibata et al. 2005; Zambrano et al. 2010). Once the eye velocity catches up with the target velocity, the retinal slip reduces to zero. In studies of the primate smooth pursuit system, the smooth pursuit gain (SPG), defined as the ratio of eye velocity to target velocity, is often used to evaluate system performance (Marino et al. 2007; Jansson and Medvedev 2011). Experiments in humans and monkeys suggest that with a constant‐velocity or a sinusoidal target motion, the SPG is almost 1.0 (Robinson et al. 1986). However, because of the time delays in the visual pathways, such a high SPG cannot be achieved by visual negative feedback alone.

If the target velocity can be predicted, the visual delays can be reduced or even cancelled (Whittaker and Eaholtz 1982; Wells and Barnes 1998; Fukushima et al. 2002). It has long been known that the smooth pursuit system is able to predict target motion. The first clear evidence for prediction during smooth pursuit eye movement came from studies of tracking of repetitive motions, in which the eye was shown to reverse direction in time with, and sometimes shortly before, the target (Dodge et al. 1930; Westheimer 1954). Since then, predictive tracking has received increasing attention. Prediction was attributed to special circuitry that came into play only for periodic motions, allowing the pursuit system to learn and then generate repetitive oculomotor patterns (Dallos and Jones 1963; Barnes and Asselman 1991). To cancel the visual delays, Robinson et al. (1986) proposed a model working as a feed‐forward controller, but the model cannot achieve zero‐delay tracking of sinusoids. Based on Pavel's proposal, a predictive mechanism containing an adaptive filter was integrated into a model of the human smooth pursuit system (Koken et al. 1996). The model provided a fairly good qualitative and, in most cases, also a fairly good quantitative description of human tracking of various stimuli. Bardshwa et al. (1997) used a Kalman filter for prediction. A target‐selective adaptive control model that performs zero‐latency tracking was also proposed (Bahill and McDonald 1983). The above‐mentioned models assumed prior knowledge of the target dynamics and thus avoided addressing how unknown target motion can be tracked accurately.

Some studies investigated horizontal and vertical tracking of moving targets and found vertical tracking to be inferior to horizontal tracking at all age levels (Grönqvist et al. 2006). Infants at 1 month of age can exhibit smooth pursuit, but only at speeds of 10°/sec or less and with a low gain (Roucoux et al. 1983). The gain of smooth pursuit improves substantially between 2 and 3 months (von Hofsten and Rosander 1997), and at 5 months this ability approaches that of adults. These studies demonstrate that primate smooth pursuit develops with experience. On this basis, Zambrano et al. (2010) added to Shibata's model a memory‐based internal model that stores the model parameters related to the target dynamics. After the learning phase, the prediction time decreased significantly. The model was able to learn the experienced values of the target velocity for ramp and sinusoidal signals, but the learning component applies only to target dynamics the system has already experienced. A model relying on two Kalman filters, one processing visual information about the retinal input and one maintaining a dynamic internal memory of target motion, was developed by Orban de Xivry et al. (2013).

However, all of the aforementioned models can only track constant‐velocity and sinusoidal target motion. To address this limitation, control theory principles have been used to gain an understanding of how the different components of the eye tracking system operate. The eye tracking system is typically described by linear time‐invariant discrete‐time state‐space equations that do not consider the time delays in the visual pathways (Rivlin et al. 1998; Avni et al. 2008). Therefore, taking the predictive, learning, and time‐delay nature of smooth pursuit into account, an information fusion controller with time delays is proposed in this paper to track arbitrary target trajectories. The main idea of the proposed control method is that the ideal control strategy, the desired trajectory, and the eye system dynamics are all regarded as measurement information about the control strategy.

Primate Smooth Pursuit Eye Movement

Most of the processing in primate vision is devoted to a very small portion of the field of view called the fovea. The foveal field of view is barely 2 degrees in extent (Dithcburn 1973), although even within this region there is considerable variation in visual acuity. The movements of the eyes shift the foveal field, allowing us high‐resolution vision wherever it is needed. Eye movements have been divided into two categories: smooth pursuit and saccades (Carpenter 1988). In this paper, we are especially interested in smooth pursuit eye movement.

Smooth pursuit eye movements are effective for tracking slowly moving targets. These movements typically have a latency of around 100 msec in the visual pathways. Smooth pursuit eye movement occurs when the eye tracks a smoothly moving target and appears to keep the target image stabilized with respect to the retina (Rivlin et al. 1998).

From a neurophysiological point of view, the middle temporal (MT) area and the medial superior temporal (MST) area seem to be intimately involved in smooth pursuit eye movement. In the primate brain, the neural pathways that mediate smooth pursuit eye movement start in the primary visual cortex (V1) and extend to the MT area, which serves as the generic visual motion processor (Thier and IIg 2005). The MST area seems to contain an explicit representation of object motion in world‐centered coordinates (IIg et al. 2004). Kawawaki et al. (2006) demonstrated that the MST area is responsible for target dynamics prediction. Cortical eye fields are also involved in smooth pursuit (Tian and Lynch 1996); in particular, the frontal eye field can modulate the gain control (Tanaka and Lisberger 2001, 2002) that determines how strongly pursuit will respond to a given motion stimulus. Gain control works as a link between the visual system and the motor system; therefore, motor learning could act at this stage by altering this link (Chou and Lisberger 2004). The cerebellum seems to play a crucial role in supporting the accuracy and adaptation of voluntary eye movements. It uses at least two areas for processing signals relevant to smooth pursuit: the flocculus‐paraflocculus complex and the posterior vermis.

Information Fusion Controller with Time Delay

Basic theory of information fusion estimation

Theorem 1 (Wang et al. 2007a,b): Suppose that all information about the estimated quantity $x \in \mathbb{R}^n$ can be described as follows:

$\hat{y}_i = H_i x + v_i, \quad i = 1, 2, \ldots, m$ (1)

where $\hat{y}_i \in \mathbb{R}^{m_i}$ is the measured value; $H_i \in \mathbb{R}^{m_i \times n}$ is the information mapping matrix; $v_i \in \mathbb{R}^{m_i}$ is the measurement error; and

$E[v_i] = 0, \qquad E[v_i v_j^T] = \begin{cases} R_i, & i = j \\ 0, & i \neq j \end{cases}$ (2)

with $E[v_i]$ and $E[v_i v_i^T]$ representing the mean and variance of $v_i$, respectively.

If $\sum_{i=1}^{m} H_i^T R_i^{-1} H_i$ is nonsingular, then the optimal fusion estimate $\hat{x}$ of $x$ is given by:

$I(\hat{x}|x) = \sum_{i=1}^{m} H_i^T R_i^{-1} H_i$ (3)

$\hat{x} = I(\hat{x}|x)^{-1} \sum_{i=1}^{m} H_i^T R_i^{-1} \hat{y}_i$ (4)

where $I(\hat{x}|x)$ denotes the information weight (IW) of $\hat{x}$ about $x$.

Theorem 1 provides a closed‐form expression for the linear information fusion estimate. For convenience, the following case is considered.

Case 1: If there exists a unit mapping among the $H_i$, $i = 1, 2, \ldots, m$, that is, $\hat{y}_j = x + v_j$ for some $j$, then an explicit expression that is easy to compute recursively can be obtained:

$\hat{x} = \hat{y}_j + R_j \sum_{i=1, i \neq j}^{m} H_i^T R_i^{-1} (\hat{y}_i - H_i \hat{x})$ (5)

$I(\hat{x}|x) = R_j^{-1} + \sum_{i=1, i \neq j}^{m} H_i^T R_i^{-1} H_i$ (6)
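To make eqs. (3)–(4) concrete, consider fusing two noisy scalar measurements of the same quantity. All numbers below are hypothetical (not from the paper); with unit mappings $H_1 = H_2 = 1$, the fusion estimate reduces to the familiar inverse‐variance weighted average:

```python
import numpy as np

# Two scalar measurements of the same x (H1 = H2 = 1), hypothetical values.
y = np.array([1.0, 2.0])   # measured values y_i
R = np.array([0.5, 1.0])   # measurement variances R_i

# Information weight I(x_hat|x) = sum_i H_i^T R_i^{-1} H_i   (eq. 3)
I = np.sum(1.0 / R)
# Optimal fusion estimate x_hat = I^{-1} sum_i H_i^T R_i^{-1} y_i   (eq. 4)
x_hat = (1.0 / I) * np.sum(y / R)

print(I, x_hat)  # 3.0 and 4/3: the more precise measurement dominates
```

The lower‐variance measurement ($R_1 = 0.5$) pulls the estimate toward its value, exactly as the information weights in (3)–(4) dictate.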

Information fusion control with time delays for smooth pursuit eye movement

Considering the time delays in the visual pathways, the equations depicting the eye dynamics can be modified as follows:

$x(k+1) = A(k)x(k) + B(k)u(k-b)$ (7)

$y(k) = C(k)x(k)$ (8)

where $x(k) = [x \ \ \dot{x}]^T$ denotes the state vector, with $x$ the eye position and $\dot{x}$ the eye velocity; $u(k) \in \mathbb{R}$ is the control variable; $y(k) = \dot{x}$ is the output; $k = 1, 2, \ldots, k_f$; $A(k) \in \mathbb{R}^{2 \times 2}$, $B(k) \in \mathbb{R}^{2}$, and $C(k) \in \mathbb{R}^{1 \times 2}$ are the state matrix, the input vector, and the output vector, respectively; $b$ is the length of the control lag; $x(0) = [0 \ \ 0]^T$; and $u(\tau) = 0$ for $\tau \in [-hb, 0)$, where $h$ is the sampling period.

Let $y^*(k)$ denote the desired output of the system. Our objective is to control the eye system (7)–(8) so that the output $y(k)$ tracks the desired output $y^*(k)$ as closely as possible with minimum expenditure of control effort. To this end, the performance index is chosen as:

$J = \|y^*(k_f) - y(k_f)\|_{S(k_f)}^2 + \sum_{k=0}^{k_f-1} \|y^*(k) - y(k)\|_{M(k)}^2 + \sum_{k=0}^{k_f-1} \|u(k)\|_{N(k)}^2$ (9)

where the first and second terms on the right‐hand side of (9) express that the system should track the desired output $y^*(k)$, with $S(k_f)$ and $M(k)$ their IWs; the third term expresses the requirement of minimizing the control effort, with $N(k)$ its IW; $k_f$ is the terminal time; and $S(k_f)$, $M(k)$, and $N(k)$ are positive definite symmetric matrices.

From (8) and (9), it can be obtained that:

$y^*(k) = y(k) + m(k) = C(k)x(k) + m(k)$ (10)

with $m(k)$ representing a white noise with zero mean and variance $M^{-1}(k)$.

It can be obtained from (9) that:

$0 = u(k) + n(k)$ (11)

where $n(k)$ denotes a white noise with zero mean and variance $N^{-1}(k)$.

From the viewpoint of information fusion estimation, the information of the eye control problem can be grouped into three categories:

  1. Hard constraint information determined by (7):
     $x(k+1) = A(k)x(k) + B(k)u(k-b), \quad k = 1, 2, \ldots, k_f$
  2. Tracking information from the desired trajectory $y^*(k)$:
     $y^*(k) = C(k)x(k) + m(k), \quad k = 1, 2, \ldots, k_f$
  3. Control constraint information from minimizing $u(k)$:
     $0 = u(k) + n(k), \quad k = 0, 1, \ldots, k_f - b - 1$

The purpose of information fusion control is to obtain the optimal fusion estimate

$\hat{u}(k), \quad k = 0, 1, \ldots, k_f - b - 1$

by fusing all of the information listed above.

Suppose that all information with respect to $x(k+b+1)$ has been fused, and that its optimal fusion estimate $\hat{x}(k+b+1)$ and IW $P(k+b+1)$ have been obtained at time $k+b$. It should be pointed out that only future information affects the present decision; present information has no influence on future decisions. Therefore, all information with respect to $u(k)$ can be listed as follows:

  1. $x(k+b+1) = A(k+b)x(k+b) + B(k+b)u(k)$
  2. $0 = u(k) + n(k)$
  3. $\hat{x}(k+b+1) = x(k+b+1) + \varphi(k+b+1)$

where $\varphi(k+b+1)$ is a white noise with zero mean and variance $P^{-1}(k+b+1)$.

Substituting information (1) into information (3), it can be concluded that:

$\hat{x}(k+b+1) = A(k+b)x(k+b) + B(k+b)u(k) + \varphi(k+b+1)$ (12)

Thus, according to (5)–(6), fusing (12) with information (2) yields:

$I(\hat{u}(k)|u(k)) = N(k) + B^T(k+b)P(k+b+1)B(k+b)$ (13)

Based on Theorem 1, it can be obtained that:

$\hat{u}(k) = [N(k) + B^T(k+b)P(k+b+1)B(k+b)]^{-1} B^T(k+b) P(k+b+1) [\hat{x}(k+b+1) - A(k+b)x(k+b)]$ (14)
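Equation (14) is a single matrix expression and can be evaluated numerically for one step. In the sketch below, every matrix and vector is invented for illustration (none of the numbers come from the paper); it simply checks that the fused control pushes the predicted state toward the fused future estimate:

```python
import numpy as np

# Hypothetical one-step evaluation of eq. (14); all values are invented here.
A = np.array([[1.0, 0.01], [0.0, 0.99]])   # A(k+b)
B = np.array([[0.0], [0.01]])              # B(k+b)
P = np.eye(2) * 1e4                        # IW P(k+b+1) of x_hat(k+b+1)
N = np.array([[1.0]])                      # IW N(k) of the control prior

x_hat_next = np.array([[0.1], [0.5]])      # fused future state estimate
x_now = np.array([[0.0], [0.0]])           # current (delayed) state x(k+b)

# u_hat = (N + B^T P B)^{-1} B^T P (x_hat_next - A x_now)   (eq. 14)
gain = np.linalg.inv(N + B.T @ P @ B) @ (B.T @ P)
u_hat = gain @ (x_hat_next - A @ x_now)
print(u_hat.item())  # 25.0
```

With these numbers $B^T P B = 1$, so the control prior (weight $N = 1$) halves the raw correction $B^T P (\hat{x} - Ax) = 50$, giving $\hat{u} = 25$: the fusion trades tracking against control effort exactly as (13)–(14) prescribe.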

Subsequently, we discuss how to obtain $\hat{x}(k)$ and its IW $P(k)$ by fusing all information about $x(k)$. Similarly, suppose that $\hat{x}(k+1)$ and its IW $P(k+1)$ have been obtained at time $k$. Then the information related to $x(k)$ can be listed as follows:

  1. $x(k+1) = A(k)x(k) + B(k)u(k-b)$
  2. $0 = u(k-b) + n(k-b)$
  3. $\hat{x}(k+1) = x(k+1) + \varphi(k+1)$
  4. $y^*(k) = C(k)x(k) + m(k)$

Substituting information (2) and (3) into information (1), one obtains:

$\hat{x}(k+1) = A(k)x(k) - B(k)n(k-b) + \varphi(k+1)$ (15)

with $\varphi(k+1)$ being a white noise with zero mean and variance $P^{-1}(k+1)$. Equation (15) can be rewritten as:

$\hat{x}(k+1) = A(k)x(k) + q(k)$ (16)

where $q(k)$ represents a white noise with zero mean and variance $Q^{-1}(k)$, which is of the following form:

$Q^{-1}(k) = P^{-1}(k+1) + B(k)N^{-1}(k-b)B^T(k)$ (17)

Based on Theorem 1, fusing (16) with information (4) gives:

$P(k) = A^T(k)Q(k)A(k) + C^T(k)M(k)C(k)$ (18)

$\hat{x}(k) = P^{-1}(k)[A^T(k)Q(k)\hat{x}(k+1) + C^T(k)M(k)y^*(k)]$ (19)

The boundary conditions for (18) and (19) are obtained as follows:

$\hat{x}(k_f) = P^{-1}(k_f)C^T(k_f)S(k_f)y^*(k_f)$ (20)

$P(k_f) = I + C^T(k_f)S(k_f)C(k_f)$ (21)

Solution of the eye control problem

Step 1: Set:

$x(0) = x_0 \quad \text{and} \quad \hat{u}(i) = 0, \quad i = -b, -b+1, \ldots, -1$ (22)

Step 2: Compute:

$P(k_f) = I + C^T(k_f)S(k_f)C(k_f)$ (23)

$\hat{x}(k_f) = P^{-1}(k_f)C^T(k_f)S(k_f)y^*(k_f)$ (24)

Step 3: Compute:

$Q^{-1}(k) = P^{-1}(k+1) + B(k)N^{-1}(k-b)B^T(k)$ (25)

$P(k) = A^T(k)Q(k)A(k) + C^T(k)M(k)C(k)$ (26)

$\hat{x}(k) = P^{-1}(k)[A^T(k)Q(k)\hat{x}(k+1) + C^T(k)M(k)y^*(k)]$ (27)

$k = k_f - 1, \ldots, 1$ (28)

Step 4: Compute:

$\hat{u}(k) = [N(k) + B^T(k+b)P(k+b+1)B(k+b)]^{-1} B^T(k+b) P(k+b+1) [\hat{x}(k+b+1) - A(k+b)x(k+b)]$ (29)

$x(k+1) = A(k)x(k) + B(k)\hat{u}(k-b)$ (30)

$k = 0, \ldots, k_f - b - 1$ (31)
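The four steps above can be sketched directly in code. The NumPy implementation below is a minimal illustration under simplifying assumptions (time‐invariant $A$, $B$, $C$ and scalar weights $S$, $M$, $N$); the function name and the demo numbers are mine, not the authors':

```python
import numpy as np

def fusion_control(A, B, C, y_des, S, M, N, b, kf):
    """Steps 1-4 (eqs. 22-31) for a time-invariant plant with scalar weights."""
    n = A.shape[0]
    # Step 1 (eq. 22): zero initial state, u_hat(i) = 0 for i = -b, ..., -1
    x = np.zeros((kf + 1, n, 1))
    u = np.zeros(kf)

    # Step 2 (eqs. 23-24): boundary conditions at the terminal time k_f
    P = [None] * (kf + 1)
    xh = [None] * (kf + 1)
    P[kf] = np.eye(n) + C.T * S @ C
    xh[kf] = np.linalg.inv(P[kf]) @ (C.T * S * y_des[kf])

    # Step 3 (eqs. 25-28): backward recursion for the IW P(k) and x_hat(k)
    for k in range(kf - 1, 0, -1):
        Q = np.linalg.inv(np.linalg.inv(P[k + 1]) + B * (1.0 / N) @ B.T)
        P[k] = A.T @ Q @ A + C.T * M @ C
        xh[k] = np.linalg.inv(P[k]) @ (A.T @ Q @ xh[k + 1] + C.T * M * y_des[k])

    # Before any delayed control arrives, the plant runs open loop: x(k+1) = A x(k)
    for k in range(b):
        x[k + 1] = A @ x[k]
    # Step 4 (eqs. 29-31): fused control u_hat(k) and the plant response
    for k in range(kf - b):
        G = np.linalg.inv(N + B.T @ P[k + b + 1] @ B) @ (B.T @ P[k + b + 1])
        u[k] = (G @ (xh[k + b + 1] - A @ x[k + b])).item()
        x[k + b + 1] = A @ x[k + b] + B * u[k]  # eq. (30) stepped at index k + b
    return x, u

# Hypothetical demo: hold a constant desired velocity of 0.5 (toy plant, b = 2)
A = np.array([[1.0, 0.01], [0.0, 0.99]])
B = np.array([[0.0], [0.01]])
C = np.array([[0.0, 1.0]])
kf, b = 50, 2
x, u = fusion_control(A, B, C, np.full(kf + 1, 0.5), S=1e4, M=1e4, N=1.0, b=b, kf=kf)
print(abs((C @ x[kf]).item() - 0.5))  # small residual tracking error
```

With heavy tracking weights relative to the control weight, the simulated output velocity settles close to the desired 0.5 within a few samples after the $b$‐step dead time, mirroring the fast retinal‐slip convergence reported below.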

Figure 1 shows a block scheme of the proposed control method of the smooth pursuit eye movement.

Figure 1. Block scheme of the proposed information fusion control method.

Simulation Results

In order to verify that our control method can achieve smooth pursuit with a gain of one and zero latency, two groups of numerical simulation tests are included. More precisely, in the first group, our control method is compared with Zambrano's model (Zambrano et al. 2010). Then, in the second group, its tracking performance for other desired eye trajectories is illustrated.

Using Matlab/Simulink, the simulation model is built as a discrete system. In accordance with the range of values reported in the neurobiological literature, the time delay is set to 100 msec. The sampling period is set to 0.01 sec. The parameters of the eye plant are set as:

$A = \begin{bmatrix} 1 & 9.96 \times 10^{-4} \\ 0 & 9.91 \times 10^{-1} \end{bmatrix}, \quad B = \begin{bmatrix} 4.74 \times 10^{-6} \\ 9.46 \times 10^{-3} \end{bmatrix}, \quad C = [0 \ \ 1],$
$x(0) = [0 \ \ 0]^T, \quad S = M = 10^4, \quad N = 1, \quad h = 0.01, \quad b = 10, \quad k_f = 550$ (32)
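As a quick consistency check, the plant parameters of eq. (32) can be instantiated and verified against the stated 100‐msec delay; the matrix entries below follow my reading of the printed values and should be checked against the original source:

```python
import numpy as np

# Eye-plant parameters as read from eq. (32); exponent signs are my reconstruction.
A = np.array([[1.0, 9.96e-4], [0.0, 9.91e-1]])
B = np.array([[4.74e-6], [9.46e-3]])
C = np.array([[0.0, 1.0]])
h, b, kf = 0.01, 10, 550

# b sampling periods of control lag should equal the 100-msec visual delay
print(b * h)  # 0.1 sec

# The position state is a pure integrator (eigenvalue 1); the velocity mode
# decays with eigenvalue 0.991, so the open-loop plant is marginally stable.
print(np.linalg.eigvals(A))
```

The lag of $b = 10$ samples at $h = 0.01$ sec thus matches the 100‐msec latency quoted for the visual pathways.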

Simulation group 1

In this group, we compare the tracking performance of the presented method with that of Zambrano's model. To this end, the following two desired eye trajectories are considered.

Case 1: sinusoidal motion

$y^* = 0.2\pi \sin(\pi h k)$ (33)

Case 2: constant motion

$y^* = 0.6$ (34)

The derived results are recorded in Figures 2, 3, 4, 5. In both cases, the proposed control method and Zambrano's model achieve excellent steady‐state tracking. As can be seen from Figures 2b–5b, the retinal slip of the proposed control method reaches steady state after 0.1 sec, whereas it takes more than 0.5 sec for Zambrano's model. In other words, under the same implementation conditions, the convergence time of the proposed control method is shorter than that of Zambrano's model.

Figure 2. Results of the proposed information fusion control method in the case of a sinusoidal motion of the target.

Figure 3. Results of the proposed information fusion control method in the case of a constant velocity motion of the target.

Figure 4. Results of Zambrano's model in the case of a sinusoidal motion of the target.

Figure 5. Results of Zambrano's model in the case of a constant velocity motion of the target.

Simulation group 2

Next, the tracking performance of the designed control method for other desired eye trajectories is validated. To this end, the following three desired trajectories are considered:

Case 1: $y^* = 0.1hk$;
Case 2: $y^* = 0.2\pi \sin(\pi h k)e^{Tk}$;
Case 3: $y^* = 0.2\pi \cos(\pi h k)e^{Tk}$.

Figures 6, 7, 8 depict the behavior of the proposed control method for the different desired eye trajectories. These figures show that the tracking performance, including the tracking efficiency and the convergence time, is not noticeably degraded by the change of desired trajectory. This is important for practical applications of the designed control method, because in practice the eyes must track arbitrary trajectories.

Figure 6. Results of the proposed information fusion control method with respect to case 1.

Figure 7. Results of the proposed information fusion control method with respect to case 2.

Figure 8. Results of the proposed information fusion control method with respect to case 3.

Discussion

The information fusion control method describes in a functional manner how the brain may deal with arbitrary target velocities and how it may implement smooth pursuit eye movement with prediction, learning, and time delays. These principles allowed us to accurately describe visually guided, predictive, and learning smooth pursuit dynamics observed in a wide variety of tasks within a single theoretical framework.

Conflict of Interest

None of the authors has conflicts or potential conflicts of interest including relevant financial interests, activities, relationships, and affiliations related to this study.

Zhang M., Ma X., Qin B., Wang G., Guo Y., Xu Z., Wang Y., Li Y.. Information fusion control with time delay for smooth pursuit eye movement. Physiol Rep, 4 (10), 2016, e12775, doi: 10.14814/phy2.12775

Funding Information

This study was funded by the National High‐tech Research and Development Program (863 Program) of China under Award No. 2015AA042307, the Shandong Provincial Independent Innovation & Achievement Transformation Special Foundation, China, under Awards Nos. 2014ZZCX04302, 2014ZZCX04303, and 2015ZDXX0101E01, and the Fundamental Research Funds of Shandong University, China, under Awards Nos. 2015JC027 and 2015JC051.

References

  1. Adams, R. A., Aponte, E., Marshall, L., and Friston, K. J. 2015. Active inference and oculomotor pursuit: the dynamic causal modeling of eye movements. J. Neurosci. Methods 242:1–14.
  2. Avni, O., Borrelli, F., Katzir, G., Rivlin, E., and Rotstein, H. 2008. Scanning and tracking with independent cameras – a biologically motivated approach based on model predictive control. Autonomous Robots 24:285–302.
  3. Bahill, A., and McDonald, J. 1983. Model emulates human smooth pursuit system producing zero‐latency target tracking. Biol. Cybern. 48:213–222.
  4. Bardshwa, K. J., Reid, I. D., and Murray, D. W. 1997. The active recovery of 3D motion trajectories and their use in prediction. IEEE Trans. Pattern Anal. Mach. Intell. 19:219–234.
  5. Barnes, G. R., and Asselman, P. T. 1991. The mechanism of prediction in human smooth pursuit eye movements. J. Physiol. 439:439–461.
  6. Carpenter, R. H. S. 1988. Movements of the eyes. Pion Limited, London, U.K.
  7. Chou, J. H., and Lisberger, S. G. 2004. The role of the frontal pursuit area in learning in smooth pursuit eye movements. J. Neurophysiol. 24:4124–4133.
  8. Dallos, P. J., and Jones, R. W. 1963. Learning behavior of the eye fixation control system. IEEE Trans. Autom. Control 8:218–227.
  9. Dithcburn, R. W. 1973. Eye‐movements and visual perception. Clarendon, Oxford, U.K.
  10. Dodge, R., Travis, R. C., and Fox, J. C. 1930. Optic nystagmus III. Characteristics of the slow phase. Arch. Neurol. Psychiatr. 24:21–34.
  11. Fukushima, K., Yamanobe, T., Shinmei, Y., and Fukushima, J. 2002. Predictive responses of periarcuate pursuit neurons to visual target motion. Exp. Brain Res. 145:104–120.
  12. Grönqvist, H., Gredebäck, G., and von Hofsten, C. 2006. Developmental asymmetries between horizontal and vertical tracking. Vision Res. 46:1754–1761.
  13. von Hofsten, C., and Rosander, K. 1997. The development of smooth pursuit tracking in young infants. Vision Res. 37:1799–1810.
  14. IIg, U. J., Schumann, S., and Thier, P. 2004. Posterior parietal cortex neurons encode target motion in world‐centered coordinates. Neuron 43:145–151.
  15. Jansson, D., and Medvedev, A. 2011. Dynamic smooth pursuit gain estimation from eye tracking data. 50th IEEE Conference on Decision and Control and European Control Conference (CDC‐ECC), Orlando, FL, USA: 12–15.
  16. Kawawaki, D., Shibata, T., Goda, N., Doya, K., and Kawato, M. 2006. Anterior and superior lateral occipitotemporal cortex responsible for target motion prediction during overt and covert visual pursuit. Neurosci. Res. 54:112–123.
  17. Koken, P. W., Jonker, H. J. J., and Erkelens, C. J. 1996. A model of the human smooth pursuit system based on an unsupervised adaptive controller. IEEE Trans. Syst., Man, Cybern. A, Syst., Humans 26:275–280.
  18. Kowler, E. 2011. Eye movements: the past 25 years. Vision Res. 51:1457–1483.
  19. Marino, S., Sessam, E., and Lorenzo, G. D. 2007. Quantitative analysis of pursuit ocular movements in Parkinson's disease by using a video‐based eye tracking system. Eur. Neurol. 58:193–197.
  20. Ono, S. 2015. The neuronal basis of on‐line visual control in smooth pursuit eye movements. Vision Res. 110:257–264.
  21. Orban de Xivry, J.‐J., Coppe, S., Blohm, G., and Lefevre, P. 2013. Kalman filtering naturally accounts for visually guided and predictive smooth pursuit dynamics. J. Neurosci. 33:17301–17313.
  22. Rivlin, E., Rotstein, H., and Zeevi, Y. Y. 1998. Two‐mode control: an oculomotor‐based approach to tracking systems. IEEE Trans. Autom. Control 43:833–840.
  23. Robinson, D., Gordon, J., and Gordon, S. 1986. A model of the smooth pursuit eye movement system. Biol. Cybern. 55:43–57.
  24. Roucoux, A., Culee, C., and Roucoux, M. 1983. Development of fixation and pursuit eye movements in human infants. Behav. Brain Res. 10:133–139.
  25. Shibata, T., Tabata, H., Schaal, S., and Kawato, M. 2005. A model of smooth pursuit in primates based on learning the target dynamics. Neural Networks 18:213–225.
  26. Tanaka, M., and Lisberger, S. G. 2001. Regulation of the gain of visually guided smooth pursuit eye movements by frontal cortex. Nature 409:191–194.
  27. Tanaka, M., and Lisberger, S. G. 2002. Enhancement of multiple components of pursuit eye movement by microstimulation in the arcuate frontal pursuit area in monkeys. J. Neurophysiol. 87:802–818.
  28. Thier, P., and IIg, U. J. 2005. The neural basis of smooth pursuit eye movements. Curr. Opin. Neurobiol. 15:645–652.
  29. Tian, J. R., and Lynch, J. C. 1996. Corticocortical input to the smooth and saccadic eye movement subregions of the frontal eye field in Cebus monkeys. J. Neurophysiol. 76:2754–2771.
  30. Wang, Z., Liu, W., and Zhen, Z. 2007a. Design of optimal tracking controller for nonlinear discrete systems with input delays using information fusion estimation method. 2007 IEEE International Conference on Control and Automation, Guangzhou, China: 2535–2538.
  31. Wang, Z., Wang, D., and Yang, Z. 2007b. Primary exploration of nonlinear information fusion control theory. Sci. China Ser. F‐Inf. Sci. 50:686–696.
  32. Wells, S. G., and Barnes, G. R. 1998. Fast, anticipatory smooth‐pursuit eye movements appear to depend on a short‐term store. Exp. Brain Res. 120:129–133.
  33. Westheimer, G. 1954. Eye movement responses to a horizontally moving visual stimulus. AMA Arch. Ophthal. 52:932–941.
  34. Whittaker, S., and Eaholtz, G. 1982. Learning patterns of eye motion for foveal pursuit. Invest. Ophthalmol. Vis. Sci. 23:393–397.
  35. Zambrano, D., Falotico, E., Manfredi, L., and Laschi, C. 2010. A model of the smooth pursuit eye movement with prediction and learning. Appl. Bionics Biomech. 7:109–118.
