Abstract
We introduce a novel, noninvasive retinal eye-tracking system capable of detecting eye displacements with an angular resolution of 0.039 arcmin and a maximum velocity of 300°/s across an 8° span. Our system is designed around a confocal retinal imaging module similar to a scanning laser ophthalmoscope. It utilizes a 2D MEMS scanner ensuring high image frame acquisition frequencies of up to 1.24 kHz. In contrast with leading eye-tracking technology, we measure the eye displacements via the collection of the observed spatial excursions at all times corresponding to a full acquisition cycle, thus obviating the need for both a baseline reference frame and absolute spatial calibration. Using this approach, we demonstrate the precise measurement of eye movements with magnitudes exceeding the spatial extent of a single frame, which is not possible using existing image-based retinal trackers. We describe our retinal tracker and tracking algorithms, and assess the performance of our system using programmed artificial eye movements. We also demonstrate the clinical capabilities of our system with in vivo subjects by detecting microsaccades with angular extents as small as 0.028°. The rich kinematic ocular data provided by our system, with its exquisite degree of accuracy and extended dynamic range, opens new and exciting avenues in retinal imaging and clinical neuroscience. Several subtle features of ocular motion, such as saccadic dysfunction, fixation instability, and abnormal smooth pursuit, can be readily extracted and inferred from the measured retinal trajectories, thus offering a promising tool for identifying biomarkers of neurodegenerative diseases associated with these ocular symptoms.
1. Introduction
The human eye is an optical instrument in constant motion. Even during stable fixation, eye movements exhibit a broad range of magnitudes and frequencies [1,2]. Early research on eye movement was pioneered by Hering [3] and Lamare [4] at the end of the 19th century. Emerging techniques such as mechanical recording and photography were subsequently replaced by suction caps [5], scleral search coils [6], and the dual Purkinje image eye tracker [7]. Notably, the scleral search coil approach has excellent signal-to-noise ratio (SNR) and a typical resolution of 0.25 arcmin [2,6]. However, it is a highly invasive method involving the use of topical anesthesia and specialized contact lenses. Currently, owing to their non-invasive nature and their ease of use, the most popular eye trackers are video-based devices that utilize anterior eye features. Their typical tracking accuracy ranges within 18–30 arcmin [8,9].
An alternative approach to eye tracking consists of following features of the eye fundus instead of monitoring the anterior segment. An early fundus eye tracker used a light spot scanned across the optic disc. Because blood vessels in the disc are less reflective than the myelinated nerve fibers, eye motion in the horizontal direction was retrieved by recording the returning light intensity along the scanning line [10]. An improved version of this idea was later developed by Ferguson, who proposed using a dithered light beam in a confocal reflectometer setup to generate error signals occurring with eye movement in both lateral directions [11,12]. The reported accuracy of this retinal tracker was 3 arcmin [13]. Confocal reflectometry was further incorporated into the modern scanning laser ophthalmoscope (SLO) to correct for motion artifacts. The SLO is nowadays routinely used in state-of-the-art optical coherence tomography (OCT) and adaptive optics (AO) systems [13–19] for patient alignment, and attempts to adapt it to eye tracking were a natural step. Image-based fundus tracking was developed following the introduction of the SLO [20,21]. This approach consists of measuring eye displacements based on the differences between consecutive pairs of SLO images. State-of-the-art SLO systems are designed for high-quality retinal imaging and provide wide-field, densely sampled images. A major disadvantage of the SLO is its inherently limited sampling rate, typically 10–30 frames per second (fps). One way to overcome this limitation is to estimate eye motion using only a sample region of a full frame, otherwise known as a subframe, thus lowering acquisition times and speeding up the computation of eye displacements. This concept was introduced in the tracking SLO (TSLO) system [22–30]. Here, sampling strip-shaped subframes results in increased eye trace acquisition rates of 960 Hz (1,920 Hz in offline, post-processing mode) [25]. Combined with post-processing digital alignment, subframe usage allows for optical stabilization at the level of 0.2 arcmin and an eye motion detection accuracy of 0.04–0.05 arcmin [30]. An ingenious implementation using azimuthally symmetric subframes further allows for the isotropic retrieval of motion in the transverse plane [31,32]. A common feature of image-based tracking systems is their use of a reference frame, the baseline to which all subsequent frames are compared. The performance of the system is critically dependent on an adequate choice of reference frame, itself subject to distortion by motion artifacts [33,34]. More significantly, a stationary reference frame sets the upper limit for the range of measurable eye displacements to the spatial extent of the reference frame and limits the detectable velocities to tens of degrees per second [25,30,35].
Accurately detecting and quantifying eye movements constitutes the basis of active stabilization for all in vivo eye imaging applications [13–19]. Tracking eye motion also plays a central role in numerous other research and technology fields, including psychology [36–39], physiology [40] and, more recently, virtual reality and entertainment [41]. In neuroscience, the ability to measure the subtle features of eye displacements is of the utmost significance. The control of eye movement is broadly represented in the cortical and subcortical structures of the brain, the brainstem, and the cerebellum. Anomalous ocular motion is thus intrinsically related to altered brain structure [42,43]. Saccadic dysfunction, fixation instability and abnormal smooth pursuit, for example, provide reliable quantitative indicators of neurodegenerative diseases such as Parkinson’s disease [44,45] and Alzheimer’s disease [46]. Increased fixation instability and altered kinematic saccadic parameters are known quantitative biomarkers of multiple sclerosis [47,48]. In a recent clinical study, microsaccades have been shown to provide objective measurements of multiple sclerosis disability level and disease worsening [35]. Oculomotor abnormalities are therefore a sensitive biomarker for diagnosis and for forecasting disease progression [49]. However, oculomotor changes of neurological significance are often subtle, and their precise measurement poses a challenge beyond the capabilities of existing diagnostic devices.
In this work, we introduce a fast eye tracker capable of registering 1,240 retinal images per second while achieving a retinal displacement estimation accuracy of 0.039 arcmin root mean squared error (RMSE) over a large dynamic range of displacements. A key feature of this device, which we name the FreezEye Tracker (FET), is its functional independence from a reference frame. Consequently, the range of measured displacements is not bounded by the size of the acquired frames. Moreover, the eye displacement calculations are performed with inherent immunity to the accumulation of tracking error, which is achieved by an algorithm using the concept of Key Frames (KFs). In the upcoming sections, we describe in detail the optical design of our device and the engineering of its quantitative algorithms, including an explanation of the KF concept, and we assess its tracking performance and accuracy. We demonstrate the capabilities of the FET by showing examples of saccadic and fixational eye movements. The performance of the FET in terms of accuracy and dynamic range makes it a tool well suited for clinical diagnostics in ophthalmology and neurology.
2. Methodology
2.1. Optical setup
The FET’s design consists of a retinal scanner with confocal detection inspired by the SLO. In order to achieve kilohertz frame acquisition rates, we acquire relatively small images with 4,432 pixels per frame. The schematic diagram of the FET optical setup is shown in Fig. 1. The illumination source is a 785-nm laser diode (LP785-SAV50, Thorlabs, USA) coupled with a single-mode fiber and collimated to a beam diameter of 0.7 mm (1/e2) by an aspherical lens CL. The pellicle beam splitter BS reflects the beam and directs it onto a 2D scanning mirror with a 1-mm microelectromechanical systems (MEMS) based active aperture (VC3141/5/48.4, VarioS 2D microscanner, Fraunhofer IPMS). After reflecting off the scanning mirror, the beam passes through a 4f telescope system composed of lenses L4 and L3. The telescope conjugates the scanner’s aperture with a pair of galvanometric mirrors FET PS (GVS002, Thorlabs, USA), which steer the scanning pattern to the selected region of interest on the retina (for example images, refer to Fig. 2). The conjugate plane of the MEMS scanning mirror is then imaged onto the eye pupil plane by a second 4f telescope composed of lenses L2 and L1. Lens L2 is mounted on the translation stage MS for the correction of spherical refractive error. The beam reflected off the retina retraces the same path, is de-scanned by the MEMS mirror, passes through the pellicle beam splitter, and is collected by lens L5. The pinhole PH in the focal plane of L5 is conjugate with the retinal plane and rejects out-of-focus light. The signal from the retina is detected by the avalanche photodiode APD (A-Cube-S1500-01, Laser Components, Germany) and is processed by a PC. System synchronization and data acquisition are performed using custom software engineered in the LabVIEW environment (National Instruments, USA).
A critical element of our design for ultrafast image acquisition is the MEMS 2D scanning mirror. Its maximum operating scanning frequencies are 20 kHz and 620 Hz in the fast and slow axes, respectively. The maximum achievable frame rate of 1,240 fps derives from the shortest scanning half-period possible in the slow axis. This scanning frequency is achievable for an aperture size of 1 mm, with larger apertures requiring longer acquisition times. This tradeoff poses a design challenge requiring a compromise between the maximum scanning angle, the beam size at the cornea, and the throughput parameter T, defined as the ratio of the scanning mirror aperture to the diameter of the beam reflected by the retina [50]. In our design, the relay optics L1–L4 serve the purpose of balancing the system’s magnification. Ideally, for T ≥ 1, no light reflected off the eye is lost at the mirror aperture. The returning light exits the eye pupil with its full aperture in the range 4–7 mm [51]. Assuming scotopic measuring conditions with a 7-mm pupil, a magnification of 1/7 yields T = 1. However, this case would require a beam 4.9 mm in diameter entering the eye, which would drastically reduce the scanning angles and, in turn, the lateral resolution of the images [52].
In order to achieve maximum MEMS scanner deflections, the optical scanning angles were set to ±4.64° and ±4.29° in the fast and slow axes, respectively. For our studies, we set the FOV to 3.37° × 3.24°, as measured using the USAF 1951 test target. The imaging beam diameter at the cornea was 1.96 mm. The calculated on-axis beam diameter on the retina in this case was 8.5 µm. In our setup, the design throughput was T ≈ 0.7 for a 4-mm pupil and T ≈ 0.4 for a 7-mm pupil. By design, the confocality of the system was traded off in order to gain sensitivity by using a 100-µm pinhole. The Airy disc diameter at the detection was 36.4 µm, and the times-diffraction-limited (TDL) number of the optical system was 2.75 [50].
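For concreteness, these throughput values follow from the relay geometry; a worked instance of the arithmetic, assuming the returning beam is demagnified from the pupil to the scanner by the same relay that magnifies the 0.7-mm illumination beam to 1.96 mm at the cornea:

$$m = \frac{1.96\ \mathrm{mm}}{0.7\ \mathrm{mm}} = 2.8, \qquad T_{4\,\mathrm{mm}} = \frac{1\ \mathrm{mm}}{4\ \mathrm{mm}/2.8} \approx 0.70, \qquad T_{7\,\mathrm{mm}} = \frac{1\ \mathrm{mm}}{7\ \mathrm{mm}/2.8} = 0.40.$$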
In order to perform the experiments with human subjects, the system was equipped with a fixation path, separated from the imaging/tracking path by a dichroic mirror (DM). A halogen lamp was used to illuminate the target (Fig. 1, inset A) projected onto the retina through a Badal optometer setup [53]. The Badal setup consisted of lenses L7–L8 and mirrors M3–M4, which were mounted on a movable stage BST to correct the subject’s spherical refractive error without magnifying the target. This feature allowed the angular size of the target to remain constant, regardless of the refraction of the subject’s eye.
Additionally, the system is designed to be merged with other imaging modalities, such as SLO and/or OCT, that can provide high-resolution imaging of the subject’s retina and additional information such as cyclotorsion measurement [54]. Adding a module with an extra pair of scanners to the imaging modalities, connected to the FET via a dichroic mirror inserted, for example, between M5 and L6, opens up the possibility of actively correcting the acquired images of the eye.
2.2. Algorithms
2.2.1. Retinal motion tracking algorithm
The MEMS scanner sweeps the beam along the Lissajous scanning pattern shown in Fig. 3(a). The acquired raw images are then re-sampled to produce the uniformly sampled rectangular images shown in Fig. 3(b). This step is performed using the matrix-vector multiplication:
$$\mathbf{I} = \mathbf{W}\,\mathbf{v} \tag{1}$$

where $\mathbf{I}$ is an $N_x \times N_y$ rectangular image organized in a single row-by-row vector of size $N_x N_y$, $\mathbf{W}$ is the sparse resampling matrix of size $N_x N_y \times K$, $\mathbf{v}$ is the intensity data vector acquired by the APD on the Lissajous scanning path, $v_k$ is a single APD reading, $k = 1, \ldots, K$, and $K$ is the number of data points per frame. The matrix $\mathbf{W}$ is constructed from the Lissajous coordinates so that each row of $\mathbf{W}$ has $s$ non-zero elements at the indices that correspond to the $s$ closest distances between the Lissajous coordinates and the coordinates in the resulting image.
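As an illustration of the resampling step, the sketch below builds a sparse $\mathbf{W}$ from the Lissajous coordinates using a k-d tree nearest-neighbor query. The inverse-distance weighting of the $s$ nearest samples is our assumption (the text specifies only which elements are non-zero), and the function name is illustrative:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.spatial import cKDTree

def build_resampling_matrix(lissajous_xy, nx, ny, s=4):
    """Build the sparse matrix W of Eq. (1).

    lissajous_xy : (K, 2) scan-path coordinates in pixel units
    nx, ny       : size of the rectangular output image
    s            : number of nearest scan samples per output pixel
    """
    gy, gx = np.mgrid[0:ny, 0:nx]
    grid = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    tree = cKDTree(lissajous_xy)
    dist, idx = tree.query(grid, k=s)        # s closest samples per pixel
    wgt = 1.0 / np.maximum(dist, 1e-9)       # inverse-distance weights (assumed)
    wgt /= wgt.sum(axis=1, keepdims=True)    # each row of W sums to one
    rows = np.repeat(np.arange(nx * ny), s)
    return csr_matrix((wgt.ravel(), (rows, idx.ravel())),
                      shape=(nx * ny, lissajous_xy.shape[0]))

# usage: I = (W @ v).reshape(ny, nx), with v the APD intensity vector
```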
The algorithm that reconstructs the retinal trajectory operates in two stages. In the first stage, the retinal trajectory is estimated by the N-back algorithm, in which each new frame is aligned with N previous frames, as shown in Fig. 3(c-f). This stage consists of two steps: the trajectory estimate from the first pass of the N-back algorithm is used to remove motion artifacts from the frames used in the second pass. In the second stage, estimation errors in the trajectory reconstructed in the first stage are corrected.
In the N-back algorithm, each new retinal trajectory point is calculated from displacements measured between the most recently acquired frame and N previous frames. In the simplest case, N=1, the displacement computed from only one previous frame is given by:
$$\mathbf{p}_m = \mathbf{p}_{m-1} + \mathbf{d}_{m,m-1} \tag{2}$$

where $\mathbf{p}_m$ is the trajectory point in Euclidean space, $m$ is the frame index, and $\mathbf{d}_{a,b}$ denotes the displacement between frames $a$ and $b$. We estimate $\mathbf{d}_{a,b}$ by optimizing a functional criterion which quantifies the quality of the registration of frames $a$ and $b$. In particular, we use the enhanced correlation coefficient (ECC) [55] criterion, which readily provides sub-pixel precision. However, alternative criteria can be implemented interchangeably based on hardware and computation latency requirements.
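A minimal sketch of the displacement estimation using the ECC criterion, here via OpenCV's findTransformECC restricted to pure translation; the returned correlation coefficient doubles as the registration-quality criterion (the wrapper name and iteration settings are illustrative assumptions):

```python
import cv2
import numpy as np

def ecc_displacement(frame_a, frame_b, n_iter=100, eps=1e-6):
    """Estimate the translation d_{a,b} between two frames by maximizing
    the ECC criterion [55]; also return the final correlation value."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, n_iter, eps)
    cc, warp = cv2.findTransformECC(frame_a.astype(np.float32),
                                    frame_b.astype(np.float32),
                                    warp, cv2.MOTION_TRANSLATION,
                                    criteria, None, 5)
    dx, dy = warp[0, 2], warp[1, 2]   # sub-pixel translation components
    return dx, dy, cc
```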
In order to reduce the effect of stochastic noise on the trajectory point calculation, Eq. (2) can be applied to N previously acquired frames and the results averaged. This defines the full version of the N-back algorithm, where the resulting point is the averaged position calculated from displacements measured using frames from an empirically chosen subset B, with some acquired up to half a second earlier:
$$\mathbf{p}_m = \frac{\sum_{n \in B} w_n \left( \mathbf{p}_{m-n} + \mathbf{d}_{m,m-n} \right)}{\sum_{n \in B} w_n} \tag{3}$$
where $n$ is the index of frames counted backwards from the newly acquired frame, with $n = 1$ for the frame directly preceding it. If the calculated criterion is below a threshold, set to 0.8 in our experiments, the corresponding weighting coefficient $w_n$ is set to zero. The drop in criterion value can result from low SNR (e.g., caused by a blink or accidental vignetting of the scanning beam) or from a displacement that is impossible to calculate, either due to a lack of satisfactory retinal features or movement exceeding the size of the frame. It is worth emphasizing that even if the criterion cannot be calculated for a given pair of distant frames that image essentially different retinal regions, the retinal position will still be estimated based on the remaining collection of frames with coinciding regions.
The above procedure can be repeated to take into account retinal motion, which may geometrically distort frames acquired during high-velocity motion. For example, for a velocity of 1000°/s and a 3°-wide frame, the last frame line will be displaced by almost one third of the frame width with respect to the first line. Therefore, the velocity of retinal motion is estimated from the first run of the N-back algorithm, and a geometrical correction is applied to each frame: shear mapping in the case of horizontal motion and scaling in the case of vertical motion. Next, the second run of the N-back algorithm is performed, with the criterion calculated using frames corrected for geometrical distortion. We have empirically selected the subsets $B_1$ and $B_2$ of historical frame indices for the first and second runs of the N-back algorithm, respectively.
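The following sketch implements one update of Eq. (3). Sub-pixel phase correlation stands in for the ECC criterion, which the text notes is interchangeable; the subset B, the quality gate, and the fallback behavior are illustrative assumptions:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

B = (1, 2, 3, 5, 8)    # hypothetical subset of historical frame offsets

def n_back_point(frames, traj, m, quality_gate=0.8):
    """Compute trajectory point p_m via Eq. (3).

    frames : list of 2D arrays, frames[0..m]
    traj   : (m, 2) array of previously estimated points p_0..p_{m-1}
    """
    num, den = np.zeros(2), 0.0
    for n in B:
        if m - n < 0:
            continue
        shift, error, _ = phase_cross_correlation(frames[m - n], frames[m],
                                                  upsample_factor=20)
        # crude stand-in for the ECC threshold of 0.8 described in the text
        w = 1.0 if (1.0 - error) >= quality_gate else 0.0
        num += w * (traj[m - n] + shift[::-1])   # (row, col) -> (x, y)
        den += w
    if den == 0.0:                # no usable frame pair: hold last position
        return traj[m - 1].copy()
    return num / den
```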
The alignment of frames described above is prone to small errors in $\mathbf{d}_{a,b}$ which propagate into the trajectory points over time according to Eqs. (2)–(3). Due to the recursive nature of these equations, the trajectory estimation error can be modeled as a random walk process; therefore, for an ideally calibrated system, the trajectory has a non-stationary, zero-mean error whose variance increases linearly with time (a drift). In order to suppress drift, we use the fact that the eye returns from time to time to the same location, so the error accumulated by Eqs. (2)–(3) can be corrected by a new displacement calculation. This technique introduces Key Frames (KFs), a subset of all frames in the acquired dataset between which translations $\mathbf{d}_{a,b}$ can be calculated (please refer to Fig. 3(g)). This means that frames $a$ and $b$ correspond to closely spaced locations on the retina, whereas the time separation between them is not important. Next, the algorithm calculates corrections to the KF positions that minimize the error on the calculated displacements $\mathbf{d}_{a,b}$ (see the red arrow in Fig. 3(g)). These corrections are performed using the multidimensional scaling (MDS) mathematical framework described in Ref. [56]. In other words, a distance matrix is constructed from the norms of the displacements, $\delta_{a,b} = \lVert \mathbf{d}_{a,b} \rVert$, between the KFs for which they can be calculated. The error minimization for the trajectory is performed with the use of a stress function with respect to the KF positions $\mathbf{q}_a$:
$$S\left(\mathbf{q}_1, \ldots, \mathbf{q}_K\right) = \sum_{a<b} w_{a,b} \left( \lVert \mathbf{q}_a - \mathbf{q}_b \rVert - \delta_{a,b} \right)^2 \tag{4}$$
where $\lVert \mathbf{q}_a - \mathbf{q}_b \rVert$ are the distances computed between the $a$-th and the $b$-th KF, and $w_{a,b}$ is a weighting coefficient used to indicate missing values, as described earlier. The KF-trajectory has an inherently low sampling rate; however, it has a zero-mean, stationary error. Since the KF-trajectory has missing values for non-overlapping frames, the final retinal trajectory is estimated by casting the N-back trajectory onto the KF-trajectory, using linear interpolation for trajectory points in between KFs, as depicted by the yellow arrows in Fig. 3(g).
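A compact sketch of the KF correction: the stress of Eq. (4) is minimized over the KF positions with a general-purpose optimizer, with zero weights marking KF pairs whose displacement could not be computed. Ref. [56] describes dedicated MDS solvers; the L-BFGS-B optimizer here is a simplification:

```python
import numpy as np
from scipy.optimize import minimize

def refine_keyframes(q0, delta, w):
    """Minimize the stress of Eq. (4) over the KF positions.

    q0    : (K, 2) initial KF positions taken from the N-back trajectory
    delta : (K, K) norms of the measured inter-KF displacements
    w     : (K, K) weights; zeros mark KF pairs with no usable displacement
    """
    K = q0.shape[0]
    a, b = np.triu_indices(K, k=1)               # each KF pair counted once

    def stress(flat):
        q = flat.reshape(K, 2)
        dist = np.linalg.norm(q[a] - q[b], axis=1)
        return np.sum(w[a, b] * (dist - delta[a, b]) ** 2)

    res = minimize(stress, q0.ravel(), method="L-BFGS-B")
    return res.x.reshape(K, 2)
```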
2.2.2. Eye blink detection
In most cases, the effect of blinks, as well as of other incidents leading to a low value of the criterion that quantifies the quality of frame registration, has little significance for the trajectory correction in the KF algorithm. It is incorporated into the distance matrix through the weighting coefficients $w_{a,b}$, as described in the previous section. During eye blinks, the overall intensity of the frames drops significantly. Therefore, for the saccade detection algorithm, we remove the frames that correspond to blinks by thresholding the measured mean frame intensity.
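A minimal sketch of the mean-intensity blink gate; the relative threshold value is an assumption, as the text specifies only thresholding of the mean frame intensity:

```python
import numpy as np

def blink_mask(frames, rel_threshold=0.5):
    """Flag blink frames whose mean intensity drops below an assumed
    fraction of the recording's median mean intensity."""
    mean_intensity = np.array([f.mean() for f in frames])
    return mean_intensity < rel_threshold * np.median(mean_intensity)
```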
2.2.3. Saccade detection and quantification procedure
The design of our saccade detection algorithm is based on the velocity thresholding principle reported in Ref. [57]. In our implementation, the x- and y- components of the detected retinal trajectory are denoised using a 21-point moving average filter spanning 17 ms. The first and second derivatives are computed on the x- and y- trajectory components separately using the finite difference method, and are then used to calculate the magnitudes of the absolute velocity and acceleration.
Saccades are first identified by points of local maxima of absolute velocity, shown as an orange solid circle in Fig. 3(h). The initial boundaries of each saccade are determined on the opposite sides of the detected velocity maximum as the two sample points with a velocity below the threshold value empirically set to 1.24°/s. This value allows the elimination of random noise and associated spurious peak velocities mimicking saccades. Both starting and ending boundaries of the saccade are next expanded by 12 ms to compensate for saccade trimming due to velocity thresholding. The final boundaries are depicted by the green and red solid circles in Fig. 3(h) and Figs. 5–6.
The saccade magnitude is calculated using the Euclidean distance given by the denoised x, y coordinates. Next, the upper and lower boundaries are set to the maximum and minimum values and expanded into bands with a width equal to 5% of the extrema. The average values of the points located in these bands are calculated, and the total saccade magnitude is computed as the difference between them. This procedure is visually depicted in Fig. 3(h) with bands marked with grey rectangles.
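The sketch below strings the steps of this subsection together: 21-point smoothing, finite-difference velocity, peak picking with the 1.24°/s threshold, 12-ms boundary expansion, and the 5%-band magnitude estimate. The peak-picking helper and the exact interpretation of the extrema bands are our assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

FS = 1240.0                      # frame rate [Hz]
V_TH = 1.24                      # velocity threshold [deg/s]
PAD = int(round(0.012 * FS))     # 12-ms boundary expansion [samples]

def moving_average(a, k=21):
    """21-point boxcar filter, spanning ~17 ms at 1240 fps."""
    return np.convolve(a, np.ones(k) / k, mode="same")

def detect_saccades(x, y):
    """Return (start, end, magnitude) for each detected saccade."""
    xs, ys = moving_average(x), moving_average(y)
    vx, vy = np.gradient(xs) * FS, np.gradient(ys) * FS
    v = np.hypot(vx, vy)                         # absolute velocity [deg/s]
    peaks, _ = find_peaks(v, height=V_TH)
    events = []
    for p in peaks:
        lo, hi = p, p
        while lo > 0 and v[lo] >= V_TH:          # walk out to sub-threshold
            lo -= 1
        while hi < len(v) - 1 and v[hi] >= V_TH:
            hi += 1
        lo, hi = max(lo - PAD, 0), min(hi + PAD, len(v) - 1)
        # Euclidean displacement profile from the saccade start
        d = np.hypot(xs[lo:hi + 1] - xs[lo], ys[lo:hi + 1] - ys[lo])
        band = 0.05 * (d.max() - d.min())        # 5%-wide extrema bands (assumed)
        mag = (d[d >= d.max() - band].mean()
               - d[d <= d.min() + band].mean())
        events.append((lo, hi, mag))
    return events
```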
2.3. Artificial eye experiments
In order to evaluate the FET system’s tracking performance, a simplified artificial eye, composed of a set of X-Y galvanometric scanners (AES), an imaging lens (AEO), and a test target (AET), was installed in the system, as shown in Fig. 1, inset B. The imaging lens is identical to L1 and is arranged in a 4f system in order to preserve the scanning angles. The scanners were positioned in the eye pupil plane, and a voltage-to-optical-angle calibration was performed using the USAF 1951 test target. For tracking validation, an in vivo human eye fundus image was obtained with a scanning laser ophthalmoscope, printed, and used as the test target. A visual comparison of the artificial eye images and the in vivo human eye images is shown in Fig. 2. By steering the AES with known voltages, we introduce controlled movements of the retinal image for calibration. The AES control circuit provides feedback outputs used to monitor the actual position of the galvanometric mirrors and compare it with the FET measurements. The AES is programmed to mimic a diversity of eye movements. Back-and-forth saccade sequences, for example, were generated using the model described in Ref. [58]. Eight 20-second sequences of 200 horizontal back-and-forth saccades and eight 20-second sequences of 200 vertical back-and-forth saccades were imaged. The measurement times of the sequences were chosen arbitrarily, resulting in 25,000 collected FET image frames per sequence. The magnitudes of the saccades during each sequence were constant and spanned the range of 1–8° in 1° increments. Furthermore, six waveforms of fixational eye movements, 20 s each, were generated according to the model described in Ref. [59], and the artificial eye was programmed to move according to these waveforms during FET frame acquisition.
2.4. Human eye experiments
For our in vivo measurements, we enlisted three healthy subjects (age group 25–40 years) with emmetropic vision and no reported or diagnosed fixation problems. The study adhered to the tenets of the Declaration of Helsinki. After the nature of the study was explained to all the participants, they gave their informed consent. The non-scanning beam power measured at the pupil plane was approximately 100 µW, which is significantly below the safety exposure limits [60].
The experiments were conducted in a dimly lit room. The eye to be measured was chosen at random for each subject. Neither mydriatic nor cycloplegic drugs were used. Each subject was directed to place their head in a chin-rest mounted in front of the device, and the line of sight of the eye was aligned with the optical axis of the instrument [61]. Subjects were allowed to blink during the measurement as needed. After each measurement, the subjects were asked to withdraw their head from the chin-rest and rest for at least one minute.
2.4.1. Fixations
The first experimental goal was to demonstrate our system’s capability to image and detect in vivo fixational eye movements. For this purpose, subjects were directed to focus their sight on the center of a fixation target consisting of a cross-hair and bull’s-eye combination whose diameter subtends a visual angle of 1.5°; this target is shown as target FT1 in Fig. 1(A) [62]. The target was projected onto the subjects’ retina via the Badal system, which allows for the correction of defocus without altering the angular magnification of the target. The subjective far point was found by optically moving the target away from the eye to the last position before the subjects could perceive a “just noticeable” blur [63]. In such an alignment, the system initially images the subjects’ fovea. Using the pair of FET positioning galvanometric mirrors, the operator then moves the scanning pattern across different retinal regions. Once the scanning pattern is positioned on the desired retinal region, subjects are directed to blink and then fixate on the target for 20 s while the measurement is acquired. The experiment is repeated three times using different retinal features: the fovea, the optic nerve, and retinal vessels. Typical examples of retinal features are shown in Fig. 2.
2.4.2. Saccades
In the second part of the experiment, subjects were directed to continuously switch their gaze between two fixation points separated by a known angular distance, ranging from 1° to 8° in steps of 1°. Each fixation point has an angular extent of 0.6°. The goal of this experiment is to register and detect the saccades corresponding to the angular separation between the fixation points. This target is shown as target FT2 in Fig. 1(A) [62]. For each subject, we select a vascularized area of the retina as the region of interest for imaging and tracking. Before the measurements, subjects are instructed to blink and then perform the periodic saccades with the aid of a regular auditory metronome set to 70 beats per minute. Each measurement lasted 20 s and was repeated twice for each angular separation of the fixation points.
3. Results and discussion
3.1. Tracking algorithm evaluation
According to Eqs. (2)–(3), the N-back tracking algorithm accumulates errors in the registration of each pair of frames over time. A full model of the error for the trajectory must include sources of error representing different inputs to the algorithm: errors introduced during stable fixation and during fixation periods in between saccades, when eye displacements and velocities are relatively small, and errors introduced during the larger excursions of saccades. The former can be modeled as a random walk process with a constant proportionality coefficient, because in the velocity range characteristic of fixations the distortions of frames due to eye motion are negligible and the frame overlap is high. The error increases significantly during saccades, because at higher velocities the N-back frames start to show geometrical distortions and the overlapping areas become smaller. As a result, the error of the system is not stationary, and its variance increases with time. In our experiments, the error was measured as the difference between the true position returned by the AES feedback monitor and the position estimated by the algorithm. A typical accumulation of root squared error (RSE) for the experiment with saccades 4° in magnitude is shown in Fig. 4. The increase of accumulated RSE during the intersaccadic parts of the trajectories is interleaved with abrupt changes occurring during the saccades, clearly illustrating the non-stationary nature of the error. This error model is not valid for the final trajectory, estimated by minimizing Eq. (4) and subsequently casting the N-back trajectory onto the corrected KF positions. Here, the accumulated errors are corrected by the MDS procedure (see Ref. [56]), and a stationary error is hypothesized.
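The random-walk model can be made explicit for the N = 1 recursion of Eq. (2): assuming each frame-to-frame registration contributes an independent, zero-mean error $\boldsymbol{\epsilon}_k$ with per-axis variance $\sigma^2$ (an idealization of the fixation regime described above),

$$\mathbf{p}_m = \mathbf{p}_0 + \sum_{k=1}^{m} \left( \mathbf{d}_{k,k-1} + \boldsymbol{\epsilon}_k \right), \qquad \operatorname{Var}\left[\mathbf{p}_m\right] = m\,\sigma^2,$$

which is precisely the linearly increasing drift that the KF correction is designed to remove.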
The augmented Dickey–Fuller (ADF) test with trend adjustment rejects the hypothesis that the error of the KF-corrected trajectory is a non-stationary process, whereas the same test applied to the N-back trajectory fails to reject it. The results of the tests indicate that the KF-corrected trajectory is free from accumulative tracking error, and thus a figure for RMSE can be reported for the whole trajectory. Figure 4 shows the error time series of the N-back and KF-corrected trajectories for a 4° saccade experiment. Note how the increasing N-back error is eliminated once the KF correction is performed.
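For reference, this stationarity check can be reproduced with the ADF test from statsmodels; regression="ct" includes the constant-plus-trend adjustment mentioned above (the significance level and function name are assumptions):

```python
from statsmodels.tsa.stattools import adfuller

def is_drift_free(error_series, alpha=0.05):
    """ADF unit-root test with constant + linear trend ('ct').
    Rejecting the null (p < alpha) indicates a stationary error,
    i.e., no accumulating tracking drift."""
    stat, pvalue, *_ = adfuller(error_series, regression="ct")
    return pvalue < alpha, stat, pvalue
```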
Two strategies for choosing the KFs were tested. In the first strategy, the KFs are selected at fixed intervals (20 ms, 80 ms, and 160 ms). This strategy is most effective for fixation experiments, when the motion amplitude is small compared to the frame size and the probability that the frames will overlap is high. In such cases, the typical (minimum) RMSE for the trajectories is 0.039 arcmin for a KF interval of 20 ms and 0.045 arcmin for KF intervals of 80 and 160 ms. In the experiments with forced saccades, acceptable results are achieved by taking a single KF from each fixation period that occurs between saccades. The KF is chosen from the time between velocity peaks (i.e., after the end of or in between the saccades). The interval between the KFs in this strategy is almost 900 ms; therefore, the achieved RMSE is higher and increases from 0.36 arcmin for 1° saccades to 5 arcmin for 8° saccades. For in vivo eye movement data, both strategies for choosing the KFs are combined.
In the current stage of device development, all the computations are implemented in the C language and run on a multithreaded CPU. The trajectory reconstruction is performed in post-processing. Reconstructing 20 s of eye motion recording, with 25,000 frames and 1,000 KFs, requires approximately one minute for the N-back stage and three minutes for the calculation of the KF displacements followed by the MDS trajectory optimization. Our preliminary experiments show that the N-back part of the algorithm can be performed in less than 100 µs per frame when implemented on a graphics processing unit (GPU), which makes a real-time implementation feasible.
3.2. Human eye experiments
Figure 5 illustrates a typical saccade of 4° in magnitude from the experiment described in Methodology subsection 2.4.2. In this case, retinal vessels were chosen as the tracking features. Selected frames corresponding to the solid blue circles in the saccadic plot are shown. Solid green and red circles represent the start and end of the saccade, respectively. For clarity of presentation, the start of the saccade is moved to the origin of the coordinate system.
The saccade data shown in Fig. 5 is a typical example from a series of voluntary back-and-forth saccades performed by subject 1 in the second part of the experiment. The complete series is presented in Fig. 6, where the saccade from Fig. 5 is shaded in blue.
The green region indicates a gap in the trajectory due to a blink. Note that the trajectory reconstruction is not affected by the partial loss of data during the blink. Green and red solid circles represent the detected starts and ends of the saccades, respectively. Horizontal grey-shaded stripes mark the angular size of the fixation targets, also shown to scale on the right side of the plot. The time between the detected saccades corresponds well with the setting of the metronome, and the majority of the saccade magnitudes fall within the range of the fixation targets.
Corrective follow-up saccades, compensating for saccadic undershoot or overshoot, were observed in all trajectories. Because the task involved performing horizontal saccades (x-coordinate), the vertical component (y-coordinate) is small in comparison with the horizontal component. Nevertheless, the correlation of the x- and y-retinal trajectories is clearly evident, indicating that the eye motion deviates from a horizontal line during the saccade.
Motion parameters such as velocity and acceleration can be readily calculated from the saccade trajectories. Figure 7 summarizes the displacement angular magnitudes, velocities, and accelerations of all 42 saccades from both 4° forced-saccade experiments on subject 1, with a distinction between temporal-nasal and nasal-temporal directions. A clear asymmetry can be observed between the directions, especially in the acceleration plots. Correcting saccades, which are comparatively much smaller in magnitude, are not shown in Fig. 7.
Figures 8(a) and (d) depict the vertical and horizontal eye-motion components, respectively; the components are plotted on separate axes with different scales. By combining both components, the x–y retinal trajectory is readily obtained, shown with color-coded velocity values in panel (c). Panel (b) is a projection of the horizontal and vertical trajectories from panels (a) and (d) in the form of a retinal position density map. The contours of the fixation targets in panel (b) are shown to scale. For further examples and results, please refer to Supplementary Materials Visualization 1, Visualization 2, Visualization 3, Visualization 4, Visualization 5, Visualization 6, Visualization 7, and Visualization 8, which show videos of FET imaging and retrieved trajectories during 2, 4, 6, and 8° saccades performed by the subjects.
Figure 9 shows typical results of the in vivo fixation experiments for Subject 1. Figures 9(a) and (d) show the vertical and horizontal eye motion components, respectively. Green-shaded regions represent blinks that occurred during the measurement. Figure 9(b) shows the retinal position density map during this measurement; although eye drift and microsaccades can be easily identified, it is clear that most of the time the eye remains on a definite region, likely the center of the fixation target, which is drawn to scale in the same panel. Despite the relatively stable eye position, typical eye drift and microsaccades are clearly visible in the individual x- and y-trajectories, as well as in panel (c), which shows the entire detected x–y retinal trajectory with color-coded velocity. For further examples and results, please refer to Supplementary Materials Visualization 9, Visualization 10, Visualization 11, and Visualization 12, which show videos of FET imaging and retrieved trajectories during fixation periods. In these examples, the tracking was performed on retinal vessels, the optic disc, larger retinal vessels near the optic disc, and the fovea, respectively.
A summary of the saccades and microsaccades detected in all the measurements performed in this study is presented in Fig. 10 in the form of a saccadic main sequence [64]. Remaining outliers were removed by fitting the main sequence formula proposed by Baloh [65] and by further visual inspection of the data points in the sequence. The final plot consists of 5,159 points, each corresponding to a saccade or microsaccade.
The results shown in Fig. 10 are in good agreement with previously published results obtained using different instrumentation [2,35]. Notably, our tracking system extends the range of detectable microsaccade magnitudes and increases the accuracy of their measurement. The inset of Fig. 10 shows the smallest microsaccade detected in this study, with a magnitude of 0.028°.
The results of this study demonstrate our system’s capability for the accurate reconstruction of retinal motion with an angular resolution of 0.039 arcmin RMSE and a temporal resolution of up to 790 µs. Further parametric characterization of eye motion, including intersaccadic intervals and their statistical distribution, and the number and duration of fixations, is currently being conducted in a clinical setting with a statistically representative population.
4. Conclusions
We have demonstrated a novel, noninvasive eye tracking system capable of detecting retinal displacements as small as 0.028° with an angular resolution of 0.039 arcmin and a maximum velocity of 300°/s across an angular span as wide as 8°. Our tracking algorithms quantify eye displacements using the shifts of a subset of frames in a sequence spanning the full acquisition cycle, obviating the need for a single reference frame and allowing for the precise measurement of eye movements exceeding the spatial extent of single acquired frames. Therefore, our system overcomes the limitations on maximum detectable saccadic magnitude and velocity characteristic of current image-based retinal trackers and allows the detection of finer features of eye motion, enabling new, promising opportunities in retinal imaging and clinical neuroscience. Furthermore, our system offers the ability to precisely measure both microsaccades occurring during fixation and large saccades, without the need for any additional external imaging devices such as a wide-field SLO. The subtle features of saccadic dysfunction, fixation instability, and abnormal smooth pursuit can be readily extracted and quantified in greater detail, thus offering a promising tool set for the early identification of biomarkers of neurodegenerative diseases. Moreover, the FET can be readily combined with other eye imaging modalities such as SLO or OCT to provide eye motion correction without major hardware changes to these modalities.
Acknowledgments
The project “FreezEYE Tracker – ultrafast system for image stabilization in biomedical imaging” was conducted within the TEAM TECH Programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.
We would like to thank Carlos López-Mariscal, PhD for valuable help during manuscript preparation.
Funding
Fundacja na rzecz Nauki Polskiej (POIR.04.04.00-00-2070/16-00).
Disclosures
AM2M Ltd. L.P: MM (E), MN (I, E), KD (I, E), AS (I), MS (I).
References
- 1. Martinez-Conde S., Macknik S. L., Hubel D. H., “The role of fixational eye movements in visual perception,” Nat. Rev. Neurosci. 5(3), 229–240 (2004). 10.1038/nrn1348
- 2. Martinez-Conde S., Otero-Millan J., Macknik S. L., “The impact of microsaccades on vision: towards a unified theory of saccadic function,” Nat. Rev. Neurosci. 14(2), 83–96 (2013). 10.1038/nrn3405
- 3. Wade N. J., Tatler B. W., “Did Javal measure eye movements during reading?” J. Eye Mov. Res. 2, 1–7 (2009). 10.16910/jemr.2.5.5
- 4. Wade N. J., “How Were Eye Movements Recorded Before Yarbus?” Perception 44(8-9), 851–883 (2015). 10.1177/0301006615594947
- 5. Yarbus A. L., “Methods,” in Eye Movements and Vision (Plenum Press, 1967), pp. 5–58. 10.1007/978-1-4899-5379-7_2
- 6. Robinson D. A., “A Method of Measuring Eye Movement Using a Scleral Search Coil in a Magnetic Field,” IEEE Trans. Bio-Med. Electron. 10(4), 137–145 (1963). 10.1109/TBMEL.1963.4322822
- 7. Cornsweet T. N., Crane H. D., “Accurate two-dimensional eye tracker using first and fourth Purkinje images,” J. Opt. Soc. Am. 63(8), 921–928 (1973). 10.1364/JOSA.63.000921
- 8. SR Research Ltd., “EyeLink 1000 User Manual,” http://sr-research.jp/support/EyeLink%201000%20User%20Manual%201.5.0.pdf
- 9. Tobii Technology Inc., “Tobii Pro Spectrum Product Description,” https://www.tobiipro.com/siteassets/tobii-pro/product-descriptions/tobii-pro-spectrum-product-description.pdf/?v=2.0
- 10. Cornsweet T. N., “New Technique for the Measurement of Small Eye Movements,” J. Opt. Soc. Am. 48(11), 808–811 (1958). 10.1364/JOSA.48.000808
- 11. Ferguson R. D., “Servo tracking system utilizing phase-sensitive detection of reflectance variations,” U.S. patent 5,767,941 (1998).
- 12. Ferguson R. D., “Servo tracking system utilizing phase-sensitive detection of reflectance variations,” U.S. patent 5,943,115 (1999).
- 13. Hammer D. X., Ferguson R. D., Magill J. C., White M. A., Elsner A. E., Webb R. H., “Image stabilization for scanning laser ophthalmoscopy,” Opt. Express 10(26), 1542–1549 (2002). 10.1364/OE.10.001542
- 14. Hammer D. X., Ferguson R. D., Magill J. C., White M. A., Elsner A. E., Webb R. H., “Compact scanning laser ophthalmoscope with high-speed retinal tracker,” Appl. Opt. 42(22), 4621–4632 (2003). 10.1364/AO.42.004621
- 15. Ferguson R. D., Hammer D. X., Paunescu L. A., Beaton S., Schuman J. S., “Tracking optical coherence tomography,” Opt. Lett. 29(18), 2139–2141 (2004). 10.1364/OL.29.002139
- 16. Hammer D. X., Ferguson R. D., Magill J. C., Paunescu L. A., Beaton S., Ishikawa H., Wollstein G., Schuman J. S., “Active retinal tracker for clinical optical coherence tomography systems,” J. Biomed. Opt. 10(2), 024038 (2005). 10.1117/1.1896967
- 17. Hammer D. X., Ferguson R. D., Bigelow C. E., Iftimia N. V., Ustun T. E., Burns S. A., “Adaptive optics scanning laser ophthalmoscope for stabilized retinal imaging,” Opt. Express 14(8), 3354–3367 (2006). 10.1364/OE.14.003354
- 18. Burns S. A., Tumbar R., Elsner A. E., Ferguson D., Hammer D. X., “Large-field-of-view, modular, stabilized, adaptive-optics-based scanning laser ophthalmoscope,” J. Opt. Soc. Am. A 24(5), 1313–1326 (2007). 10.1364/JOSAA.24.001313
- 19. Kocaoglu O. P., Ferguson R. D., Jonnal R. S., Liu Z., Wang Q., Hammer D. X., Miller D. T., “Adaptive optics optical coherence tomography with dynamic retinal tracking,” Biomed. Opt. Express 5(7), 2262–2284 (2014). 10.1364/BOE.5.002262
- 20. Webb R. H., Hughes G. W., “Scanning laser ophthalmoscope,” IEEE Trans. Biomed. Eng. BME-28(7), 488–492 (1981). 10.1109/TBME.1981.324734
- 21. Wornson D. P., Hughes G. W., Webb R. H., “Fundus tracking with the scanning laser ophthalmoscope,” Appl. Opt. 26(8), 1500–1504 (1987). 10.1364/AO.26.001500
- 22. Stetter M., Sendtner R. A., Timberlake G. T., “A novel method for measuring saccade profiles using the scanning laser ophthalmoscope,” Vision Res. 36(13), 1987–1994 (1996). 10.1016/0042-6989(95)00276-6
- 23. Mulligan J. B., “Recovery of motion parameters from distortions in scanned images,” in Proceedings of the Image Registration Workshop, Le Moigne J., ed. (NASA Goddard Space Flight Center, 1997), pp. 281–292.
- 24. Arathorn D. W., Yang Q., Vogel C. R., Zhang Y., Tiruveedhula P., Roorda A., “Retinally stabilized cone-targeted stimulus delivery,” Opt. Express 15(21), 13731–13744 (2007). 10.1364/OE.15.013731
- 25. Sheehy C. K., Yang Q., Arathorn D. W., Tiruveedhula P., de Boer J. F., Roorda A., “High-speed, image-based eye tracking with a scanning laser ophthalmoscope,” Biomed. Opt. Express 3(10), 2611–2622 (2012). 10.1364/BOE.3.002611
- 26. Vienola K. V., Braaf B., Sheehy C. K., Yang Q., Tiruveedhula P., Arathorn D. W., de Boer J. F., Roorda A., “Real-time eye motion compensation for OCT imaging with tracking SLO,” Biomed. Opt. Express 3(11), 2950–2963 (2012). 10.1364/BOE.3.002950
- 27. Braaf B., Vienola K. V., Sheehy C. K., Yang Q., Vermeer K. A., Tiruveedhula P., Arathorn D. W., Roorda A., de Boer J. F., “Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO,” Biomed. Opt. Express 4(1), 51–65 (2013). 10.1364/BOE.4.000051
- 28. Sheehy C. K., Tiruveedhula P., Sabesan R., Roorda A., “Active eye-tracking for an adaptive optics scanning laser ophthalmoscope,” Biomed. Opt. Express 6(7), 2412–2423 (2015). 10.1364/BOE.6.002412
- 29. Stevenson S. B., Sheehy C. K., Roorda A., “Binocular eye tracking with the tracking scanning laser ophthalmoscope,” Vision Res. 118, 98–104 (2016). 10.1016/j.visres.2015.01.019
- 30. Yang Q., Zhang J., Nozato K., Saito K., Williams D. R., Roorda A., Rossi E. A., “Closed-loop optical stabilization and digital image registration in adaptive optics scanning light ophthalmoscopy,” Biomed. Opt. Express 5(9), 3174–3191 (2014). 10.1364/BOE.5.003174
- 31. Damodaran M., Vienola K. V., Braaf B., Vermeer K. A., de Boer J. F., “Digital micromirror device based ophthalmoscope with concentric circle scanning,” Biomed. Opt. Express 8(5), 2766–2780 (2017). 10.1364/BOE.8.002766
- 32. Vienola K. V., Damodaran M., Braaf B., Vermeer K. A., de Boer J. F., “In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope,” Biomed. Opt. Express 9(2), 591–602 (2018). 10.1364/BOE.9.000591
- 33. Azimipour M., Zawadzki R. J., Gorczynska I., Migacz J., Werner J. S., Jonnal R. S., “Intraframe motion correction for raster-scanned adaptive optics images using strip-based cross-correlation lag biases,” PLoS One 13(10), e0206052 (2018). 10.1371/journal.pone.0206052
- 34. Salmon A. E., Cooper R. F., Langlo C. S., Baghaie A., Dubra A., Carroll J., “An automated reference frame selection (ARFS) algorithm for cone imaging with adaptive optics scanning light ophthalmoscopy,” Trans. Vis. Sci. Tech. 6(2), 9–15 (2017). 10.1167/tvst.6.2.9
- 35. Sheehy C. K., Bensinger E. S., Romeo A., Rani L., Stepien-Bernabe N., Shi B., Helft Z., Putnam N., Cordano C., Gelfand J. M., Bove R., Stevenson S. B., Green A. J., “Fixational microsaccades: A quantitative and objective measure of disability in multiple sclerosis,” Mult. Scler. J., 1352458519894712 (2020).
- 36. Duchowski A. T., “A breadth-first survey of eye-tracking applications,” Behav. Res. Methods Instrum. Comput. 34(4), 455–470 (2002). 10.3758/BF03195475
- 37. Otero-Millan J., Serra A., Leigh R. J., Troncoso X. G., Macknik S. L., Martinez-Conde S., “Distinctive features of saccadic intrusions and microsaccades in progressive supranuclear palsy,” J. Neurosci. 31(12), 4379–4387 (2011). 10.1523/JNEUROSCI.2600-10.2011
- 38. Otero-Millan J., Schneider R., Leigh R. J., Macknik S. L., Martinez-Conde S., “Saccades during attempted fixation in Parkinsonian disorders and recessive ataxia: from microsaccades to square-wave jerks,” PLoS One 8(3), e58535 (2013). 10.1371/journal.pone.0058535
- 39. Kerr-Gaffney J., Harrison A., Tchanturia K., “Eye-tracking research in eating disorders: A systematic review,” Int. J. Eat. Disord. 52(1), 3–27 (2019). 10.1002/eat.22998
- 40. Rolfs M., “Microsaccades: small steps on a long way,” Vision Res. 49(20), 2415–2441 (2009). 10.1016/j.visres.2009.08.010
- 41. Clay V., König P., König S., “Eye tracking in virtual reality,” J. Eye Mov. Res. 12(1), 1–8 (2019). 10.16910/jemr.12.1.3
- 42. MacAskill M. R., Anderson T. J., “Eye movements in neurodegenerative diseases,” Curr. Opin. Neurol. 29(1), 61–68 (2016). 10.1097/WCO.0000000000000274
- 43. Benson P. J., Beedie S. A., Shephard E., Giegling I., Rujescu D., St. Clair D., “Simple viewing tests can detect eye movement abnormalities that distinguish schizophrenia cases from controls with exceptional accuracy,” Biol. Psychiatry 72(9), 716–724 (2012). 10.1016/j.biopsych.2012.04.019
- 44. Wu C. C., Cao B., Dali V., Gagliardi C., Barthelemy O. J., Salazar R. D., Pomplun M., Cronin-Golomb A., Yazdanbakhsh A., “Eye movement control during visual pursuit in Parkinson’s disease,” PeerJ 6, e5442 (2018). 10.7717/peerj.5442
- 45. Gitchel G. T., Wetzel P. A., Baron M. S., “Pervasive ocular tremor in patients with Parkinson disease,” Arch. Neurol. 69(8), 1011–1017 (2012). 10.1001/archneurol.2012.70
- 46. Fletcher W. A., Sharpe J. A., “Saccadic eye movement dysfunction in Alzheimer’s disease,” Ann. Neurol. 20(4), 464–471 (1986). 10.1002/ana.410200405
- 47. Mallery R. M., Poolman P., Thurtell M. J., Full J. M., Ledolter J., Kimbrough D., Frohman E. M., Frohman T. C., Kardon R. H., “Visual fixation instability in multiple sclerosis measured using SLO-OCT,” Invest. Ophthalmol. Visual Sci. 59(1), 196–201 (2018). 10.1167/iovs.17-22391
- 48. Bijvank J. A., Van Rijn L. J., Balk L. J., Tan H. S., Uitdehaag B. M. J., Petzold A., “Diagnosing and quantifying a common deficit in multiple sclerosis: Internuclear ophthalmoplegia,” Neurology 92(20), e2299–e2308 (2019). 10.1212/WNL.0000000000007499
- 49. Rodríguez-Labrada R., Vázquez-Mojena Y., Velázquez-Pérez L., “Eye movement abnormalities in neurodegenerative diseases,” in Eye Motility (IntechOpen, 2019).
- 50. LaRocca F., Dhalla A.-H., Kelly M. P., Farsiu S., Izatt J. A., “Optimization of confocal scanning laser ophthalmoscope design,” J. Biomed. Opt. 18(7), 076015 (2013). 10.1117/1.JBO.18.7.076015
- 51. Watson A. B., Yellott J. I., “A unified formula for light-adapted pupil size,” J. Vis. 12(10), 12 (2012). 10.1167/12.10.12
- 52. Donnelly W. J., III, Roorda A., “Optimal pupil size in the human eye for axial resolution,” J. Opt. Soc. Am. A 20(11), 2010–2015 (2003). 10.1364/JOSAA.20.002010
- 53. Charman W. N., Heron G., “Fluctuations in accommodation: a review,” Ophthalmic Physiol. Opt. 8(2), 153–164 (1988). 10.1111/j.1475-1313.1988.tb01031.x
- 54. Lengwiler F., Rappoport D., Jaggi G. P., Landau K., Traber G. L., “Reliability of Cyclotorsion measurements using Scanning Laser Ophthalmoscopy imaging in healthy subjects: The CySLO study,” Br. J. Ophthalmol. 102(4), 535–538 (2018). 10.1136/bjophthalmol-2017-310396
- 55. Evangelidis G. D., Psarakis E. Z., “Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization,” IEEE Trans. Pattern Anal. Mach. Intell. 30(10), 1858–1865 (2008). 10.1109/TPAMI.2008.113
- 56. Borg I., Groenen P., Modern Multidimensional Scaling: Theory and Applications (Springer, 1997).
- 57. Engbert R., Kliegl R., “Microsaccades uncover the orientation of covert attention,” Vision Res. 43(9), 1035–1045 (2003). 10.1016/S0042-6989(03)00084-1
- 58. Dai W., Selesnick I., Rizzo J. R., Rucker J., Hudson T., “A parametric model for saccadic eye movement,” in 2016 IEEE Signal Processing in Medicine and Biology Symposium (SPMB) (IEEE, 2016), pp. 1–6.
- 59. Engbert R., Mergenthaler K., Sinn P., Pikovsky A., “An integrated model of fixational eye movements and microsaccades,” Proc. Natl. Acad. Sci. 108(39), E765–E770 (2011). 10.1073/pnas.1102730108
- 60. Delori F. C., Webb R. H., Sliney D. H., American National Standards Institute, “Maximum permissible exposures for ocular safety (ANSI 2000), with emphasis on ophthalmic devices,” J. Opt. Soc. Am. A 24(5), 1250–1265 (2007). 10.1364/JOSAA.24.001250
- 61. Nowakowski M., Sheehan M., Neal D., Goncharov A. V., “Investigation of the isoplanatic patch and wavefront aberration along the pupillary axis compared to the line of sight in the eye,” Biomed. Opt. Express 3(2), 240–258 (2012). 10.1364/BOE.3.000240
- 62. Thaler L., Schütz A. C., Goodale M. A., Gegenfurtner K. R., “What is the best fixation target? The effect of target shape on stability of fixational eye movements,” Vision Res. 76, 31–42 (2013). 10.1016/j.visres.2012.10.012
- 63. Atchison D. A., Fisher S. W., Pedersen C. A., Ridall P. G., “Noticeable, troublesome and objectionable limits of blur,” Vision Res. 45(15), 1967–1974 (2005). 10.1016/j.visres.2005.01.022
- 64. Bahill A. T., Clark M. R., Stark L., “The main sequence, a tool for studying human eye movements,” Math. Biosci. 24(3-4), 191–204 (1975). 10.1016/0025-5564(75)90075-9
- 65. Baloh R. W., Sills A. W., Kumley W. E., Honrubia V., “Quantitative measurement of saccade amplitude, duration, and velocity,” Neurology 25(11), 1065 (1975). 10.1212/WNL.25.11.1065