Translational Vision Science & Technology
. 2025 Dec 1;14(12):3. doi: 10.1167/tvst.14.12.3

Lens-Viewpoint Extraction Method for Personalized Progressive Addition Lens Fitting

Yating Yang 1,2,*, Xiaoqin Chen 3,4,*, Zhichao Deng 1,2, Jianchun Mei 1,2, Yutong Li 1,2, Xingyi Guo 3,4, Peiran Zhou 3,4, Lihua Li 3,4, Qing Ye 1,2
PMCID: PMC12697685  PMID: 41324310

Abstract

Purpose

To validate the accuracy of a lens-viewpoint extraction method and demonstrate its feasibility for tracking users’ visual behavior on progressive addition lenses (PALs), with the aim of providing a quantifiable fitting solution for PAL wearers in the future.

Methods

We evaluated a method that uses eye-tracking technology, implemented with a self-made setup, to extract the viewpoint on the lens. Accuracy verification experiments were performed using a mechanical artificial eye, and a clinical test on 10 subjects was conducted to assess the accuracy of the method and explore its feasibility.

Results

The accuracy of the lens-viewpoint extraction method was verified to be 0.21 ± 0.06 mm and 0.30 ± 0.07 mm on the lens in the mechanical artificial eye experiment and clinical test, respectively. An exemplary analysis of user eye behavior on the lens was provided.

Conclusions

These results demonstrate that the lens-viewpoint extraction method achieves high accuracy and is feasible for tracking users’ personalized visual habits on the lens.

Translational Relevance

The lens-viewpoint extraction method introduced in this study shows significant potential for providing a quantifiable solution to improve wearing comfort for the presbyopia population, holding important translational value in the field of precision optometry.

Keywords: Lens fitting, Eye-tracking, Progressive addition lenses

Introduction

Population aging has increased the prevalence of presbyopia.1,2 Among the solutions for presbyopia, progressive addition lenses (PALs) have become important because they are noninvasive and portable.3 Through optimized design and refined processing, the region between the distance and near focal areas achieves a smooth transition in focal power.4,5 The user can therefore shift their gaze easily between the different vision zones of the PALs. Figure 1a shows the commonly designed vision zones of PALs: distance, transition, near, and astigmatic. The vertical length of the transition zone is the channel length, and the horizontal offset between the centers of the distance zone and near zone is the inset.

Figure 1.

Figure 1.

(a) Schematic diagram of the power distribution of the PALs. (b) Conceptual diagram of lens viewpoint.

Although PALs are typically effective in meeting the visual needs of users, a significant proportion of users have difficulty adapting to them, which affects their daily life.6,7 As previous studies have reported, wearing PALs itself alters users’ eye-movement habits.8,9 The main reason for these problems is the discrepancy between the visual habits of users and the power distribution of the PALs, a mismatch rooted in the subjective and ambiguous evaluations of the current PAL fitting process. Normally, fitting a PAL involves selecting the power-distribution type for the wearer by inquiring about lifestyle habits, profession, and other subjective factors. This process lacks any objective assessment of the user’s visual habits, making it impossible to determine objectively which power-distribution type best matches the user’s personalized visual habits. Accordingly, it is extremely important to develop a technical method that objectively collects a user’s visual habits during the fitting phase of progressive lenses in order to minimize such mismatches. This is crucial to enhancing the comfort of progressive lenses for the presbyopic population.

Eye movements can provide a wealth of information that aids in understanding the visual behavior of PAL wearers. There are many ways to track eye movements, such as eye tracking based on three-dimensional (3D) eye models, eye tracking based on two-dimensional (2D) regression, and eye tracking based on external shape.10,11 Eye-tracking technology has been used in previous research to study the relationship between a user's visual habits and different PAL designs. For example, Benedi-Garcia et al.12 developed an experimental method using an eye tracker (Tobii Pro Glasses 3; Tobii, Stockholm, Sweden) to identify the actual regions of a PAL being used, which revealed how a user’s posture affects lens usage and demonstrated good concordance between theoretical and actual regions of use. Concepcion-Grande et al.13 used the Tobii Pro Glasses 3 wearable eye tracker to record gaze position when reading with near and distance vision while using two types of PALs (PPL-D and PPL-N), progressive lens designs suited to distance and near use, respectively. Smith et al.14 recorded participants’ eye and head movements using Pupil Core headsets (Pupil Labs, Berlin, Germany) to investigate gaze and behavior metrics at different viewing distances with multifocal contact lenses, single-vision contact lenses, and PALs. These studies showed that PAL design and viewing habits mutually influence each other. During dispensing, recording the habitual viewing area on the lens by extracting the viewpoint can inform PAL selection or customization. The viewpoint on the lens refers to the point where the line of sight falls on the lens when the human eye looks through it; we refer to this as the lens viewpoint. Figure 1b shows a conceptual diagram of the lens viewpoint.

In this study, we evaluated a method to determine the precise viewing position on the lens based on eye-tracking technology and a homemade setup. To validate the proposed method, rigorous verification experiments were conducted using a mechanical artificial eye together with a corresponding clinical trial, which confirmed the method's feasibility. On this basis, the visual behavior of a spectacle-wearing user can be obtained, which may help to solve mismatch problems in the future.

Materials and Methods

Eye Tracking

In this study, a 2D regression-based eye-tracking technology was employed to extract the viewpoint on the lens. The principle of 2D regression-based eye tracking is that, when the subject looks at a set of calibration points that have known coordinates, a camera is used to capture and record changes in eye features. The eye features are then mapped to calibration points:

X = a0 + a1x + a2x² + a3y + a4y² + a5xy (1)
Y = b0 + b1x + b2x² + b3y + b4y² + b5xy (2)

First, the internal parameters of the camera lens were calibrated through a checkerboard calibration method15 to correct the distortion. Then, a threshold segmentation algorithm16 was used to detect the pupil on the artificial eye images; the YOLOv11 model17 was utilized for pupil detection in human eye images. Finally, the coordinates of the pupil center were mapped to the calibration points on the lens using a six-parameter binary function,18 as shown in Equations 1 and 2. In these equations, x and y denote the abscissa and ordinate of the pupil center, respectively; X and Y represent the abscissa and ordinate of the calibration point, respectively; and a0–a5 and b0–b5 are the parameters to be fitted.
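Because Equations 1 and 2 are linear in their coefficients, each can be fitted by ordinary least squares. The sketch below is a minimal NumPy illustration of this step (not the authors' code; function names are our own):

```python
import numpy as np

def design_matrix(px, py):
    """Feature matrix [1, x, x^2, y, y^2, xy] for the six-parameter model."""
    px, py = np.asarray(px, dtype=float), np.asarray(py, dtype=float)
    return np.column_stack([np.ones_like(px), px, px**2, py, py**2, px * py])

def fit_gaze_mapping(pupil_xy, target_xy):
    """Fit Equations 1 and 2 by least squares.

    pupil_xy  : (N, 2) pupil-center coordinates (x, y)
    target_xy : (N, 2) calibration-point coordinates (X, Y)
    Returns the coefficient vectors (a0..a5) and (b0..b5).
    """
    A = design_matrix(pupil_xy[:, 0], pupil_xy[:, 1])
    a, *_ = np.linalg.lstsq(A, target_xy[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, target_xy[:, 1], rcond=None)
    return a, b

def predict(a, b, px, py):
    """Map pupil-center coordinates to predicted viewpoints via Eqs. 1 and 2."""
    A = design_matrix(px, py)
    return A @ a, A @ b
```

With nine calibration points and six coefficients per equation, the system is overdetermined, and `lstsq` returns the least-squares solution.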

The YOLOv11 training method was as follows: We selected 300 images from six subjects, evenly distributed across complete, slightly occluded, and severely occluded pupils. The dataset was split into 180 training and 120 test images. Training annotations included partially occluded pupils, as shown in Figure 2a. During implementation, results with confidence < 0.85 or unrecognized pupils were excluded. Figures 2b and 2c show the recognition effects of partial occlusion and severe occlusion, respectively.

Figure 2.

Figure 2.

(a) A labeling method for the condition when the pupil was occluded. (b) Recognition of slightly occluded pupil. (c) Identification of severely occluded pupil.

Extraction of Lens Viewpoint

The core of the lens-viewpoint extraction method is to find the functional relationship between the pupil center coordinates and the lens viewpoint. A set of calibration points ("calibration viewpoints") on the lens is calculated according to the principle of similar triangles using Equations 3 and 4:

X_on_the_glass = [d_eye_and_glasses / (d_eye_and_glasses + d_glasses_and_screen)] × X_on_the_screen (3)
Y_on_the_glass = [d_eye_and_glasses / (d_eye_and_glasses + d_glasses_and_screen)] × Y_on_the_screen (4)

This approach is referred to as the projection method. In the equations, d represents distance, and X and Y represent horizontal and vertical coordinates, respectively. In this study, nine calibration points were utilized. A schematic diagram of the method for obtaining the calibration viewpoints is shown in Figure 3a. When the subject looked straight ahead, the subject's pupil center was aligned with the fifth calibration point on the screen, so the coordinates of the calibration points on the lens could be derived according to the principle of similar triangles. The accuracy of the lens-viewpoint extraction method was tested by computing the distance between the lens target point (obtained via projection) and the predicted viewpoint (given by the predictive model).
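Equations 3 and 4 simply scale each screen point by the ratio of the eye-to-lens distance to the total eye-to-screen distance. A minimal sketch of the projection method (the helper name and the distances in the example are illustrative, not taken from the paper):

```python
def project_to_lens(x_screen, y_screen, d_eye_glasses, d_glasses_screen):
    """Similar-triangle projection of a screen calibration point onto the
    lens plane (Equations 3 and 4). All distances share one length unit."""
    scale = d_eye_glasses / (d_eye_glasses + d_glasses_screen)
    return scale * x_screen, scale * y_screen
```

For example, with an assumed 12-mm vertex distance and a screen 1200 mm beyond the lens, a calibration point 90 mm above screen center projects to 90 × 12/1212 ≈ 0.89 mm above the lens origin, which illustrates why sub-millimeter accuracy on the lens matters for this method.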

Figure 3.

Figure 3.

(a) Schematic diagram of the projection method to obtain the calibration viewpoints. (b) Fit of the function between the center of the pupil and the calibration viewpoints.

The fitting of the calibration points on the lens to the pupil center is shown in Figure 3b. A six-parameter binary regression function was used to fit the function between the pupil center coordinates and calibration points on the lens. When the subject looked at the nine calibration points on the screen, the calibration points on the lens were recorded, and the corresponding pupil center coordinates were collected. They were fitted to obtain a predictive model.

The whole technology road map of this study is shown in Figure 4. The overall process of lens-viewpoint extraction is as follows. First, the three-dimensional facial contour of the spectacle-wearing user is measured, including vertex distance (VD), pupil height (PH), and pupil distance (PD). The calibration process is then carried out, and, finally, the user is guided to observe objects from far to near to collect their visual habits in a natural posture. The accuracy verification consists of mechanical artificial eye experiments and clinical experiments. The VISUFIT 1000 (ZEISS, Oberkochen, Germany) platform19 was utilized to measure the VD, PH, and PD.

Figure 4.

Figure 4.

The method road map of the lens-viewpoint extraction method. VD, PH, and PD were measured, after which the calibration was conducted and the user's visual habits were obtained using the predictive model. An exploration of feasibility was conducted with an artificial eye experiment and a clinical experiment in which the calibration and validation accuracies were calculated and the predictive model was utilized to give a clinical example of visual-habit gathering.

The eye tracker used in this study takes the form of a helmet, as shown in Figure 5a; it features an adjustable red truss holding two infrared (IR) eye cameras. The truss can be rotated to adjust the shooting angle of the IR eye cameras, and the part beneath the truss that contacts the helmet can also be rotated to adjust the shooting distance. Pre-experiment adjustments ensured clear pupil imaging, as shown on the right in Figure 6d, and an unobstructed view for the subject. The cameras recorded 720-pixel video at 30 Hz, transmitted via USB. Unlike commercial head-mounted trackers that require custom lens clamps, this design eliminates such constraints.

Figure 5.

Figure 5.

(a) The helmet-mounted eye tracker. (b) The “screen” used for the mechanical artificial eye experiment: red coordinate paper. (c) The “screen” used in the clinical experiment. The red LED light sources at nine calibration points and five validation points were illuminated sequentially.

Figure 6.

Figure 6.

(a) Schematic diagram and photograph of the optical path. (b) Helmet-mounted eye trackers used in the clinical experiment. (c) A photograph taken during the clinical experiment. (d) The clinical experiment and its corresponding human-eye image.

The screen consisted of grid paper (1-mm resolution) for the artificial eye experiment (Fig. 5b) and an electronic board for the clinical trials (Fig. 5c). Fourteen points were divided into nine calibration points (training set) and five validation points (test set). The calibration and validation areas spanned 18 × 15 cm and 12 × 10 cm, respectively. Accuracy was determined as the mean error per set. The range of viewing angles validated by the current method was therefore ±7.8° for the artificial eye experiment and ±8.5° for the clinical experiment.

In Experiment 1, five trials were conducted to test the accuracy of the eye tracker while participants were spectacle free. They sequentially gazed at the 14 target points on the screen located 120 cm ahead while their head was fixed by a head support. Nine calibration points were used to calibrate the prediction model, after which the remaining five validation points were used to calculate the accuracy of the eye tracker. The accuracy was calculated as the angular deviation between the predicted gaze point and the actual gaze point on the screen.
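The angular deviation in Experiment 1 can be obtained by converting the linear error on the screen into visual angle at the 120-cm viewing distance. A small illustrative helper (our own sketch, not the authors' code):

```python
import math

def angular_error_deg(error_mm, distance_mm):
    """Visual angle (degrees) subtended by a linear gaze error at a given
    viewing distance, using atan so large errors are handled exactly."""
    return math.degrees(math.atan2(error_mm, distance_mm))
```

At 120 cm, for instance, a 10-mm deviation on the screen corresponds to about 0.48° of visual angle, the same order as the mean accuracy reported in the Results.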

In Experiment 2, a mechanical artificial eye (Fig. 6a) was designed to validate the lens-viewpoint extraction method. It featured a central red light-emitting diode (LED) and a WiFi module for remote rotation control via an Android app. The lens was fixed 1 cm in front of the eye, with grid paper placed 130 cm away (Fig. 6b). The laser point on the lens represented the true viewpoint. The experiment was conducted as follows. First, before the lens was placed, the laser was emitted horizontally. The lens was then placed and adjusted so the laser passed through the lens optical center, with the laser landing point on the lens chosen as the origin of the coordinates. Next, using the app, the laser was rotated sequentially to 13 predefined target points on the grid paper. Simultaneously, an IR camera recorded the corresponding viewpoints on the lens, which were processed in ImageJ (National Institutes of Health, Bethesda, MD, USA). A six-parameter binary quadratic model was then fitted to relate pupil center coordinates to the calibration viewpoints.

In Experiment 3, the clinical experiments involved 10 subjects with myopia under 400 degrees (i.e., less than −4.00 D) wearing their own eyeglasses. To better adapt to clinical trials, the eye tracker was updated for clinical use but retained the same structure and function, as shown in Figure 6c. The procedure was as follows. First, the subject's head was fixed with a head rest, as shown in Figure 6d, maintaining a distance of 120 cm between the eye and the electronic calibration board. Next, the VD, PH, and PD were measured using the ZEISS VISUFIT 1000. Subjects then sequentially fixated on 14 activated LED target points on the electronic board. After the accuracy verification experiment, another experiment was carried out to collect the distribution of viewpoints for one participant. With the helmet in place and the head stabilized, calibration involved sequentially fixating on the 14 targets. After that, the participant's head could move completely freely. The participant observed two stimuli: a computer screen (1-m distance) and a hand-held piece of A4 paper. The participant first viewed the screen freely and then continuously focused on the hand-held document.

This study was reviewed and approved by the Chinese Ethics Review Committee and adhered to the tenets of the Declaration of Helsinki. All subjects provided written informed consent to participate.

Results

In Experiment 1, the accuracy of the eye trackers was tested on five subjects who were not wearing eyeglasses. The angular accuracy of the eye trackers is systematically summarized in Table 1. The eye trackers developed in this study demonstrated a mean angular accuracy of 0.46° ± 0.3°.

Table 1.

Test Results for Five Subjects Not Wearing Eyeglasses

Subject          1      2      3      4      5
Mean accuracy    0.17°  0.87°  0.57°  0.18°  0.50°

The results of the artificial eye experiment (Experiment 2) are shown in Figure 7. Figures 7a and 7b show two examples of results, one for calibration and one for validation. Figure 7c shows the average accuracy of the nine calibration points and five validation points across the five experiments. Based on the statistical results, the average error of the nine calibration points was 0.12 ± 0.02 mm, and the average error of the five validation points was 0.21 ± 0.06 mm. Table 2 shows the goodness of fit (R²) of Equations 1 and 2 in the five sets of mechanical eye validation experiments. The dependent variable of Equation 1 is the abscissa X, and that of Equation 2 is the ordinate Y. Based on the data shown in Table 2, the models obtained by fitting Equations 1 and 2 performed well on both the training set (nine calibration points) and the test set (five validation points). These measurement results confirm that the lens-viewpoint extraction method developed in this study achieved a submillimeter tolerance (0.21 mm) on the lens surface.

Figure 7.

Figure 7.

(a) Error at each calibration point. (b) Error at each validation point. The purple dots are the true lens viewpoints, and the blue dots are the lens viewpoints predicted by the lens-viewpoint extraction method. (c) Accuracy results for the mechanical artificial eye experiments.

Table 2.

R 2 for Function Fitting in Mechanical Artificial Eye Experiment

Artificial Eye     1        2        3        4        5
X/R²
 Calibration      0.9994   0.9996   0.9998   0.9996   0.9997
 Validation       0.9967   0.9895   0.9992   0.9929   0.9974
Y/R²
 Calibration      0.9990   0.9994   0.9976   0.9991   0.9991
 Validation       0.9801   0.9992   0.9951   0.9784   0.9863

In Experiment 3, clinical experiments were conducted with 10 subjects. The results are presented in Table 3. The average calibration accuracy and validation accuracy were 0.10 ± 0.01 mm and 0.30 ± 0.07 mm, respectively. Table 4 shows the goodness of fit for the functions. Most of the 10 sets of experiments were fitted with R2 close to 1 on the training and test sets.

Table 3.

Statistical Results of the Clinical Experiments

Subject Calibration Accuracy (mm) Validation Accuracy (mm)
1 0.1203 0.2257
2 0.0948 0.3629
3 0.0791 0.1956
4 0.0971 0.2631
5 0.1076 0.3562
6 0.0961 0.3305
7 0.1062 0.3587
8 0.1033 0.2186
9 0.1059 0.3539
10 0.1135 0.3567
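As a consistency check (our own sketch, assuming the reported ± values are sample standard deviations), the per-subject values in Table 3 reproduce the summary statistics quoted in the text:

```python
from statistics import mean, stdev

# Per-subject accuracies (mm) transcribed from Table 3.
calibration = [0.1203, 0.0948, 0.0791, 0.0971, 0.1076,
               0.0961, 0.1062, 0.1033, 0.1059, 0.1135]
validation  = [0.2257, 0.3629, 0.1956, 0.2631, 0.3562,
               0.3305, 0.3587, 0.2186, 0.3539, 0.3567]

# Matches the reported 0.10 ± 0.01 mm and 0.30 ± 0.07 mm.
print(f"calibration: {mean(calibration):.2f} ± {stdev(calibration):.2f} mm")
print(f"validation:  {mean(validation):.2f} ± {stdev(validation):.2f} mm")
```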

Table 4.

R 2 for Function Fitting in the Clinical Experiments

Subject            1        2        3        4        5        6        7        8        9        10
X/R²
 Calibration      0.9916   0.9985   0.9992   0.9976   0.9989   0.9711   0.9988   0.9982   0.9958   0.9969
 Validation       0.9838   0.9570   0.9871   0.9532   0.9961   0.9264   0.9655   0.9880   0.9379   0.8397
Y/R²
 Calibration      0.9958   0.9949   0.9943   0.9951   0.9981   0.9084   0.9942   0.9972   0.9961   0.9951
 Validation       0.9057   0.8308   0.9185   0.9150   0.8882   0.8934   0.9651   0.9569   0.8792   0.8286

Figures 8a and 8b visualize the error at each calibration point and validation point for group 1. A participant in group 1 was selected to collect visual habits while wearing glasses. The calibration accuracy for this test was 0.12 mm, and the validation accuracy was 0.22 mm. Figures 8e and 8f provide a viewpoint distribution map and the corresponding lens physiognomy characterization, respectively. The origin of the viewpoint map is the center of the lens, and the viewpoint map and the lens positioning map correspond to each other at equal scale. As illustrated in the viewpoint distribution map, the subject exhibited a discernible top-to-bottom movement of the viewpoint, associated with the shift of sight from far to near. The vertical sweep covered a height of approximately 15 mm, and the width of the primary area of use was approximately 18 mm.

Figure 8.

Figure 8.

(a) The error at each calibration point of group 1. (b) The error at each validation point of group 1. (c) The lens-viewpoint distribution map on lens. (d) The lens physiognomy characterization.

Discussion

The accuracy of the eye tracker used in this study was demonstrated to be 0.46° ± 0.3°, which is comparable to that of similar head-mounted 2D regression-based eye trackers, such as the 0.87° reported by Blignaut20 and the 0.8° reported by Brolly and Mulligan.21 This indicates that our eye tracker is feasible in terms of accuracy performance.

The lens-viewpoint extraction method achieved calibration and validation accuracies of 0.12 mm and 0.21 mm (mechanical artificial eye) and 0.10 mm and 0.30 mm (clinical test), respectively. Validation errors consistently exceeded calibration errors despite R² ≈ 1 (Tables 2–4), confirming model consistency but revealing limits to generalization. In the research of Blignaut,20 validation errors likewise exceeded calibration errors across calibration-point distributions; however, optimizing the calibration-point distribution and function combination reduces this error gap, and future work must identify the optimal combination for our eye-tracker configuration. Clinical validation errors exceeded the mechanical results by roughly 0.1 mm, primarily due to head movement during calibration: using only a chin rest allowed significant head displacement (±6 cm), which Cerrolaza et al.22 demonstrated increases regression mapping error. A follow-up study will fix the participant's head with a medical frame during calibration, and a gyroscope will be integrated into our eye tracker to record head movement during calibration, enabling real-time compensation of movement error. Expanding the sample will help identify other error sources, allowing tailored compensation for each factor to achieve higher lens-viewpoint extraction accuracy.

Figure 8e indicates a 10-mm channel length from the far to the near target, although the inward displacement remains unclear because the A4 paper was not exactly centered between the eyes. More accurate measurements of channel length and inward displacement therefore require a more precisely controlled near target. Dual-tone dots in the viewpoint diagrams reflect gaze frequency: light for single viewpoints, dark where multiple fixations overlap, indicating that the subject used different areas of the lens with different frequencies. In addition, for images of real human eyes, eyelid and eyelash occlusion was common, especially when the subject looked at a closer target. Table 5 shows the model's performance on the test set. By eliminating incorrectly recognized pupil coordinates, the validity of the viewpoints in the viewpoint distribution map is guaranteed.

Table 5.

Recognition Performance of YOLOv11 on the Test Set

Outcome                      Count, n
Missed recognition           4
Confidence less than 0.85    7
Total test images            120
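As a small arithmetic check on Table 5, the share of test frames excluded (missed detections plus low-confidence results) can be computed directly:

```python
# Counts transcribed from Table 5.
missed, low_conf, total = 4, 7, 120
excluded = missed + low_conf

# 11 of 120 test frames (about 9.2%) were excluded before viewpoint mapping.
print(f"excluded: {excluded}/{total} ({100 * excluded / total:.1f}%)")
```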

The current experiments used viewing angles (±7.8°/±8.5°) smaller than the comfortable eye-rotation range of the human eye (without head turning) reported by Proudlock and Gottlob,23 mainly to ensure that subjects could view targets comfortably in the clinical experiment. However, a limitation of the current method is that validation at larger angles would affect the validity of Equations 3 and 4, owing to the nonlinearity introduced by lens refraction. In future work, to address this issue and obtain accurate calibration viewpoints while avoiding refractive errors, we plan to model the lens in optical simulation software (Zemax) and determine the viewpoints via a ray-tracing algorithm.

Using 2D regression eye tracking, we established a direct pupil-to-lens viewpoint correspondence. Because the lens remains fixed relative to the head, head movement does not influence the predictive model. Our system relies solely on pupil center coordinates, eliminating the need for an integrated infrared light source to create glints; this simplifies the hardware setup and, in principle, reduces computational cost by avoiding glint detection and vector calculation. After calibration, no head restraint is required, and subjects may wear their own glasses. These features are designed to enhance comfort, with the participant holding a natural posture, which we anticipate will facilitate the integration of eye tracking into clinical PAL-fitting workflows. Unlike commercial trackers that demand custom fixtures and frames, our approach can integrate smoothly into clinical try-on workflows. Viewpoint data collected with trial lenses enable calculation of the channel length and inset for lens selection, and viewpoint distributions across scenarios can support the customization of functional zones in PALs, although the design of clinical fitting scenarios requires further investigation.

In summary, this study moved beyond traditional fitting methods by establishing a direct pupil-to-lens viewpoint mapping model. The modular design demonstrates the feasibility of integrating eye tracking into clinical prototypes, and quantitative viewpoint analysis supports personalized PAL customization based on natural visual habits. However, further validation and comparison with commercial systems are necessary to assess translational potential, along with investigating viewpoint patterns for optimal zone design.

Conclusions

In conclusion, we evaluated a lens-viewpoint extraction method based on a pupil-to-lens viewpoint mapping model, assessed it with a purpose-designed mechanical artificial eye experiment, and implemented it with 10 recruited subjects. The method exhibited an accuracy of 0.21 ± 0.06 mm on the lens in the mechanical artificial eye experiment and 0.30 ± 0.07 mm on the lens in the clinical experiments. The method proposed in this study has the potential to provide the presbyopic population with a personalized fitting of PALs.

Acknowledgments

Supported by a grant from the Tianjin Key Medical Discipline (Specialty) Construction Project (TJYXZDXK-016A) and Open Research Fund of the Institute of Optometry Science Research of Nankai University (NKSGZ202406).

Disclosure: Y. Yang, None; X. Chen, None; Z. Deng, None; J. Mei, None; Y. Li, None; X. Guo, None; P. Zhou, None; L. Li, None; Q. Ye, None

References

  • 1. Fricke TR, Tahhan N, Resnikoff S, et al. Global prevalence of presbyopia and vision impairment from uncorrected presbyopia. Ophthalmology. 2018; 125(10): 1492–1499. [DOI] [PubMed] [Google Scholar]
  • 2. Wolffsohn JS, Davies LN. Presbyopia: effectiveness of correction strategies. Prog Retin Eye Res. 2019; 68: 124–143. [DOI] [PubMed] [Google Scholar]
  • 3. Rich W, Reilly MA. A review of lens biomechanical contributions to presbyopia. Curr Eye Res. 2023; 48(2): 182–194. [DOI] [PubMed] [Google Scholar]
  • 4. Meister DJ, Fisher SW. Progress in the spectacle correction of presbyopia. Part 1: design and development of progressive lenses. Clin Exp Optom. 2021; 91(3): 240–250. [DOI] [PubMed] [Google Scholar]
  • 5. Meister DJ, Fisher SW. Progress in the spectacle correction of presbyopia. Part 2: modern progressive lens technologies. Clin Exp Optom. 2021; 91(3): 251–264. [DOI] [PubMed] [Google Scholar]
  • 6. Alvarez TL, Kim EH, Granger-Donetti B. Adaptation to progressive additive lenses: potential factors to consider. Sci Rep. 2017; 7(1): 2529. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Han Y, Ciuffreda KJ, Selenow A, Ali SR. Dynamic interactions of eye and head movements when reading with single-vision and progressive lenses in a simulated-computer-based environment. Invest Ophthalmol Vis Sci. 2003; 44(4): 1534–1545. [DOI] [PubMed] [Google Scholar]
  • 8. Concepcion-Grande P, Chamorro E, Cleva JM, Alonso J, Gomez-Pedrero JA. Correlation between reading time and characteristics of eye fixations and progressive lens design. PLoS One. 2023; 18(3): e0281861. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Hutchings N, Irving EL, Jung N, Dowling LM, Wells KA. Eye and head movement alterations in naïve progressive addition lens wearers. Ophthalmic Physiol Opt. 2007; 27(2): 142–153. [DOI] [PubMed] [Google Scholar]
  • 10. Hansen DW, Ji Q. In the eye of the beholder: a survey of models for eyes and gaze. IEEE Trans Pattern Anal Mach Intell. 2010; 32(3): 478–500. [DOI] [PubMed] [Google Scholar]
  • 11. Kar A, Corcoran P. A review and analysis of eye-gaze estimation systems, algorithms and performance evaluation methods in consumer platforms. IEEE Access. 2017; 5: 16495–16519. [Google Scholar]
  • 12. Benedi-Garcia C, Concepcion-Grande P, Chamorro E, Cleva JM, Alonso J. Experimental method for identifying regions of use of a progressive power lens using an eye-tracker: validation study. Life (Basel). 2024; 14(9): 1178. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Concepcion-Grande P, González A, Dotor Goytia P, et al. Evaluation of reading performance with PALs using eye tracking technology. Invest Ophthalmol Vis Sci. 2022; 63(7): 1816. [Google Scholar]
  • 14. Smith SL, Maldonado-Codina C, Morgan PB, Read ML. Gaze and behavioural metrics in the refractive correction of presbyopia. Ophthalmic Physiol Opt. 2024; 44(4): 774–786. [DOI] [PubMed] [Google Scholar]
  • 15. Lidegaard M, Hansen DW, Krüger N. Head mounted device for point-of-gaze estimation in three dimensions. In: Proceedings of the Symposium on Eye Tracking Research and Applications: ETRA’14. New York: Association for Computing Machinery; 2014: 83–86 [Google Scholar]
  • 16. Fuhl W, Tonsen M, Bulling A, Kasneci E. Pupil detection for head-mounted eye tracking in the wild: an evaluation of the state of the art. Mach Vision Appl. 2016; 27(8): 1275–1288. [Google Scholar]
  • 17. Kishor R. Performance benchmarking of YOLOv11 variants for real-time delivery vehicle detection: a study on accuracy, speed, and computational trade-offs. Asian J Res Comput Sci. 2024; 17(12): 108–122 [Google Scholar]
  • 18. Shao G, Che M, Zhang B, Cen K, Gao W. A novel simple 2D model of eye gaze estimation. In: Proceedings of 2010 Second International Conference on Intelligent Human-Machine Systems and Cybernetics. Piscataway, NJ: Institute of Electrical and Electronics Engineers; 2010: 300–304. [Google Scholar]
  • 19. ZEISS. (2025). ZEISS Digital Centration: experience precision in a new dimension. Available at: https://www.zeiss.com/vision-care/en/eye-care-professionals/equipment/frame-selection-and-centration.html. Accessed November 7, 2025.
  • 20. Blignaut P. Mapping the pupil-glint vector to gaze coordinates in a simple video-based eye tracker. J Eye Mov Res. 2013; 7(1): 1–11. [Google Scholar]
  • 21. Brolly XL, Mulligan JB. Implicit calibration of a remote gaze tracker. In: Proceedings of 2004 Conference on Computer Vision and Pattern Recognition Workshop. Piscataway, NJ: Institute of Electrical and Electronics Engineers; 2004: 134–134. [Google Scholar]
  • 22. Cerrolaza JJ, Villanueva A, Villanueva M, Cabeza R. Error characterization and compensation in eye tracking systems. In: Proceedings of the Symposium on Eye Tracking Research and Applications: ETRA’12. New York: Association for Computing Machinery; 2012: 205–208. [Google Scholar]
  • 23. Proudlock FA, Gottlob I. Physiology and pathology of eye–head coordination. Prog Retin Eye Res. 2007; 5: 486–515. [DOI] [PubMed] [Google Scholar]
