Abstract
Recent studies have demonstrated that integrating AI into colonoscopy procedures significantly improves the adenoma detection rate (ADR) and reduces the adenoma miss rate (AMR). However, few studies address the critical issue of endoscopist-AI collaboration in real-world settings. Eye-tracking data collection is considered a promising approach to uncovering how endoscopists and AI interact and influence each other during colonoscopy procedures. A common limitation of existing studies is their reliance on retrospective video clips, which fail to capture the dynamic demands of real-time colonoscopy, where endoscopists must simultaneously navigate the colonoscope and identify lesions on the screen. To address this gap, we established a dataset to analyze changes in endoscopists’ eye movements during the colonoscopy withdrawal phase. Eye-tracking data were collected from graduate students, nurses, senior endoscopists, and novice endoscopists while they reviewed retrospectively recorded colonoscopy withdrawal videos, both with and without computer-aided detection (CADe) assistance. Furthermore, 80 real-time video segments were prospectively collected during endoscopists’ actual colonoscopy withdrawal procedures, comprising 43 segments with CADe assistance and 37 segments without assistance (normal control).
Subject terms: Scientific data, Endoscopy
Background & Summary
Over the past five years, artificial intelligence (AI) has played an increasingly significant role in the computer-aided detection (CADe) and diagnosis (CADx) of polyps1,2. When effectively designed and integrated, AI systems can markedly enhance endoscopic performance. These systems function as auxiliary observers to reduce missed polyps, assist in lesion characterization to support informed decision-making, provide educational feedback to refine endoscopists’ skills, and streamline workflows within the endoscopic process3. Numerous rigorously conducted randomized controlled trials (RCTs) have demonstrated that AI integration in colonoscopy procedures substantially improves the adenoma detection rate (ADR) compared to traditional methods performed by endoscopists4–7.
Following several recent prospective RCTs investigating AI applications in colonoscopy, the focus is rapidly shifting toward practical implementation and clinical integration in real-world settings. Recent research has identified the top 10 research questions for the deployment of AI in colonoscopy8. Interestingly, two lower-ranked questions stood out and caught our attention: “How do we audit AI systems once they are deployed in the clinical environment?” and “Could AI polyp detection and characterization systems distract endoscopists and impair performance?”. Currently, few studies address the critical issue of endoscopist-AI collaboration in real-world settings. To mitigate the risks of distraction and performance impairment, it is crucial to conduct ongoing evaluations and monitor how AI systems affect endoscopic performance. This will ensure that AI integration not only enhances clinical outcomes but also supports endoscopists in maintaining optimal performance during procedures.
The study of where an endoscopist’s gaze falls during examination of the colonic lumen is a fascinating area of endoscopy, continually enriched by emerging evidence. Research utilizing gaze analysis is shedding light on how endoscopists’ visual patterns vary across different scenarios9. Evidence indicates that experienced endoscopists spend more time visually searching the peripheries of the screen and perform better in poorly visualized areas than their less experienced counterparts10–13. Troya et al. examined the influence of a CADe system on reaction time, mucosal misinterpretation, and changes in gaze patterns14. Their findings revealed that the CADe system detected polyps more quickly than human operators; however, CADe use did not improve human reaction times and was instead associated with a higher rate of normal mucosal misinterpretation and a reduced eye travel distance across the screen. These findings raise concerns about the potential impact of CADe systems on the learning curve of inexperienced trainees: reliance on CADe may hinder their ability to develop the efficient “bottom-up” visual scanning pattern that is characteristic of skilled endoscopists19. Kumahara et al. compared white light imaging (WLI) with linked color imaging (LCI) and blue-laser imaging (BLI-bright), showing that LCI and BLI-bright significantly reduced polyp miss rates and detection times, especially for non-experts15. He et al. showed that experts spent more time in the central luminal view and recovered from view loss faster than novices16. Higashino et al. evaluated tumor detection sensitivity using LCI and WLI, finding that LCI significantly improved sensitivity, particularly for novices and trainees, by enhancing color differences between lesions and surrounding mucosa17. Niederhauser et al. studied fixation duration and cognitive load during functional endoscopic sinus surgery (FESS) training, reporting increased fixation duration and decreased cognitive load as proficiency improved18. Karamchandani et al. found that higher skill levels in trainee endoscopists correlated with longer fixation durations and higher fixation frequencies on peripheral bowel mucosa, indicating a more effective search strategy11 (Supplementary Table).
Existing studies are often limited by their reliance on retrospective video clips or brief sequences to analyze endoscopists’ vision patterns, failing to capture the dynamic and interactive nature of colonoscopy. Unlike static imaging techniques such as X-rays or CT scans, colonoscopy requires continuous real-time adjustments based on visual feedback, as endoscopists navigate the colonoscope while simultaneously identifying lesions. The integration of AI further transforms this process into a collaborative human-AI effort, in which real-time decision-making and interaction play a critical role. Our dataset addresses these limitations by incorporating real-time data collected during live colonoscopy procedures, offering a more accurate representation of the cognitive and motor demands of dynamic navigation. With a larger and more diverse sample, and experimental conditions covering both CADe-assisted and non-assisted scenarios, our approach not only provides insights into expertise-based differences and visual fatigue but also enables robust analyses of how AI systems affect endoscopist performance in complex clinical workflows. This comprehensive design fills critical gaps in existing research and paves the way for future advancements in understanding human-AI collaboration in endoscopy.
Building on the above considerations, our team set out to establish a dataset aimed at analyzing the changes in endoscopists’ eye movements during the colonoscopy withdrawal process. This dataset consists of two distinct subsets. The first subset captures eye-tracking data as endoscopists retrospectively view video clips. The second subset involves real-time eye-tracking performed by endoscopists during the colonoscopy withdrawal procedure, both with and without the assistance of a CADe system.
The dataset described here comprises colonoscopy videos and eye-tracking data. The eye-tracking data were collected from graduate students, nurses, senior endoscopists, and novice endoscopists as they retrospectively reviewed colonoscopy withdrawal phase video clips, both with and without CADe assistance. In addition, 80 real-time videos were prospectively collected during colonoscopy withdrawal procedures performed by endoscopists, including 43 segments assisted by CADe and 37 segments without assistance (normal control).
Methods
Ethical statement
The Institutional Review Board of Zhongshan Hospital approved the studies to develop the eye-tracking dataset for the assessment of endoscopists (Approval No. B2023-262R). All endoscopists and patients involved in the study signed informed consent forms, which included consent for both participation and data sharing. The collected patient data included details of the colonoscopy procedure and related diagnoses, ensuring no personal information that could identify the patients was gathered. The following subsections outline the steps involved in constructing the database. The eye-tracking dataset consists of two parts: a retrospective dataset and a prospective real-time dataset.
Data acquisition of the retrospective dataset
The retrospective dataset was constructed using a video comprising 60 video clips, each approximately 30 seconds long. Among these clips, 41 contained polyps, while no polyps were found in the remaining 19 clips. Two experienced endoscopists (Y.Z. and QL.L.) independently reviewed these 60 video segments and reached a consensus on the diagnosis. The clips were randomly arranged with a 2-second black screen interval between each pair of clips, resulting in a total video length of 33 minutes and 23 seconds. All colonoscopy videos used in this study were collected at the Endoscopy Center of Zhongshan Hospital affiliated with Fudan University, between December 2020 and December 2021. The endoscopic equipment employed included standard colonoscopes (CF-H260 and CF-H290 [Olympus Medical Systems, Tokyo, Japan]) and video systems (EVIS LUCERA ELITE CV-290/CLV-290SL [Olympus Medical Systems]).
Eye movement data were captured using a Tobii Pro Nano® eye tracker (Tobii Technology, Stockholm, Sweden). This device was chosen for its high accuracy in recording gaze data, enabling precise identification of focal points during visual assessments. Before collecting a participant’s eye-tracking metrics, a calibration procedure was conducted to determine the geometric properties of the participant’s eyes, ensuring accurate gaze point measurements.
Following the calibration of the Tobii Pro Nano®, the retrospective video was displayed on a 15.6” Full HD monitor (1920 × 1080 resolution) to replicate a real-world diagnostic environment, and participants were seated at a consistent distance of 65–70 cm from the screen. They were instructed to carefully examine each video segment, provide a clinical diagnosis, and identify potential abnormalities. During the experiment, participants had access to a button, which they were instructed to press as soon as they first identified a polyp in a video segment. This action confirmed the precise moment of polyp detection. If no polyp was observed, participants were asked to continue watching the video uninterrupted.
A total of 26 individuals participated in this part of the experiment. Among them, 6 were senior endoscopists who had performed at least 5000 colonoscopies, 8 were novice endoscopists, 8 were endoscopy nurses, and 4 were graduate students with backgrounds in medicine and computer science.
The experimental procedure is illustrated in Fig. 1. The same video sequence was presented twice: once without and once with bounding boxes overlaid by a commercially available CADe system (EndoAdd, version June 2023; Xuan Wei Technology, China). Prior to each assessment, participants were informed whether the session would involve the use of CADe. During the initial observation, no bounding boxes were displayed on the videos. After a two-week washout period, participants completed a second observation, during which bounding boxes were displayed on the videos. In this phase, participants were tasked with determining whether the bounding boxes generated by the CADe system accurately represented polyps.
Fig. 1.
(a) The video sequence was presented with and without the overlay of bounding boxes generated by a commercially available CADe system and was watched by graduate students, nurses, and senior or novice endoscopists. (b) The Tobii Pro Nano® eye tracker was installed just below the monitor during real-time colonoscopy and was calibrated before the start of each procedure. (c) Flow diagram of the study. (d) Visualization of the data types included in the retrospective and prospective videos.
Eye-tracking data, including the coordinates of the participants’ gaze, were collected using the eye-tracking device at a frequency of 30 Hz and stored in a Comma-Separated Values (CSV) file.
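For orientation, a minimal Python sketch for loading one of these recordings is shown below. The column names (`timestamp`, `left_x`, `left_y`, `right_x`, `right_y`) are illustrative assumptions and should be checked against the released files.

```python
import pandas as pd

def load_gaze(path: str) -> pd.DataFrame:
    """Load one 30 Hz gaze recording and derive a single gaze point per sample."""
    df = pd.read_csv(path)             # column names below are assumptions
    df = df.rename(columns=str.lower)
    # Average the left- and right-eye coordinates where both are available;
    # pandas' mean(axis=1) skips missing values by default.
    df["x"] = df[["left_x", "right_x"]].mean(axis=1)
    df["y"] = df[["left_y", "right_y"]].mean(axis=1)
    return df
```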
Data acquisition of the prospective real-time dataset
To enhance the clarity of our methodology, we have included a flow diagram (Fig. 1c) that outlines the main steps of the study. For the prospective real-time dataset, videos of endoscopists performing colonoscopy withdrawal were collected along with the corresponding eye-tracking data. During the procedure, the colonoscopy was performed with the monitor positioned within one meter of the endoscopists. A Tobii Pro Nano® eye tracker was mounted just below the monitor (Fig. 1b) and calibrated following a standard 9-point calibration protocol before the start of each procedure. After calibration, a test fixation point was displayed to verify accuracy, and recalibration was performed if the deviation exceeded 0.5°20. The videos were recorded while the eye-tracking device collected gaze coordinates at a frequency of 30 Hz; the eye-tracking data were subsequently saved as CSV files in the designated format.
All colonoscopy videos used in this study were collected between February 2024 and March 2024 at the Endoscopy Center of Zhongshan Hospital affiliated with Fudan University. The endoscopic equipment used included standard colonoscopes (CF-H290 [Olympus Medical Systems, Tokyo, Japan]) and video systems (EVIS LUCERA ELITE CV-290/CLV-290SL [Olympus Medical Systems]).
Patients were randomly assigned to either the routine group or the CADe-assisted group in a 1:1 ratio by opening sealed opaque envelopes. The randomization sequence was generated by an investigator and kept confidential until the last patient was enrolled to ensure allocation concealment. The primary goal during insertion was to achieve cecal intubation as quickly as possible. During the colonoscopy, any detected polyps were resected or biopsied for pathological examination. In the routine group, endoscopists performed colonoscopy withdrawal according to their standard daily practice. In the CADe-assisted group, diagnostic boxes were displayed to highlight polyp lesions within the colonic lumen. The boxes appeared gray for suspected polyp lesions and changed to green when a polyp was confirmed.
Data processing
For the retrospective analysis, the beginning and ending sections of the recordings that were unrelated to diagnosis were removed to ensure data quality. The specific procedure was as follows: during the retrospective review, videos were played from the beginning, and playback was initiated by pressing the spacebar. Each spacebar press was recorded with a timestamp and logged in the key_press_log. Using the timestamp of the first spacebar press and the total length of the video, the head and tail portions were trimmed, retaining only segments relevant to the retrospective analysis. The prospective data were left unprocessed to preserve their authenticity, providing more accurate and reliable data for research purposes.
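As a rough illustration of this trimming step, the sketch below keeps only the gaze samples recorded between the first spacebar press logged in the key_press_log and the end of the video; the timestamp units and field names are assumptions.

```python
import pandas as pd

def trim_to_review(gaze: pd.DataFrame, first_press_ts: float,
                   video_len_s: float) -> pd.DataFrame:
    """Drop gaze samples outside [playback start, playback start + video length].

    `first_press_ts` is the timestamp of the first spacebar press in the
    key_press_log; `video_len_s` is the total video length in seconds.
    """
    end = first_press_ts + video_len_s
    mask = (gaze["timestamp"] >= first_press_ts) & (gaze["timestamp"] <= end)
    return gaze.loc[mask].reset_index(drop=True)
```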
Data Records
The entire dataset can be downloaded from Science Data Bank (link: https://www.scidb.cn/s/2ERFNf, 10.57760/sciencedb.09204)21. The dataset mainly contains eye-movement information and can support research on related AI algorithms.
The dataset consists of both retrospective and prospective data. The retrospective dataset includes participants’ analyses of fixed videos, comprising 37 retrospective videos in total. Of these, 22 videos include AI detection boxes, while the remaining 15 are original videos. If multiple polyps appear within a specific period, they are considered a single occurrence. The total duration of the fixed videos is 33 minutes and 23 seconds.
As illustrated in Fig. 1d, the types of signal data included in the retrospective and prospective datasets differ. In the retrospective dataset, participants watched two identical videos, one with CADe assistance and one without, while their eye movements were recorded using an eye tracker; participants also logged detection signals by pressing a button at the moment they first identified a polyp. In the prospective dataset, some examinations were conducted with CADe and others without, and the eye tracker recorded the endoscopists’ actual eye movements during the examinations.
The folder structure is shown in Fig. 6. The retrospective data is divided into three folders: Assets, Normal control, and CADe-assisted. The Assets folder contains the retrospective videos, videos processed with CADe, detection box coordinates, and a file documenting the appearance and disappearance times of polyps. The Normal control and CADe-assisted folders contain each participant’s eye-tracking coordinate data in CSV files, as well as text files logging button press timestamps when participants observed polyps.
Fig. 6.
Folder structure.
In the text file, the entry “Send value (space) to COM2” marks the start of the review analysis, while “Send value e to COM2” indicates that the participant observed a polyp and pressed the button. The CSV file contains real-world timestamps, and the relative coordinates of the left and right eye viewpoints on the screen. Additionally, a file named “polyp-presence-period.csv” documents the appearance and disappearance times of each polyp in the fixed videos. Participants are categorized into four proficiency levels: graduate students, endoscopy nurses, novice endoscopists, and senior endoscopists.
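A minimal sketch for extracting these two markers from a button-press log is given below; it assumes each line consists of a timestamp followed by the logged message, which should be verified against the released files.

```python
def parse_key_log(path: str):
    """Return (start_time, polyp_press_times) from a button-press log.

    Assumes each line is '<timestamp> <message>', e.g.
    '1714032000.123 Send value e to COM2' -- verify against the real files.
    """
    start, presses = None, []
    with open(path, encoding="utf-8") as f:
        for line in f:
            ts_str, _, msg = line.strip().partition(" ")
            if "Send value (space) to COM2" in msg and start is None:
                start = float(ts_str)          # start of the review analysis
            elif "Send value e to COM2" in msg:
                presses.append(float(ts_str))  # participant spotted a polyp
    return start, presses
```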
The prospective dataset consists of 80 examination videos, 43 of which contain AI detection boxes, while the remaining 37 are original videos. Each examination record includes two videos: one showing the fixation points and the other being the original video. In addition to the videos, a CSV file logs eye movement data and coordinates over time. The grouping information and examiner details are stored in a meta.csv file.
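To relate recordings to their group assignments, an analysis could start from meta.csv, as sketched below; the column names (`record_id`, `group`, `examiner`) are hypothetical and should be replaced with the actual schema.

```python
import pandas as pd

meta = pd.read_csv("prospective/meta.csv")    # column names are assumptions
cade = meta[meta["group"] == "CADe-assisted"]
print(f"{len(cade)} CADe-assisted records, {len(meta) - len(cade)} controls")
```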
Technical Validation
Verification metrics included fixations, reaction time, eye travel distance, detection rate, blink frequency, and visual gaze patterns (VGP) derived from fixation locations within the visual field. A fixation was defined as a temporary suspension of eye movement (≥80 ms) within a small region (≤30 pixels) of the field of view.
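One reasonable reading of this definition is a dispersion-based algorithm: consecutive samples are grouped while they stay within 30 pixels of the group’s first sample, and the group counts as a fixation if it spans at least 80 ms. A minimal sketch under these assumptions:

```python
import numpy as np

def detect_fixations(t, x, y, max_disp=30.0, min_dur=0.08):
    """Group consecutive gaze samples into fixations.

    t: timestamps in seconds; x, y: gaze coordinates in pixels. A group is a
    fixation when its samples stay within `max_disp` pixels of the first
    sample and span at least `min_dur` seconds.
    """
    t, x, y = (np.asarray(a, float) for a in (t, x, y))
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i + 1
        while j < n and np.hypot(x[j] - x[i], y[j] - y[i]) <= max_disp:
            j += 1
        if t[j - 1] - t[i] >= min_dur:
            fixations.append((t[i], t[j - 1], x[i:j].mean(), y[i:j].mean()))
        i = j
    return fixations
```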
Reaction time
Reaction time was defined as the interval between the appearance of a polyp and the first button press within its start and end times. If the button was not pressed within the polyp’s time window, the reaction time was not recorded, and any button presses outside this window were ignored.
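Given the polyp windows from polyp-presence-period.csv and the button-press times extracted from the logs, reaction times could be computed as sketched below (timestamp alignment between the two files is assumed):

```python
def reaction_times(polyp_windows, press_times):
    """Reaction time per polyp: first press inside [start, end], else None.

    polyp_windows: list of (start, end) times in seconds; press_times: sorted
    button-press timestamps. Presses outside every window are ignored.
    """
    return [next((p - start for p in press_times if start <= p <= end), None)
            for start, end in polyp_windows]
```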
Eye travel distance
Eye travel distance was defined as the total movement distance of the viewpoint across the video, calculated in pixels, based on consecutive recordings when both left and right eye gaze data were available.
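A direct implementation sums the Euclidean distance between consecutive gaze points, skipping segments that touch a missing sample; a sketch, assuming pixel coordinates with NaN marking invalid samples:

```python
import numpy as np

def eye_travel_distance(x, y):
    """Total gaze path length in pixels over consecutive valid samples.

    Steps involving a missing (NaN) sample contribute nothing to the sum.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    steps = np.hypot(np.diff(x), np.diff(y))
    return float(np.nansum(steps))
```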
Polyp detection rate
Polyp detection rate was defined as the proportion of polyps for which a button press occurred between their on-screen start and end times.
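Using the same polyp windows and press times as above, the detection rate follows directly:

```python
def detection_rate(polyp_windows, press_times):
    """Fraction of polyp windows containing at least one button press."""
    if not polyp_windows:
        return float("nan")
    hits = sum(any(start <= p <= end for p in press_times)
               for start, end in polyp_windows)
    return hits / len(polyp_windows)
```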
Blinking
Blinking was detected when the pupil of either the left or right eye was obscured for more than 70 ms.
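At the 30 Hz sampling rate this corresponds to roughly three or more consecutive samples with no pupil signal; a sketch, assuming a boolean validity flag per sample:

```python
def count_blinks(pupil_valid, fs=30.0, min_gap=0.07):
    """Count runs of invalid pupil samples longer than `min_gap` seconds.

    pupil_valid: sequence of booleans, True where a pupil was tracked.
    """
    blinks, run = 0, 0
    for ok in list(pupil_valid) + [True]:  # sentinel flushes a trailing run
        if not ok:
            run += 1
        else:
            if run / fs > min_gap:
                blinks += 1
            run = 0
    return blinks
```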
Figure 2 provides an example of visual coverage areas during video review by a student and a senior examiner.
Fig. 2.
Eye-movement heatmaps of senior participants and students watching videos with and without computer-aided detection (CADe). To improve visualization, the heatmaps were overlaid on still images from the videos. Warmer colors indicate areas on which participants focused more.
To evaluate VGP, the screen displaying the colonoscopy was divided into a 5 × 5 grid, which was grouped in two ways (Fig. 3). The first grouping split the grid into three concentric rings to assess the central versus peripheral distribution of gaze on the screen; the second split it into left, middle, and right sections to assess the horizontal distribution of gaze, as illustrated in Fig. 3 (a computational sketch of this grouping is given below the figure caption).
Fig. 3.
Separation of the 5 × 5 grid into specific groups for assessment of central versus peripheral visual gaze patterns (left) and horizontal visual gaze patterns (right).
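In the sketch below, fixations are binned into the 5 × 5 grid, the ring index follows from the Chebyshev distance to the central cell, and the horizontal split assumes the two leftmost columns form the left segment (the exact boundaries in Fig. 3 are an assumption).

```python
def grid_cell(x, y, width, height, n=5):
    """Map a fixation at pixel (x, y) to its (row, col) in an n x n grid."""
    col = min(int(x / width * n), n - 1)
    row = min(int(y / height * n), n - 1)
    return row, col

def ring(row, col):
    """Ring 1 = central cell, Ring 2 = surrounding 8 cells, Ring 3 = outer 16."""
    return max(abs(row - 2), abs(col - 2)) + 1

def horizontal_segment(col):
    """Left/middle/right split of the 5 columns (boundaries are assumptions)."""
    return "left" if col < 2 else ("middle" if col == 2 else "right")
```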
Eye-tracking metrics
The comparison of basic eye-movement indicators among the four groups is summarized in Table 1. Senior endoscopists exhibited faster reaction times than the other groups. CADe assistance improved reaction times overall, with the largest improvement among less experienced participants.
Table 1.
Comparison of eye tracking indicators among four groups of subjects.
| Condition | Indicator | Student | Nurse | Senior | Novice |
|---|---|---|---|---|---|
| CADe-assisted | Fixation frequency (fixations/s) | 3.21 | 3.51 | 3.93 | 3.79 |
| CADe-assisted | Blink frequency (blinks/s) | 0.33 | 0.38 | 0.26 | 0.34 |
| CADe-assisted | Reaction time (s) | 2.62 | 4.11 | 2.54 | 2.75 |
| CADe-assisted | Eye travel distance (pixels) | 2,503,763 | 2,101,584 | 2,144,280 | 1,991,307 |
| CADe-assisted | Detection rate (%) | 95.73 | 91.46 | 89.76 | 95.47 |
| Normal control (without AI) | Fixation frequency (fixations/s) | 3.73 | 3.89 | 3.99 | 4.10 |
| Normal control (without AI) | Blink frequency (blinks/s) | 0.19 | 0.23 | 0.24 | 0.14 |
| Normal control (without AI) | Reaction time (s) | 4.07 | 4.33 | 2.63 | 2.71 |
| Normal control (without AI) | Eye travel distance (pixels) | 1,395,570 | 1,827,850 | 2,118,338 | 1,620,276 |
| Normal control (without AI) | Detection rate (%) | 96.34 | 95.12 | 94.15 | 82.32 |
Values are averages across all data in each group.
Central versus peripheral distribution of fixations
To detect polyps, endoscopists often focus their gaze on Ring 2, as shown in Fig. 4.
Fig. 4.
The percentage of total fixation points in each ring for the four groups of subjects. Circles indicate outliers.
Horizontal distribution of fixations
During polyp detection, endoscopists predominantly focus their gaze on the center of the examination screen. Because the examination image occupies the right-hand portion of the overall display, the left side of the screen contains black areas; consequently, more fixations fall on the left side than on the right, as depicted in Fig. 5.
Fig. 5.
The percentage of total fixation points in each horizontal distribution for four groups of subjects. Circles indicate outliers.
Acknowledgements
We thank Wen-Long Wu, Te Luo, De-Jia Sun, Chang Xiao, Zi-Jing Meng and Jia-Yan Wang for their kindness and support of this research. The computations in this research were performed using the CFFF platform of Fudan University. This study was supported by National Natural Science Foundation of China (82170555, 81873552 and 81670483), China Computer Federation (CCF)-Baidu Open Fund under Grant CCF-BAIDU 202316, International Science and Technology Cooperation Program under the 2023 Shanghai Action Plan for Science under Grant 23410710400, National Key R&D Program of China (2019YFC1315800), Shanghai Rising-Star Program (19QA1401900), Major Project of Shanghai Municipal Science and Technology Committee (19441905200), and Shanghai Sailing Programs of Shanghai Municipal Science and Technology Committee (19YF1406400 and 22YF1409300).
Author contributions
Study concept and design: Y.Z. S.W. Acquisition of data: Y.Z., R.Y., P.F., Z.Z. Analysis and interpretation of data: Y.Z., R.Y., P.F. Drafting of the manuscript: Y.Z., R.Y., P.F. Critical revision of the manuscript for important intellectual content: S.W., P.Z. Statistical analysis: Y.Z., R.Y., P.F. Obtained funding: Y.Z., Q.L., S.W., P.Z. Technical, or material support: S.W., P.Z. Study supervision: S.W., P.Z.
Code availability
Python scripts for exploratory data analysis and dataset comparison are publicly available at https://github.com/ku262/EndoGaze.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: Yan Zhu, Rui-Jie Yang, Pei-Yao Fu.
Contributor Information
Quan-Lin Li, Email: li.quanlin@zs-hospital.sh.cn.
Shuo Wang, Email: shuowang@fudan.edu.cn.
Ping-Hong Zhou, Email: zhou.pinghong@zs-hospital.sh.cn.
Supplementary information
The online version contains supplementary material available at 10.1038/s41597-025-04535-6.
References
- 1. Hann, A., Troya, J. & Fitting, D. Current status and limitations of artificial intelligence in colonoscopy. United European Gastroenterology Journal 9, 527–533, 10.1002/ueg2.12108 (2021).
- 2. Kudo, S. E. et al. Artificial intelligence and colonoscopy: Current status and future perspectives. Digestive Endoscopy 31, 363–371, 10.1111/den.13340 (2019).
- 3. Young, E., Edwards, L. & Singh, R. The Role of Artificial Intelligence in Colorectal Cancer Screening: Lesion Detection and Lesion Characterization. Cancers 15, 10.3390/cancers15215126 (2023).
- 4. Xu, L. et al. Artificial intelligence-assisted colonoscopy: A prospective, multicenter, randomized controlled trial of polyp detection. Cancer Medicine 10, 7184–7193, 10.1002/cam4.4261 (2021).
- 5. Wang, P. et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut 68, 1813–1819, 10.1136/gutjnl-2018-317500 (2019).
- 6. Repici, A. et al. Efficacy of Real-Time Computer-Aided Detection of Colorectal Neoplasia in a Randomized Trial. Gastroenterology 159, 512–520.e7, 10.1053/j.gastro.2020.04.062 (2020).
- 7. Lee, M. C., Parker, C. H., Liu, L. W., Farahvash, A. & Jeyalingam, T. Impact of study design on adenoma detection in the evaluation of artificial intelligence-aided colonoscopy: a systematic review and meta-analysis. Gastrointestinal Endoscopy, 10.1016/j.gie.2024.01.021 (2024).
- 8. Ahmad, O. F. et al. Establishing key research questions for the implementation of artificial intelligence in colonoscopy: a modified Delphi method. Endoscopy 53, 893–901, 10.1055/a-1306-7590 (2021).
- 9. Sivananthan, A. et al. Eye tracking technology in endoscopy: Looking to the future. Digestive Endoscopy 35, 314–322, 10.1111/den.14461 (2023).
- 10. Edmondson, M. J. et al. Looking towards objective quality evaluation in colonoscopy: Analysis of visual gaze patterns. J Gastroenterol Hepatol 31, 604–609, 10.1111/jgh.13184 (2016).
- 11. Karamchandani, U. et al. Visual gaze patterns in trainee endoscopists - a novel assessment tool. Scand J Gastroenterol 57, 1138–1146, 10.1080/00365521.2022.2064723 (2022).
- 12. Lami, M. et al. Gaze patterns hold key to unlocking successful search strategies and increasing polyp detection rate in colonoscopy. Endoscopy 50, 701–707, 10.1055/s-0044-101026 (2018).
- 13. Lee, A. et al. Identification of gaze pattern and blind spots by upper gastrointestinal endoscopy using an eye-tracking technique. Surg Endosc 36, 2574–2581, 10.1007/s00464-021-08546-3 (2022).
- 14. Troya, J. et al. The influence of computer-aided polyp detection systems on reaction time for polyp detection and eye gaze. Endoscopy 54, 1009–1014, 10.1055/a-1770-7353 (2022).
- 15. Kumahara, K. et al. Objective evaluation of the visibility of colorectal lesions using eye tracking. Dig Endosc 31, 552–557, 10.1111/den.13397 (2019).
- 16. He, W. et al. Eye gaze of endoscopists during simulated colonoscopy. J Robot Surg 14, 137–143, 10.1007/s11701-019-00950-1 (2020).
- 17. Higashino, M. et al. Improvement of detection sensitivity of upper gastrointestinal epithelial neoplasia in linked color imaging based on data of eye tracking. J Gastroenterol Hepatol 38, 710–715, 10.1111/jgh.16106 (2023).
- 18. Niederhauser, L. et al. Training and proficiency level in endoscopic sinus surgery change residents’ eye movements. Sci Rep 13, 79, 10.1038/s41598-022-25518-2 (2023).
- 19. Bisschops, R. et al. Advanced imaging for detection and differentiation of colorectal neoplasia: European Society of Gastrointestinal Endoscopy (ESGE) Guideline - Update 2019. Endoscopy 51, 1155–1179, 10.1055/a-1031-7657 (2019).
- 20. Nyström, M., Andersson, R., Niehorster, D. C., Hessels, R. S. & Hooge, I. T. C. What is a blink? Classifying and characterizing blinks in eye openness signals. Behavior Research Methods 56, 3280–3299, 10.3758/s13428-023-02333-9 (2024).
- 21. Zhu, Y. et al. EndoGaze. V2. Science Data Bank, 10.57760/sciencedb.09204 (2024).
Data Availability Statement
Python scripts for exploratory data analysis and dataset comparison are publicly available at https://github.com/ku262/EndoGaze.






