Abstract
Smartphone-based portable slit lamps are rapidly evolving from simple add-ons into practical, low-cost front-line tools for anterior segment care beyond traditional clinical settings. By integrating high-performance smartphone cameras with compact optical attachments (e.g., slit-light converters, macro lenses, blue filters) and dedicated applications, these devices deliver slit-lamp-style imaging and video capture to environments where conventional biomicroscopes are inaccessible. Accumulating clinical evidence—most notably for the Smart Eye Camera (SEC) and iSpector MINI HE 010-21—confirms that smartphone slit lamps reliably support assessments across major anterior segment disorders. SEC-based recordings enable evaluation of tear film–related signs in dry eye disease; smartphone-acquired slit-lamp images show strong agreement with standard slit lamps for corneal ulcers, scars, and other ocular surface pathologies; slit-beam acquisition facilitates preliminary screening of shallow anterior chambers and narrow angles relevant to primary angle-closure glaucoma; and cataract screening and grading via smartphone systems align closely with conventional slit-lamp evaluations. Notably, recent advancements have transcended mere image capture to embrace artificial intelligence (AI)-enabled analysis, positioning smartphone slit lamps as scalable screening and triage solutions. Across the studies reviewed, AI models trained on smartphone slit-lamp images or videos demonstrate robust feasibility for automated dry eye diagnosis, corneal opacity detection, keratitis screening, cataract grading, pterygium detection/grading, and narrow-angle identification, typically through pipelines integrating image quality control, region-of-interest localization/segmentation, and disease-specific prediction. 
Despite these advances, however, key barriers remain, including incomplete replication of full slit-lamp functionality, lack of standardized acquisition protocols, and limited multicenter external validation for most AI systems. Future progress should prioritize hardware stabilization, optical design improvements, disease-specific standardized imaging workflows, and large-scale prospective validation to unlock the full potential of AI-assisted smartphone slit lamps for community screening, teleophthalmology, and care in underserved regions.
Keywords: Smartphone-based portable slit lamp, Anterior segment imaging, Artificial intelligence, Corneal disease, Cataract, Dry eye disease, Anterior chamber depth, Teleophthalmology
Key Summary Points
| Smartphone-based portable slit lamps (e.g., Smart Eye Camera [SEC] and iSpector) are rapidly evolving into practical, low-cost front-line alternatives to conventional slit-lamp biomicroscopy for anterior segment examinations, enabling high-resolution imaging and video capture beyond the traditional clinic. |
| This narrative review synthesizes evidence from 23 original studies on their clinical performance across major anterior segment disorders (dry eye, corneal disease, anterior chamber assessment, and cataract) and summarizes how artificial intelligence (AI) is being integrated into these platforms. |
| Beyond imaging, AI-enabled pipelines trained on smartphone slit-lamp images/videos demonstrate feasibility for automated detection, grading, and multi-disease triage, positioning smartphone slit lamps as scalable screening tools for community and teleophthalmology use. |
| Further clinical impact will depend on continued hardware refinement and optical design improvements, standardized disease-specific acquisition protocols, and rigorous multicenter external validation of AI models, particularly for deployment in underserved and resource-limited settings. |
Introduction
In 2015, cataract, uncorrected refractive error, and glaucoma were the leading causes of blindness globally (36.0 million estimated cases), while the most common reasons for moderate or severe vision impairment—affecting approximately 216.6 million individuals—included uncorrected refractive error, cataract, age-related macular degeneration, glaucoma, and corneal opacity [1]. Delays in diagnosis and intervention for these diseases may result in disease progression, greater therapeutic complexity, and poorer visual outcomes [2]. Notably, among these major blinding disorders, cataract and corneal opacity can be effectively screened and detected at an early stage using slit-lamp examination alone. Slit-lamp evaluation is also crucial for early identification of primary angle-closure glaucoma (PACG), a major subtype of glaucoma characterized by shallow anterior chambers and narrow angles. However, significant disparities in eye care access persist between rural and urban or more developed regions, often due to a shortage of trained ophthalmologists and insufficiently equipped facilities in less developed areas [3]. Community-based screening programs are therefore essential, enabling earlier identification, appropriate management, and effective referral, and generating meaningful public health benefits [4, 5].
Conventional slit-lamp devices, some equipped with integrated cameras, can document a wide spectrum of anterior segment pathologies. However, these devices are typically expensive, heavy, and require specialist training to operate. By contrast, portable slit-lamp devices—particularly those based on smartphones—are more affordable, compact, and user-friendly. Continued advancements in external attachments, including macro lenses, blue-light filters, and auxiliary illumination sources [6], alongside the evolution of smartphone hardware featuring built-in macro lens capabilities, high-resolution sensors, advanced processors, and touchscreens for efficient image processing, have improved image quality for anterior segment assessment. Consequently, smartphone-based portable slit lamps represent a promising solution for anterior segment imaging, especially suited to resource-limited settings [6].
The potential value of artificial intelligence (AI) in ophthalmology is particularly pronounced in areas requiring large-scale analysis of image data [7]. Recently, AI applications have facilitated the diagnosis and management of various ocular diseases [8, 9]. Efforts are also underway to engineer ophthalmic devices that integrate AI algorithms to improve the efficiency and reach of eye care delivery [10, 11]. Despite these advances, widespread implementation in screening and resource-constrained environments remains challenging due to the need for slit-lamp image databases for algorithm training, validation, and deployment. Owing to their portability and ease of use, smartphone-based portable slit lamps provide an innovative platform for large-scale, high-quality data acquisition, supporting the development and clinical integration of AI models. Several studies have demonstrated that images obtained using smartphone-based portable slit lamps equipped with suitable attachments achieve diagnostic accuracy comparable to that of conventional slit-lamp microscopes [12–16], underscoring the potential of combining AI-driven algorithms with portable slit-lamp imaging as an effective tool for screening and early identification of major causes of blindness.
This review synthesizes current evidence regarding the clinical performance of smartphone-based portable slit-lamp devices for anterior segment disease assessment, emphasizing their integration with AI, and discusses ongoing challenges and future directions in this field.
This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.
Literature Search Method
Electronic bibliographic searches in PubMed and Google Scholar from their inception to 15 July 2025 were conducted for this narrative review, with the primary research aim of examining the role of smartphone-based portable slit-lamp imaging in anterior segment disease, and its integration with AI. The search strategy incorporated a range of keywords, including “portable slit lamp,” “handheld slit lamp,” “smartphone slit lamp,” “smart eye camera,” “anterior segment,” “cornea,” “conjunctiva,” “anterior chamber,” “cataract,” “keratitis,” “pterygium,” “dry eye,” “artificial intelligence,” “deep learning,” “machine learning,” “telemedicine,” and “teleophthalmology.” Only original research articles that focused on smartphone-based slit lamps for anterior segment imaging and AI integration were included in the analysis. Reviews, systematic reviews, and other non-primary research articles were excluded. Duplicates were removed, and articles not written in English or lacking full-text availability were excluded from the review. A total of 23 articles were included in the final manuscript. No formal risk-of-bias assessment was conducted due to the narrative nature of the review.
Smartphone-Based Slit-Lamp Imaging
Modern smartphones have witnessed remarkable technological improvements, including the incorporation of high-resolution sensors, macro and tele-macro capabilities, optical zoom, image stabilization, and advanced computational image processing algorithms. Collectively, these upgrades enable smartphones to capture detailed anterior segment images under diffuse illumination, providing a basis for basic slit-lamp examinations [17].
Expanding on these innovations, smartphone-based slit-lamp devices have emerged as a new generation of mobile ophthalmic imaging platforms. By integrating a smartphone’s camera and light source with specialized optical attachments—such as slit-light converters, macro lenses, and filters—these devices mimic core functions of a conventional slit-lamp biomicroscope in a compact and portable form (Fig. 1). Because device iterations, software pipelines, and region-specific commercial variants evolve rapidly, Table 1 summarizes representative peer-reviewed reports and device families with their key design features and imaging capabilities; non–peer-reviewed prototypes and local commercial variants may exist beyond those listed. Notably, much of the published evidence—including feasibility studies and agreement assessments against conventional slit-lamp examination—along with many AI applications for automated detection, grading, and clinical decision support, has been built on a small set of commonly used platforms [e.g., SEC (Fig. 2) and iSpector-based systems (Fig. 3)], which accordingly feature prominently in the studies discussed in this review. In the following sections, we synthesize the evidence on the clinical assessment performance of smartphone-based portable slit-lamp systems across anterior segment diseases and discuss how these platforms are being integrated with AI to augment diagnostic and analytical capabilities.
Fig. 1.

Schematic diagram of a smartphone-based portable slit-lamp imaging system, integrating the smartphone camera and light source with specialized optical attachments, including slit-light converters, macro lenses, and filters. Original figure created by the authors
Table 1.
Smartphone-related portable slit-lamp devices and smartphone-enabled anterior segment imaging solutions (representative peer-reviewed reports)
| Device | Country | Publication year | Key distinguishing features | Representative reference |
|---|---|---|---|---|
| Smart Eye Camera (SEC, OUI Inc., Tokyo, Japan) | Japan | 2019 | Smartphone-attachable anterior segment imaging module: a lightweight three-dimensional (3D)-printed frame coupled to the smartphone camera/light source, incorporating a removable convex macro lens (≈ ×20) for close-focus anterior segment imaging and video capture. Illumination/fluorescein support converts smartphone white light to ~488-nm blue excitation via a removable blue filter and uses a removable 525–550-nm band-pass filter to enhance fluorescein-related signals. Slit-beam capability transforms smartphone illumination into a narrow-slit beam (~0.2–1.0 mm) at a fixed ~40° angle (reported), enabling portable slit-beam–like assessment (Fig. 2) | [18] |
| Anterior segment photography with an intraocular lens (IOL) (ASPI) | India | 2019 | Ultralow-cost, clinic-buildable smartphone macro-adapter: a do-it-yourself (DIY) mount made from two small strips of rigid chart paper/plastic with aligned ~5-mm apertures, using an intraocular lens (IOL) fixed over the opening as the imaging optic and taped to align with the smartphone camera. Flexible illumination and capture: still photos or video can be recorded by moving the phone to focus, using the built-in flash or an external light source. Reported to provide high magnification for anterior segment documentation in clinics, hospitals, health centers, and outreach camps | [19] |
| Portable slit lamp (ISPECTOR MINI HE 010–21, eyerobo, Shenyang, China) | China | 2020 | Compact handheld smartphone slit lamp (reported ~ 126 × 53 × 31 mm; ~100 g) designed for pocket portability and rapid deployment. Fixed-geometry slit illumination generated by a light-emitting diode (LED)-based light source coupled with a slit aperture and projection optics (reported slit width ≤ 0.2 mm at ~40 mm working distance), combined with a microlens imaging path (~ ×10 magnification); the slit source and microlens are arranged at ~30° to approximate cataract screening geometry. The smartphone camera captures still images/video for documentation and potential upload to deep-learning systems (Fig. 3) | [20] |
| Portable smartphone-based digital slit lamp (PSL D20) | India | 2021 | Portable smartphone-based digital slit lamp with separate illumination and imaging optical paths. Illumination: multicolor LED source with condenser optics and a right-angled prism; swiveling slit head allowing angle maneuvering up to ~45° on either side, variable slit width (0–12 mm) and fixed slit length [12], with white/blue/green modes and adjustable intensity. Imaging/handling: ~ ×4 optical magnification, working distance ~61 mm, compact handheld form factor (< 500 g) with single-hand controls and a docking station. Power/software: rechargeable lithium-ion battery (≥ 5 h use; auto-sleep), acquisition app with focus-driven capture, exposure optimization, optional digital zoom (~ ×2.25), teleophthalmology workflow (secure cloud patient management; optional Digital Imaging and Communications in Medicine (DICOM) export for electronic medical record (EMR)/picture archiving and communication system (PACS)), and offline peak-sharpness frame extraction from video to reduce motion blur | [21] |
| Smartphone corneal imaging attachment (Ocular CellScope) | India | 2022 | Slide-on smartphone corneal imaging attachment providing magnified, controlled-illumination photographs using a +25-diopter lens aligned to the smartphone camera and external LED light sources. Dual-mode configurations support both white-light corneal photography (white LEDs; ~5250 K correlated color temperature) and fluorescein imaging (blue LEDs with ~472-nm peak wavelength plus an emission filter) to isolate fluorescence from stained epithelial defects. Modular design allows a single unmodified smartphone to switch between white light and fluorescein modes by swapping/attaching the corresponding LED/filter module | [22] |
| The Slitscope (DIY slit-beam attachment; prototypes and 3D-printed model) | India | 2024 | Ultralow-cost DIY slit-beam generator described as two preliminary prototypes plus a 3D-printed version, built from readily available materials and designed to be replicable. Prototype 1 uses stacked plastic containers housing a 9 V battery-powered white LED, an opaque slit aperture, and a movable high-power condensing lens to converge light into a thin slit beam. Prototype 2 repurposes a commercial torch (“moonlight” beam) with a chart-paper slit (~ 0.5 mm) and an IOL-based convex lens module to reconverge the divergent beam into a slit image. Designed for use alongside a smartphone to capture anterior segment images analogous to slit-lamp photography | [23] |
| 3D-printed all-in-one magnetic smartphone adapter (fundus and anterior segment imaging) | Italy | 2025 | 3D-printed all-in-one magnetic smartphone adapter for anterior segment and fundus imaging: a custom 3D-printed adapter designed to magnetically attach to a modern smartphone and, when needed, to auxiliary optics (e.g., a slit lamp eyepiece or a condensing lens) to facilitate clinical imaging of both anterior and posterior segment structures. The modular design uses magnetic fastening for rapid assembly and interchange of imaging modules; when coupled with a smartphone, it supports high-resolution (e.g., 4K) photos/videos of the anterior segment and central/peripheral fundus | [24] |
| 3D-printed portable slit lamp for high-resolution anterior segment images | Italy | 2025 | 3D-printed portable slit-lamp system integrated with a macro lens-equipped smartphone: the illumination unit comprises a warm white LED light pen housed in a custom 3D-printed case, with a biconvex lens focusing the light through a narrow slit (reported 0.4 mm) to generate slit illumination. A high-end smartphone equipped with a macro lens (reported 100 mm) serves as the imaging module for high-resolution anterior segment photography | [25] |
Fig. 2.
Characteristics of the Smart Eye Camera (SEC). A The hardware, which contains the base component that fits into the iPhone 7. The blue filter fits above the light source, and a convex macro lens is placed above the camera. B The iPhone 7 equipped with the SEC. C The recording interfaces. Reprinted from Shimizu E, 2021. Smart Eye Camera: A Validation Study for Evaluating the Tear Film Breakup Time in Human Subjects. Transl Vis Sci Technol. 10(4):28. 10.1167/tvst.10.4.28. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. This submission is non-commercial and is not intended for monetary gain or commercial advantage
Fig. 3.
The compact design and key features of the iSpector. Reprinted from He X, 2025. Intelligent screening of narrow anterior chamber angle based on portable slit lamp. NPJ Digit Med. 8(1):449. 10.1038/s41746-025-01853-2. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. This submission is non-commercial and is not intended for monetary gain or commercial advantage
Clinical Assessment of Anterior Segment Diseases with Smartphone-Based Portable Slit Lamp
Dry Eye Disease
Handayani et al. investigated inter-observer reliability in tear film breakup time (TBUT) measurements obtained from SEC recordings as a telemedicine solution for dry eye disease (DED) diagnosis in remote Indonesian settings. For TBUT assessment, the SEC was positioned 4 cm from the corneal apex for optimal focus, and recordings were made at 4K resolution and 30 frames per second (fps). TBUT analysis was conducted independently by an ophthalmologist and a resident physician under standardized blinded procedures. Results showed substantial inter-observer reliability for TBUT measurement (ICC = 0.78, 95% CI 0.31–1.26, p = 0.001) and strong agreement in DED diagnosis combining ocular surface disease index (OSDI) and TBUT criteria (weighted kappa = 0.71). These findings illustrate both the reliability of SEC-based TBUT measurement and its utility in expanding diagnostic access in settings with limited resources [26].
Borselli et al. evaluated the SEC’s capabilities for tear meniscus height (TMH) measurement relative to the slit lamp. Images from both devices were processed in ImageJ, calibrated, and enhanced via contrast adjustment, thresholding, and binarization to optimize TMH visualization and support manual measurement. Among 40 eyes (20 subjects), TMH values from SEC were comparable to slit lamp (upper: 0.209 vs. 0.235 mm, p = 0.073; lower: 0.297 vs. 0.260 mm, p = 0.275). Bland–Altman analysis verified strong agreement, with mean bias within ±0.03 mm [27].
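The semi-automated workflow described above—binarize the image, measure the bright meniscus band, then calibrate pixels to millimetres—can be sketched in code. This is an illustrative approximation only: the study used ImageJ with manual measurement, and the threshold value and function names below are assumptions, not the authors' actual pipeline.

```python
import numpy as np

def tear_meniscus_height_px(gray, threshold=60):
    """Binarize a cropped grayscale meniscus image and estimate tear
    meniscus height as the median vertical extent (in pixels) of the
    bright meniscus band across image columns."""
    binary = gray > threshold            # thresholding -> binary mask
    heights = binary.sum(axis=0)         # bright pixels per column
    cols = heights[heights > 0]
    return float(np.median(cols)) if cols.size else 0.0

def px_to_mm(height_px, mm_per_px):
    """Convert a pixel measurement to millimetres using a calibration
    factor derived from an object of known physical size in the frame."""
    return height_px * mm_per_px
```

In practice, calibration (the `mm_per_px` factor) would come from the same reference scale used when the images were calibrated in ImageJ.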
Shimizu et al. evaluated the SEC’s diagnostic capability for dry eye disease in a clinical setting. Using identical protocols for both conventional slit-lamp and SEC examinations, SEC was positioned 4 cm from the corneal apex for optimum focus, with recordings at 4K/30 fps and at least three blinks per video (Fig. 4). TBUT and corneal fluorescein staining (CFS) were measured and compared according to updated DED diagnostic criteria [28]. SEC demonstrated high diagnostic validity (sensitivity 0.957; specificity 0.900; PPV 0.880; NPV 0.964) with moderate inter-observer reliability for TBUT (κ = 0.527) and CFS (κ = 0.550), and strong correlation with conventional slit-lamp measurements (r = 0.887 for TBUT; κ = 0.920 for CFS), confirming SEC’s practical reliability for DED assessment [29].
Fig. 4.
Comparison of dry eye disease visual characteristics between conventional slit-lamp microscope and the Smart Eye Camera (SEC). Clinical images of the left eye from a 53-year-old patient with severe ocular graft-versus-host disease, presenting with a broad conjunctival pseudomembrane, significant lower eyelid meibomian gland dysfunction, and corneal epitheliopathy. A–C were captured using a conventional slit-lamp microscope; D–F were video-recorded using the SEC. A, D show the superior tarsal plate with a broad conjunctival pseudomembrane (diffuse illumination method). B, E depict the lower conjunctiva and eyelid with pseudomembranes and meibomian gland dysfunction (diffuse illumination method). C, F illustrate the corneal epithelial disorder (blue light illumination method). Reprinted from Shimizu E, 2021. Smart Eye Camera: A Validation Study for Evaluating the Tear Film Breakup Time in Human Subjects. Transl Vis Sci Technol. 10(4):28. 10.1167/tvst.10.4.28. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. This submission is non-commercial and is not intended for monetary gain or commercial advantage
Mizukami et al. investigated the effect of image enhancement on diagnostic interpretation of SEC video across TBUT, TMH, and CFS scoring, and conjunctivochalasis detection. Enhancement involved amplifying the green (G) channel intensity within the RGB color space to highlight fluorescein-stained structures, with component intensities multiplied by a factor to avoid excessive brightness or saturation. Each processed video underwent manual inspection to ensure clarity and prevent “blown-out” areas. Three enhancement settings were tested: no enhancement (G0), mild (G3), and strong (G7). Strong enhancement (G7) improved detection of subtle tear film breakup, while mild enhancement (G3) optimized CFS scoring. By contrast, detection of conjunctivochalasis and TMH was most reliable without enhancement (G0) [30].
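The enhancement described—multiplying green-channel intensity while guarding against blown-out, saturated regions—can be sketched as follows. The mapping of the G3/G7 settings to specific multiplication factors is not detailed here, so the `gain` parameter is illustrative:

```python
import numpy as np

def enhance_green(rgb, gain):
    """Amplify the green channel of an 8-bit RGB image by `gain`,
    clipping to 255 so that boosted fluorescein signal does not
    produce blown-out (saturated) regions. gain=1.0 leaves the image
    unchanged (analogous to the G0 setting)."""
    out = rgb.astype(np.float32)
    out[..., 1] *= gain                  # boost fluorescein-stained structures
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applied frame by frame to a recorded video, stronger gains emphasize subtle fluorescein breakup at the cost of contrast elsewhere, which is consistent with the study's finding that different diagnostic targets favor different settings.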
Collectively, these studies demonstrate that the SEC provides reliable assessments of TBUT, TMH, and CFS in diagnosing DED, and that tailored image enhancement strategies can optimize detection depending on the diagnostic target. The SEC is suitable for both in-clinic and telemedicine applications, offering consistent, accessible dry eye disease evaluation.
Cornea
Kumar et al. assessed the diagnostic performance of smartphone-based corneal imaging to detect corneal scars in patients with corneal ulcers. In their study, corneal photographs acquired via a digital single-lens reflex (SLR) camera were compared to those images captured with a smartphone equipped with a custom external attachment (Corneal CellScope) comprising a three-dimensional (3D)-printed case, a +25-diopter lens, and dual light-emitting diode (LED) illumination. The sensitivity for corneal opacity detection was 68% (95% CI: 58–77%) with default smartphone settings and 59% (95% CI 49–69%) with the SLR; specificity was 97% (95% CI 93–100%) for the smartphone and 97% (95% CI 92–100%) for the SLR. The sensitivity of smartphone-based detection increased for larger scars (81% for opacities ≥ 2 mm), more visually significant opacities (100% for eyes with visual acuity worse than 20/400), and more recent scars (85% for cultures within the last 12 months) [31].
Ghafarian et al. evaluated the SEC for the detection of corneal ulcers (Fig. 5). Compared with conventional slit-lamp examination, the SEC achieved 100% sensitivity and specificity in ulcer detection. It also enabled accurate assessment of clinical parameters, including infiltration size and thinning, epithelial defect extent, hypopyon size, and corneal vascularization quadrants, with agreement exceeding 80%. The SEC also showed comparable accuracy to both slit-lamp examination and slit-lamp photography in determining infiltration patterns and possible underlying microorganisms [32].
Fig. 5.
Comparison of corneal ulcer images captured by Smart Eye Camera (SEC) and conventional slit-lamp microscopy. Left two columns: images acquired with SEC; right two columns: images from conventional slit-lamp microscopy. Fluorescein staining results are illustrated under a cobalt blue filter. LASIK (Laser-Assisted in Situ Keratomileusis), CED (corneal endothelial dysfunction), AC (anterior chamber). Reprinted from Ghafarian S, 2025. Clinical evaluation of corneal ulcer with a portable and smartphone-attachable slit lamp device: Smart Eye Camera. Sci Rep. 15(1):3099. 10.1038/s41598-025-87820-z. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. This submission is non-commercial and is not intended for monetary gain or commercial advantage
Andhare et al. conducted a prospective cohort study of 100 patients with various corneal disorders to evaluate the SEC’s diagnostic accuracy relative to a conventional slit lamp. Videos of the anterior segment were acquired in 4K resolution at 30 fps using the SEC positioned 2–4 cm from the cornea, matching the focal range of its integrated convex lens. Each recording included at least three blinks to ensure complete visualization of the ocular surface. Masked review by two independent corneal specialists yielded over 90% diagnostic concordance with the gold-standard slit-lamp examination, substantiating the SEC’s accuracy for diagnosing a diverse array of corneal conditions, including pterygium, epithelial disorders, keratitis, corneal ulcers, dystrophies, ocular surface squamous neoplasia, and limbal stem cell deficiency [14].
The cumulative evidence from these studies demonstrates that smartphone-based portable slit-lamp imaging systems, particularly the SEC and similar attachments, enable reliable and sensitive corneal evaluations and may serve as practical alternatives to conventional slit-lamp examination, notably in environments with limited resources.
Anterior Chamber Depth
Shimizu et al. performed a validation study to assess the SEC for anterior chamber depth (ACD) evaluation and angle estimation, using traditional slit-lamp microscopy and anterior segment optical coherence tomography (AS-OCT) as references. Both SEC and conventional slit-lamp imaging used focused illumination at a 40° angle and a 1-mm slit beam under controlled lighting (30–75 lx). Magnified, 3D slit-light photographs supported subjective ACD grading, while peripheral angles were estimated via the modified Van Herick Plus method in the inferior quadrant (Fig. 6). The SEC showed strong agreement with conventional slit-lamp devices for ACD (r = 0.814) and Van Herick Plus grades (r = 0.919; κ = 0.757). Both devices’ ACD scores correlated moderately with AS-OCT (r = 0.609 and 0.641), and Van Herick Plus grades correlated strongly with trabecular-iris angles on AS-OCT (r = 0.702 and 0.764), suggesting the SEC’s suitability for preliminary ACD and angle assessment [33].
Fig. 6.
Comparison of anterior segment imaging by Smart Eye Camera (SEC), conventional slit-lamp microscopy, and anterior segment optical coherence tomography (AS-OCT). A Ophthalmological images of a 62-year-old Asian man. The upper-tier images were obtained using the SEC, the middle-tier images using conventional slit-lamp microscopy, and the bottom-tier images using AS-OCT. B Ophthalmological images of a 32-year-old Asian woman. The upper-tier images were obtained using the SEC, the middle-tier images using the conventional slit lamp, and the bottom-tier images using AS-OCT. Reprinted from Shimizu E, 2021. A Study Validating the Estimation of Anterior Chamber Depth and Iridocorneal Angle with Portable and Non-Portable Slit-Lamp Microscopy. Sensors. 21(4):1436. doi: 10.3390/s21041436. Licensed under a Creative Commons Attribution (CC BY) 4.0 International License
Cataract
Yazu et al. compared the diagnostic performance of the SEC for cataract grading versus conventional slit-lamp microscopy. Following pharmacologic pupil dilation, cataract grading was conducted by ophthalmologists using a conventional slit lamp, and SEC videos were recorded by an orthoptist and subsequently graded by a masked ophthalmologist with the World Health Organization (WHO) nuclear opacity scale. The two grading approaches showed strong correlation (r = 0.871, p < 0.001) and substantial inter-rater agreement (κ = 0.807, p < 0.001), indicating comparable diagnostic capability of the SEC and conventional slit lamp for cataract assessment [34].
Triningrat et al. conducted a similar study using standardized protocols for both slit-lamp and SEC video acquisition. Video recording employed ×10 magnification, medium brightness, a 45° beam angle, a beam height exceeding the nucleus, and 0.1 mm beam thickness; capture settings were matched at 1080p/30 fps. Slit-lamp and SEC videos were coded and graded on separate days to minimize bias (Fig. 7). Intra-observer (κ = 0.793–0.818) and inter-observer (κ = 0.795–0.817) agreement were high. However, concordance between SEC and conventional slit-lamp grades was only moderate to substantial (κ = 0.606–0.717), lower than the value (κ = 0.807) reported in the earlier study by Yazu et al. The discrepancy was attributed to a smaller sample size (67 vs. 128 eyes in Yazu et al.), limited standardization in SEC imaging, restricted illumination adjustment, inadequate eyelid opening, and insufficient head stabilization [35].
Fig. 7.

Nuclear cataract grading comparison between conventional slit-lamp camera and the Smart Eye Camera (SEC). First row: Images captured with a conventional slit-lamp camera; second row: Corresponding images acquired with the SEC. All images are graded according to the World Health Organization (WHO) nuclear cataract grading system. NUC (nuclear cataract). Reprinted from Triningrat AAMP, 2025. Reliability and Accuracy of Smart Eye Camera in Determining Grading of Nuclear Cataract. Korean J Ophthalmol. 39(2):114–124. 10.3341/kjo.2023.0131. Licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. This submission is non-commercial and is not intended for monetary gain or commercial advantage
Duong et al. assessed the SEC versus a conventional slit lamp for nuclear cataract grading in a Vietnamese cohort. Cataracts were graded by two ophthalmologists using the slit lamp, while SEC videos were captured by trained staff and reviewed independently by two optometrists. SEC positioning was standardized at 4 cm from the eye, stabilized on the patient’s forehead, with fixation at 3 m, and the slit centered when the iris filled approximately 80% of the screen; suboptimal videos were retaken (Fig. 8). Nuclear cataract (NUC) grading used both the Lucio–Buratto (Vietnam Ministry of Health) and WHO scales. Inter-device correlation was strong for the Buratto scale (r = 0.797) and moderate for the WHO scale (r = 0.579; both p < 0.001), with corresponding reliability of κ = 0.774 (Buratto) and κ = 0.539 (WHO). Lower performance with WHO grading was ascribed to the shorter imaging time, operator variability, patient cooperation, and environmental factors. Additional limitations were fixed light intensity, immobile SEC lighting, and smartphone camera processing variability, which could affect color fidelity and grading accuracy [36].
Fig. 8.

Smart Eye Camera (SEC) images of nuclear cataracts graded by the World Health Organization (WHO) nuclear cataract grading system. A Nuclear cataract (NUC)-0 with posterior polar cataract, B NUC-1, C NUC-2, D NUC-3. All images were captured using the SEC and graded according to the WHO nuclear cataract grading system. Reprinted from Duong NM, 2024. Diagnostic Assessment of Nuclear Cataracts Using a Smartphone-Attachable Slit-Lamp Device: A Cross-Sectional Study in Vietnam. Cureus. 16(11):e73783. 10.7759/cureus.73783. Licensed under a Creative Commons Attribution (CC BY) 4.0 International License
These comparative studies collectively suggest that the SEC is a promising tool for cataract grading, but its performance depends on acquisition protocols and technical factors, underscoring the need for standardized imaging procedures.
Integration of Artificial Intelligence in Smartphone-Based Anterior Segment Imaging
Dry Eye Disease Assessment
Shimizu et al. developed an AI system for diagnosing DED from SEC-recorded ocular surface videos. Videos were segmented into frames, excluding low-quality frames, and the corneal region was cropped for consistency. Each frame was labeled as “tear breakup positive” or “tear breakup negative”, and a Swin Transformer model was trained to predict breakup probability. TBUT was calculated by marking the moment of eye opening as 0 s and identifying the first frame with positive breakup, with the elapsed time serving as TBUT. DED diagnosis was based on AI-estimated TBUT ≤ 5 s plus OSDI > 13. The model achieved high frame-level performance (accuracy 0.789, AUC 0.877) and robust DED diagnostic accuracy (sensitivity 0.778, specificity 0.857, AUC 0.813), demonstrating the feasibility of automated video-based DED assessment [37].
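The mapping from per-frame breakup predictions to a TBUT estimate and a DED call is simple bookkeeping. A minimal sketch in Python, assuming per-frame breakup probabilities from the classifier and the recording frame rate (the function names and the 0.5 probability threshold are illustrative, not taken from the paper):

```python
def estimate_tbut(breakup_probs, fps, threshold=0.5):
    """Estimate tear breakup time (s) from per-frame breakup probabilities.

    Frame 0 is the moment of eye opening (t = 0 s); TBUT is the elapsed
    time to the first frame classified as breakup-positive.
    """
    for i, p in enumerate(breakup_probs):
        if p >= threshold:
            return i / fps
    return None  # no breakup observed within the clip


def is_ded(tbut, osdi):
    """Diagnostic rule used in the study: TBUT <= 5 s plus OSDI > 13."""
    return tbut is not None and tbut <= 5.0 and osdi > 13
```

For a 30 fps clip whose first positive frame is frame 90, the estimated TBUT is 3 s, which with an OSDI of 20 satisfies the diagnostic rule.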
Pterygium Detection and Grading
Liu et al. developed a three-stage AI pipeline for pterygium detection and grading in smartphone-acquired images. In the first stage, a ResNet101-based Faster R-CNN detector, trained solely on slit-lamp dataset (SLD) images, was evaluated on both SLD and smartphone-based dataset (SPB) images using mAP (mean average precision), mIoU (mean intersection over union), and mAcc (mean accuracy), achieving high accuracy on slit-lamp images (mAP 0.9881, mIoU 0.9788, mAcc 96.6%) and smartphone images (mAP 0.9563, mIoU 0.9100, mAcc 95.2%). In the second stage, a U-Net with an Se-ResNeXt50 backbone (SRU-Net) was employed to segment the pterygium and cornea. The segmentation model was trained on 2276 SLD images and 118 SPB images and evaluated with mIoU, mHD (mean Hausdorff distance), and mPA (mean pixel accuracy). Models trained separately on SLD images (SM1) or smartphone images (SM2) performed suboptimally on smartphone data (SM1: mIoU 0.7781, mHD 0.3507, mPA 0.8889; SM2: mIoU 0.6784, mHD 0.5556, mPA 0.8317), whereas the fusion model (SM3), trained on both datasets, improved significantly (mIoU 0.8169, mHD 0.3139, mPA 0.9259) and was therefore selected as the final segmentation model for smartphone-based images. In the third stage, OpenCV algorithms were used to measure key pterygium indicators, such as base width, length, and area, for risk assessment: the horizontal corneal diameter was assumed to be 12 mm, and the length, width, and area of the pterygium were calculated from pixel ratios relative to the corneal diameter and the segmented pterygium contours. On SPB images tested with SM3, the model achieved a microaverage F1 score of 0.8981, sensitivity of 0.8709, specificity of 0.9668, AUC of 0.9295, and high accuracy (88.31%); the kappa consistency coefficient was 0.9086.
SM3 performed reliably across different smartphone brands (F1 0.933–0.959, sensitivity 0.814–0.884, specificity 0.973–0.983, accuracy 84–96%) and demonstrated diagnostic and grading accuracy similar to that of expert ophthalmologists (F1 0.9248 vs. 0.8971, accuracy 88.5% vs. 93.9%) [38]. This study demonstrates the feasibility of a fusion training model combining slit-lamp and smartphone data, establishing an effective approach for pterygium detection and segmentation, and offering valuable insights for future research.
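The third-stage measurement reduces to a pixel-to-millimeter conversion anchored on the assumed 12 mm corneal diameter. A minimal sketch of that conversion (the helper and the example pixel values are illustrative, not the authors' code):

```python
CORNEAL_DIAMETER_MM = 12.0  # population-average assumption used in the study

def pterygium_metrics(corneal_diameter_px, length_px, width_px, area_px):
    """Convert pixel measurements from segmented contours to mm and mm^2.

    scale is millimeters per pixel; areas scale with the square of it.
    """
    scale = CORNEAL_DIAMETER_MM / corneal_diameter_px
    return {
        "length_mm": length_px * scale,
        "base_width_mm": width_px * scale,
        "area_mm2": area_px * scale ** 2,
    }
```

For a cornea spanning 600 px, each pixel covers 0.02 mm, so a 150 px pterygium length corresponds to 3 mm, and a 10,000 px² area to 4 mm².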
Corneal Disease Assessment
Yoshitsugu et al. utilized the SEC to acquire anterior segment videos and developed a two-stage deep learning pipeline for corneal opacity detection. Initially, the corneal region of interest (ROI) was manually annotated and used to train a segmentation model for precise ROI extraction. These segmented corneal images were then contrast-enhanced using the contrast-limited adaptive histogram equalization (CLAHE) algorithm and input into a specifically trained EfficientNet-B4 model for final classification. While initial classification using unprocessed full-frame images yielded accuracy of 0.80, restricting the analysis to the refined ROI markedly improved accuracy to 0.96. The segmentation model demonstrated exceptional performance in identifying corneal boundaries (intersection over union (IoU) scores of 0.94). Convolutional neural network (CNN)-based analysis of medical examination videos offers distinct advantages, as video data capture richer temporal features than static images and support multi-disease diagnosis [39].
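CLAHE is a tiled, clip-limited variant of histogram equalization (OpenCV exposes it as `cv2.createCLAHE`). As a dependency-light illustration of the underlying contrast-enhancement idea, a global histogram equalization over an 8-bit grayscale array:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image.

    CLAHE applies the same remapping per local tile with a clip limit,
    which avoids over-amplifying noise in homogeneous regions.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # first nonzero value of the CDF
    span = cdf[-1] - cdf_min
    if span == 0:  # constant image: nothing to equalize
        return img.copy()
    lut = np.clip(np.round((cdf - cdf_min) / span * 255), 0, 255)
    return lut.astype(np.uint8)[img]
```

Spreading the intensity histogram this way makes faint opacity boundaries easier for the downstream classifier to pick up.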
Zhongwen Li et al. applied a meta-learning framework—cosine nearest centroid-based metric learning (CNCML)—to smartphone-acquired corneal photographs for keratitis detection. This approach leverages knowledge transferred from slit-lamp photographs to improve accuracy in smartphone-based imaging. A standardized protocol for smartphone-based corneal image acquisition ensured subject consistency. First, a conventional deep learning model was trained on a large labeled database of slit-lamp images, thereby encoding features that reflect corneal characteristics from professional imaging. Subsequently, small cohorts of smartphone images (0 to 60 per class) were used for training, with the balance used for validation. CNCML consistently outperformed conventional deep learning models, yielding accuracy improvements from 0.56% to 12.93% and macro-area under the curve (AUC) gains from 0.004 to 0.023. These results show that feature knowledge from conventional slit-lamp images can be transferred to portable smartphone-based imaging, enabling robust keratitis screening even with limited datasets [40].
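At its core, cosine nearest-centroid classification assigns a query to the class whose centroid of embedded support examples is most cosine-similar. A minimal NumPy sketch of this decision rule (the encoder producing the embeddings is assumed, and the function is illustrative rather than the authors' implementation):

```python
import numpy as np

def cosine_nearest_centroid(support_emb, support_labels, query_emb):
    """Assign query_emb to the class with the most cosine-similar centroid.

    support_emb: (n, d) embeddings of the few labeled smartphone images
    support_labels: (n,) integer class labels
    query_emb: (d,) embedding of the image to classify
    """
    classes = np.unique(support_labels)
    centroids = np.stack(
        [support_emb[support_labels == c].mean(axis=0) for c in classes]
    )
    # cosine similarity between the query and each class centroid
    sims = centroids @ query_emb / (
        np.linalg.norm(centroids, axis=1) * np.linalg.norm(query_emb)
    )
    return int(classes[np.argmax(sims)])
```

Because only the centroids are estimated from the smartphone images, the rule stays stable even with very few examples per class.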
Anterior Chamber Narrow-Angle Detection and Depth Prediction
He et al. conducted a rigorous investigation using iSpector to identify narrow anterior chamber angles (NACA), employing standardized acquisition of both vertical and horizontal slit-light bands with consistent positioning, illumination, and slit-light orientation. The AI workflow involved (1) limbus localization (UNet backbone, PSPNet, and FCN heads), (2) semantic segmentation of the cornea and iris slit-light bands, (3) anatomical cross-section reconstruction, and (4) extraction of eight quantitative anatomical parameters for ensemble-based narrow-angle classification. Internal evaluation showed high sensitivity for horizontal bands (0.90) and specificity for vertical bands (0.95), comparable to Scheimpflug photography (sensitivity 0.92, specificity 0.86). Decreased external validation performance was attributed to handheld instability, variation in device–eye distance, and inconsistent lighting—all affecting image sharpness and slit-light pattern clarity. This study substantiates the feasibility of advanced AI algorithms for NACA detection using smartphone-based portable slit-lamp devices, while underscoring the need for improved generalizability through quality control [41].
Chen et al. explored machine-learning-guided prediction of ACD from images acquired with the MIDAS smartphone-based portable slit lamp. The MIDAS prototype integrates a smartphone camera with a 45° LED slit illumination system at a fixed 21 mm working distance and ×10 magnification; it attaches via a clamp, is battery-powered, and is designed for compact operation. Standardized mesopic imaging was followed by AS-OCT acquisition as the reference. Pixel distances between corneal and iris landmarks were algorithmically measured and paired with AS-OCT-derived ACD. A random forest regression model with 11 decision trees was trained on 60% of the data and validated on the remaining 40%, with mean absolute error as the error metric. Predicted ACD values correlated strongly with AS-OCT (R2 = 0.91 for training, R2 = 0.73 for validation), supporting the potential of the smartphone-based portable slit lamp plus AI as a non-contact, high-accuracy anterior segment assessment tool [42].
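The regression step can be reproduced in miniature with scikit-learn. The sketch below uses synthetic stand-in data (the landmark features and the linear relation are invented for illustration) with the study's 11-tree forest and 60/40 split:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in data: pixel distances between corneal and iris
# landmarks (features) paired with AS-OCT-measured ACD in mm (target).
features_px = rng.uniform(20, 120, size=(200, 4))
acd_mm = 1.5 + 0.02 * features_px[:, 0] + rng.normal(0, 0.05, size=200)

# 60/40 train/validation split and an 11-tree forest, as in the study
X_train, X_val = features_px[:120], features_px[120:]
y_train, y_val = acd_mm[:120], acd_mm[120:]

model = RandomForestRegressor(n_estimators=11, random_state=0)
model.fit(X_train, y_train)
mae = float(np.mean(np.abs(model.predict(X_val) - y_val)))
```

In the study the target values come from paired AS-OCT scans rather than a synthetic formula; the pipeline shape is the same.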
Cataract Screening and Grading
The iSpector is an intelligent smartphone-based portable slit-lamp system featuring AI-augmented algorithms for real-time cataract screening. Hu et al. developed a Unified Diagnosis Framework for Automated Nuclear Cataract Grading (UDFA), utilizing a multistage pipeline: standardized image acquisition (an eye cover to control ambient lighting), image grading by experienced ophthalmologists, dataset division for training/validation/testing, and a staged analysis workflow. The UDFA process includes image quality selection using the Tenengrad focus measure, nucleus localization via a YOLOv3 object detection network, and classification through a heterogeneous ensemble strategy. A deep learning-based ShuffleNet model extracted high-level features from the cropped nucleus, while a support vector machine utilized gray-level co-occurrence matrix texture features. Predictions from both models were integrated using a stacking ensemble with a logistic regression meta-learner to produce the final cataract grade. In performance testing, UDFA outperformed GoogleNet and ResNet101, achieving an AUC of 0.9198 versus 0.8389 and 0.8990, respectively. Its accuracy reached 93.48%, surpassing ResNet101 (87.6%) and GoogleNet (83.53%). UDFA also demonstrated superior F1 scores, Youden index, and kappa values, highlighting both predictive and grading consistency [43].
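The quality-selection step relies on the Tenengrad focus measure, i.e., the mean squared Sobel gradient magnitude: sharp, in-focus frames produce strong edges and high scores, so blurred frames can be filtered out before grading. A self-contained NumPy sketch (the thresholding convention is illustrative):

```python
import numpy as np

def tenengrad(img, threshold=0.0):
    """Tenengrad focus score: mean squared Sobel gradient magnitude.

    Higher scores indicate sharper frames; gradient magnitudes at or
    below the threshold are ignored, and a featureless image scores 0.
    """
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # Sobel x
    ky = kx.T                                            # Sobel y
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):  # valid-mode correlation with the 3x3 kernels
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    g2 = gx ** 2 + gy ** 2
    mask = g2 > threshold ** 2
    return float(g2[mask].mean()) if mask.any() else 0.0
```

Frames whose score falls below a calibrated cutoff would be discarded before nucleus localization.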
Yu et al. further explored the impact of the iSpector device coupled with basic ophthalmology training on rural cataract detection, referral, and surgical rates. Participants were divided into AI-assisted (iSpector device and training) and control (training only) groups. Detection rates were comparable, but the AI-assisted group achieved referral rates 1.7 times higher and surgery rates 4.9 times higher than the control group, supporting the use of smartphone-based portable slit-lamp devices and basic training to improve cataract management in underserved areas [44].
Shimizu et al. developed an AI algorithm for nuclear cataract grading from SEC-acquired videos based on the Emery–Little classification. After excluding cases with corneal disease, aphakia/pseudophakia, or poor-quality video, 2628 eyes were included. Videos were split into frames and processed through a two-stage machine learning pipeline: a ConvNeXT model first filtered “diagnosable” frames suitable for nuclear sclerosis evaluation, which were then graded by ophthalmologists and used to train an EfficientNetV2 classifier for automated severity assessment on the Emery–Little scale. Training was conducted on non-overlapping datasets, and the models were tested independently. The system achieved 0.918 accuracy, 0.941 sensitivity, 0.827 specificity, and 0.884 AUC, demonstrating that smartphone-based portable slit-lamp video analysis combined with AI enables accurate automated cataract grading [45].
Ignatowicz et al. developed a deep learning–based Android application for automatic nuclear cataract grading, utilizing publicly available datasets and meticulous preprocessing (manual pupil region extraction, alpha channel masking, bounding-box cropping, resizing) to reduce background interference and maximize focus on relevant clinical features. Several CNN architectures were benchmarked, from large models (VGG16, ResNet50) to optimized variants (VGG11, ResNet18, MobileNetV3, EfficientNet-B0). Pre-deployment, VGG16 attained the highest accuracy (94.4%), with ResNet18 and MobileNetV3 performing comparably (> 94%). Upon TensorFlow Lite conversion and mobile integration, the accuracy of some models dropped below 70%; however, VGG11 maintained ~84% accuracy and stability, and was ultimately selected for deployment. The app provided near real-time processing (160–190 ms/image), though image quality factors—including overexposure and blur—remained decisive for reliable grading, emphasizing feasibility and challenges of translating complex grading algorithms to point-of-care mobile platforms [46].
Vasan et al. validated e-Paarvai, an Android-based cataract screening application built on a 16-layer CNN with SoftMax output. Using 1400 images graded by an ophthalmologist under standardized processing, the model classified eyes as mature cataract, immature cataract, pseudophakia, or normal, with a fallback category for poor-quality samples. Compared with slit-lamp-based grades by ophthalmologists, e-Paarvai achieved high sensitivity (96%) but low specificity (25%), overall accuracy of 88%, positive predictive value (PPV) of 92.3%, and negative predictive value (NPV) of 57.8%. Grading accuracy was high for immature cataracts (94.2%) but lower for mature cataracts (22%), pseudophakia (29.3%), and clear lenses (2%). Diagnostic accuracy improved with the integration of patient demographics (sex and age) and best-corrected visual acuity (BCVA); ROC analysis confirmed significant gains, with the best results obtained when both were included. Strict acquisition protocols (mid-dilated pupils, dark room, flash, 30–40 cm distance, ×2 magnification) were vital for consistent grading [47].
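The high-sensitivity/low-specificity pattern reported here follows directly from the 2×2 confusion table, and each metric is a one-line ratio of its counts. A minimal helper for recomputing them (the counts in the example are hypothetical, not the study's):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on diseased eyes
        "specificity": tn / (tn + fp),   # recall on healthy eyes
        "ppv": tp / (tp + fp),           # precision of a positive call
        "npv": tn / (tn + fn),           # precision of a negative call
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

When a screening cohort contains many diseased and few healthy eyes, overall accuracy can stay high even while specificity is poor, which is exactly the trade-off seen with e-Paarvai.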
Diagnosis of Multiple Anterior Segment Diseases
Ueno et al. implemented an AI-assisted multi-disease triage system for anterior segment conditions using slit-lamp camera images, consolidating 36 corneal and lenticular disorders into nine diagnostic categories (including normal, cataract, infectious keratitis, immunological keratitis, corneal scar, corneal deposits, bullous keratopathy, ocular surface tumor, and primary angle-closure glaucoma) identified using YOLO v3, YOLO v5, and RetinaNet. YOLO v5 achieved top performance (AUC 0.931–0.998, sensitivity 0.628–0.958, specificity 0.969–0.998) and was successfully applied to smartphone corneal images of the same disease spectrum. Although accuracy was lower for smartphone images than for slit-lamp images (75% vs. 88%; predictive score 0.906 ± 0.145 vs. 0.964 ± 0.082; p < 0.001), introducing a quality threshold (predictive score ≥ 0.98) restored smartphone-image accuracy to near slit-lamp levels. The AI-driven triage framework using smartphone images stratified cases into four urgency levels (“urgent,” “semi-urgent,” “routine,” and “observation”), achieving high sensitivity and specificity across all categories, with perfect accuracy (1.00) for urgent cases. This confirms the feasibility of AI-enabled triage from smartphone-based imaging with appropriate quality control [48].
Maehara et al. further validated this system, called CorneAI, by comparing ophthalmologists’ diagnostic accuracy with versus without AI support. CorneAI assistance improved overall accuracy from 79.2% to 88.8% (p < 0.001). Subgroup analyses revealed similar benefits: specialists improved from 82.8% to 90.0%, and residents from 75.6% to 86.2% (both p < 0.001). Accuracy for smartphone images increased from 78.7% to 85.5%, and for slit-lamp images from 81.2% to 90.6% (both p < 0.001). Notably, CorneAI achieved 86% standalone accuracy, but combined use with clinicians improved performance further. These results demonstrate that AI support enhances diagnostic accuracy for anterior segment diseases in both smartphone and conventional slit-lamp imaging [49].
Collectively, these studies highlight the promise of integrating AI with smartphone-based portable slit-lamp imaging—not only bridging the gap with conventional clinical assessment, but also enabling automated, scalable, and standardized anterior segment disease diagnosis. Future research should prioritize robust image quality control, particularly standardized acquisition protocols, broad generalizability across populations and devices, real-time platform integration, and comprehensive multi-disease diagnostic capability to realize the full potential of these technologies.
Current Status and Future Directions
The conventional slit lamp remains the reference standard for ophthalmic examination; however, its lack of portability, relatively high cost, and need for specialized expertise restrict its accessibility in primary care and resource-limited settings [50]. Smartphone-based imaging solutions have gained prominence as a means to address these limitations. This review consolidates recent data on the effectiveness of smartphones and smartphone-based portable slit lamps for the preliminary diagnosis of anterior segment disorders, including DED, corneal disease, narrow anterior chamber angles, and lens opacity. Additionally, emerging studies integrating AI with these imaging modalities demonstrate feasibility for automated diagnosis, disease grading, and improved triage and referral by nonspecialist practitioners. Taken together, these findings support the clinical utility of a smartphone-based portable slit-lamp imaging system for image acquisition and preliminary diagnosis of common anterior segment conditions. Augmented by AI and supported by robust datasets and validation, these systems could significantly advance screening and diagnostic workflows in community and primary care settings.
Notwithstanding their benefits, however, significant challenges persist. Certain diagnostic features inherent to conventional slit lamps remain difficult to replicate on smartphone-based portable devices. For example, the absence of a chin and forehead rest diminishes patient stability, while handheld operation introduces motion artifacts that may blur images or compromise focus. Furthermore, uniform imaging acquisition protocols are rarely adopted: essential parameters—including device-to-eye distance, magnification, illumination intensity, and slit width and angle, as well as patient preparation—are inconsistently reported, thereby limiting reproducibility and comparability with established conventional slit-lamp assessments, and hindering broader clinical deployment.
Future developments should focus on hardware and methodological innovation. On the hardware front, a portable slit lamp should incorporate features that enhance patient and device stability, such as gimbal-supported fixation, as well as the utilization of built-in smartphone optical and electronic image stabilization technologies. Optical improvements—including independent and variable-intensity illumination, adjustable slit width/length, rotatable angle, and flexible magnification—are crucial to more closely replicate conventional slit-lamp functionality while retaining portability. On the methodological front, standardized disease-specific imaging protocols should be developed and universally adopted. Such protocols will not only ensure reproducible results and facilitate comparison with conventional slit-lamp assessments, but also support accurate disease recognition, broaden device dissemination, and streamline training of nonspecialist healthcare personnel. Moreover, uniform protocols are also critical for compiling the high-quality datasets needed for AI model training with strong generalizability.
At the same time, the integration of AI with smartphone-based portable slit-lamp imaging represents an exciting avenue. Recent studies demonstrate that AI systems can substantially boost the diagnostic performance of smartphone-acquired images and videos for a wide range of anterior segment conditions, including DED, pterygium, corneal diseases, ACD estimation, and cataracts. Advanced network architectures, such as CNNs, EfficientNet, ConvNeXT, Swin Transformer, and ensemble frameworks, have attained high diagnostic accuracy, sensitivity, and specificity, and close agreement with established grading standards. These AI-assisted workflows facilitate (1) automated, objective, and reproducible diagnosis/grading; (2) multistage processing encompassing image quality control, region-of-interest segmentation, anatomical localization, feature extraction, and classification; (3) video-based analysis harnessing temporal information, capturing dynamic changes otherwise unattainable with static images; and (4) adaptability across devices, as models trained on conventional slit-lamp or hybrid datasets can be applied to smartphone-based imaging, magnifying their practical scope in primary care and underserved areas.
Key research directions for advancing smartphone-based portable slit-lamp imaging in anterior segment disease include (1) the development and adoption of standardized acquisition protocols to yield reproducible images and high-quality datasets suited to rigorous AI training, and (2) expansion beyond the current limitations of small sample sizes, single-center designs, and absent external validation through multicenter, prospective research featuring larger cohorts and robust external testing to establish clinical effectiveness and model generalizability. In addition, (3) unlike two-dimensional retinal imaging, anterior segment disorders involve transparent, three-dimensional structures such as the cornea and lens, which are difficult to capture fully with static images; video-based AI workflows are a promising solution. Recent studies demonstrate that frame-level selection and classification strategies in video data boost accuracy for multi-disease assessment, as illustrated by Shimizu et al.’s nuclear cataract grading pipeline using ConvNeXT and EfficientNetV2 [45] and their AI system for dry eye diagnosis employing Swin Transformer analysis of tear breakup events to estimate TBUT [37]. Future advancements may involve automated, multi-plane sequential focusing (e.g., tear film, cornea, anterior lens surface, lens nucleus, posterior capsule) in videos, combined with AI models for frame selection, region-of-interest segmentation, feature extraction, automated diagnosis, and classification, enabling comprehensive, standardized, multi-disease anterior segment analysis from a single video sequence.
Conclusion
Smartphone-based portable slit lamps, particularly when equipped with AI, constitute a practical and scalable approach for anterior segment assessment. They enable automated detection, grading, and multi-disease triage, and their clinical utility and accessibility may be further improved through continuous hardware refinement, standardized imaging protocols, and rigorous AI model validation, especially within community and resource-limited settings.
Author Contributions
Xinyu Liu contributed to the study conception and design, performed the literature search, and drafted the initial version of the manuscript. Lihui Meng contributed to the literature review and writing of the manuscript. Youxin Chen provided critical revision of the manuscript for important intellectual content and contributed to the interpretation and discussion of the literature. Huan Chen obtained funding, contributed to the study conception and overall supervision, and critically revised the manuscript as the corresponding author. Xinyu Liu, Lihui Meng, Youxin Chen, and Huan Chen read and approved the final manuscript.
Funding
The journal’s Rapid Service Fee was funded by Huan Chen. This work was supported by the CAMS Innovation Fund for Medical Sciences (CIFMS) (Grant No. 2023-12 M-C&T-B-004, Beijing, China) and the Peking Union Medical College Hospital Deposit Integration Commission Funds (Grant No. ZC201904168, Beijing, China).
Data Availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Declarations
Conflict of Interest
Xinyu Liu, Lihui Meng, Youxin Chen, and Huan Chen have nothing to disclose.
Ethical Approval
This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.
Contributor Information
Youxin Chen, Email: Chenyx@pumch.cn.
Huan Chen, Email: chenhuan1@pumch.cn.
References
- 1. Flaxman SR, Bourne RRA, Resnikoff S, Ackland P, Braithwaite T, Cicinelli MV, et al. Global causes of blindness and distance vision impairment 1990–2020: a systematic review and meta-analysis. Lancet Glob Health. 2017;5(12):e1221–34.
- 2. Tang J, Liang Y, O’Neill C, Kee F, Jiang J, Congdon N. Cost-effectiveness and cost-utility of population-based glaucoma screening in China: a decision-analytic Markov model. Lancet Glob Health. 2019;7(7):e968–78.
- 3. Du K, Guan H, Zhang Y, Ding Y, Wang D. Knowledge of cataracts and eye care utilization among adults aged 50 and above in rural Western China. Front Public Health. 2022;10:1034314.
- 4. Orugun AJ, Atima MO, Idakwo U, Komolafe O, Oladigbolu KK, Peter E, et al. Validation and optimization of smart eye camera as teleophthalmology device for the reduction of preventable and treatable blindness in Nigeria. Eye (Basingstoke). 2025;39(5):925–30.
- 5. Kuroiwa R, Mizukami T, Nishimura H, Khemlani RJ, Nakayama S, Shimizu E, et al. Prevalence of anterior segment diseases on a remote island: a telemedicine-based study using the Smart Eye Camera. Cureus. 2025;17(6):e86759.
- 6. Cao B, Vu CHV, Keenan JD. Telemedicine for cornea and external disease: a scoping review of imaging devices. Ophthalmol Ther. 2023;12(5):2281–93.
- 7. Waisberg E, Ong J, Kamran SA, Masalkhi M, Paladugu P, Zaman N, et al. Generative artificial intelligence in ophthalmology. Surv Ophthalmol. 2025. 10.1016/j.survophthal.2024.04.009.
- 8. Nguyen HV, Tan GSW, Tapp RJ, Mital S, Ting DSW, Wong HT, et al. Cost-effectiveness of a national telemedicine diabetic retinopathy screening program in Singapore. Ophthalmology. 2016;123(12):2571–80.
- 9. Shibata N, Tanito M, Mitsuhashi K, Fujino Y, Matsuura M, Murata H, et al. Development of a deep residual learning algorithm to screen for glaucoma from fundus photography. Sci Rep. 2018;8(1):14665.
- 10. Wu X, Huang Y, Liu Z, Lai W, Long E, Zhang K, et al. Universal artificial intelligence platform for collaborative management of cataracts. Br J Ophthalmol. 2019;103(11):1553–60.
- 11. Shimizu E, Tanji M, Nakayama S, Ishikawa T, Agata N, Yokoiwa R, et al. AI-based diagnosis of nuclear cataract from slit-lamp videos. Sci Rep. 2023;13(1):22046.
- 12. Goel R, Macri C, Bahrami B, Casson R, Chan WO. Assessing the subjective quality of smartphone anterior segment photography: a non-inferiority study. Int Ophthalmol. 2023;43(2):403–10.
- 13. Ludwig CA, Newsom MR, Jais A, Myung DJ, Murthy SI, Chang RT. Training time and quality of smartphone-based anterior segment screening in rural India. Clin Ophthalmol (Auckland, NZ). 2017;11:1301–7.
- 14. Andhare P, Ramasamy K, Ramesh R, Shimizu E, Nakayama S, Gandhi P. A study establishing sensitivity and accuracy of smartphone photography in ophthalmologic community outreach programs: review of a smart eye camera. Indian J Ophthalmol. 2023;71(6):2416–20.
- 15. Joshi VP, Jain A, Thyagrajan R, Vaddavalli PK. Anterior segment imaging using a simple universal smartphone attachment for patients. Semin Ophthalmol. 2022;37(2):232–40.
- 16. Vilela MAP, Arrigo A, Parodi MB, da Silva Mengue C. Smartphone eye examination: artificial intelligence and telemedicine. Telemed J E Health. 2024;30(2):341–53.
- 17. Pujari A, Saluja G, Agarwal D, Selvan H, Sharma N. Clinically useful smartphone ophthalmic imaging techniques. Graefes Arch Clin Exp Ophthalmol. 2021;259(2):279–87.
- 18. Shimizu E, Ogawa Y, Yazu H, Aketa N, Yang F, Yamane M, et al. Smart eye camera: an innovative technique to evaluate tear film breakup time in a murine dry eye disease model. PLoS ONE. 2019;14(5):e0215130.
- 19. Chandrakanth P, Nallamuthu P. Anterior segment photography with intraocular lens. Indian J Ophthalmol. 2019;67(10):1690–1.
- 20. Hu S, Wu H, Luan X, Wang Z, Adu M, Wang X, et al. Portable handheld slit-lamp based on a smartphone camera for cataract screening. J Ophthalmol. 2020;2020:1037689.
- 21. Dutt S, Nagarajan S, Vadivel SS, Baig AU, Savoy FM, Ganapathy VM, et al. Design and performance characterization of a novel, smartphone-based, portable digital slit lamp for anterior segment screening using telemedicine. Transl Vis Sci Technol. 2021;10(8):29.
- 22. Kumar A, Ali FS, Stevens VM, Melo JS, Venkatesh Prajna N, Lalitha P, et al. Smartphone-based anterior segment imaging: a comparative diagnostic accuracy study of a potential tool for blindness prevalence surveys. Ophthalmic Epidemiol. 2021;29(5):491–8.
- 23. Chandrakanth P, Akkara JD, Joshi SM, Gosalia H, Chandrakanth KS, Narendran V. The slitscope. Indian J Ophthalmol. 2024;72(5):741–4.
- 24. Rubegni G, Cartocci A, Tognetti L, Tosi G, Salfi M, Caruso A, et al. Design of a new 3D printed all-in-one magnetic smartphone adapter for fundus and anterior segment imaging. Eur J Ophthalmol. 2024;35(1):119–25.
- 25. Caruso A, Cappellani F, Rodolico MS, et al. A 3D-printed portable slit lamp for high-resolution anterior segment images. Eur J Ophthalmol. 2025. 10.1177/11206721251375142.
- 26. Handayani AT, Valentina C, Suryaningrum IGAR, Megasafitri PD, Juliari IGAM, Pramita IAA, et al. Interobserver reliability of tear break-up time examination using “Smart Eye Camera” in Indonesian Remote Area. Clin Ophthalmol. 2023;17:2097–107.
- 27. Borselli M, Toro MD, Rossi C, Taloni A, Khemlani R, Nakayama S, et al. Feasibility of tear meniscus height measurements obtained with a smartphone-attachable portable device and agreement of the results with standard slit lamp examination. Diagnostics (Basel). 2024;14(3):316. 10.3390/diagnostics14030316.
- 28. Tsubota K, Yokoi N, Watanabe H, Dogru M, Kojima T, Yamada M, et al. A new perspective on dry eye classification: proposal by the Asia Dry Eye Society. Eye Contact Lens. 2020;46(Suppl 1):S2–S13. 10.1097/ICL.0000000000000643.
- 29. Shimizu E, Yazu H, Aketa N, Yokoiwa R, Sato S, Katayama T, et al. Smart eye camera: a validation study for evaluating the tear film breakup time in human subjects. Transl Vis Sci Technol. 2021. 10.1167/tvst.10.4.28.
- 30. Mizukami T, Sato S, Asai K, Inoue T, Shimizu E, Shimazaki J, et al. Evaluating the effect of image enhancement on diagnostic reliability in dry eye disease using a portable imaging device. Diagnostics. 2024. 10.3390/diagnostics14222552.
- 31. Kumar A, Ali FS, Stevens VM, Melo JS, Venkatesh Prajna N, Lalitha P, et al. Smartphone-based anterior segment imaging: a comparative diagnostic accuracy study of a potential tool for blindness prevalence surveys. Ophthalmic Epidemiol. 2022;29(5):491–8.
- 32. Ghafarian S, Masoumi A, Tabatabaei SA, Yaseri M, Shimizu E, Nakayama S, et al. Clinical evaluation of corneal ulcer with a portable and smartphone-attachable slit lamp device: Smart Eye Camera. Sci Rep. 2025;15(1):3099.
- 33. Shimizu E, Yazu H, Aketa N, Yokoiwa R, Sato S, Yajima J, et al. A study validating the estimation of anterior chamber depth and iridocorneal angle with portable and non-portable slit-lamp microscopy. Sensors. 2021. 10.3390/s21041436.
- 34. Yazu H, Shimizu E, Okuyama S, Katahira T, Aketa N, Yokoiwa R, et al. Evaluation of nuclear cataract with smartphone-attachable slit-lamp device. Diagnostics. 2020. 10.3390/diagnostics10080576.
- 35. Triningrat AAMP, Doniho A, Jayanegara WG, Widiana IGR, Wigono S, Utari NML, et al. Reliability and accuracy of Smart Eye Camera in determining grading of nuclear cataract. Korean J Ophthalmol. 2025;39(2):114–24.
- 36. Duong NM, Nguyen Vu NQ, Le HT. Diagnostic assessment of nuclear cataracts using a smartphone-attachable slit-lamp device: a cross-sectional study in Vietnam. Cureus. 2024;16(11):e73783.
- 37. Shimizu E, Ishikawa T, Tanji M, Agata N, Nakayama S, Nakahara Y, et al. Artificial intelligence to estimate the tear film breakup time and diagnose dry eye disease. Sci Rep. 2023;13(1):5822.
- 38. Liu Y, Xu C, Wang S, Chen Y, Lin X, Guo S, et al. Accurate detection and grading of pterygium through smartphone by a fusion training model. Br J Ophthalmol. 2024;108(3):336–42.
- 39. Yoshitsugu K, Shimizu E, Nishimura H, Khemlani R, Nakayama S, Takemura T. Development of the AI pipeline for corneal opacity detection. Bioengineering. 2024. 10.3390/bioengineering11030273.
- 40.Li Z, Wang Y, Chen K, Qiang W, Zong X, Ding K, et al. Promoting smartphone-based keratitis screening using meta-learning: a multicenter study. J Biomed Inform. 2024;157:104722. [DOI] [PubMed] [Google Scholar]
- 41.He X, Dai G, Che H, Zhang C, Yan H, Dang Y, et al. Intelligent screening of narrow anterior chamber angle based on portable slit lamp. NPJ Digit Med. 2025. 10.1038/s41746-025-01853-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Chen D, Ho Y, Sasa Y, Lee J, Yen CC, Tan C. Machine learning-guided prediction of central anterior chamber depth using slit lamp images from a portable smartphone device. Biosensors. 2021. 10.3390/bios11060182. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Hu S, Wang X, Wu H, Luan X, Qi P, Lin Y, et al. Unified diagnosis framework for automated nuclear cataract grading based on smartphone slit-lamp images. IEEE Access. 2020;8:174169–78. [Google Scholar]
- 44.Yu S, Yan C, Qin G, Pazo EE, He X, Qi P, et al. Assessing the impact of AI-assisted portable slit lamps on rural primary ophthalmic medical service. Curr Eye Res. 2025;50(5):551–8. [DOI] [PubMed] [Google Scholar]
- 45.Shimizu E, Ohashi J, Nishimura H, Sato S, Fujioka S, Nishio M, et al. Artificial Intelligence for diagnosing nuclear cataracts based on the emery-little classification using anterior segment images. 2025.
- 46.Ignatowicz AA, Marciniak T, Marciniak E. AI-powered mobile app for nuclear cataract detection. Sensors (Basel, Switzerland). 2025. 10.3390/s25133954. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Vasan CS, Gupta S, Shekhar M, Nagu K, Balakrishnan L, Ravindran RD, et al. Accuracy of an artificial intelligence-based mobile application for detecting cataracts: results from a field study. Indian J Ophthalmol. 2023;71(8):2984–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Ueno Y, Oda M, Yamaguchi T, Fukuoka H, Nejima R, Kitaguchi Y, et al. Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases. Br J Ophthalmol. 2024;108(10):1406–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Maehara H, Ueno Y, Yamaguchi T, Kitaguchi Y, Miyazaki D, Nejima R, et al. Artificial intelligence support improves diagnosis accuracy in anterior segment eye diseases. Sci Rep. 2025;15(1):5117. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Kumar S, Yogesan K, Constable IJ. Telemedical diagnosis of anterior segment eye diseases: validation of digital slit-lamp still images. Eye (Lond). 2009;23(3):652–60. [DOI] [PubMed] [Google Scholar]
Data Availability Statement
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.