Abstract
The integration of artificial intelligence (AI) into ophthalmic subspecialties, encompassing oculoplastics, is rapidly evolving. AI has demonstrated significant promise in enhancing diagnostic accuracy, streamlining clinical workflows, and personalizing surgical decision-making. This review aims to summarize current applications of AI in oculoplastics, identify major challenges limiting its broader adoption, and explore future directions for research and clinical translation. We provide a narrative review of peer-reviewed studies and recent developments in AI-assisted diagnosis, image analysis, surgical planning, and patient monitoring within the domain of oculoplastics. Emerging AI applications include automated detection of eyelid tumors, facial analysis for ptosis and orbital disorders, preoperative planning, and postoperative outcome assessment. However, challenges such as limited data diversity, lack of interpretability, regulatory barriers, and ethical considerations persist. AI holds great promise in augmenting oculoplastic care. Moving forward, multidisciplinary collaboration, clinical validation, and advances in multimodal learning will be key to realizing the full potential of AI in this field.
Keywords: Artificial intelligence, deep learning, medical imaging, oculoplastics, telemedicine
Introduction
Oculoplastics, a subspecialty of ophthalmology, focuses on the diagnosis and surgical treatment of disorders involving the eyelids, orbit, tear ducts, and periocular region. With the increasing complexity and precision required in both functional and esthetic procedures, the field faces growing demands for accurate diagnosis, individualized planning, and improved postoperative monitoring. Traditional clinical assessments are often subjective, time-consuming, and limited by the expertise of individual surgeons, highlighting the need for innovative tools to enhance clinical decision-making and patient care.
Artificial intelligence (AI), particularly its subsets of machine learning (ML) and deep learning (DL),[1,2] has made significant inroads into various medical domains, including radiology, dermatology, and general ophthalmology. In recent years, AI applications in oculoplastics have started to emerge, ranging from image-based diagnostics and preoperative planning to postoperative outcome evaluation and patient triage. These technologies offer the potential to augment clinical workflows, reduce diagnostic error, and personalize treatment approaches.
AI-driven tools are being increasingly embedded into telemedicine platforms, enabling smart triage and remote screening of oculoplastic conditions. As powerful tools, they are expected to transform traditional medical practices and improve medical efficiency and quality. These systems allow nonspecialists to upload patient images and receive AI-generated assessments or referral recommendations. Especially in underserved or rural areas, AI-enhanced teleophthalmology holds promise for expanding access to care and reducing the burden on tertiary centers.
The objective of this review is to provide a comprehensive overview of current AI applications in oculoplastics, discuss the challenges limiting their broader implementation, and explore prospects that may reshape the specialty. By synthesizing the latest evidence and real-world applications, this review aims to offer insight into how AI can complement clinical practice and drive innovation in oculoplastic care.
Current Applications of Artificial Intelligence in Oculoplastics
Data-based diagnosis
Facial images
One of the most prominent applications of AI in oculoplastics lies in the automated analysis of periocular images. Photographic images are the most commonly used approach to analyzing facial data due to their convenience and intuitiveness. Using deep convolutional neural networks (CNNs),[3] AI systems can detect and classify eyelid lesions, blepharoptosis, and thyroid eye disease (TED) with remarkable accuracy. These models can assist clinicians in differential diagnosis, especially in primary care or resource-limited settings where access to oculoplastic specialists is limited.
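To make the approach concrete, the sketch below shows the transfer-learning recipe that most of the CNN systems cited in this section broadly follow: a ResNet50 pretrained on ImageNet is given a new two-class head and fine-tuned on labeled periocular photographs. The folder layout, class names, and hyperparameters are illustrative assumptions, not details of any published model.

```python
# Minimal transfer-learning sketch for benign-vs-malignant eyelid lesion
# classification, in the spirit of the CNN systems described above.
# Dataset layout and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),                       # ResNet50 input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],     # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Expects folders eyelid_photos/benign and eyelid_photos/malignant (hypothetical path)
train_set = datasets.ImageFolder("eyelid_photos", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)            # replace 1000-class head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                            # one fine-tuning pass
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```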
Eyelid tumors account for 5%–10% of skin tumors.[4] The prevalence and outcomes of various eyelid tumor subtypes vary significantly, owing to differences in originating tissues or cells, geographic location, genetic background, socioeconomic status, and healthcare policies. Malignant tumors sometimes have features overlapping with benign ones but may diffusely infiltrate surrounding tissues, damage the orbit, and affect intraorbital structures. Hui et al.[5,6] developed and deployed an AI model capable of analyzing digital photographs of eyelid tumors to provide real-time diagnostic suggestions regarding their benign or malignant nature. This model achieved a performance that far surpassed that of general practitioners and ophthalmic residents. They further enriched the dataset, which now contains 1195 ocular images from 616 patients, and integrated the new model into a publicly available WeChat application [Figure 1], demonstrating its accessibility and clinical value in community-level screening and diagnostic assistance.
Figure 1.
Flowchart for establishing “Intelligent Eyelid Tumor Screening” WeChat application[6]
TED is an uncommon orbital disorder characterized by complex and variable clinical manifestations. Patients can suffer from impairment of visual function, mental health, and social functioning.[7] Delayed treatment often results in poor outcomes. Management guidelines differ across stages and grades of TED based on clinical signs and symptoms, making assessment difficult for inexperienced physicians. AI-based image recognition can assist TED diagnosis in many aspects [Figure 2]. Yan et al.[8] combined anterior segment slit-lamp and facial images of 156 TED patients in a clinical activity score (CAS) assessment DL system, achieving 90% accuracy, precision, and F1 score. The inclusion of anterior segment images significantly improved the performance of their ResNet50-based system. Huang et al.[9] identified each common sign of TED (eyelid retraction, ocular dyskinesia, conjunctival congestion, and eye movement disorders) from nine-eye-position images of 1560 patients, with an average area under the curve (AUC) of 0.85 in their ResNet50 + U-Net model. Images of nine eye positions allow the model to learn close-to-three-dimensional (3D) information.
Figure 2.
(a) Representative samples of redness of the eyelids, swelling of the eyelids, redness of the conjunctiva, swelling of the conjunctiva, swelling of the caruncle or plica, and normal eyes in facial and slit-lamp images.[8] (b) Representative samples of common clinical signs in thyroid eye disease (TED) patients across the 9 eye positions.[9] (c) Segmentation of normal, mild TED, and severe TED orbits, showing the input image, the deep learning-based segmentation, and the three-dimensional (3D) generated model.[17] (d) Rendered 3D reconstructions showing the outline of the eyeball, extraocular muscles, fat, and optic nerve.[24] (e) Segmentation of the optic nerve tissue on coronal water-IDEAL-T2WI in TAO patients.[16] IDEAL-T2WI = iterative decomposition of water and fat with echo asymmetry and least-squares estimation T2-weighted imaging; TAO = thyroid-associated ophthalmopathy
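As a sketch of the multi-view idea behind the nine-eye-position model above, the toy module below encodes each gaze position with a shared CNN backbone and pools the features before a multi-label head for TED signs. It illustrates the general pattern only; it is not the published ResNet50 + U-Net architecture.

```python
# Sketch of a multi-view model: one shared CNN encodes each of the nine
# eye-position photographs, and pooled features feed a multi-label head
# for TED signs (illustrative, not the published architecture).
import torch
import torch.nn as nn
from torchvision import models

class MultiViewTEDNet(nn.Module):
    def __init__(self, n_signs: int = 4):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()            # keep 2048-d features
        self.encoder = backbone
        self.head = nn.Linear(2048, n_signs)   # one logit per clinical sign

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, 9, 3, 224, 224) — nine gaze positions per patient
        b, v, c, h, w = views.shape
        feats = self.encoder(views.view(b * v, c, h, w)).view(b, v, -1)
        pooled = feats.mean(dim=1)             # average across gaze positions
        return self.head(pooled)               # use with BCEWithLogitsLoss

model = MultiViewTEDNet()
logits = model(torch.randn(2, 9, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```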
Compared to photographs, video can capture more dimensions of dynamic information. Lootus et al.[10] extracted margin reflex distance 1 (MRD1) from 80 self-recorded smartphone video clips, using the horizontal iris diameter (11.8 mm) as a scale reference. Participants were instructed to gaze directly into their smartphone camera, then upward, then back to the camera. The MRD1 calculated by the ResNet50-based model correlated strongly with the ground truth (r = 0.73). In the same year, Schulz et al.[11] designed a program, “Video Analysis of the eyelid (VALID),” that reliably outputs MRD1, MRD2, and ocular surface area exposure based on a U-Net CNN architecture, demonstrating high stability on consumer-grade videos. Chen et al.[12] presented a smartphone-based system that identifies up to 16 visual impairment disorders in young children, including congenital ptosis, based on gaze behaviors and facial features under visual stimuli. The system showed stable performance, indicating its potential for real-world application.
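The scale-reference trick in these video studies reduces to simple arithmetic: once the iris is detected, its known physical diameter converts pixel distances to millimeters. A minimal worked example follows; the landmark coordinates are invented for illustration.

```python
# Worked example of the scale trick described above: the horizontal iris
# diameter (assumed ~11.8 mm) converts pixel distances from a video frame
# into millimeters. Landmark pixel values here are illustrative.
IRIS_DIAMETER_MM = 11.8

def mrd1_mm(iris_diameter_px: float,
            pupil_center_y_px: float,
            upper_lid_margin_y_px: float) -> float:
    """MRD1 = distance from pupil center to upper eyelid margin, in mm."""
    mm_per_px = IRIS_DIAMETER_MM / iris_diameter_px
    return (pupil_center_y_px - upper_lid_margin_y_px) * mm_per_px

# Example: iris spans 95 px; lid margin sits 32 px above the pupil center.
print(round(mrd1_mm(95.0, 412.0, 380.0), 2))  # ≈ 3.97 mm
```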
Pathologic whole slide images
The gold-standard diagnosis of eyelid tumors relies on specific pathology expertise and tissue processing. There are concerns that limited access to pathology expertise may force clinicians to forego intraoperative frozen section diagnosis and rely on gross anatomical features for tumor identification. AI-based diagnostic systems are therefore urgently needed, and currently established models have performed well. Geng et al.[13] developed a DL-based pathological framework using 318 whole slide images (WSIs) (255 sebaceous gland carcinoma and 63 squamous cell carcinoma) to classify sebaceous gland carcinoma and squamous cell carcinoma, which have overlapping clinical and histopathological features. Wang et al.[14] developed a self-supervised learning diagnostic system for malignant melanoma detection in pathological WSIs; despite a rather small sample size, it achieved accuracy, sensitivity, and specificity comparable to those of experienced pathologists. Such tools hold particular value in reducing diagnostic delays and supporting pathology services in resource-limited settings.
Computed tomography/magnetic resonance imaging
In TED patients, extraocular muscles (EOMs), especially the inferior and medial rectus, become enlarged due to autoimmune inflammation and fibrosis. This leads to restricted eye movement, diplopia, and proptosis.
Contrast-enhanced magnetic resonance imaging (MRI) can provide objective, qualitative, and quantitative data on the condition of patients’ EOMs. Li et al.[15] developed an ML model evaluating the activity stage of EOMs based on a large sample of contrast-enhanced MRI scans from 2561 eyes of 1479 patients with TED. The Light Gradient Boosting Machine (LightGBM) model achieved the best diagnostic performance (AUC 93%). Wu et al.[16] used an optic-nerve-based radiomics nomogram on water-fat imaging to detect dysthyroid optic neuropathy (DON), including 104 orbits from 59 DON patients and 131 orbits from 69 thyroid-associated ophthalmopathy (TAO) patients without DON. Eight radiomics features from water-fat imaging were used to build the radiomics signature. The optic-nerve-based radiomics nomogram showed better diagnostic performance than conventional MRI evaluation in differentiating DON from TAO without DON.
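For readers unfamiliar with gradient boosting on tabular imaging features, the snippet below sketches a LightGBM classifier of the general kind Li et al. used, trained here on synthetic stand-in features; a real system would use radiologist-extracted or radiomics measurements.

```python
# Minimal sketch of a gradient-boosting classifier like the LightGBM model
# cited above, trained on tabular radiologic features. Feature names and
# the synthetic data are assumptions for illustration only.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # e.g., per-muscle signal intensity ratios
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```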
Orbital computed tomography (CT) provides high-resolution images for rapid and efficient orbital feature extraction and can reliably measure orbital fat, muscles, lacrimal glands, and the optic nerve.[17] Ocular motility restriction is often seen in TED patients, yet its evaluation is difficult. Liu et al.[18] obtained 410 sets of CT images and clinical data from TED patients to build a CNN model rating ocular motility scores, achieving an accuracy of 90%. TED can be complicated by compressive optic neuropathy (CON), so it is important to identify affected patients early to ensure close monitoring and treatment that prevent permanent disability or vision loss. Lin et al.[19] preprocessed CTs for three conditions: no TED, mild TED, and severe TED with CON, using 1187 CT images from 141 patients (31 normal controls and 110 TED). Their Visual Geometry Group (VGG)-16-based model (90% accuracy) proved superior to human expert grading (70% accuracy). CT imaging is also useful for distinguishing diseases with overlapping clinical presentations: Ha et al.[20] trained a VGG-16 classifier on 1628 images from 192 patients (110 TED, 51 orbital myositis, and 31 controls), achieving 98% accuracy and an AUC of 0.99.
By leveraging DL architectures, AI systems are capable of identifying subtle radiologic features associated with TED and orbital tumors with high sensitivity and specificity. These models can assist radiologists and oculoplastic surgeons by offering rapid, reproducible assessments and facilitating early diagnosis.
Preoperative planning and three-dimensional simulation
AI-powered tools are increasingly used to enhance surgical planning by evaluating facial symmetry, eyelid position, and orbital structure.[21] The integration of 3D imaging and reconstruction technologies has significantly enhanced the precision and personalization of preoperative planning in oculofacial and orbital surgery. High-resolution 3D surface scanning and digital image-based modeling allow facial parameter measurements closer to reality. Song et al.[22] established a decision model based on 2D and 3D reconstruction of 120,000 facial key points from 100 ptosis patients to develop personalized surgical plans. The model achieved an accuracy of over 80% in both ptosis discrimination and surgical procedure classification. Chong et al.[23] demonstrated an automated smartphone-based 3D facial scanning system evaluated on 21 facial anthropometric landmarks. This customized 3D imaging application, “MeiXuan,” is consistent with direct measurements, showing its potential for future clinical application.
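Dense facial landmarking of the kind these 3D planning tools rely on is now accessible through open-source libraries. The sketch below uses MediaPipe Face Mesh (468 landmarks) as a freely available stand-in for the proprietary pipelines cited above; the photograph path is hypothetical.

```python
# Sketch of dense facial landmark extraction for 3D facial measurement,
# using MediaPipe Face Mesh as an open-source illustration.
import cv2
import mediapipe as mp

image = cv2.imread("patient_frontal.jpg")          # hypothetical photo path
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                     max_num_faces=1) as mesh:
    result = mesh.process(rgb)

if result.multi_face_landmarks:
    landmarks = result.multi_face_landmarks[0].landmark
    h, w = image.shape[:2]
    # Convert normalized coordinates to pixels for downstream measurement.
    points = [(lm.x * w, lm.y * h, lm.z * w) for lm in landmarks]
    print(f"Extracted {len(points)} 3D landmarks")
```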
CT-based modeling enables detailed assessment of periorbital anatomy, facilitating the design of customized implants for orbital wall reconstruction and the prediction of postoperative esthetic outcomes. MRI-based volume-rendering models can analyze soft-tissue changes in DON or orbital tumor patients. Zhang et al.[24] applied 3D INCOOL software to MRI 3D reconstruction to evaluate volumetric changes in the periorbital soft tissue of 21 healthy orbits, 38 TED orbits without DON, and 23 TED orbits with DON, providing quantitative metrics to guide surgical planning. Consorti et al.[25] compared the accuracy of customized orbital implants and 3D preformed titanium mesh in 52 inferomedial orbital fracture patients. Although accuracy was comparable between the two, they recommend 3D preformed meshes for their time- and cost-saving potential. In the near future, such models may also analyze 3D images to predict surgical outcomes and help customize surgical approaches, enabling more informed discussions before surgery.
Postoperative evaluation and monitoring
AI can facilitate postoperative assessment by objectively analyzing surgical results, tracking changes in eyelid position, swelling, and scar formation over time using serial images. Automated tools can also assist in identifying early complications, such as undercorrection, overcorrection, or asymmetry, thereby enabling timely intervention and improving long-term outcomes. Sun et al.[26] automatically measured eyelid morphological properties to predict postoperative appearance after blepharoptosis surgery, based on 970 pairs of images of 450 eyes, by estimating midpupil lid distances. The predicted postoperative multiple radial midpupil lid distances were consistent with real samples. Recently, Park et al.[27] and Lee et al.[28] introduced DeepLabV3+-based cornea exposure rate and eyeball exposure rate measurements, realizing the transition from linear to 2D measurement for ptosis surgery outcome evaluation on small sample sizes, with high reliability (intraclass correlation coefficient [ICC] 0.99) and high repeatability (ICC 1.00) compared with manual ImageJ measurements. However, patient satisfaction can decline over minor issues, especially in oculoplastics. Although current models display promising predicted results, concerns remain about the gap between prediction and reality before these tools are applied in real-world settings.
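The exposure-rate metrics above are, at their core, area ratios computed from segmentation masks. The sketch below illustrates the computation on synthetic masks; in practice a DeepLabV3+-style network would supply the cornea and visible-surface masks.

```python
# Illustrative computation of an exposure rate from segmentation masks,
# the 2D analogue of the linear MRD measurements described above. Masks
# here are synthetic boolean arrays; a segmentation network would supply
# them in practice.
import numpy as np

def exposure_rate(visible_mask: np.ndarray, full_structure_mask: np.ndarray) -> float:
    """Fraction of the structure (e.g., cornea) left uncovered by the eyelids."""
    total = full_structure_mask.sum()
    return float((visible_mask & full_structure_mask).sum() / total) if total else 0.0

# Synthetic example: a 100x100 cornea disk, upper portion covered by the lid.
yy, xx = np.mgrid[:100, :100]
cornea = (yy - 50) ** 2 + (xx - 50) ** 2 <= 40 ** 2
visible = cornea & (yy > 35)                     # eyelid covers rows 0..35
print(f"Cornea exposure rate: {exposure_rate(visible, cornea):.2f}")
```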
Challenges and Limitations
Despite the remarkable progress of AI in oculoplastics, several challenges and limitations remain that hinder its widespread adoption in clinical practice. These obstacles are multifaceted, spanning from technical and ethical concerns to regulatory and workflow integration issues.
Limited and imbalanced datasets
The accuracy and generalizability of AI models depend heavily on the quality and quantity of training data. However, oculoplastics suffers from a relative scarcity of large, well-annotated, and representative datasets, especially for rare eyelid and ocular diseases, which are insufficient to capture the full spectrum of clinical disease features. Unlike retinal imaging, where well-established public repositories such as the Eye Picture Archiving and Communication System (EyePACS) provide thousands of annotated images, periorbital datasets remain highly fragmented and typically confined to single institutions. As a result, most existing models rely on relatively small collections of retrospective images from a single demographic or institution, which limits external validity and increases the risk of algorithmic bias.
In addition, class imbalance is a pervasive issue. Benign eyelid lesions such as chalazia are far more prevalent in clinical settings than malignant tumors such as sebaceous gland carcinoma or Merkel cell carcinoma. Consequently, models trained on such imbalanced data may exhibit high overall accuracy while performing poorly in detecting rare but clinically critical conditions. This imbalance can lead to biased predictions, potentially delaying diagnosis or misclassifying malignant tumors as benign. Moreover, the underrepresentation of certain patient populations further limits the generalizability and equity of AI applications.
Techniques such as data augmentation, synthetic image generation, and transfer learning can help mitigate some of the effects of limited data, but they cannot fully substitute for well-curated real-world datasets. Therefore, establishing collaborative data-sharing frameworks and incentivizing the contribution of annotated oculoplastic imaging data will be critical steps toward improving AI performance and ensuring safe and equitable clinical deployment.
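A minimal sketch of two such mitigations, class-weighted loss and photometric/geometric augmentation, follows; the class counts are invented for illustration.

```python
# Sketch of common imbalance mitigations: weighting the loss inversely to
# class frequency and augmenting training images. Counts are invented.
import torch
import torch.nn as nn
from torchvision import transforms

counts = torch.tensor([900.0, 100.0])            # e.g., benign vs. malignant
weights = counts.sum() / (len(counts) * counts)  # rarer class weighs more
criterion = nn.CrossEntropyLoss(weight=weights)  # errors on rare class cost more

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```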
Interpretability and clinical trust
Most AI models, particularly those based on DL, function as “black boxes,” making it difficult for clinicians to understand the rationale behind a specific decision or prediction. In the context of periorbital diseases, where treatment decisions can involve potentially disfiguring or vision-threatening procedures, clinicians are understandably reluctant to rely on opaque systems whose reasoning cannot be easily scrutinized or verified. Explainability methods such as saliency heatmaps are being developed, but they only highlight the image regions most influential to the model’s prediction rather than explaining its reasoning; this residual opacity may undermine informed consent and shared decision-making. To address these concerns, methods that improve model interpretability are urgently needed. Hybrid approaches that combine DL with more transparent rule-based systems, or that incorporate clinician oversight in the decision process, may enhance both accountability and trust. Ultimately, improving interpretability will be essential for achieving clinician acceptance, supporting regulatory approval, and ensuring that AI tools augment rather than undermine clinical judgment.
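The heatmaps mentioned above are typically produced with gradient-based saliency methods such as Grad-CAM. The sketch below computes a Grad-CAM map for a ResNet50 (untrained here, purely for illustration); in practice the model would be a trained diagnostic network and the map would be overlaid on the clinical photograph.

```python
# Minimal Grad-CAM sketch: highlights the image regions driving a CNN's
# prediction, the kind of saliency heatmap discussed above.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None).eval()   # untrained, for illustration
feats, grads = {}, {}

layer = model.layer4                           # last convolutional block
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)                # stand-in for a real image
score = model(x)[0].max()                      # logit of the predicted class
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)              # channel importance
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))    # weighted activation
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224]) — overlay on the input image
```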
Regulatory and legal concerns
AI systems used in clinical settings are subject to regulatory scrutiny. Approval processes differ across regions, and there are limited guidelines specifically addressing AI in oculoplastics. Questions around liability, such as whether the developer, clinician, or institution is responsible in case of a diagnostic error, remain unresolved. In addition, the lack of standardized guidelines on how to assess the safety, efficacy, and fairness of AI tools complicates their pathway to regulatory approval. Agencies such as the US Food and Drug Administration and the European Medicines Agency have published preliminary frameworks for software as a medical device, but specific criteria for periorbital imaging and diagnostic applications remain limited. This uncertainty contributes to hesitation among clinicians and healthcare institutions when considering the adoption of AI-powered systems.
Addressing these regulatory and legal challenges will require not only clearer guidance from authorities but also collaboration among developers, clinicians, policymakers, and legal experts to establish transparent standards for validation, monitoring, and accountability. Efforts to build frameworks for continuous performance evaluation and to define the boundaries of liability will be critical to supporting safe, ethical, and legally sound integration of AI into oculoplastic care.
Data privacy and ethical considerations
AI applications that rely on facial or periocular images raise substantial privacy and ethical concerns. High-quality AI models typically require large volumes of patient images and metadata for training and validation. However, periorbital and facial images are inherently sensitive, as they contain unique identifiers that cannot be fully anonymized without compromising the clinical utility of the data. This creates heightened risks of re-identification, data breaches, and misuse of personal information, especially when dealing with cloud-based services or mobile applications. Yang et al.[29] introduced a digital mask, based on 3D reconstruction and DL algorithms, to irreversibly erase identifiable features, while retaining disease-relevant features needed for diagnosis, thus protecting the privacy of patients’ information.
Ensuring compliance with data protection regulations such as the Health Insurance Portability and Accountability Act and the General Data Protection Regulation imposes strict requirements for obtaining informed consent, securing datasets, and restricting access to personal health information. These legal frameworks can make it difficult to share data across institutions and countries while protecting privacy and security, thereby limiting the development of large, diverse, multicenter datasets that are critical for training robust AI systems.
To address these challenges, it is essential to develop transparent governance frameworks that define clear policies for data collection, de-identification, and sharing while respecting patient autonomy and confidentiality. Ethical review boards should ensure that data use agreements are explicit about secondary uses and commercial applications.
Clinical integration and workflow disruption
Effective AI tools may fail to gain traction if they do not integrate seamlessly into existing clinical workflows. Many models operate as standalone tools and require manual data input, limiting their practicality. Future tools are best embedded into electronic medical records, imaging software, or surgical planning platforms to ensure smooth adoption without adding to clinician workload.
Moreover, AI-based image analysis tools often demand high-quality, standardized input data, such as consistent lighting, camera positioning, and resolution. In real-world clinical environments, where photographs may be captured by different devices, staff with varying levels of training, and under time constraints, ensuring such standardization can be difficult. This discrepancy between ideal and real-world conditions can lead to reduced accuracy and increased variability in model outputs, further complicating adoption.
To facilitate effective clinical integration, it will be essential to design AI systems that are user-centered, interoperable, and minimally disruptive. This includes embedding AI outputs directly into familiar clinical software, developing standardized workflows that define when and how AI recommendations are used, and providing adequate training and support for clinicians. Pilot implementations and iterative feedback loops can help identify workflow bottlenecks and optimize usability before large-scale deployment. Ultimately, ensuring that AI augments rather than hinders clinical efficiency and patient care will be critical for its long-term acceptance in oculoplastic practice.
Future Directions and Opportunities
The integration of AI into oculoplastics is still in its early stages, but the future holds considerable promise. To realize AI’s full potential and ensure safe, equitable, and effective clinical impact, several key areas warrant further development and investment.
Development of multicenter, multiethnic datasets
To overcome the limitations of data diversity, future AI models should be trained and validated on large-scale, multicenter datasets that include images and clinical information across different ethnicities, ages, and clinical settings. Public data-sharing initiatives and federated learning frameworks, which allow models to be trained on decentralized data while preserving patient privacy, could play a vital role in this regard.
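Federated learning’s core loop is simple to state: each site updates the model on its own data, and only parameters, never images, leave the site before being averaged. A toy federated-averaging sketch, with a stand-in linear model and synthetic site data, follows.

```python
# Toy FedAvg sketch illustrating the federated learning idea above:
# each site trains locally and only parameter averages leave the site.
import copy
import torch
import torch.nn as nn

def local_update(model: nn.Module, data, epochs: int = 1) -> dict:
    model = copy.deepcopy(model)                   # train a local copy
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x).squeeze(1), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(states: list[dict]) -> dict:
    # Element-wise average of each parameter across participating sites.
    return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}

global_model = nn.Linear(8, 1)                     # stand-in for a real network
sites = [[(torch.randn(16, 8), torch.randint(0, 2, (16,)).float())]
         for _ in range(3)]                        # three hospitals' private data

for round_ in range(5):
    states = [local_update(global_model, site_data) for site_data in sites]
    global_model.load_state_dict(fed_avg(states))
```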
Multimodal artificial intelligence integration
The next generation of AI tools is likely to move beyond image-only inputs. Combining periocular photographs with structured clinical data may lead to more comprehensive and accurate diagnostic systems. Wang et al.[30] developed an AI diagnostic system utilizing mass spectrometry-based proteomics for multiclass classification of eyelid tumors. The model achieved an accuracy of 84.8% in analyzing proteomic data from 8 tissue types and identified 18 novel biomarkers, based on 233 formalin-fixed, paraffin-embedded samples from 150 patients. Multimodal learning could also enhance preoperative planning by integrating 3D facial scans, anatomical measurements, medical history, and biological information.
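A common design for such systems is late fusion, in which image embeddings and structured clinical variables are concatenated before the diagnostic head. The sketch below is illustrative only; the dimensions, inputs, and class count are assumptions.

```python
# Sketch of a simple late-fusion multimodal classifier: CNN image features
# are concatenated with structured clinical variables (age, lab values,
# etc. — all illustrative) before the final diagnostic head.
import torch
import torch.nn as nn
from torchvision import models

class MultimodalNet(nn.Module):
    def __init__(self, n_clinical: int = 10, n_classes: int = 5):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()                      # 512-d image embedding
        self.cnn = cnn
        self.tabular = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        self.head = nn.Linear(512 + 32, n_classes)

    def forward(self, image, clinical):
        fused = torch.cat([self.cnn(image), self.tabular(clinical)], dim=1)
        return self.head(fused)

net = MultimodalNet()
out = net(torch.randn(2, 3, 224, 224), torch.randn(2, 10))
print(out.shape)  # torch.Size([2, 5])
```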
Telemedicine and smart triage tools
The integration of AI with telemedicine can significantly expand access to care for patients with eyelid and orbital disorders, particularly in remote or underserved areas. AI-enabled smart triage tools can analyze uploaded clinical photographs or video consultations to identify common conditions such as chalazion, ptosis, or periorbital cellulitis, and prioritize urgent cases such as orbital cellulitis or malignancies for immediate referral. These systems support both clinicians and clinical training by providing preliminary diagnoses, severity assessments, and referral recommendations, thereby reducing unnecessary specialist visits, improving workflow efficiency, and sharpening diagnostic skills. In addition, AI-based platforms can facilitate longitudinal monitoring of chronic conditions through remote image comparisons, enabling early detection of disease progression or postoperative complications. As these technologies continue to evolve, they are expected to play an essential role in optimizing tele-oculoplastic care delivery and reducing barriers to timely diagnosis and treatment, while also assisting in follow-up and patient education.
Personalized surgical guidance
As AI evolves, its potential for real-time, individualized surgical guidance will expand. Wang et al.[31] used AI to segment orbital structures on CT/MRI images, support endoscopic and 3D-printing-assisted surgery, and lay the foundation for robotic surgery. Future models may offer personalized treatment plans based on a patient’s unique anatomy, preferences, and predicted response to surgery. Predictive analytics could also inform patients about expected cosmetic and functional outcomes, improving preoperative counseling and shared decision-making.
Real-world application
Before these prospects can be realized, AI models must be validated through well-designed, prospective clinical trials to gain regulatory approval and clinical acceptance. These trials should assess diagnostic accuracy, clinical outcomes, workflow efficiency, and patient satisfaction. Real-world implementation studies will also be critical in understanding how AI tools perform outside controlled research environments.
Conclusion
AI is rapidly transforming the landscape of oculoplastics, offering novel tools for image-based diagnosis, surgical planning, outcome monitoring, and telemedicine support. Early successes demonstrate the technology’s potential to enhance diagnostic accuracy, expand access to care, and improve patient outcomes.
However, significant challenges remain. These include the need for diverse, high-quality datasets, greater model transparency, regulatory clarity, and seamless integration into clinical workflows. Ethical considerations such as data privacy and algorithmic fairness also require careful attention as AI becomes more deeply embedded in healthcare delivery.
Looking ahead, the future of AI in oculoplastics is promising. Advances in multimodal learning, personalized analytics, and real-world validation will likely lead to increasingly sophisticated tools that complement clinical expertise. Through continued interdisciplinary collaboration and thoughtful implementation, AI is poised to become a valuable ally in delivering safer, more efficient oculoplastic care.
Data availability statement
Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.
Conflicts of interest
The authors declare that there are no conflicts of interest regarding this paper.
Funding Statement
Nil.
References
- 1. Deo RC. Machine learning in medicine. Circulation. 2015;132:1920–30. doi: 10.1161/CIRCULATIONAHA.115.001593.
- 2. Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, et al. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J Big Data. 2021;8:53. doi: 10.1186/s40537-021-00444-8.
- 3. Elhaddad M, Hamam S. AI-driven clinical decision support systems: An ongoing pursuit of potential. Cureus. 2024;16:e57728. doi: 10.7759/cureus.57728.
- 4. Deprez M, Uffer S. Clinicopathological features of eyelid skin tumors. A retrospective study of 5504 cases and review of literature. Am J Dermatopathol. 2009;31:256–62. doi: 10.1097/DAD.0b013e3181961861.
- 5. Hui S, Dong L, Zhang K, Nie Z, Jiang X, Li H, et al. Noninvasive identification of benign and malignant eyelid tumors using clinical images via deep learning system. J Big Data. 2022;9:84.
- 6. Hui S, Xie J, Dong L, Wei L, Dai W, Li D. Deep learning-based mobile application for efficient eyelid tumor recognition in clinical images. NPJ Digit Med. 2025;8:185. doi: 10.1038/s41746-025-01539-9.
- 7. Bruscolini A, Sacchetti M, La Cava M, Nebbioso M, Iannitelli A, Quartini A, et al. Quality of life and neuropsychiatric disorders in patients with Graves' orbitopathy: Current concepts. Autoimmun Rev. 2018;17:639–43. doi: 10.1016/j.autrev.2017.12.012.
- 8. Yan C, Zhang Z, Zhang G, Liu H, Zhang R, Liu G, et al. An ensemble deep learning diagnostic system for determining clinical activity scores in thyroid-associated ophthalmopathy: Integrating multi-view multimodal images from anterior segment slit-lamp photographs and facial images. Front Endocrinol (Lausanne). 2024;15:1365350. doi: 10.3389/fendo.2024.1365350.
- 9. Huang X, Ju L, Li J, He L, Tong F, Liu S, et al. An intelligent diagnostic system for thyroid-associated ophthalmopathy based on facial images. Front Med (Lausanne). 2022;9:920716. doi: 10.3389/fmed.2022.920716.
- 10. Lootus M, Beatson L, Atwood L, Bourdais T, Steyaert S, Sarabu C, et al. Development and assessment of an artificial intelligence-based tool for ptosis measurement in adult myasthenia gravis patients using selfie video clips recorded on smartphones. Digit Biomark. 2023;7:63–73. doi: 10.1159/000531224.
- 11. Schulz CB, Clarke H, Makuloluwe S, Thomas PB, Kang S. Automated extraction of clinical measures from videos of oculofacial disorders using machine learning: Feasibility, validity and reliability. Eye (Lond). 2023;37:2810–6. doi: 10.1038/s41433-023-02424-z.
- 12. Chen W, Li R, Yu Q, Xu A, Feng Y, Wang R, et al. Early detection of visual impairment in young children using a smartphone-based deep learning system. Nat Med. 2023;29:493–503. doi: 10.1038/s41591-022-02180-9.
- 13. Geng J, Zhang K, Dong L, Hui S, Zhang Q, Li Z, et al. AI assistance enhances histopathologic distinction between sebaceous and squamous cell carcinoma of the eyelid. NPJ Digit Med. 2025;8:406. doi: 10.1038/s41746-025-01775-z.
- 14. Wang L, Jiang Z, Shao A, Liu Z, Gu R, Ge R, et al. Self-supervised learning mechanism for identification of eyelid malignant melanoma in pathologic slides with limited annotation. Front Med (Lausanne). 2022;9:976467. doi: 10.3389/fmed.2022.976467.
- 15. Li Y, Ma J, Xiao J, Wang Y, He W. Use of extreme gradient boosting, light gradient boosting machine, and deep neural networks to evaluate the activity stage of extraocular muscles in thyroid-associated ophthalmopathy. Graefes Arch Clin Exp Ophthalmol. 2024;262:203–10. doi: 10.1007/s00417-023-06256-1.
- 16. Wu H, Luo B, Zhao Y, Yuan G, Wang Q, Liu P, et al. Radiomics analysis of the optic nerve for detecting dysthyroid optic neuropathy, based on water-fat imaging. Insights Imaging. 2022;13:154. doi: 10.1186/s13244-022-01292-7.
- 17. Alkhadrawi AM, Lin LY, Langarica SA, Kim K, Ha SK, Lee NG, et al. Deep-learning based automated segmentation and quantitative volumetric analysis of orbital muscle and fat for diagnosis of thyroid eye disease. Invest Ophthalmol Vis Sci. 2024;65:6. doi: 10.1167/iovs.65.5.6.
- 18. Liu Z, Tan K, Zhang H, Sun J, Li Y, Fang S, et al. CT-based artificial intelligence prediction model for ocular motility score of thyroid eye disease. Endocrine. 2024;86:1055–64. doi: 10.1007/s12020-024-03906-0.
- 19. Lin LY, Zhou P, Shi M, Lu JE, Jeon S, Kim D, et al. A deep learning model for screening computed tomography imaging for thyroid eye disease and compressive optic neuropathy. Ophthalmol Sci. 2024;4:100412. doi: 10.1016/j.xops.2023.100412.
- 20. Ha SK, Lin LY, Shi M, Wang M, Han JY, Lee NG. Deep learning model for differentiating thyroid eye disease and orbital myositis on computed tomography (CT) imaging. Orbit. 2025;44:758–66. doi: 10.1080/01676830.2025.2510587.
- 21. Wu KY, Kearn N, Truong D, Choulakian MY, Tran SD. Advances in regenerative medicine, cell therapy, and 3D bioprinting for corneal, oculoplastic, and orbital surgery. Adv Exp Med Biol. 2025;1483:69–114. doi: 10.1007/5584_2025_855.
- 22. Song X, Tong W, Lei C, Huang J, Fan X, Zhai G, et al. A clinical decision model based on machine learning for ptosis. BMC Ophthalmol. 2021;21:169. doi: 10.1186/s12886-021-01923-5.
- 23. Chong Y, Liu X, Shi M, Huang J, Yu N, Long X. Three-dimensional facial scanner in the hands of patients: Validation of a novel application on iPad/iPhone for three-dimensional imaging. Ann Transl Med. 2021;9:1115. doi: 10.21037/atm-21-1620.
- 24. Zhang T, Chen R, Ye H, Xiao W, Yang H. Orbital MRI 3D reconstruction based on volume rendering in evaluating dysthyroid optic neuropathy. Curr Eye Res. 2022;47:1179–85. doi: 10.1080/02713683.2022.2066697.
- 25. Consorti G, Betti E, Catarzi L. Customized orbital implant versus 3D preformed titanium mesh for orbital fracture repair: A retrospective comparative analysis of orbital reconstruction accuracy. J Craniomaxillofac Surg. 2024;52:532–7. doi: 10.1016/j.jcms.2024.02.012.
- 26. Sun Y, Huang X, Zhang Q, Lee SY, Wang Y, Jin K, et al. A fully automatic postoperative appearance prediction system for blepharoptosis surgery with image-based deep learning. Ophthalmol Sci. 2022;2:100169. doi: 10.1016/j.xops.2022.100169.
- 27. Park J, Yang H, Cho K, Park J, Kim S, Seo M, et al. The usefulness of AI-based cornea exposure rate (CER) analysis utilizing the anigma view system in evaluating ptosis surgery outcomes. J Clin Med. 2025;14:1691. doi: 10.3390/jcm14051691.
- 28. Lee B, Xu L, Oh SH, Ha Y, Kwon H, Lee KC, et al. AI-driven eyeball exposure rate (EER) analysis: A useful tool for assessing ptosis surgery effectiveness. PLoS One. 2025;20:e0319577. doi: 10.1371/journal.pone.0319577.
- 29. Yang Y, Lyu J, Wang R, Wen Q, Zhao L, Chen W, et al. A digital mask to safeguard patient privacy. Nat Med. 2022;28:1883–92. doi: 10.1038/s41591-022-01966-1.
- 30. Wang L, Dai X, Liu Z, Zhao Y, Sun Y, Mao B, et al. AI-driven eyelid tumor classification in ocular oncology using proteomic data. NPJ Precis Oncol. 2024;8:289. doi: 10.1038/s41698-024-00767-8.
- 31. Wang Y, Sun J, Liu X, Li Y, Fan X, Zhou H. Robot-assisted orbital fat decompression surgery: First in human. Transl Vis Sci Technol. 2022;11:8. doi: 10.1167/tvst.11.5.8.