Abstract
Purpose
To develop an interactive image editor, the Ophthalmic Segmentation and Analysis Software (OASIS), for automating and improving the analysis of meibography images to gain insight into the progression of meibomian gland dysfunction (MGD).
Methods
A natural history study was conducted, collecting 2,439 meibography images from 325 patients. Clinicians used OASIS for image analysis, which involved both manual and deep-learning assisted processes. In the manual process, clinicians annotated three distinct masks per image: the eyelid, glands, and gland loss. In the assisted process, OASIS incorporated deep-learning models to infer gland masks, reducing the time required for gland-by-gland annotation. The software's interface provided additional tools for image enhancement and calculation of currently used clinical metrics including the Pult scale.
Results
OASIS enabled clinicians to quantitatively analyze MGD in approximately 3 minutes, representing an 85% reduction in time compared to traditional manual analysis methods. The software accurately calculated Pult meiboscale grades with substantial agreement between the clinician and software (kappa = 0.79), demonstrating a high level of consistency.
Conclusions
OASIS significantly streamlines the analysis of meibography images and allows for a more objective and efficient evaluation of MGD. By implementing deep learning models for gland inference and providing a suite of custom annotation tools, OASIS may reduce the time burden on clinicians while maintaining accuracy.
Translational Relevance
OASIS paves the way for developing quantitative biomarkers for MGD and may have applications in both clinical practices and research. OASIS also further demonstrates the potential for AI-driven tools in improving ophthalmic image analysis.
Keywords: meibomian gland dysfunction, software, deep learning, gland segmentation
Introduction
Meibomian glands (MGs) reside within the tarsal plate of the eyelid and can be visualized on the tarsal conjunctiva or inner eyelid surface.1 These glands synthesize and excrete meibum, a substance containing polar and non-polar lipids that coat the ocular surface and prevent evaporation of the aqueous layer of the tear film.2,3 Atrophy of the MGs and decreased meibum production lead to tear film evaporation, which reduces eye surface lubrication and increases the risk of dry eye disease. This condition currently affects between 16 and 49 million people in the United States, or approximately 5% to 15% of the population.4 Meibomian gland dysfunction (MGD) is present in over 85% of patients with dry eye disease and is identified as the primary cause of the disease.5
Clinicians use non-contact and non-invasive infrared or transillumination imaging to evaluate the MGs and diagnose, treat, and manage MGD. The key feature analyzed in meibography images is the gland loss area, as shortening and atrophy of MGs are an observable pathology of MGD.6 The percentage of gland loss area relative to the tarsal plate of the eyelid is believed to correspond to the severity of MGD. However, manually determining the area of gland loss relative to the tarsal plate is subjective, and even semi-automated approaches leave room for intra- and intergrader variability.7 Current assessments involve a clinician estimating the gland loss percentage by viewing meibography images. Alternatively, software such as ImageJ (National Institutes of Health, Bethesda, MD) can be used to outline the loss area and measure the percentage. This process is time consuming and varies with the quality and focus of the meibography images.
Over the years, various meiboscores or grading scales have been developed to categorize the severity of MGD in addition to gland loss percentage. The Pult scale, for example, converts gland loss percentage as determined with infrared meibography into five distinct categories, ranging from 0 (no gland loss) to 4 (over 75% gland loss).7 Other subjective metrics, such as the LEO scale by O'Dell et al.,8 have been used to quantify the disjointedness or fragmentation of individual glands in meibography images. In the DREAM study, Daniel et al.9 defined various gland morphologies, including distortion, tortuosity, shortening, thickening, and thinning. However, these grades are still subjective and generalize wide ranges of severity, making it difficult to identify subtle changes in gland atrophy between sequential patient visits.
Several papers have reported on applying automated and objective methods to determine gland loss and morphology. Approaches have included using various filters, thresholding, image processing techniques, and deep learning techniques for gland segmentation, meiboscore classification, and image enhancement tasks.10–17 One recent clinical application was the development of a “gland absence detection” model that was incorporated into the LipiScan Dynamic Meibomian Imager by Johnson & Johnson (New Brunswick, NJ) for accurate real-time analysis.17
Despite these advancements, existing methods often focus only on upper eyelids, lack comprehensive editing features after automatic inference, and rely on ideal high-quality images with refined regions of interest. To address these gaps, we propose Ophthalmic Segmentation and Analysis Software (OASIS), an interactive application featuring deep learning–based eyelid and gland segmentation models, manual editing tools, and quantitative metric generation. In OASIS, three anatomically derived masks (eyelid, gland, and gland loss) are annotated manually or with model assistance, followed by quantitative metric generation to evaluate MG area. OASIS provides an efficient, interactive platform for objective and reproducible evaluation of meibography images within a user-friendly application. This approach aims to overcome the limitations of existing methods, thus enhancing the accuracy and reliability of MGD diagnosis and management.
Methods
OASIS offers a robust suite of features, including automatic image segmentation, data handling, image processing, image annotation, and metric generation. This section includes screenshots that demonstrate the user interface of the editor, highlighting visualization and analysis tools.
Meibography Images
This study collected and evaluated 2439 meibography images from 325 patients. The images were taken using Johnson & Johnson LipiView II imagers from 11 clinical sites across the United States. Each 8-bit grayscale image was either 640 × 1280 or 832 × 1664 pixels, with a resolution of 25 µm per pixel. Each patient was imaged at an initial Visit 1 time point and again 90 days later at the Visit 2 time point. At each visit, an infrared image was taken of the upper and lower inner eyelids of each eye, yielding four images per patient.
Staff were trained to reduce potential image confounders, including lid eversion, folds in the eyelid, reflections from light sources, and improper focus. Transillumination and infrared images were available for analysis of the lower eyelid, and only infrared images were available for the upper eyelid. Because both upper and lower eyelids were analyzed as part of the development of OASIS, only infrared images were used in this study. The images vary in quality and Pult-scale severity. Although images were categorized into two race levels (Asian and non-Asian), this data point was not utilized in any portion of the study. This study was performed in accordance with ethical standards and the tenets of the Declaration of Helsinki; informed consent was obtained from participants, and the study protocol was approved by an institutional review board.
Image Processing
Two significant computations occur when an image is opened in OASIS: preset filter application and super-pixel creation. The preset filter automatically enhances the MGs in the raw infrared meibography image, as shown in Figure 1a. Using contrast-limited adaptive histogram equalization (CLAHE)18 with a kernel size of 20 × 20 pixels, local neighborhoods of the image are enhanced regardless of whether they are in light or dark areas relative to the whole image. For super-pixel generation, OASIS uses the simple linear iterative clustering (SLIC) algorithm described by Achanta et al.19 These super-pixel regions, or clusters of similar pixels as compared to their neighbors, conform to light/dark boundaries caused by the physiology of the eyelid and MGs. Super-pixels can be overlaid on the raw image in OASIS, as seen in Figure 1c, allowing clinicians to quickly “flood-fill” gland and eyelid pixel clusters during manual annotation.
Figure 1.
(a) An example infrared meibography image (original) of an upper inner eyelid that had undergone two image processing algorithms implemented in OASIS. (b) A preset CLAHE filter for gland enhancement. (c) A SLIC algorithm for super-pixel generation.
Manual Annotation of Anatomical Masks
This section describes the guidelines for annotating the three anatomically derived masks (eyelid, glands, gland loss) from which OASIS calculates clinically relevant metrics, including the Pult meiboscale. First, the eyelid mask is annotated with size-adjustable pen and eraser tools. Horizontally, the temporal edge aligns with the last expected anatomical location of the most temporal gland near the lateral canthus, and the nasal boundary includes the last expected location of the most nasal gland. Vertically, one mask edge is delineated by the mucocutaneous junction adjacent to the meibomian gland orifices. The other edge of the mask is marked at the eyelid eversion fold. After the outer boundary of the eyelid mask is delineated, the interior is filled. An example of the eyelid mask with a marked punctum and canthus is shown in Figure 2a.
Figure 2.
Three masks manually annotated on an enhanced example meibography image of the upper inner eyelid. (a) Annotated eyelid mask with a marked punctum (red) and canthus (green). (b) Gland mask where each gland is annotated in different RGB colors in the user interface. (c) Gland loss mask delineating regions of gland atrophy, including ghost gland pixels and areas between gland tops and eyelid edge.
For the gland mask, each gland is annotated with a different RGB color in the user interface as shown in Figure 2b. In the database, each RGB color is converted to a unique grayscale value from 1 to N, where N is the number of glands. Gland annotations start proximal to the gland orifice and extend to the most distal visible end of the gland. The eyelid and gland mask are then used as inputs to a back-end image processing algorithm to automatically generate the gland loss mask before manual refinement. The loss mask consists of large gaps between glands in the eyelid that are thick enough to represent missing glands. It also includes “ghost gland” pixels, or areas where MGs have disappeared but still reflect infrared light at a lower intensity than active glands. Finally, any region between the tops of glands and the eyelid edge is included in the gland loss region, as seen in Figure 2c.
Deep Learning Gland Segmentation
This section describes how manually annotated gland masks and corresponding images were used to train a nnU-Net deep learning model described by Isensee et al.20 for binary gland segmentation. The objective of the model was to classify every pixel in an unseen meibography image as either gland or no-gland, thereby producing a complete gland segmentation mask. To achieve this, each manually edited gland mask was converted to a simple binary format, where every pixel was labeled as either “gland” (1) or “no gland” (0). Both the images and their binary masks were then cropped to the bounding box of the eyelid mask before being used to train the deep learning model.
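The binarization and cropping steps described above can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def binarize_and_crop(gland_labels: np.ndarray, eyelid: np.ndarray):
    """Convert a per-gland label mask (grayscale values 1..N) to binary
    gland/no-gland, then crop to the eyelid bounding box (sketch)."""
    binary = (gland_labels > 0).astype(np.uint8)  # 1 = gland, 0 = no gland
    ys, xs = np.nonzero(eyelid)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    return binary[y0:y1, x0:x1], (y0, y1, x0, x1)
```

The returned bounding box is retained so inference outputs can later be placed back into full-size images.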
A single clinician used OASIS to manually annotate 668 images of varying image quality and disease severity following the process described above. Images deemed unanalyzable due to poor lid eversion, poor image quality, out-of-focus imaging, or other confounding variables were removed from the training cohort; the analyzability of these images was agreed upon by three clinical experts. The remaining 499 images and their corresponding gland segmentations were used to train the model. Eighty percent of the images (399) were placed in the training set, and the remaining 100 were reserved for testing. The training set was used for fivefold cross-validation of a nnU-Net model. During training, standard nnU-Net data augmentation steps were applied, including random flipping, brightness/contrast adjustments, and elastic deformations, to improve the robustness of the model. After cross-validation, the best model, determined by the highest Dice score on the validation set of each fold, was applied to the held-out test set of 100 images.
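Model selection relied on the Dice score; a standard formulation for binary masks is:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ^ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * (pred & truth).sum() / denom
```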
Finally, a nnU-Net model trained on all 499 manually annotated meibography images was applied to the remaining 1709 meibography images. Of these, 130 images were deemed unanalyzable due to confounding factors and were removed from the test set. The remaining 1579 images were analyzed with only an eyelid mask manually annotated for each image. The final model was used for automatic gland segmentation after cropping these images to the annotated eyelid region. The binarized gland segmentation outputs were then zero-padded appropriately to match the dimensions of their corresponding original images.
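The zero-padding step can be sketched by inverting the earlier crop, where `bbox` is the eyelid bounding box recorded at crop time (names are illustrative):

```python
import numpy as np

def pad_to_original(pred_crop: np.ndarray, bbox, full_shape):
    """Place a cropped prediction back into a zero array matching the
    original image dimensions (sketch of the zero-padding step)."""
    y0, y1, x0, x1 = bbox
    full = np.zeros(full_shape, dtype=pred_crop.dtype)
    full[y0:y1, x0:x1] = pred_crop
    return full
```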
Metrics
OASIS uses the eyelid, gland, and gland loss masks to automatically compute metrics critical for assessing dry eye. These metrics include gland count, eyelid area, gland loss area, gland area, gland loss percentage, gland area percentage, Pult meiboscale, and job duration. Although no official or standard definition has been reported for many of these metrics, aside from the Pult meiboscale, below we provide the definitions used in this study while referencing previous delineations by Pult and Riede-Pult21:
• Gland count—The number of uniquely annotated glands in each image
• Eyelid area—The number of pixels within the annotated eyelid region
• Gland loss area—The area from the distal borders of the gland area (opposite the eyelid margin) to the eyelid eversion edge, including regions of ghost glands and complete gland loss
• Percent gland loss—The ratio of gland loss area to the eyelid area
• Percent gland area—The ratio of gland area to eyelid area
• Pult meiboscale—The grade assigned to the image, derived by converting the percent gland loss to the Pult meiboscale7 using the Table.
These metrics are generated and viewable in OASIS after clicking “Calculate Metrics,” providing an immediate assessment of MG health.
Table.
Conversion Table Used by OASIS to Derive Pult Meiboscale Grades From Percent Gland Loss
| Percent Gland Loss | Pult Grade |
|---|---|
| 0%–5% | 0 |
| 6%–25% | 1 |
| 26%–50% | 2 |
| 51%–75% | 3 |
| 76%–100% | 4 |
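The definitions above and the conversion in the Table can be expressed directly. The function names and dictionary layout below are illustrative, not the OASIS API; job duration is omitted since it is a timer rather than a mask-derived quantity.

```python
import numpy as np

def pult_grade(percent_loss: float) -> int:
    """Map percent gland loss to a Pult meiboscale grade per the Table."""
    if percent_loss <= 5:
        return 0
    if percent_loss <= 25:
        return 1
    if percent_loss <= 50:
        return 2
    if percent_loss <= 75:
        return 3
    return 4

def compute_metrics(eyelid, gland_labels, loss):
    """Compute the mask-derived metrics defined above (sketch).
    gland_labels holds unique grayscale values 1..N per gland."""
    eyelid_area = int(eyelid.sum())
    gland_area = int((gland_labels > 0).sum())
    loss_area = int(loss.sum())
    pct_loss = 100.0 * loss_area / eyelid_area
    return {
        "gland_count": int(gland_labels.max()),
        "eyelid_area": eyelid_area,
        "gland_area": gland_area,
        "gland_loss_area": loss_area,
        "percent_gland_loss": pct_loss,
        "percent_gland_area": 100.0 * gland_area / eyelid_area,
        "pult_grade": pult_grade(pct_loss),
    }
```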
Editor and Analysis Pipeline
Upon opening OASIS, users are presented with the Selector window shown in Figure 3. The Selector displays a list of patient folders identified by unique site and patient codes, followed by their imaging time points. After images are loaded into the database via the Selector, they are edited by an expert clinician in the Editor window, as seen in Figure 4. In the Editor, users can toggle the preset filter on/off, generate super-pixels, and adjust the image brightness, contrast, and sharpness.
Figure 3.
Selector window interface in OASIS displaying an example patient folder identified by site number, visit date, and unique patient ID with relevant images listed below with details regarding eyelid type, image type, and grading status.
Figure 4.
Editor window and tool buttons screenshots. (a) Full view of Editor window in OASIS where clinicians annotate meibography images. (b) Cropped view of header tools above the image view, allowing quick selection of pen, eraser, super-pixel generation, super-pixel toggle, zoom in, and zoom out buttons. (c) Cropped view of Menu 1, including buttons to mark punctum/canthus and toggle filters/layers, as well as a list of all layers currently present in image. (d) Cropped view of Menu 2, including size-adjustable pen/eraser tools, preset enhancement adjustments, and calculated metrics.
First, the clinician demarcates the medial and distal boundaries of the image annotation area by marking the punctum and canthus with red and green dots, respectively. Then, the clinician annotates the eyelid and gland masks. For images with automatically generated gland segmentations from the deep learning model, the inferred gland mask appears as a single color. The clinician separates any merged neighboring glands in the segmentation with the eraser before OASIS automatically separates the mask into individual glands with unique colors. Then, OASIS generates the gland loss mask, which is manually refined if necessary.
The clinician then records their subjective Pult grade and clicks “Calculate Metrics” to generate and display the objective Pult grade and other metrics determined by OASIS. Before submitting the edits and final analysis, a comment box appears for the clinician to leave any notes, and the masks and metrics are saved to the OASIS database. From the Selector window, data for all images are exported to a .csv file with patient ID, metadata, and metrics.
Developing OASIS
In addition to the standard library of Python, the key libraries used in developing OASIS included PySide2, pyodbc, numpy, opencv-python-headless (cv2), and scikit-image. Microsoft SQL Server Management Studio 18 or equivalent is also necessary for initializing the OASIS image database. By using an Open Database Connectivity (ODBC)–compatible database, OASIS can be deployed in a multi-user environment where multiple clinicians share a central repository of images. This setup allows each user to work on their own set of images while maintaining consistent data storage and retrieval across all instances of OASIS. The OASIS graphical user interface was developed using PySide2, which is preferred over PyQt5 due to Lesser General Public License (LGPL) licensing, which permits software distribution.
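A minimal sketch of how such an ODBC connection string might be assembled before being passed to `pyodbc.connect`; the driver name, server, and authentication mode here are hypothetical examples, not the OASIS configuration:

```python
def oasis_connection_string(server: str, database: str) -> str:
    """Assemble an ODBC connection string for a shared SQL Server
    database (hypothetical settings; the driver name depends on the
    local ODBC installation)."""
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server};DATABASE={database};"
        "Trusted_Connection=yes;"
    )
```

Because every OASIS instance connects through the same string, multiple clinicians read and write masks against one central repository.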
Results
OASIS enables accurate and efficient semiautomatic annotation of meibography images, starting with the delineation of eyelid boundaries, automatic segmentation of individual glands, measurement of gland loss area, and calculation of clinically relevant metrics. Conducted manually, this process took approximately 15 to 20 minutes for a single trained clinician. OASIS shortened this to approximately 3 minutes for the same clinician, an 85% reduction in time. During the initial training and testing experiment, the Dice coefficient of the 100 held-out test images was 0.84 ± 0.07. Figure 5 showcases the binary gland segmentation generated by the nnU-Net deep learning model compared to the manual annotations generated using OASIS. Automatically generated gland segmentations of the larger held-out test set of 1579 images were highly accurate, with an average Dice score of 0.989. Figure 6 demonstrates the edits made to correct an automatically generated binary gland segmentation. Here, the trained clinician erased pixels of merged glands or corrected overlapping glands in the inferred segmentation mask, separating the glands into individual segments. Most edits were needed for images with poor quality, reduced lid eversion, or high MGD severity. On average, the clinician reported needing 5 to 15 eraser strokes per image.
Figure 5.
Manual and automatic gland segmentations. Four example raw images with corresponding manual and automatic gland segmentations. Manual annotations identify unique glands. Automatic segmentations identify pixels associated with any gland generating a binary mask.
Figure 6.
Manual edits of automatic segmentations. Example raw images with corresponding binary gland segmentation directly from the nnU-Net output and the final binary segmentation after an optometrist manually edited the nnU-Net outputs.
There was good agreement between the reader-provided and automatically determined Pult meiboscore calculated by dividing the gland loss region by the eyelid area, as shown in the confusion matrix of Figure 7. Most discrepancies occurred for images with a gland loss percentage within 5% of the bounds of a meiboscore. The Cohen's kappa was 0.79, indicating substantial agreement between manual and automated methods. Figure 8 shows four composites of the three final masks (eyelid area, glands, and gland loss area) overlaid on corresponding raw meibography images. The eyelid masks were manually annotated, and the gland segmentations and the gland loss masks were automatically generated before being manually edited by an expert optometrist. This figure shows example images with Pult meiboscores 0 to 4 as determined by the software. OASIS and its integrated deep learning model accurately enabled the annotation of clinical meibography images with varying degrees of gland loss severity.
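The agreement statistic reported here is Cohen's kappa; a standard unweighted formulation over two raters' categorical grades (the study does not state whether a weighted variant was used) is:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b) -> float:
    """Unweighted Cohen's kappa between two sets of categorical grades:
    (p_observed - p_expected) / (1 - p_expected)."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_observed = (a == b).mean()
    cats = np.union1d(a, b)
    # Chance agreement: product of each rater's marginal frequencies.
    p_expected = sum((a == c).mean() * (b == c).mean() for c in cats)
    return (p_observed - p_expected) / (1.0 - p_expected)
```

Values of 0.61 to 0.80 are conventionally read as substantial agreement, consistent with the 0.79 reported here.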
Figure 7.
Confusion matrix showing comparison between reader-provided Pult meiboscore following manual annotations of eyelid, gland, and gland loss masks and OASIS-determined Pult meiboscore following automatic segmentation and manual correction of gland and gland loss masks.
Figure 8.
Example images and composite eyelid, gland, and gland loss masks illustrating different Pult meiboscores from 0 to 4 as determined by OASIS. Eyelid masks are represented by the orange mask. Glands are uniquely labeled with colors between dark blue and light green. Gland color has no additional meaning. The gland loss region is identified by the red mask.
Discussion
In this study, we developed OASIS, an interactive user interface for ophthalmic anatomical analysis. Combined with deep learning gland and eyelid segmentation models, OASIS can accurately and efficiently quantify MGD using current clinical morphometrics. Clinicians and researchers employed OASIS to manually analyze more than 600 images and semiautomatically analyze 1709 images acquired from patients in a natural history study. Here, we elaborate on the development nuances of OASIS and the remarks clinicians provided following manual and semiautomatic segmentation.
During the development process, it became clear that the OASIS preset enhancement filters were especially useful for clinicians in quickly distinguishing glands from gland gaps. To assist the editing process, OASIS now automatically applies the preset filter to every image. However, the super-pixel overlay was less frequently used due to the large size of some super-pixels, requiring significant additional erasing after annotation. Clinicians initially faced a learning curve using OASIS during the manual annotation phase. Notably, after integrating the deep learning models, clinicians experienced a significant increase in the speed of analysis. A common error in model outputs was “merged” glands, especially if the original image featured narrow inter-gland regions between neighboring glands. Another challenge with certain segmentations involved overlapped glands, which differed from the usual parallel nature of the glands. These inaccuracies in the model segmentations can make identifying unique glands challenging, requiring manual editing of automatically generated gland segmentations. Compared to the literature, this step contributes to a slightly longer time for analysis.16 However, OASIS provides more quantitative information than multi-software pipelines reported in other studies.7
One bottleneck of the early analysis pipeline was the importation of deep learning segmentations into the attached SQL database. Because nnU-Net training and inference were conducted outside OASIS, the predictions were manually imported into OASIS for follow-up editing. Integrating nnU-Net, a 300-MB trained model, proved challenging due to data preparation steps and software compatibility. Thus, the integrated network was modified to Attention U-Net22 to mitigate the need for re-importing segmentation results. Furthermore, two simple eyelid segmentation networks (upper and lower eyelid models) were also developed and integrated into OASIS. With the easy-to-use Attention U-Net model, clinicians can click “Predict” in the OASIS user interface to generate eyelid and gland segmentations automatically. After refinement, the gland loss region can be automatically generated, and metrics can be compiled.
Compared to other software such as ImageJ,7 Fiji,16 Adobe Photoshop,15 and the Mediworks Firefly23–25 annotation options, OASIS provides a more continuous analysis pipeline and calculates morphometrics automatically within the software. Unlike previously mentioned software, OASIS enables the automatic generation of gland segmentations with easy-to-use editing tools for refinement. Critically, OASIS packages all the tools required for rapid, objective, and reproducible analysis of meibography images into one user-friendly application.
Similar in the rapidity of analysis, Osae et al.17 utilized a deep learning gland absence detection (GAD) model to grade meibography images instantly with a Johnson & Johnson LipiScan imager. Although this model is reliable and accurate, it exists solely in the Johnson & Johnson LipiScan ecosystem and is not packaged with software capable of seamless follow-up editing. Another comparable software package is JENVIS PRO, packaged with the Keratograph 5M (OCULUS, Wetzlar, Germany), which analyzes the ocular surface, tear film, and upper/lower eyelids. The JENVIS PRO software module for the Keratograph 5M uses the Ocular Surface Disease Index (OSDI) questionnaire to correlate a patient's subjective rating of their symptoms with metrics derived from a meibography image.26 However, it lacks segmentation steps for precise measurement of the eyelid, gland area and, thus, quantification of gland loss.
Although individual gland annotation has already been explored in prior research,27–30 OASIS integrates binary gland segmentation functionality into an efficient, user-friendly platform that may expedite the process. In the future, OASIS may be used to analyze meibomian gland morphometrics, including the 12 additional meibographical signs of MGD besides gland shortening identified in the DREAM study (gland tortuosity, distortion, hooked, drop-out, thickening, thinning, overlapping, ghosting, tadpoling, abnormal gap formation, fluffy areas, and no extension to the lid margin).9 It is important to point out that some of these morphologies, such as fluffy areas and some overlapping glands, allow for the determination of overall gland dropout and quantification of gland loss but not individual segmentation of glands.
Some limitations of OASIS must also be considered. Although we developed deep learning models for gland and eyelid segmentations, this study only evaluated the performance of the gland model and focused solely on binary segmentation. Future investigations could benefit from evaluating multiple models for both glands and eyelids with varied parameters and architectures. Specifically, implementing multi-instance learning may handle complex meibography images with higher accuracy. The analysis of over 600 manually annotated images revealed common issues such as lid eversion and blurring, which significantly impacted segmentation accuracy and required intensive manual edits. Addressing this challenge via better training of clinicians or adjustments to the model is necessary to improve usability across varied image qualities and clinical scenarios. On the software side, the Python back-end of OASIS, reliance on a SQL database, and recent model integrations have contributed to software bloat. This has led to measurably longer times for opening an image, inferring masks with the model, and the overall editing process. Although the need for critical image processing libraries prevents migration from Python, refactoring OASIS to use a traditional file system for images and offloading models to the cloud would significantly improve its operational speed.
OASIS demonstrates significant potential for enhancing the accuracy and efficiency of MGD diagnosis and management. It provides a user-friendly platform for objective and reproducible analysis of meibography images, paving the way for many future directions. The individual gland segmentations could represent ground truths for multi-instance deep learning models. This implementation could reduce the merging of neighboring glands, thereby minimizing the required manual edits. We are continuing to experiment with enhanced deep learning segmentation models and compare performances with the current OASIS model and other existing models. Furthermore, OASIS allows for collecting morphology data for each individual gland. This capability may enable automated measurement of subjective morphologies described by the DREAM study, including gland tortuosity, distortion, and thinning.8,9 Quantifiable measurements of these morphologies could serve as biomarkers for therapeutic research on MGD and aid in detecting subtle changes in MGD progression over time. Finally, looking toward potential clinical implementation, OASIS could serve as a stand-alone software for clinical trial image analyses and an integrated module within meibography imaging instrumentation. OASIS as a segmentation tool could be adapted for other ophthalmic imaging purposes requiring annotation, segmentation, and metric calculations, with built-in support for any two-dimensional U-Net segmentation model.
Conclusions
OASIS is a multifunctional software package featuring meibography image annotation, deep learning segmentation, and quantitative metric generation. OASIS allowed for manual annotations of 668 clinical meibography images, which were utilized to train a deep learning nnU-Net network to automatically segment an additional 1709 clinical meibography images. OASIS achieved this task by enabling rapid editing of automatically generated gland segmentations. For all images, OASIS computed gland area and gland loss area metrics prior to converting the latter to a Pult meiboscore. We demonstrated comparable Pult meiboscore calculations to manually determined meiboscores. Future work can utilize this software and analysis pipeline to automatically assess thousands of clinical meibography images nearly instantly, providing objective evaluations of MG health. We plan to improve OASIS by implementing quantitative metrics for gland morphologies that may be used as biomarkers for future MGD therapeutics.
Acknowledgments
Disclosure: N. Joseph, None; V. Shivade, None; J. Chen, None; I. Marshall, None; E. Avery, None; D. Jennings, None; H. Menegay, None; R. Ramamirtham, None; D. Wilson, None; B.A. Benetz, None; T. Stokkermans, None
References
- 1. Butovich IA, Millar TJ, Ham BM. Understanding and analyzing meibomian lipids—a review. Curr Eye Res. 2008; 33(5-6): 405–420.
- 2. Foulks GN, Bron AJ. Meibomian gland dysfunction: a clinical scheme for description, diagnosis, classification, and grading. Ocul Surf. 2003; 1(3): 107–126.
- 3. McCulley JP, Shine WE. Meibomian gland function and the tear lipid layer. Ocul Surf. 2003; 1(3): 97–106.
- 4. Farrand KF, Fridman M, Stillman IÖ, Schaumberg DA. Prevalence of diagnosed dry eye disease in the United States among adults aged 18 years and older. Am J Ophthalmol. 2017; 182: 90–98.
- 5. Sheppard JD, Nichols KK. Dry eye disease associated with meibomian gland dysfunction: focus on tear film characteristics and the therapeutic landscape. Ophthalmol Ther. 2023; 12(3): 1397–1418.
- 6. Knop E, Knop N, Brewitt H, et al. [Meibomian glands: part III. Dysfunction – argument for a discrete disease entity and as an important cause of dry eye]. Ophthalmologe. 2009; 106(11): 966–979.
- 7. Pult H, Riede-Pult B. Comparison of subjective grading and objective assessment in meibography. Cont Lens Anterior Eye. 2013; 36(1): 22–27.
- 8. O'Dell L, Halleran C, Schwartz S, et al. An assessment of subjective meibography image grading between observers and the impact of formal gland interpretation training on inter-observer agreement of grading scores. Invest Ophthalmol Vis Sci. 2020; 61(7): 486.
- 9. Daniel E, Maguire MG, Pistilli M, et al. Grading and baseline characteristics of meibomian glands in meibography images and their clinical associations in the Dry Eye Assessment and Management (DREAM) study. Ocul Surf. 2019; 17(3): 491–501.
- 10. Llorens-Quintana C, Rico-del-Viejo L, Syga P, Madrid-Costa D, Iskander DR. A novel automated approach for infrared-based assessment of meibomian gland morphology. Transl Vis Sci Technol. 2019; 8(4): 17.
- 11. Xiao P, Luo Z, Deng Y, Wang G, Yuan J. An automated and multiparametric algorithm for objective analysis of meibography images. Quant Imaging Med Surg. 2021; 11(4): 1586–1599.
- 12. Wang J, Yeh TN, Chakraborty R, Yu SX, Lin MC. A deep learning approach for meibomian gland atrophy evaluation in meibography images. Transl Vis Sci Technol. 2019; 8(6): 37.
- 13. Prabhu SM, Chakiat A, Shashank S, Vunnava KP, Shetty R. Deep learning segmentation and quantification of Meibomian glands. Biomed Signal Process Control. 2020; 57: 101776.
- 14. Khan ZK, Umar AI, Shirazi SH, Rasheed A, Qadir A, Gul S. Image based analysis of meibomian gland dysfunction using conditional generative adversarial neural network. BMJ Open Ophthalmol. 2021; 6(1): e000436.
- 15. Saha RK, Chowdhury AMM, Na KS, et al. AI-based automated Meibomian gland segmentation, classification and reflection correction in infrared meibography. arXiv. 2022; doi:10.48550/arXiv.2205.15543.
- 16. Setu MAK, Horstmann J, Schmidt S, Stern ME, Steven P. Deep learning-based automatic meibomian gland segmentation and morphology assessment in infrared meibography. Sci Rep. 2021; 11(1): 7649.
- 17. Osae E. Developing an algorithm to assess meibomian gland structure from infrared meibography images using deep-learning convolution neural networks. In: Proceedings of the American Academy of Optometry 2023 Annual Meeting. Orlando, FL: American Academy of Optometry.
- 18. Zuiderveld KJ. Contrast limited adaptive histogram equalization. Graphics Gems. 1994; IV: 474–485.
- 19. Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S. SLIC Superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Mach Intell. 2012; 34(11): 2274–2282.
- 20. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021; 18(2): 203–211.
- 21. Pult H, Riede-Pult BH. Non-contact meibography: keep it simple but effective. Cont Lens Anterior Eye. 2012; 35(2): 77–80.
- 22. Oktay O, Schlemper J, Folgoc LL, et al. Attention U-Net: learning where to look for the pancreas. arXiv. 2018; doi:10.48550/arXiv.1804.03999.
- 23. Ballesteros-Sánchez A, Gargallo-Martínez B, Gutiérrez-Ortega R, Sánchez-González JM. Intraobserver repeatability assessment of the S390L Firefly WDR Slitlamp in patients with dry eye disease: objective, automated, and noninvasive measures. Eye Contact Lens. 2023; 49(7): 283–291.
- 24. Mediworks. High-quality opthalmic digital slit lamp microscope S390L. Available at: https://www.mediworks.biz/en/product/opthalmic-slit-lamp-microscope-s390l. Accessed June 30, 2024.
- 25. Eyefficient. Firefly slit lamp imaging system. Available at: https://www.eyefficient.com/diagnostic-imaging/firefly-imaging-system. Accessed June 30, 2024.
- 26. Rayner DJ. Using the OCULUS Keratograph 5M and JENVIS Pro Dry Eye Report to diagnose DRY EYE. Available at: https://en.oculus.de/uploads/media/OCULUS_Best_Practice_K5M_JENVIS_Rayner_EN_1121.pdf. Accessed August 14, 2025.
- 27. Li Y, Chiu PW, Tam V, Lee A, Lam EY. Dual-mode imaging system for early detection and monitoring of ocular surface diseases. IEEE Trans Biomed Circuits Syst. 2024; 18(4): 783–798.
- 28. Swiderska K, Blackie CA, Maldonado-Codina C, Morgan PB, Read ML, Fergie M. A deep learning approach for meibomian gland appearance evaluation. Ophthalmol Sci. 2023; 3(4): 100334.
- 29. Huang B, Fei F, Wen H, et al. Impacts of gender and age on meibomian gland in aged people using artificial intelligence. Front Cell Dev Biol. 2023; 11: 1199440.
- 30. Wang J, Li S, Yeh TN, et al. Quantifying meibomian gland morphology using artificial intelligence. Optom Vis Sci. 2021; 98(9): 1094–1103.