Sensors (Basel, Switzerland). 2021 Mar 26;21(7):2326. doi: 10.3390/s21072326

Automated Keratoconus Detection by 3D Corneal Images Reconstruction

Hanan A Hosni Mahmoud 1, Hanan Abdullah Mengash 2,*
Editors: Anastasios Doulamis, Ansar-Ul-Haque Yasar
PMCID: PMC8036293  PMID: 33810578

Abstract

This paper presents a technique for the detection of keratoconus via the construction of 3D eye images from 2D frontal and lateral eye images. Keratoconus is a disease that affects the cornea. Normal eyes have a round-shaped cornea, while patients who suffer from keratoconus have a cone-shaped cornea. Early diagnosis can decrease the risk of eyesight loss. Our aim is to create a fully automated method of keratoconus detection using digital-camera frontal and lateral eye images. The presented technique accurately determines case severity. Geometric features are extracted from the 2D images to estimate depth information used to build 3D images of the cornea. The proposed methodology is easy to implement and time-efficient. 2D images of the eyes (frontal and lateral) are used as input, and 3D images from which the curvature of the cornea can be detected are produced as output. Our method involves two main steps: feature extraction and depth calculation. Machine learning was performed on the Dataverse dataset of 3D images, acquired with the Cornea/Anterior Segment OCT SS-1000 (CASIA). Results show that the method diagnosed the four stages of keratoconus (severe, moderate, mild, and normal) with an accuracy of 97.8%, as compared to manual diagnosis by medical experts.

Keywords: keratoconus detection, 3D eye construction, cornea, depth calculation, machine learning

1. Introduction

Keratoconus is a non-inflammatory disease usually characterized by progressive corneal thinning, which causes corneal deformation. Affected patients incur some degree of distorted vision and photophobia (light sensitivity) [1,2]. Keratoconus causes cone-shaped corneal deformation, which leads to abnormalities, such as divergence of light on the retina, that might lead to loss of vision. Recently, it has been noted that an increasing number of people are being diagnosed with keratoconus [1]. Keratoconus affects 1 in 1500 individuals, according to several epidemiologic reports [1,2,3].

Given this high prevalence and the need for timely detection and treatment, rapid automated detection has become necessary.

Keratoconus is detected clinically using many signs, such as the Vogt striae and Fleischer signs. Keratoconus patients also display the Munson and scissoring reflex signs. Medical examinations are usually done in a clinical setting using slit-lamp examination techniques [1,2]. Imaging tools that implement Scheimpflug imaging and corneal topography are also commonly used. All of these examination methods are expensive and are performed only by experienced optometrists. Automated rapid detection has become necessary due to the increased number of keratoconus sufferers, as it is estimated that 1 in 1500 people in the population are affected.

Some computerized techniques have been proposed to detect keratoconus using corneal topography maps. These methods utilize machine learning, deep learning, and neural networks. Most perform keratoconus detection by using topography maps of both eyes [2,3].

Other approaches construct the 3D cornea from microscopy images [4]. In Reference [5], the authors introduced automated measurement of corneal haze in optical coherence tomography images. In addition, feature fusion techniques have been utilized for keratoconus diagnosis from 2D photographed images [6].

In this paper, we propose an automated detection of Keratoconus by constructing 3-D images from the frontal and lateral 2-D images of the eye.

The rest of this paper is organized as follows. The background and literature survey are presented in Section 2. Section 3 presents the proposed feature-extraction algorithm, depth calculation, and 3D corneal image construction. It also delineates the proposed keratoconus detection method. Results are discussed in Section 4 and compared with existing methods. Finally, the paper concludes in Section 5.

2. Background and Literature Survey

An automated keratoconus detection method can address the need for timely medical attention and treatment. Digital image processing can provide a solution for automated keratoconus detection, especially in rural areas. In Reference [6], Daud et al. devised a keratoconus detection method via digital image analysis and processing. They tested their method on 140 cases captured by a smartphone, and achieved an average accuracy of 96.03%. Ali et al., in Reference [7], developed a technique that utilizes specific details from topographic maps. They used image analysis and processing to detect keratoconus in a computerized manner, identifying 12 features from the topographic maps. They collected data from 40 patients, and attained an accuracy of 90%. In Reference [8], the authors proposed a depth optimization method for 2D-to-3D conversion based on RGB-D images. Such depth measurements can aid a great deal in keratoconus correction procedures. In Reference [9], the authors presented an affordable method of keratoconus detection using a smartphone. Experimental results of their proposed method show detection of severe and moderate keratoconus cases with accuracies of 93% and 67%, respectively. The authors in Reference [10] present a depth optimization technique for 2D-to-3D reconstruction based on RGB images that can be used in ocular disease detection.

Table 1 compares automated keratoconus detection methods according to the methodology used, the dataset, and the accuracy attained in experimental results.

Table 1. Automated keratoconus detection methods.

| Reference | Achievement | Method | Dataset | Results |
| --- | --- | --- | --- | --- |
| [5] Dhaini, A.R., et al., 2018 | Corneal haze and demarcation line measurement | Image analysis and machine learning | 140 keratoconus eyes from actual patients | Mean demarcation line of 295.9 ± 59.8 microns, versus 314.5 ± 48.4 microns measured by medical personnel |
| [6] Daud, M.M., et al., 2020 | Keratoconus detection method | Digital image analysis and processing | 140 cases captured by smartphone | Accuracy of 96.03% |
| [7] Ali, A.H., et al., 2018 | Keratoconus supervised learning and detection | Support vector machine using image processing techniques | 240 cases obtained from Al-Amal Eye Clinic in Baghdad using a Pentacam | Accuracy of 90% |
| [9] Askarian, B., et al., 2018 | Diagnostic method for keratoconus detection | Smartphone-based imaging | 175 images of keratoconus cases | Accuracies of 93% and 67% in severe and moderate cases, respectively |
| The proposed method | Automated keratoconus detection | Depth calculation from 3D corneal images and machine learning | 268 corneal images of keratoconus and normal cases | Accuracy of 97.8% |

3D eye-image construction is of great importance in the field of eye disease detection. 3D imagery has had a major impact in fields such as disease detection and machine vision. 3D images incorporate depth information that can aid in different applications. For example, depth information can help diagnose many diseases that require depth calculations, such as brain tumors and corneal diseases [11,12]. However, 3D images are not always available. Thus, 3D image construction is usually utilized for such applications.

Many researchers have introduced techniques to reproduce 3D images from 2D images.

The authors in Reference [11] presented 3D edge reconstruction from 2D images utilizing a correlation algorithm with very high accuracy. The authors in Reference [12] recently proposed a novel 3D imaging approach for reconstructing 3D surfaces of a moving particle by recording its rolling motion through a digital microscope. The project discussed in Reference [13] constructed 3D images from 2D X-ray images of patients’ bones. They generated 3D images utilizing a machine learning method that is independent of the X-ray imaging setup. The authors in Reference [14] achieved depth-finding in real time utilizing statistical-based learning techniques, concluding that accuracy grew with an increased number of images in the training set. In Reference [15], Scarpa et al. obtained a 3D cornea reconstruction utilizing image sequences from a confocal microscope. They proved that it is possible to examine the cornea as a 3D complex and obtain imaging along the x, y, and z-axes. Table 2 describes some 3D image construction techniques that can aid in corneal disease detection.

Table 2. 3D image construction.

| Reference | Achievement | Method | Dataset | Results |
| --- | --- | --- | --- | --- |
| [11] Patoommakesorn, K., et al., 2019 | Reconstruction of 3D edges from 2D images | Correlation-based algorithm | Images of a real object taken by a camera | Attains 100% accuracy |
| [12] Wang, S., et al., 2020 | 3D particle reconstruction | Multi-view 2D motion and shape from shading | 3D camera images of moving wear particles | Performance degrades when occlusion occurs between particles |
| [13] Dixit, S., et al., 2019 | 3D image building from 2D X-ray images | Conversion machine learning | Thirty samples of patient data (femur bones) | Moderate accuracy with less processing time |
| [14] Jeni, A.L., et al., 2017 | Depth finding in real time | Statistical-based learning techniques | 512 images of vertices | Accuracy increases with the number of images in the training set |
| [15] Scarpa, F., et al., 2007 | 3D corneal image reconstruction | Fusion of confocal microscopy images | Images covering the whole corneal thickness in normal subjects, taken with the ConfoScan4 confocal microscope | Failed in 3% of images |

The contributions of our proposed paper are explained below.

We emphasize the 3D construction of corneal images to diagnose keratoconus and its stages by computing the curvature inclination angle of the cornea. Images of the structure of the eye, both normal and affected by keratoconus, are shown in Figure 1 and Figure 2: lateral views of a normal eye and an eye with keratoconus. It is apparent from Figure 2 that the affected cornea forms a cone shape (images used with permission [16,17]).

Figure 1. Normal eye.

Figure 2. Keratoconus eye.

The presented method proposes the utilization of only two images of the eye, frontal and lateral, to efficiently generate a 3D image of the cornea. We utilized the Dataverse dataset found in Reference [16] for training. In total, 450 corneal images (250 keratoconus and 200 normal), including Ectasia Screening Index (ESI) keratoconus indices, curvature angle, and elevation, are provided by the dataset.

The second contribution is the utilization of scale-invariant feature transform (SIFT) features, which are invariant to the scale and orientation of the images. They are also invariant to illumination differences and robust to noise. SIFT makes it feasible to match inputs against other images with local features, which aids the training phase to a large extent. It should be noted that a multivariate Gaussian distribution is utilized to compute depth information.

The third contribution is that the proposed technique can achieve rotation of the cornea at any side view angle. This feature can help in the measurement of the stage of keratoconus and can compute the angle of curvature efficiently.

The block diagram of the structure of the proposed technique is presented in Figure 3.

Figure 3. Block diagram of the proposed method.

In the first step, two 2D images of the eye, a frontal view and a side view, are captured and used as input. Then, SIFT feature extraction is applied to the input images to identify image landmarks from local geometric deformations in multi-oriented and multi-scaled planes. Depth information is then extracted from the lateral view image of the eye to determine the points of interest. A 3D reconstruction of the cornea is built from the two 2D segmented images captured in orthogonal directions (frontal and lateral). Finally, a training stage uses the proposed convolutional neural network (CNN), which takes the 3D images as input and produces the keratoconus detection result. Detailed descriptions of the steps in the block diagram are given in the following subsections.

3. The Proposed Method

In this section, we describe the proposed method in detail. The complete technique starts with cornea detection and feature extraction from the 2D images, which is followed by a depth-calculation algorithm leading to the reconstruction of the 3D image.

3.1. Cornea Detection and Features Extraction

In the first step, two 2D images of each eye are captured as input: a frontal view and one side view. The images are then scanned to detect the cornea and to locate specific landmarks in the eye region. Frontal and side view corneal images from the Harvard Dataverse database [16] were utilized for experimental purposes. The method uses the circular Hough transform to detect the iris from the frontal image [17,18].

The Hough transform can detect the circular iris shape in the eye image.

For a circle of radius r centered at (m, n), the characteristic equation is:

$$(x - m)^2 + (y - n)^2 = r^2 \quad (1)$$

This circle is described parametrically by Equations (2) and (3):

$$x = m + r\cos\theta \quad (2)$$

$$y = n + r\sin\theta \quad (3)$$

where θ is the parametric angle sweeping the circumference.

The Hough transform searches for the parameters (m, n, r) that lead to the points (xi, yi) lying on the circumference of the required circle.

The center and radius of the iris are computed from the detected iris circle. These features are then utilized in the side view image of the eye to compute landmarks on the iris area, which are then used to generate a 3D image of the eye. Input images pass through a preprocessing phase that detects the eye area to be used for further processing. Multiple 2D images can be utilized to recover 3D information by computing a pixel-wise correspondence extracted with the SIFT algorithm from the frontal and lateral views of the cornea.
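As an illustration of this step, the following sketch uses OpenCV's circular Hough transform (cv2.HoughCircles) to recover the iris circle (m, n, r) from a frontal image; the median blur and the accumulator, threshold, and radius parameters are our own assumptions, not values reported in the paper.

```python
import cv2
import numpy as np

def detect_iris(frontal_image_path):
    """Detect the iris as a circle (m, n, r) in a frontal eye image
    using the circular Hough transform (Equation (1))."""
    img = cv2.imread(frontal_image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.medianBlur(img, 5)           # suppress specular noise (assumed step)
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1.2,                                # accumulator resolution (assumed)
        minDist=img.shape[0],                  # expect a single dominant circle
        param1=100, param2=30,                 # Canny / accumulator thresholds (assumed)
        minRadius=20, maxRadius=150)           # plausible iris radii in pixels (assumed)
    if circles is None:
        return None
    m, n, r = np.round(circles[0, 0]).astype(int)
    return m, n, r                             # iris centre (m, n) and radius r
```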

3.2. Features Extraction and Depth Calculation

3.2.1. Features Extraction

Finding distinct features of the eye area is crucial for detecting the iris and its dimensions. To this end, we employ the well-known scale-invariant feature transform [17,18]. SIFT has been used for feature selection in corneal images [19]. Keratoconus detection using curvature-based topography and SIFT feature extraction are well-established methodologies [20,21,22].

Other feature extraction methods comparable to SIFT are Speeded Up Robust Features (SURF) and Oriented FAST and Rotated BRIEF (ORB) [23,24]. The computational cost of ORB is lower than that of SIFT and SURF [25]. However, the problem with ORB is that it produces a huge quantity of features, which lengthens matching and increases the overall image-matching computational cost. According to the comparison results published in Reference [26], for image-matching applications, SIFT and SURF are the best scale-invariant feature detectors under scale variations, whereas ORB is the least scale invariant. However, a variation of ORB, namely ORB(1000), is the best with respect to rotation invariance, and ORB is the best affine-invariant algorithm compared to the others. SIFT has better accuracy under rotational variations. Although ORB is very efficient in detecting a very large number of features, it requires prolonged matching time: the huge number of features increases the total image-matching computational time with respect to SIFT [24].

Local features are identified through a staged filtering technique detecting the points in scale space that are stable and identifying image landmarks from local geometric deformations in multi-oriented and multi-scaled planes. Reconstructing the 3D shape of the eye relies heavily on these features. The stages of the feature extraction method are described in Figure 4.

Figure 4. The stages of the feature extraction method.

The SIFT technique identifies locally distributed extreme points by relating pixel points in the Gaussian space with adjacent points in the domain region. The extreme points establish all feature points.

Key points of the two images are identified by finding the nearest neighboring points. It is often the case that the second-closest match is very close to the first, for example, because of noise. In such a case, the ratio between the first and second closest distances is computed and used to accept or reject the match.

This approach is effective in producing several dense features. Figure 5a presents the output of the implementation of the SIFT algorithm between frontal and side views of an eye image where key point features are matched between the two views. Figure 5b presents outputs of the SIFT algorithm in side view images of normal and affected eyes. Black circles show the positions of feature points of the eyes and iris regions. Black lines are added by the algorithm to convey depth information, which can be calculated to aid in the 3D construction of the eye.
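A minimal OpenCV sketch of this matching step, using SIFT key points and the first-to-second nearest-neighbour ratio test described above, is shown below; the ratio threshold of 0.75 is an assumed value.

```python
import cv2

def match_sift_features(frontal, lateral, ratio=0.75):
    """Match SIFT key points between the frontal and lateral eye views,
    keeping only matches that pass the nearest-neighbour ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(frontal, None)
    kp2, des2 = sift.detectAndCompute(lateral, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)    # two nearest neighbours per key point
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp1, kp2, good
```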

Figure 5. (a) Example of the output of the scale-invariant feature transform (SIFT) algorithm. (b) Dense features extraction.

3.2.2. Depth Calculation

Depth information is collected from the lateral view image of the eye, as illustrated in Figure 6.

Figure 6. Lateral view of the cornea.

The Hough transform detects the curvature of the cornea from the frontal view, which is then aligned to the side view, where points on the circumference of the cornea are detected. Tangents to the rightmost point are drawn. The highest point on the curve, p1, and the lowest points, p2 and p3, are determined. From these points, a right-angled triangle is drawn, with p1 as one vertex and p2 as another. The length of the side d is calculated; it is the depth of the cornea from its highest point.

The steps are as follows:

  1. Determine points p1, p2, and p3 using Hough transform

  2. Calculate C: the mid-point between p2 and p3

  3. Calculate d using the Pythagoras theorem
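The following sketch makes these three steps concrete under one reading of the geometry, namely that the right angle of the triangle lies at the midpoint C, so d follows from the Pythagorean theorem applied to p1, p2, and C; the function name and point encoding are ours.

```python
import numpy as np

def corneal_depth(p1, p2, p3):
    """Depth d of the cornea from its highest point p1 (Section 3.2.2).
    p2 and p3 are the lowest points; C is their midpoint (step 2)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    c = (p2 + p3) / 2.0                          # step 2: midpoint of the lowest points
    hyp = np.linalg.norm(p1 - p2)                # hypotenuse: highest to lowest point
    half_base = np.linalg.norm(c - p2)           # half of the p2-p3 base
    d = np.sqrt(max(hyp**2 - half_base**2, 0.0)) # step 3: Pythagorean theorem
    return d
```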

3.3. Reconstruction of the 3D Image

After calculating depth information, 3D data of the cornea regions are now available. A 3D reconstruction of the cornea is built from two 2D segmented images imaged in orthogonal directions (frontal and lateral). An orthogonal plane can be fused without assuming prior knowledge. The intersections of the images in the orthogonal planes can be used to recognize alignment reference points, using homogenous translational transform T by factors x, y, and z. Points in the orthogonal images in 2D can be converted into 3D points in the 3D plane using Equations (4) and (5).

The local point in the frontal image, the local point in the lateral orthogonal plane, and the corresponding global point in the 3D plane are, respectively:

$$\begin{bmatrix} x_1 \\ y_1 \\ 0 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} x_2 \\ y_2 \\ 0 \\ 1 \end{bmatrix}, \quad \text{and} \quad \begin{bmatrix} x_3 \\ y_3 \\ z_3 \\ 1 \end{bmatrix}, \quad \text{where} \quad \begin{bmatrix} x_3 \\ y_3 \\ z_3 \\ 1 \end{bmatrix} = T \begin{bmatrix} x_1 \\ y_1 \\ 0 \\ 1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} x_3 \\ y_3 \\ z_3 \\ 1 \end{bmatrix} = T \begin{bmatrix} x_2 \\ y_2 \\ 0 \\ 1 \end{bmatrix} \quad (4)$$

$$\begin{bmatrix} x_1 \\ y_1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & z_3 - x_1 \\ 0 & 1 & 0 & y_3 - y_1 \\ 1 & 0 & 0 & x_3 - x_2 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ 0 \\ 1 \end{bmatrix} \quad (5)$$

where x1, y1 and x2, y2 are the points in the 2D plane to be fused into the 3D point x3, y3, z3.

The problem in Equation (5) is formulated as an optimization problem to minimize the total Euclidean distance, in 3D space, between all of the corresponding points.
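A simplified sketch of this fusion step is given below. It assumes the two views are perfectly orthogonal, so the frontal image supplies x and y, the lateral image supplies depth, and the shared y coordinates are averaged; this is an illustrative reading of Equations (4) and (5), not the exact transform or optimization used in the paper.

```python
import numpy as np

def fuse_orthogonal_points(frontal_pts, lateral_pts):
    """Fuse matched 2D points from the frontal plane (x1, y1) and the lateral
    orthogonal plane (x2, y2) into 3D points (x3, y3, z3)."""
    frontal_pts = np.asarray(frontal_pts, dtype=float)   # shape (N, 2): (x1, y1)
    lateral_pts = np.asarray(lateral_pts, dtype=float)   # shape (N, 2): (x2, y2)
    x3 = frontal_pts[:, 0]
    y3 = (frontal_pts[:, 1] + lateral_pts[:, 1]) / 2.0   # y is shared by both views
    z3 = lateral_pts[:, 0]                               # lateral x-axis encodes depth
    return np.stack([x3, y3, z3], axis=1)
```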

We tested our alignment procedure on data from the Dataverse dataset [16], which is available online. The data consist of 200 normal cornea cases and 250 keratoconus cases, each of which has a frontal and a lateral RGB image and a 3D image of the cornea. The testing procedure is to choose a patient case from the Dataverse dataset (normal or keratoconus) and to use the frontal and lateral images to generate a 3D reconstruction of the cornea. We established point-to-point correspondence with our proposed geometrical method. The objective of the testing was to minimize the sum of squared differences of pixels between the reconstructed 3D image and the Dataverse's stored 3D image to establish a match. The error function is shown in Equation (6).

$$\min E = \sum_{x,y,z} \left[ I_{construct}(x, y, z) - I_{stored}(x, y, z) \right]^2 \quad (6)$$

where Iconstruct (x, y, z) is the intensity level of the constructed image at point x, y, z and Istored (x, y, z) is the intensity level of the stored image at point x, y, z.
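Equation (6) is a plain sum of squared intensity differences over the volume. A direct NumPy transcription, assuming both volumes have already been resampled to the same grid, is:

```python
import numpy as np

def reconstruction_error(i_construct, i_stored):
    """Sum-of-squared-differences error of Equation (6) between the
    reconstructed 3D corneal volume and the stored Dataverse volume."""
    i_construct = np.asarray(i_construct, dtype=float)
    i_stored = np.asarray(i_stored, dtype=float)
    return float(np.sum((i_construct - i_stored) ** 2))
```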

Evaluation of the construction of 3D corneal images from two frontal and lateral images is presented in Section 4.

3.4. Keratoconus Detection Algorithm

3.4.1. Implementation of the Convolutional Neural Network (CNN)

There are few studies utilizing neural networks in keratoconus automated detection. Researchers in Reference [21] presented keratoconus detection methods using a neural network approach. Scheimpflug tomography in topographically normal patients and keratoconus cases was used as described in Reference [22].

In this section, we present our proposed CNN methodology for automated keratoconus detection. The keratoconus-detection algorithm, using 3D reconstructed corneal images, was implemented and tested utilizing a convolutional neural network. 3D RGB corneal images were the inputs to the proposed algorithm. This section presents the learning process related to the convolutional neural network.

The implemented CNN processes the 3D images as input data, utilizing weights on the neurons' inner connections. These weights are tuned continuously during the learning process to decrease the classification error.

3.4.2. The Proposed CNN for Keratoconus Classification from the 3D Constructed Cornea Images

The proposed CNN is used to extract features in an automated way. The CNN consists of four convolutional, four max pooling, and two fully connected layers. The kernel sizes are 3 × 3 × 3 in the convolutional layers and 2 × 2 × 2 in the pooling layers. The numbers of feature kernels are 96, 128, 256, and 512 in the convolutional layers, respectively. In the first convolutional layer, ninety-six 3 × 3 × 3 filters are applied to the 227 × 227 × 227 input images. The max pooling layer applies 2 × 2 × 2 filters to reduce the size of the preceding convolutional layer output. The reduced images are handled by the following convolutional layers after the second and third pooling layers are applied. The model applies the rest of the layers until finally reaching the two fully connected layers, with all neurons connected to the neurons of the previous fully connected layer. A SoftMax classifier is utilized to classify the 3D cornea images. The output of the last fully connected layer is fed to the SoftMax classifier as input. The number of samples in each training cycle is set to 32, with a learning rate of 0.01.

The architecture of the proposed CNN is demonstrated in Figure 7, and the description of the CNN layers is described in Table 3. The proposed CNN is trained in 30 epochs.

Figure 7. Architecture of the proposed convolutional neural network (CNN).

Table 3. Summary of the proposed convolutional neural network (CNN) architecture.

| Layer | Number of Kernels | Kernel Size | Output Size |
| --- | --- | --- | --- |
| Convolutional layer | 96 | 3 × 3 × 3 | 96 × 114 × 114 × 114 |
| Pooling layer | — | 2 × 2 × 2 | 96 × 113 × 113 × 113 |
| Convolutional layer | 128 | 3 × 3 × 3 | 128 × 111 × 111 × 111 |
| Pooling layer | — | 2 × 2 × 2 | 128 × 56 × 56 × 56 |
| Convolutional layer | 256 | 3 × 3 × 3 | 256 × 50 × 50 × 50 |
| Pooling layer | — | 2 × 2 × 2 | 256 × 25 × 25 × 25 |
| Convolutional layer | 512 | 3 × 3 × 3 | 512 × 12 × 12 × 12 |
| Pooling layer | — | 2 × 2 × 2 | 512 × 6 × 6 × 6 |
| FC | — | — | 1000 × 1 × 1 × 1 |
| FC | — | — | 1000 × 1 × 1 × 1 |
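The layer sequence of Table 3 can be sketched in PyTorch as follows. This is an illustrative reading rather than the exact implementation: the single-channel input, the absence of padding, and the adaptive pooling before the fully connected layers are assumptions, so the exact output sizes of Table 3 are not reproduced, and the softmax is left to the loss function, as is conventional in PyTorch.

```python
import torch
import torch.nn as nn

class Keratoconus3DCNN(nn.Module):
    """Sketch of Table 3: four 3x3x3 convolutions (96/128/256/512 kernels),
    each followed by 2x2x2 max pooling, then two fully connected layers over
    the four classes (normal / mild / moderate / severe)."""
    def __init__(self, num_classes=4):
        super().__init__()
        channels = [1, 96, 128, 256, 512]        # single-channel volume is an assumption
        blocks = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            blocks += [nn.Conv3d(c_in, c_out, kernel_size=3),
                       nn.ReLU(inplace=True),
                       nn.MaxPool3d(kernel_size=2)]
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool3d(1)      # assumed: collapse spatial dims before FC
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512, 1000), nn.ReLU(inplace=True),
            nn.Linear(1000, num_classes))        # softmax applied by the loss / at inference

    def forward(self, x):                        # x: (batch, 1, D, H, W)
        return self.classifier(self.pool(self.features(x)))
```

Training such a model with torch.optim.SGD at a learning rate of 0.01, batches of 32, and 30 epochs would match the settings quoted in the text.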

Feature extraction is performed through the convolution layer by using the 3D image of the cornea as an input and generating the image matrix as an output. The image matrix is a representation of the pixels’ relationship through machine learning techniques, which capture the characteristics of the images through filters.

In the 3D corneal images, corneal gray scale intensities are utilized. If the cornea is affected by keratoconus, the gray scale intensity levels exhibit a larger area and an elevation factor. Dark grey intensity levels specify high elevation. Light intensity levels specify a flat elevation level.

Preprocessing of the input images is performed to obtain the same resolution. Images are then divided into three sets: the training set for the CNN, the validation set, and the testing set. The complete algorithm is depicted in Figure 8.

Figure 8. The implementation of the convolutional neural network (CNN).

Preprocessing is a necessary step to normalize the input images to a unique space. All images are normalized to a common spatial space by resampling, which is followed by extraction of the ROI that encloses the cornea area. The 3D images are filtered with a 3D Gaussian filter to reduce high-frequency noise.
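A short sketch of this preprocessing, using SciPy for the resampling and the 3D Gaussian smoothing, is given below; the target grid of 227 × 227 × 227 matches the CNN input, while the smoothing sigma and the omission of the ROI-extraction step are our simplifications.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def preprocess_volume(volume, target_shape=(227, 227, 227), sigma=1.0):
    """Resample a 3D corneal image to a common spatial grid and smooth it
    with a 3D Gaussian filter to reduce high-frequency noise."""
    volume = np.asarray(volume, dtype=float)
    factors = [t / s for t, s in zip(target_shape, volume.shape)]
    resampled = zoom(volume, factors, order=1)   # trilinear resampling to the common grid
    return gaussian_filter(resampled, sigma=sigma)
```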

3.4.3. Geometrical Features Extraction

The proposed method utilizes geometric measurements to compute corneal curvature slopes and angles. Curved, steep areas are characterized by an increased slope (with respect to the vertical axis), while flat corneal areas have a much smaller slope, as shown in Figure 9.

Figure 9. Corneal curvature slopes and angles computation.

After classifying the image as keratoconus-affected, we use the feature extraction algorithm to identify the stage of keratoconus.

The slopes are normalized according to the relative scale of the cornea-enclosed rectangle and give a perspective of the corneal curvature. The method performs classification of the 3D corneal images, according to the slope of the curvature in three dimensions. This enables the identification of keratoconus stages.

There are three stages that characterize keratoconus. These stages are detected through geometrical metrics of the outer shape of the affected cornea, or by measuring its thickness. In our research, we used the geometry of the outer shape of the cornea by measuring the steepness of the greatest corneal curvature from the reconstructed 3D corneal images.

The angle of curvature = 180° − θ, where θ is calculated from the three displacements d1, d2, and d3, as in Equation (7).

$$\theta = \cos^{-1}\left( \frac{d_2^2 + d_3^2 - d_1^2}{2\, d_2\, d_3} \right) \quad (7)$$

The steepness of the greatest curvature is measured as angles of the slope of straight lines drawn adjacent to the cornea. If the degree of steepness is less than 45°, the instance is diagnosed as a mild case of keratoconus. From 45° to 52°, it is classified as an advanced case. If greater than 52°, it is a severe case.
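To make the grading rule concrete, the following sketch combines Equation (7) with the steepness thresholds quoted above; the function names and the degree conversion are ours.

```python
import math

def curvature_angle(d1, d2, d3):
    """Angle of curvature = 180 degrees - theta, with theta from Equation (7)
    (law of cosines over the displacements d1, d2, d3)."""
    theta = math.degrees(math.acos((d2**2 + d3**2 - d1**2) / (2.0 * d2 * d3)))
    return 180.0 - theta

def keratoconus_stage(steepness_deg):
    """Grade a keratoconus-positive case from the steepness (in degrees) of
    the greatest corneal curvature, using the thresholds in Section 3.4.3."""
    if steepness_deg < 45.0:
        return "mild"
    if steepness_deg <= 52.0:
        return "advanced"
    return "severe"
```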

4. Experimental Results

4.1. Simulation Results for Computing the Angle of Curvature of Keratoconus

In this subsection, we present simulation results for computing the angle of curvature of keratoconus by computing the correlation between the actual angle of curvature of keratoconus in patients from a physician’s diagnosis, and the angle of curvature extracted from 3D corneal images of the patients.

Figure 10 presents these simulation results.

Figure 10. The correlation between the actual angle of curvature of keratoconus in patients and the angle of curvature extracted from 3D corneal images of the patients.

Figure 11 is a presentation of the Bland-Altman plot of the actual angle of curvature of keratoconus in patients from an actual physician diagnosis versus the angle of curvature of keratoconus extracted from the 3D corneal images of the patients.

Figure 11. Bland-Altman plot.

The correlation plot in Figure 10 depicts the positive correlation between the actual angle of curvature diagnosed by a physician, and the angle of curvature measured by our algorithm. The experiment shows a strong linear relationship between the physician’s diagnosis and our predicted detection.

A strong positive correlation using Pearson’s r (Equation (8)) is indicated, as r is greater than 0 and approaches 1.

$$r = \frac{1}{n-1} \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{SD_x} \right) \left( \frac{y_i - \bar{y}}{SD_y} \right) \quad (8)$$

Using the data in Figure 10, we computed r in Equation (8) to be 0.95, which implies a strong positive correlation between the actual angle of curvature of keratoconus in patients and the angle of curvature predicted by our algorithm.
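Equation (8) is the standard sample Pearson coefficient. A direct NumPy transcription (equivalent to np.corrcoef(x, y)[0, 1]) is:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation of Equation (8) between the physician's curvature
    angles (x) and the angles predicted from the 3D images (y)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    sd_x, sd_y = np.std(x, ddof=1), np.std(y, ddof=1)   # sample standard deviations
    return np.sum((x - x.mean()) / sd_x * (y - y.mean()) / sd_y) / (n - 1)
```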

It should be noted that the Bland-Altman metric depicts the agreement between two measurements. It is usually used to compare a predicted computerized medical diagnosis (P) with the actual diagnosis by medical personnel (A). Here, the two measurements are the predicted angle of curvature and the actual angle of curvature of the keratoconus. The difference between P and A is plotted against the mean of P and A; the upper and lower dotted lines denote ±1.96 SD, the 95% limits of agreement. The Bland-Altman plot between A and P indicates that the trends of the two measurements are highly similar.

4.2. CNN Training Process Accuracy and Error

The proposed CNN has four convolutional and pooling layers, two fully connected layers, and a SoftMax layer. The proposed algorithm measures the slope of the corneal curvature geometrically and calculates the angle of curvature.

Results: We extracted the medical diagnoses made by a physician for 250 cases. The full medical evaluations are available in the Dataverse dataset.

Test validation was performed utilizing k-Fold cross validation and hold out testing.

  • K-Fold Cross-Validation:

The performance of the proposed CNN is measured using k-fold cross-validation. The data are distributed into k partitions of nearly equal size. CNN training and validation are executed in k iterations: in each iteration, one fold is employed for testing, while the remaining k−1 folds are employed for training. The model accuracy is computed as the average accuracy over all iterations. The results of the first experiments of the CNN on the 3D corneal images are depicted in Table 4.
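A sketch of this validation loop is given below. StratifiedKFold is used so that every fold contains cases from each class, as described in the next paragraph; the train_and_eval callable stands in for the CNN training and testing code and is an assumption, not the original implementation.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(images, labels, train_and_eval, k=10):
    """k-fold cross-validation: each fold is used once for testing while the
    remaining k-1 folds train the model; accuracies are averaged over folds."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(images, labels):
        acc = train_and_eval(images[train_idx], labels[train_idx],
                             images[test_idx], labels[test_idx])
        scores.append(acc)
    return float(np.mean(scores))   # model accuracy averaged over all iterations
```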

Table 4. Cross-validation for keratoconus detection.

| K-Fold Testing | Accuracy | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- |
| K = 8 | 93.30% | 92.71% | 93.17% | 92.83% |
| K = 10 | 97.80% | 96.41% | 97.23% | 97.01% |
| K = 12 | 92.45% | 91.80% | 91.90% | 90.86% |
| K = 14 | 88.98% | 88.66% | 88.72% | 88.55% |

The results show enhanced accuracy on the 3D corneal images, especially for the 10-fold case. We separated the data into 10 equal folds, each containing cases from every class (normal, severe, moderate, and mild). Experiments established that k-fold testing with k = 10 has the highest accuracy. The cross-validation results for different k are depicted in Table 4 for the 3D reconstructed corneal images.

For the 10-fold validation, the classification has an average error of approximately 2.2%, which is a very low classification error rate, especially when mild cases are included.

The confusion matrix for k = 10 is illustrated in Table 5. The database includes 58 images of mild cases and 70 images of moderate cases. It contains 40 3D corneal images of severe cases. Another 100 normal 3D corneal images are included.

Table 5. Keratoconus detection confusion matrix.

| Actual Cases | Predicted Mild | Predicted Moderate | Predicted Severe | Predicted Normal | Total | TP | FN | TN | FP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mild | 48 | 2 | 0 | 8 | 58 | 48 | 10 | — | — |
| Moderate | 2 | 66 | 2 | 0 | 70 | 70 | 0 | — | — |
| Severe | 0 | 0 | 40 | 0 | 40 | 40 | 0 | — | — |
| Normal | 2 | 2 | 0 | 96 | 100 | 96 | 4 | 96 | 4 |

Table 5 compares the actual medical diagnoses with the results of our proposed method. Our CNN demonstrates the ability to predict correct diagnoses of the four classes (severe, moderate, and mild keratoconus, as well as normal, keratoconus-free cases), as indicated in Table 5 and Table 6.

Table 6. Accuracy, sensitivity, and specificity of the proposed system.

| Metric | Value |
| --- | --- |
| Accuracy | 97.80% |
| Sensitivity | 98.45% |
| Specificity | 96.00% |

The classifier is evaluated according to accuracy, sensitivity, and specificity, as defined in Equations (9)–(11) respectively.

$$Accuracy = \frac{TP + TN}{TP + FP + FN + TN} \quad (9)$$

$$Sensitivity = \frac{TP}{TP + FN} \quad (10)$$

$$Specificity = \frac{TN}{TN + FP} \quad (11)$$

where TP indicates true positive cases, TN indicates true negative cases, FP indicates a false positive detection, and FN indicates false negative cases.

The accuracy is defined as the percentage of correctly predicted cases in the test set, the sensitivity as the true-positive rate, and the specificity as the true-negative rate. These measures are computed and illustrated in Table 6.
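These three measures follow directly from the confusion-matrix counts; a minimal sketch using the true-positive-rate and true-negative-rate definitions stated above is:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) as in Equations (9)-(11)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```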

Table 7 compares the results of CNN-based keratoconus classification in the literature with our proposed CNN, using k-fold cross-validation with k = 10: training set (80%) and testing set (20%).

Table 7. Comparison of results of the convolutional neural network (CNN) for keratoconus classification and our proposed CNN, using k-fold cross-validation, k = 10: training set (80%), testing set (20%).

| Method | Accuracy | Sensitivity | Specificity |
| --- | --- | --- | --- |
| [5] Dhaini, A.R., et al., 2018 | 93.68% | 94.60% | 91.43% |
| [6] Daud, M.M., et al., 2020 | 96.03% | 95.18% | 94.44% |
| [7] Ali, A.H., et al., 2018 | 90.68% | 92.10% | 93.52% |
| [9] Askarian, B., et al., 2018 | 93.68% | 94.60% | 91.43% |
| The proposed methodology | 97.80% | 98.45% | 96.00% |

5. Conclusions

This paper presents an automated technique that reconstructs 3D corneal images from two 2D frontal and lateral corneal digital images. The 3D reconstructed images are used to detect keratoconus in an automated way. The proposed technique consists of four main steps: cornea detection in the 2D images of the eye and features extraction, features extraction and depth calculation, construction of the corneal 3D images, and automated detection of keratoconus and determination of its severity using the angle of curvature of the 3D reconstructed corneal images. Our experimental results show that the proposed method identifies keratoconus and its stages efficiently. The proposed method was validated by performing experiments and making comparisons with manual methods performed by professional medical experts, which are presumed to be ground truth. Quantitative results are compared with ground truth to prove the accuracy of this method.

By examining its accuracy, it can be seen that the proposed method detects keratoconus comparably to an actual medical diagnosis. The technique shows the ability to predict correct diagnoses of the four classes (severe, moderate, and mild keratoconus, as well as normal, keratoconus-free cases), with an accuracy of 97.8% over a total of 268 cases. These results are better than those achieved by the image-processing systems in References [6,9]. 3D reconstruction of the cornea can also be utilized to detect other diseases of the cornea and in educational paradigms.

For future work, we will concentrate on the computational cost of the proposed scheme. Its complexity leaves many opportunities for speeding up the individual steps. In this paper, we concentrated on accuracy and precision rather than time complexity, especially since the application does not have a real-time requirement.

The proposed scheme can be utilized as an aid for ophthalmology personnel. Mass screening can be done using an automated system based on this scheme, since the specificity of the proposed method is relatively high at around 96%, which is the percentage of true negatives that are correctly recognized. Therefore, the proposed system can identify normal cases in an automated way, and such cases will not need further investigation by medical experts. Ophthalmologists can then concentrate on the positive cases detected by the proposed method.

Author Contributions

Conceptualization, H.A.H.M. and H.A.M. Data curation, H.A.H.M. Formal analysis, H.A.H.M. and H.A.M. Funding acquisition, H.A.H.M. and H.A.M. Investigation, H.A.H.M. and H.A.M. Methodology, H.A.H.M. and H.A.M. Project administration, H.A.M. Resources, H.A.H.M. and H.A.M. Software, H.A.H.M. Supervision, H.A.M. Validation, H.A.H.M. and H.A.M. Visualization, H.A.H.M. and H.A.M. Writing—original draft, H.A.H.M. Writing—review & editing, H.A.H.M. and H.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Galvis V., Sherwin T., Tello A., Merayo J., Barrera R., Acera A. Keratoconus: An inflammatory disorder? Eye. 2015;29:843–859. doi: 10.1038/eye.2015.63.
2. Accardo P.A., Pensiero S. Neural network-based system for early keratoconus detection from corneal topography. J. Biomed. Inform. 2002;35:151–159. doi: 10.1016/S1532-0464(02)00513-0.
3. Valdés-Mas M.A. A new approach based on Machine Learning for predicting corneal curvature (K1) and astigmatism in patients with keratoconus after intracorneal ring implantation. Comput. Methods Programs Biomed. 2014;116:39–47. doi: 10.1016/j.cmpb.2014.04.003.
4. Auvinet E., Meunier J., Ong J., Durr G., Gilca M., Brunette I. Methodology for the construction and comparison of 3D models of the human cornea. Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society; San Diego, CA, USA, 28 August–1 September 2012; pp. 5302–5305.
5. Dhaini A.R., Chokr M., El-Oud S.M., Fattah M.A., Awwad S. Automated detection and measurement of corneal haze and demarcation line in spectral-domain optical coherence tomography images. IEEE Access. 2018;6:3977–3991. doi: 10.1109/ACCESS.2018.2789526.
6. Daud M.M., Zaki W.M., Hussain A., Mutalib H.A. Keratoconus Detection Using the Fusion Features of Anterior and Lateral Segment Photographed Images. IEEE Access. 2020;8:142282–142294. doi: 10.1109/ACCESS.2020.3012583.
7. Ali A.H., Ghaeb N.H., Musa Z.M. Support vector machine for keratoconus detection by using topographic maps with the help of image processing techniques. IOSR J. Pharm. Biol. Sci. 2017;12:50–58.
8. Hua C., Tao L., Mengshu G., Bojin Z. A depth optimization method for 2D-to-3D conversion based on RGB-D images. Proceedings of the 4th IEEE International Conference on Network Infrastructure and Digital Content; Beijing, China, 19–21 September 2014; pp. 223–227.
9. Askarian B., Fatemehsadat T., Amin A., Chong J.W. An affordable and easy-to-use diagnostic method for keratoconus detection using a smartphone. Proc. SPIE. 2018;10575:1057512.
10. Arizona Eye Specialists. Available online: https://arizonaeyes.net/services/cornea-center/keratoconus/ (accessed on 12 December 2020).
11. Patoommakesorn K., Vignat F., Villeneuve F. The 3D Edge Reconstruction from 2D Image by Using Correlation Based Algorithm. Proceedings of the 2019 IEEE 6th International Conference on Industrial Engineering and Applications (ICIEA); Tokyo, Japan, 12–15 April 2019; pp. 372–376.
12. Wang S., Wu T., Wang K., Peng Z., Kwok N., Sarkodie-Gyan T. 3-D Particle Surface Reconstruction from Multiview 2-D Images With Structure From Motion and Shape From Shading. IEEE Trans. Ind. Electron. 2021;68:1626–1635. doi: 10.1109/TIE.2020.2970681.
13. Dixit S., Pai V.G., Rodrigues V.C., Agnani K., Priyan S.R.V. 3D Reconstruction of 2D X-Ray Images. Proceedings of the 4th International Conference on Computational Systems and Information Technology for Sustainable Solution (CSITSS); Bengaluru, India, 20–21 December 2019; pp. 1–5.
14. Jeni A.L., Jeffrey F.C., Takeo K. Dense 3D face alignment from 2D video for real-time use. Image Vis. Comput. 2017;58:13–24. doi: 10.1016/j.imavis.2016.05.009.
15. Scarpa F., Fiorin D., Ruggeri A. In Vivo Three-Dimensional Reconstruction of the Cornea from Confocal Microscopy Images. Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; Lyon, France, 22–26 August 2007; pp. 747–750.
16. Dataharvest, GitHub DM2LL/Cornea Dataset. Available online: https://github.com/DM2LL/Cornea (accessed on 12 December 2020).
17. Lowe D.G. Object recognition from local scale-invariant features. Proceedings of the Seventh IEEE International Conference on Computer Vision; Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157.
18. Setiawan W., Wahyudin A., Widianto G.R. The use of scale invariant feature transform (SIFT) algorithms to identification garbage images based on product label. Proceedings of the 3rd International Conference on Science in Information Technology (ICSITech); Bandung, Indonesia, 25–26 October 2017; pp. 336–341.
19. Elbita A., Qahwaji R., Ipson S., Ahmed T.Y., Ramaesh K., Colak T. Recent Advances in Corneal Imaging. In: Applied Signal and Image Processing: Multidisciplinary Advancements. IGI Global; Hershey, PA, USA: 2011; pp. 271–287.
20. Shi Y. Strategies for improving the early diagnosis of keratoconus. Clin. Optom. 2016;8:13–21. doi: 10.2147/OPTO.S63486.
21. Smolek M.K., Klyce S.D. Current keratoconus detection methods compared with a neural network approach. Investig. Ophthalmol. Vis. Sci. 1997;38:2290–2299.
22. Vázquez P.R., Galletti J.D., Minguez N., Delrivo M., Bonthoux F.F., Pförtner T., Galletti J.G. Pentacam Scheimpflug tomography findings in topographically normal patients and subclinical keratoconus cases. Am. J. Ophthalmol. 2014;158:32–40. doi: 10.1016/j.ajo.2014.03.018.
23. Rublee E., Rabaud V., Konolige K., Bradski G. ORB: An efficient alternative to SIFT or SURF. Proceedings of the IEEE International Conference on Computer Vision; Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
24. Chien H., Chuang C., Chen C., Klette R. When to use what feature? SIFT, SURF, ORB, or A-KAZE features for monocular visual odometry. Proceedings of the IEEE International Conference on Image and Vision Computing; Palmerston North, New Zealand, 21–22 November 2016; pp. 1–6.
25. Bay H., Ess A., Tuytelaars T., Gool L.V. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008;110:346–359. doi: 10.1016/j.cviu.2007.09.014.
26. Tareen S.K., Saleem Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. Proceedings of the International Conference on Computing, Mathematics and Engineering Technologies (iCoMET); Sukkur, Pakistan, 3–4 March 2018; pp. 1–10.


