Sensors. 2017 Dec 17;17(12):2933. doi: 10.3390/s17122933

Table 1.

Comparison of existing recognition methods and the new method proposed in this study (d’ denotes the d-prime value of Equation (4); EER denotes the equal error rate, explained in Section 3.4). LBP, local binary patterns; RBWT, reverse biorthogonal wavelet transform; WCPH, weighted co-occurrence phase histograms; CLAHE, contrast-limited adaptive histogram equalization.

| Category | Method | Periocular Region | Accuracy | Advantage | Disadvantage |
|---|---|---|---|---|---|
| NIR camera-based | Personalized weight map [14] | Not using | EER of 0.78% (A) | Better image quality and recognition performance than the visible-light camera method | Large and expensive NIR illuminator with NIR camera; additional power usage by NIR illuminator |
| | SVM with Hamming distance (HD) [15] | Not using | EER of 0.09% (B); EER of 0.12% (C) | | |
| | Fusion (AND rule) of left and right irises [16] | Not using | Accurate EER is not reported (EER of about 18–21% (D)) | | |
| | Adaptive bit shifting for matching by in-plane rotation angles [17] | Not using | EER of 4.30% (D) | | |
| | LBP with iris and periocular images in polar coordinates [21] | Using | EER of 10.02% (D) | | |
| | Log-Gabor binary features with geometric key encoded features [41] | Not using | EER of 19.87%, d’ of 1.72 (E); EER of 3.56%, d’ of 5.32 (D) | Same algorithm for NIR and visible-light iris images | Using manual hand-crafted features |
| | CNN-based method (proposed method) | Using | EER of 3.04–3.10% (D) | Same algorithm for NIR and visible-light iris images | Intensive CNN training is necessary |
| Visible-light camera-based | Log-Gabor binary features with geometric key encoded features [41] | Not using | EER of 16.67%, d’ of 2.08 (F) | | Using manual hand-crafted features |
| | RBWT [29] | Not using | d’ of 1.09 (G) | Recognition is possible with a general visible-light camera without an additional NIR illuminator | Image brightness is affected by environmental light; greater ghost effect caused by reflected light from environmental objects |
| | Non-circular iris detection based on RANSAC [30] | Not using | d’ of 1.32 (G) | | |
| | Fusion of LBP and BLOB features [31] | Not using | d’ of 1.48 (G) | | |
| | WCPH-based representation of local texture patterns [32] | Not using | d’ of 1.58 (G) | | |
| | CLAHE-based image enhancement [34] | Not using | EER of 18.82% (G) | | |
| | Pre-classification based on eyes and color [35] | Not using | EER of 16.94%, d’ of 1.64 (G) | | |
| | LBP-based periocular recognition [36] | Using | EER of 18.48%, d’ of 1.74 (G) | | |
| | AdaBoost training by multi-orient 2D Gabor features [37] | Not using | d’ of 2.28 (G) | | |
| | Combining color and shape descriptors [40] | Not using | EER of about 16%, d’ of about 2.42 (G) | | |
| | CNN-based method (proposed method) | Using | EER of 10.36%, d’ of 2.62 (G); EER of 16.25–17.9%, d’ of 1.87–2.26 (H) | Same algorithm for NIR and visible-light iris images | Intensive CNN training is necessary |

Empty Category, Advantage, and Disadvantage cells share the entry of the first row in their group.

A: Institute of Automation, Chinese Academy of Sciences (CASIA)-IrisV3-Lamp database; B: CASIA-Iris-Ver.1 database; C: Chek database; D: CASIA-Iris-distance database; E: Face Recognition Grand Challenge (FRGC) database; F: University of Beira Iris (UBIRIS).v2 database; G: NICE.II training dataset; H: Mobile Iris Challenge Evaluation (MICHE) database.
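The two figures of merit in the Accuracy column, d’ (d-prime) and EER (equal error rate), are both computed from the genuine and impostor matching-score distributions of a recognition system. The following is a minimal sketch of how they are conventionally obtained, not the paper's own code; the assumption that a higher score means a better match, and the function and variable names, are illustrative choices.

```python
import numpy as np

def d_prime(genuine, impostor):
    # d': distance between the means of the genuine and impostor score
    # distributions, normalized by the square root of the mean of their
    # variances. Larger d' = better-separated distributions.
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    return abs(g.mean() - i.mean()) / np.sqrt((g.var() + i.var()) / 2.0)

def equal_error_rate(genuine, impostor):
    # EER: sweep the decision threshold and report the operating point
    # where the false accept rate (FAR) and false reject rate (FRR)
    # coincide. Convention assumed here: higher score = better match.
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    best_gap, best_eer = np.inf, 1.0
    for t in np.unique(np.concatenate([g, i])):
        far = np.mean(i >= t)   # impostors wrongly accepted
        frr = np.mean(g < t)    # genuine samples wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
    return best_eer
```

For perfectly separable score sets (every genuine score above every impostor score) the EER is 0; as the two distributions overlap, EER rises toward 50% and d’ falls toward 0, which is why the table reports lower EER and higher d’ as better.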