Sensors (Basel, Switzerland). 2012 Mar 29;12(4):4324–4338. doi: 10.3390/s120404324

Integrating Iris and Signature Traits for Personal Authentication Using User-Specific Weighting

Serestina Viriri 1,*, Jules R Tapamo 2
PMCID: PMC3355413  PMID: 22666032

Abstract

Biometric systems based on uni-modal traits are characterized by noisy sensor data, restricted degrees of freedom, and non-universality, and are susceptible to spoof attacks. Multi-modal biometric systems seek to alleviate some of these drawbacks by providing multiple pieces of evidence of the same identity. In this paper, a user-score-based weighting technique for integrating the iris and signature traits is presented. This user-specific weighting technique has proved to be an efficient and effective fusion scheme that increases the authentication accuracy rate of multi-modal biometric systems. The weights indicate the importance of the matching scores output by each biometric trait. The experimental results show that our biometric system based on the integration of the iris and signature traits achieves a false rejection rate (FRR) of 0.08% and a false acceptance rate (FAR) of 0.01%.

Keywords: biometrics fusion, multi-modal biometrics, iris, signature, user-specific weighting

1. Introduction

Multi-modal biometric systems address the shortcomings of uni-modal systems. One such shortcoming is non-universality: a subset of users may not possess a particular biometric trait. For example, the feature extraction module of an iris authentication system may be unable to extract features from the iris images of specific individuals, due to either occlusion of the iris region of interest or poor image quality. Multi-modal systems also help ascertain that a live user is indeed being authenticated, since it is very difficult for an intruder to circumvent multiple biometric traits simultaneously [1]. Thus, a challenge-response type of authentication can be facilitated using multi-biometric systems.

Furthermore, multi-modal biometric systems are expected to be more reliable due to the presence of multiple pieces of evidence [2], and should be able to meet the stringent performance requirements imposed by various applications [3]. In fact, research has shown that combining biometric techniques for human identification is more effective, albeit challenging [4]. Therefore, the problem of information fusion still needs attention in order to optimize the success rate of multi-modal biometric systems.

In this paper, a framework for modeling bi-modal biometric systems based on the iris (a physiological trait) and the signature (a behavioral trait) for personal authentication is proposed. These two biometric traits are not correlated. Moreover, the iris is proving to be one of the most reliable biometric traits, while signatures continue to be widely used for personal authentication.

2. Related Work

Multi-modal biometrics was pioneered by Anil K. Jain, and substantial research has since been carried out in this area. A variety of biometric fusion schemes that use classifiers have been described in the literature for combining multiple biometric trait scores. These include majority voting, sum and product rules, k-NN classifiers, SVMs, and decision trees [4–6]. For instance, Ross et al. [1,7] combine the matching scores of the face, fingerprint and hand geometry using three different techniques: the sum rule, decision trees, and linear discriminant analysis. Experiments indicate that the fusion scheme using the sum rule with normalized scores gives the best performance. These results are further improved by learning user-specific matching thresholds and weights for the individual biometric traits.

Other multi-modal biometric fusion approaches include the HyperBF network approach, used to combine the normalized scores of five different classifiers operating on the voice and face feature sets of an individual for identification [8]. Bigun et al. develop a statistical framework based on Bayesian statistics to integrate the speech (text-dependent) and face data of a user [9]; the estimated biases of each classifier are taken into account during the fusion process. Hong and Jain associate different confidence measures with the individual matchers when integrating the face and fingerprint traits of a user [3]. They also suggest an indexing mechanism wherein face information is used to retrieve a set of possible identities and the fingerprint information is then used to select a single identity. A commercial product called BioID [10] uses the voice, lip motion and face features of a user to verify identity. Brunelli and Falavigna also addressed an important aspect of fusion: the normalization of scores obtained from different domains [8]. Normalization maps the scores obtained from different ranges into a common range.

Although several score fusion techniques have been proposed in the literature, Ross et al. [11] grouped all of them into three main categories:

  • Density-based score fusion: this technique estimates the conditional densities $p(s \mid \text{genuine})$ and $p(s \mid \text{impostor})$, where $s = [s_1, s_2, \ldots, s_n]$ is the vector of matching scores, computes the posterior probabilities $P(\text{genuine} \mid s)$ and $P(\text{impostor} \mid s)$, and can use the Bayesian rule to make a decision.

  • Transformation-based score fusion: this approach transforms the match scores from different matchers into a common domain using normalization techniques.

  • Classifier-based score fusion: learned pattern classifiers are used to determine the relationship between the vector of match scores, $s = [s_1, s_2, \ldots, s_n]$, and the posterior probabilities $P(\text{genuine} \mid s)$ and $P(\text{impostor} \mid s)$.

In this paper, an enhanced user-specific weighting technique is proposed, based on the different degrees of importance of the different traits of an individual, for integrating a physiological trait, the iris, with a behavioral trait, the signature. The user-specific weights for the individual biometric traits are calculated from the score of each biometric trait of an individual user. The proposed approach is an alternative to estimating user-specific weights by exhaustive search.

The rest of the paper is structured as follows: Section 3 describes the overall multi-modal biometrics system; Section 4 explores the levels of fusion for combining biometric traits; Section 5 describes the integration of the iris and signature traits, including the weighting techniques and normalization strategies; Section 6 presents experimental results; and Section 7 draws conclusions and outlines future work.

3. Multi-Modal Biometrics System

Multi-modal biometric systems are based on the consolidation of information presented by multiple pieces of evidence stemming from multiple traits. Some of the limitations imposed by uni-modal biometric systems (that is, biometric systems that rely on the evidence of a single biometric trait) can be overcome by using multiple biometric modalities [4,8,9]. Such systems, known as multi-biometric systems, are expected to be more reliable due to the presence of multiple, fairly independent pieces of evidence.

A variety of factors should be considered when designing a multi-biometric system. These include the choice and number of biometric traits; the level in the biometric system at which information provided by multiple traits should be integrated; the methodology adopted to integrate the information; and the cost vs. matching performance trade-off.

A simple multi-modal biometrics system has five important components, as depicted in Figure 1, in which the different biometric traits are fused at the match score level:

  1. Sensor module: acquires the biometric data of an individual. An example is the ePadInk tablet that captures signatures.

  2. Feature extraction module: processes the acquired biometric data to extract distinctive feature values.

  3. Matching module: compares the extracted feature values against those in the template, generating a matching score.

  4. Fusion module: combines the biometric trait scores.

  5. Decision module: accepts or rejects a claimed identity based on the fused matching score generated in the fusion module.

Figure 1. Multi-modal Biometrics System (Iris & Signature).

4. Fusion in Biometrics

There are various levels of fusion for combining biometric traits. The possible levels of fusion are [1,11]:

  1. Fusion at the sensor level: The consolidation of evidence captured by multiple sources of the input data before feature extraction.

  2. Fusion at the feature extraction level: The data obtained from each sensor is used to compute a feature vector. If the features extracted from one biometric trait are independent of those extracted from the other, it is better to concatenate the two vectors into a single new vector. The new feature vector now has a higher dimensionality and represents a person's identity in a different hyperspace. Feature reduction techniques may be employed to extract useful features from the larger set of features.

  3. Fusion at the matching score level: Each subsystem provides a matching score indicating the proximity of the feature vector with the template vector. These scores can be combined to assert the veracity of the claimed identity. Fusion techniques such as logistic regression may be used to combine the scores reported by different sensors. These techniques attempt to minimize the FRR for a given FAR [12].

  4. Fusion at the rank level: The consolidation of the ranks output by the individual biometric subsystems in order to derive a consensus rank for each identity [11].

  5. Fusion at the decision level: Each sensor can capture multiple biometric data and the resulting feature vectors are individually classified into one of two classes: accept or reject. A majority vote scheme, such as that employed in [13], can be used to make the final decision.

5. Integrating Iris and Signature Traits

A brief description of the two biometric traits used in this research work is given below.

5.1. Iris Recognition

Iris recognition is proving to be one of the most reliable biometric approaches for personal identification, since iris patterns have stable, invariant and distinctive features. Several techniques have been proposed for iris segmentation, coding and matching. The most common approach in iris recognition is to generate feature vectors corresponding to individual iris images and to perform iris matching based on some distance measure [14,15]. In this research work, an algorithm that detects the largest non-occluded rectangular part of the iris as the region of interest (ROI) is used [16]. A cumulative-sum-based grey change analysis technique is applied to the ROI to extract features for recognition [17]. Then, the Hamming Distance is computed as the iris matching score.

5.2. Signature Verification

The signature continues to be an important biometric trait, primarily because it remains widely used for authenticating the identity of human beings. An efficient text-based directional signature recognition algorithm is used, which verifies signatures even when they are composed of symbols and special unconstrained cursive characters that are superimposed and embellished [18]. This algorithm extends the character-based signature verification technique. The text-based directional algorithm integrates the direction information extracted from the contours of the whole signature text image with the transition information between background and foreground pixels in the signature text image. The extracted features represent the distinguishing cursive handwriting styles. Then, the Mahalanobis Distance is computed as the signature matching score.

5.3. Combining Iris and Signature Traits

The iris and signature traits are fused at the matching score level, where the matching scores output by each of the two traits are weighted and combined. Fusion at the matching score level is usually preferred, as it is relatively easy to access and combine the scores presented by the different modalities [4]. There are two distinct approaches to match score level fusion: the classification approach [6], where a feature vector is constructed using the matching scores output by the individual matchers, and the combination approach, where the individual matching scores are combined to generate a single scalar score, which is then used to make the final decision. The literature shows that the combination approach performs better than the classification approach [1]; hence, it is adopted in this paper. The combining process is summarized in Algorithm 1.

Algorithm 1 Fusion of Iris and Signature Traits.
 1: for each fusion per User do
 2:   for each User do
 3:     if iris then
 4:       S_iris ← HammingDistance {//Iris Score Generation}
 5:     else
 6:       S_sig ← MahalanobisDistance {//Signature Score Generation}
 7:     end if
 8:   end for
 9:   for each score do
10:     if S_iris then
11:       S_iris ← Normalization(S_iris)
12:     else
13:       S_sig ← Normalization(S_sig)
14:     end if
15:   end for
16:   for each normalized score do
17:     if S_iris then
18:       W_iris ← Weighting(S_iris)
19:     else
20:       W_sig ← Weighting(S_sig)
21:     end if
22:   end for
23:   S_fus ← W_iris · S_iris + W_sig · S_sig
24: end for
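To make the flow concrete, below is a minimal Python sketch of Algorithm 1 for a single user; the scoring, normalization, and weighting routines are placeholders for the methods detailed in the rest of this section, not the authors' exact implementation.

```python
# Sketch of Algorithm 1 for one user. The helper callables (normalize,
# weight) stand in for the normalization and user-specific weighting
# methods described below; they are assumptions, not the paper's code.

def fuse_user(raw_iris_score, raw_sig_score, normalize, weight):
    """Return the fused score S_fus for one user (Algorithm 1, line 23)."""
    s_iris = normalize(raw_iris_score)      # normalized iris score
    s_sig = normalize(raw_sig_score)        # normalized signature score
    w_iris, w_sig = weight(s_iris, s_sig)   # user-specific trait weights
    return w_iris * s_iris + w_sig * s_sig  # S_fus = W_iris*S_iris + W_sig*S_sig
```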

Score Generation

Iris matching scores are computed from the string iris feature codes extracted by the cumulative-sum-based grey change analysis technique. To verify the similarity of two iris codes, the Hamming Distance (HD)-based matching algorithm [19] is used. The smaller the HD, the higher the similarity of the compared iris codes. The HD denotes the iris raw matching score, $S_{iris}$, which is computed as:

$$S_{iris} = \frac{1}{2N}\left[\left(\sum_{i=1}^{N} A_h(i) \oplus B_h(i)\right) + \left(\sum_{i=1}^{N} A_v(i) \oplus B_v(i)\right)\right], \quad \text{only when } A_h(i) \neq 0 \wedge B_h(i) \neq 0,\; A_v(i) \neq 0 \wedge B_v(i) \neq 0 \tag{1}$$

where $A_h(i)$ and $A_v(i)$ denote the enrolled iris code over the horizontal and vertical directions, respectively; $B_h(i)$ and $B_v(i)$ denote the new input iris code over the horizontal and vertical directions, respectively; $N$ is the total number of cells; and ⊕ is the XOR operator.
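A minimal NumPy sketch of Equation (1) is given below, assuming the horizontal and vertical iris codes are stored as integer arrays of length N in which a value of 0 marks an occluded cell (our reading of the "only when" condition), and treating cell-wise inequality as the ⊕ comparison.

```python
import numpy as np

def iris_matching_score(A_h, A_v, B_h, B_v):
    """Fractional Hamming distance of Equation (1).

    A_h, A_v: enrolled iris code (horizontal and vertical directions).
    B_h, B_v: new input iris code. Cells equal to 0 are assumed occluded
    and are excluded from the comparison.
    """
    A_h, A_v, B_h, B_v = map(np.asarray, (A_h, A_v, B_h, B_v))
    N = len(A_h)  # total number of cells per direction
    valid_h = (A_h != 0) & (B_h != 0)
    valid_v = (A_v != 0) & (B_v != 0)
    disagreements = (np.count_nonzero(A_h[valid_h] != B_h[valid_h])
                     + np.count_nonzero(A_v[valid_v] != B_v[valid_v]))
    return disagreements / (2 * N)
```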

Signature matching scores are generated from the signature feature vectors. To verify the similarity of two signatures, the Mahalanobis Distance (MD), which is based on correlations between signatures, is used. It differs from the Euclidean distance in that it takes into account the correlations of the data set and is scale-invariant. The smaller the MD, the higher the similarity of the compared signatures. The MD denotes the signature raw matching score, $S_{sig}$, which is computed as in Equation (2):

$$S_{sig}(\vec{x}, \vec{y}) = \sqrt{(\vec{x} - \vec{y})^T S^{-1} (\vec{x} - \vec{y})} \tag{2}$$

where $\vec{x}$ and $\vec{y}$ denote the enrolled feature vector and the new signature feature vector to be verified, respectively, and $S$ is the covariance matrix.
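Equation (2) can be sketched directly with NumPy; the covariance matrix S is assumed to be estimated from the user's enrollment signatures.

```python
import numpy as np

def signature_matching_score(x, y, S):
    """Mahalanobis distance of Equation (2) between the enrolled feature
    vector x and the new signature feature vector y, with covariance S."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    # solve(S, d) computes S^{-1} d without explicitly inverting S
    return float(np.sqrt(d @ np.linalg.solve(S, d)))
```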

Score Normalization

Given a set of $n$ raw matching scores $\{S_k\}$, $k = 1, 2, \ldots, n$, the corresponding normalized scores $S'_k$ are given by:

  • Min-max normalization: retains the original distribution of scores and maps all the scores into the [0, 1] range.
    $$S'_k = \frac{S_k - \min(\{S_k\})}{\max(\{S_k\}) - \min(\{S_k\})} \tag{3}$$
    where $\min(\{S_k\})$ and $\max(\{S_k\})$ are the minimum and maximum, respectively, of the given set $\{S_k\}$ of matching scores.
  • Z-score normalization: transforms the scores to a distribution with an arithmetic mean of 0 and a standard deviation of 1.
    $$S'_k = \frac{S_k - \mu}{\sigma} \tag{4}$$
    where $\mu$ and $\sigma$ are the mean and standard deviation, respectively, of the set $\{S_k\}$.
  • Tanh normalization: a robust statistical technique [20] which maps the raw scores into the [0, 1] range.
    $$S'_k = \frac{1}{2}\left\{\tanh\left(0.01\left(\frac{S_k - \mu}{\sigma}\right)\right) + 1\right\} \tag{5}$$
    where $\mu$ and $\sigma$ are the mean and standard deviation, respectively, of $\{S_k\}$.
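The three normalization schemes of Equations (3)–(5) translate directly into code; the sketch below uses the plain mean and standard deviation, as the equations state.

```python
import numpy as np

def min_max(scores):
    """Equation (3): map scores into [0, 1], retaining their distribution."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score(scores):
    """Equation (4): transform scores to zero mean and unit deviation."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def tanh_norm(scores):
    """Equation (5): robust tanh normalization into [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)
```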

The ROC curves depicting the performance of the individual score normalization techniques implemented on the iris biometric trait are shown in Figure 2. The CASIA iris database [21] is used for comparing and contrasting these normalization algorithms. A similar experiment was conducted on the signature trait using the GPDS signature database [22], and it yielded results comparable to those of the iris trait. The results show that the tanh normalization technique performs better than the min-max and Z-score techniques.

Figure 2. ROC curves showing the performance of each of the three normalization techniques on the iris trait.

Score Weighting

Let $s_{iris}$ and $s_{sig}$ be the normalized scores of the iris and signature traits, respectively. The fusion score, $s_{fus}$, is computed as

$$s_{fus} = w_{iris} s_{iris} + w_{sig} s_{sig} \tag{6}$$

where $w_{iris}$ and $w_{sig}$ are the weights associated with the degrees of importance of the two traits per individual, and

$$w_{iris} + w_{sig} = 1 \tag{7}$$

Different iris scores and signature scores are given different degrees of importance for different users. For instance, by reducing the weight $w_{iris}$ of an occluded iris and increasing the weight $w_{sig}$ associated with the signature trait, the false rejection rate of that particular user can be reduced. The biometric system learns user-specific parameters by observing system performance over a period of time [4]. Two techniques are used to compute the user-specific weights: an exhaustive search technique and a user-score-based technique.

The Exhaustive Search Technique

Let $w^i_{iris}$ and $w^i_{sig}$ be the weights associated with the $i$th user in the database. The algorithm operates on the training set as follows [7]:

  • For the $i$th user in the database, vary the weights $w^i_{iris}$ and $w^i_{sig}$ over the range [0, 1], with the constraint $w^i_{iris} + w^i_{sig} = 1$. Compute $s^i_{fus} = w^i_{iris} s^i_{iris} + w^i_{sig} s^i_{sig}$. This computation is performed over all scores associated with the $i$th user.

  • Choose the set of weights that minimizes the total error rate, i.e., the sum of the false acceptance and false rejection rates pertaining to this user. A sketch of this search is given after the list.
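A sketch of this grid search is given below; the step size, the fixed decision threshold, and the assumption that larger fused scores indicate genuine users are illustrative choices, not the authors' settings (for distance-like scores, invert the comparison).

```python
import numpy as np

def exhaustive_weights(s_iris, s_sig, labels, step=0.01, threshold=0.5):
    """Grid-search w_iris in [0, 1] (with w_sig = 1 - w_iris) minimizing
    FAR + FRR over one user's scores. labels: 1 = genuine, 0 = impostor."""
    s_iris, s_sig = np.asarray(s_iris), np.asarray(s_sig)
    labels = np.asarray(labels)
    best_w, best_err = 0.0, np.inf
    for w in np.arange(0.0, 1.0 + step, step):
        fused = w * s_iris + (1.0 - w) * s_sig
        accept = fused > threshold  # assumes higher score = more genuine
        far = accept[labels == 0].mean() if np.any(labels == 0) else 0.0
        frr = (~accept[labels == 1]).mean() if np.any(labels == 1) else 0.0
        if far + frr < best_err:
            best_w, best_err = w, far + frr
    return best_w, 1.0 - best_w  # (w_iris, w_sig)
```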

The set of weights $\{w^i_{iris}, w^i_{sig}\}$ that minimizes the total error rate under the constraint $w^i_{iris} + w^i_{sig} = 1$ does not necessarily associate the fusion score $s^i_{fus} = w^i_{iris} s^i_{iris} + w^i_{sig} s^i_{sig}$ with the degrees of importance of the iris and signature traits of the $i$th individual. An alternative user-score-based weighting technique is therefore proposed, which computes the weights $\{w^i_{iris}, w^i_{sig}\}$ by associating them with the degrees of importance of the iris and signature traits, respectively. In this method, preliminary weights, which are not constrained to sum to 1, are computed from how close the scores $s^i_{iris}$ and $s^i_{sig}$ are to the thresholds of the iris and signature traits, respectively. The user-score-based weighting technique is described below.

The User-Score-Based Technique

Let $s^i_{iris}$ and $s^i_{sig}$ be the normalized scores associated with the $i$th user in the database, and let $\tau_1$ and $\tau_2$ be the thresholds of the iris and signature traits, respectively. The preliminary weights $w'^i_{iris}$ and $w'^i_{sig}$ per trait are computed as

$$w'^i_{iris} = \frac{s^i_{iris}}{\tau_1 + s^i_{iris}} \tag{8}$$

and

$$w'^i_{sig} = \frac{s^i_{sig}}{\tau_2 + s^i_{sig}} \tag{9}$$

where $w'^i_{iris}$ and $w'^i_{sig}$ are the initial weights associated with the iris and signature, respectively, without the constraint that they sum to 1. These weights are assigned to the scores $s^i_{iris}$ and $s^i_{sig}$ after analyzing how close to, or far from, their respective thresholds $\tau_1$ and $\tau_2$ the scores are. Then, the fusion weights for the $i$th user are computed for the iris and signature, respectively, as

$$w^i_{iris} = \frac{w'^i_{iris}}{w'^i_{iris} + w'^i_{sig}} \tag{10}$$
$$w^i_{sig} = \frac{w'^i_{sig}}{w'^i_{iris} + w'^i_{sig}} \tag{11}$$

with the constraint $w^i_{iris} + w^i_{sig} = 1$, and the fusion score is computed as in Equation (12):

$$s^i_{fus} = w^i_{iris} s^i_{iris} + w^i_{sig} s^i_{sig} \tag{12}$$
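Equations (8)–(12) amount to a few lines of code; the sketch below computes the user-specific weights and the fused score from one user's normalized scores and the two trait thresholds.

```python
def user_score_weights(s_iris, s_sig, tau1, tau2):
    """User-specific weights of Equations (8)-(11) for one user."""
    w_iris_pre = s_iris / (tau1 + s_iris)  # preliminary iris weight, Eq. (8)
    w_sig_pre = s_sig / (tau2 + s_sig)     # preliminary signature weight, Eq. (9)
    total = w_iris_pre + w_sig_pre
    return w_iris_pre / total, w_sig_pre / total  # Eqs. (10)-(11), sum to 1

def fused_score(s_iris, s_sig, tau1, tau2):
    """Fusion score of Equation (12)."""
    w_iris, w_sig = user_score_weights(s_iris, s_sig, tau1, tau2)
    return w_iris * s_iris + w_sig * s_sig
```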

Score Fusion

The dual ν-Support Vector Machine (2ν-SVM) fusion algorithm [23] is used to integrate the matching scores of the iris, $s_{iris}$, and the signature, $s_{sig}$, together with their corresponding weights, $w_{iris}$ and $w_{sig}$. The weighted iris matching score $m_{iris}$ is defined as

$$m_{iris} = s_{iris} \times w_{iris} \tag{13}$$

and the weighted signature score $m_{sig}$ is defined as

$$m_{sig} = s_{sig} \times w_{sig} \tag{14}$$

The weighted matching scores and their labels are used to train the 2ν-SVM for bi-modal fusion. Let the training data be

$$Z_{iris} = (m_{iris}, y) \tag{15}$$

and

$$Z_{sig} = (m_{sig}, y) \tag{16}$$

where $y \in \{+1, -1\}$, such that +1 represents the genuine class and −1 represents the impostor class. The 2ν-SVM error parameters are calculated using Equations (17) and (18):

$$\nu_+ = \frac{n_+}{n_+ + n_-} \tag{17}$$
$$\nu_- = \frac{n_-}{n_+ + n_-} \tag{18}$$

where $n_+$ and $n_-$ are the numbers of genuine and impostor scores, respectively. The training data are mapped into a higher-dimensional feature space, $Z \rightarrow \varphi(Z)$, where $\varphi(\cdot)$ is the mapping function. The optimal hyperplane separates the data into two different classes in the higher-dimensional feature space.

In the classification phase, the bi-modal fusion matching score $s_{fus}$ is computed as in Equation (19):

$$s_{fus} = f_{iris}(m_{iris}) + f_{sig}(m_{sig}) \tag{19}$$

where

$$f_{iris}(m_{iris}) = a_{iris}\,\varphi(m_{iris}) + b_{iris} \tag{20}$$
$$f_{sig}(m_{sig}) = a_{sig}\,\varphi(m_{sig}) + b_{sig} \tag{21}$$

and $a_{iris}$, $a_{sig}$, $b_{iris}$ and $b_{sig}$ are parameters of the hyperplanes. The solution of Equation (19) is the signed distance of $s_{fus}$ from the separating hyperplanes given by the two 2ν-SVMs for the two biometric modalities. The decision function defined in Equation (22) verifies the identity:

$$\text{Decision}(s_{fus}) = \begin{cases} \text{Accept}, & \text{if } s_{fus} > 0 \\ \text{Reject}, & \text{otherwise} \end{cases} \tag{22}$$
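Since 2ν-SVMs with separate error parameters ν+ and ν− are not available in common toolkits, the sketch below approximates the scheme with scikit-learn's class-weighted RBF SVC, using the proportions from Equations (17) and (18) as class weights; it is a stand-in for the 2ν-SVM of [23], not a faithful reimplementation.

```python
import numpy as np
from sklearn.svm import SVC

def train_trait_svm(m_scores, y):
    """Train one per-trait classifier on weighted matching scores
    (Equations (13)-(16)); y must be +1 (genuine) or -1 (impostor)."""
    m = np.asarray(m_scores, dtype=float).reshape(-1, 1)
    y = np.asarray(y)
    n_pos, n_neg = np.sum(y == 1), np.sum(y == -1)
    nu_pos = n_pos / (n_pos + n_neg)  # Equation (17)
    nu_neg = n_neg / (n_pos + n_neg)  # Equation (18)
    # Weight each class by the other's proportion to balance the two error
    # types, mimicking the role of the two nu parameters.
    return SVC(kernel="rbf", class_weight={1: nu_neg, -1: nu_pos}).fit(m, y)

def decide(clf_iris, clf_sig, m_iris, m_sig):
    """Equations (19) and (22): sum the signed distances from the two
    hyperplanes and accept when the fused score is positive."""
    s_fus = (clf_iris.decision_function([[m_iris]])[0]
             + clf_sig.decision_function([[m_sig]])[0])
    return "Accept" if s_fus > 0 else "Reject"
```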

6. Experimental Results and Discussions

The performance of the investigated bi-modal biometrics system is evaluated by calculating its false acceptance rate (FAR) and false rejection rate (FRR) at various thresholds. These two measures are combined in a receiver operating characteristic (ROC) curve that plots the FRR, or the genuine acceptance rate (GAR), against the FAR at different thresholds. The FAR and FRR are computed by generating all possible genuine and impostor matching scores and then setting a threshold for deciding whether to accept or reject a match, as sketched below.
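Computing the FAR and FRR over a sweep of thresholds is straightforward once the genuine and impostor fused scores are available; a minimal sketch (assuming higher scores indicate genuine users) follows.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, thresholds):
    """FAR and FRR at each threshold; accept when score > threshold."""
    g = np.asarray(genuine_scores, dtype=float)
    imp = np.asarray(impostor_scores, dtype=float)
    far = np.array([(imp > t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(g <= t).mean() for t in thresholds])   # genuines rejected
    return far, frr
```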

The bi-modal database used in the experiments was constructed by merging the CASIA iris database [21] with the GPDS signature database [22]. An alternative bi-modal database was constructed from the CASIA iris database and a database of signatures captured using the ePadInk tablet. Seven iris images per user were obtained for a set of 50 users from the CASIA database. Fifteen signatures (ten genuine and five forgeries) were obtained for each user in a different set of 50 users from the GPDS database, and another set of signatures was captured using the ePadInk tablet. The mutual independence assumption of the iris and signature biometric traits allows us to randomly pair the users from the two different data sets. In this way, two bi-modal databases consisting of 50 users each were constructed: one from CASIA with GPDS, and one from CASIA with the signatures captured using the ePadInk tablet.

Firstly, the matching scores of the iris and signature traits are computed as defined in Equations (1) and (2). These matching scores are normalized and weighted as described in the subsections of Section 5.3. Various normalization techniques were investigated; the ROC curves depicting the performance of the individual score normalization techniques are shown in Figure 2. The tanh normalization technique performs better than the min-max and Z-score techniques.

Table 1 shows the scores for the iris and signature biometric traits, and their respective weights, for a sample of ten different individuals. The raw scores are normalized by the tanh technique, and the weights are computed using Equations (10) and (11). For instance, from Table 1 we observe that for user 5, $w^5_{iris} = 0.83$, a high weight attached to the iris trait, possibly due to a blurred iris. This demonstrates the importance of assigning user-specific weights to the individual biometric traits.

Table 1.

User-specific Scores and Weights of different traits for 10 users.

User Iris Score Signature Score Normalized Iris Score Normalized Signature Score Iris Weight Signature Weight
1 0.192 0.001 0.487 0.488 0.80 0.20
2 0.277 0.001 0.490 0.488 0.86 0.14
3 0.625 2.054 0.505 0.505 0.50 0.50
4 0.446 2.438 0.506 0.496 0.44 0.56
5 0.232 0.005 0.486 0.492 0.83 0.17
6 0.473 2.383 0.498 0.507 0.47 0.53
7 0.071 0.028 0.484 0.493 0.67 0.33
8 0.522 2.474 0.505 0.507 0.47 0.53
9 0.366 1.358 0.497 0.502 0.48 0.52
10 0.451 1.774 0.502 0.506 0.50 0.50

Figure 3 shows the average true positive rates achieved by the exhaustive search technique and the user-score-based approach, respectively, on the uni-modal iris and signature traits. The exhaustive search technique obtained true positive rates of 92.4% and 82.0% on the iris and signature traits, respectively. The user-score-based approach obtained true positive rates of 99.25% and 94.0% on the iris and signature traits, respectively. The overall average true positive rate achieved by the user-score-based approach is 99.6%. The results therefore show an improvement in accuracy when the user-score-based weighting technique is used.

Figure 3. Average true positive rates of the iris and signature modalities.

6.1. Validation of the User-Score-Based Weighting Algorithm

The ROC curves in Figure 4 show the performance of the uni-modal iris and signature traits, and of the 2ν-SVM-fused bi-modal traits weighted by the exhaustive search technique and the user-score-based approach, respectively. The overall results show an improvement in performance when scores are combined using the user-score-based weighting technique. For a given FAR of 0.01%, user-score-based weighting achieves a very low FRR of 0.08%, compared with an FRR of 0.75% for exhaustive search weighting, as shown in Table 2.

Figure 4. Tanh-normalization-based ROC curves showing the performance of Iris, Signature, Iris + Signature (Exhaustive), and Iris + Signature (User-score-based).

Table 2.

Exhaustive search vs. User-score-based technique.

Weighting Technique FAR (%) FRR (%)
Exhaustive search 0.01 0.75
User-score-based 0.01 0.08

The user-score-based weighting algorithm computes the weights of the iris and signature traits by analyzing how close the two matching scores are to their respective thresholds, thereby associating the weights with the different degrees of importance of the bi-modal biometric traits involved. By comparison, the exhaustive search weighting technique calculates weights that simply minimize the total error rate. This minimum error rate (the sum of FAR and FRR) does not necessarily reflect the different degrees of importance of the fused bi-modal biometric traits.

6.2. Comparison with Existing Bi-Modal Biometric Systems

Table 3 shows the performance of the user-score-based weighted 2ν-SVM fusion algorithm compared with other bi-modal biometric fusion algorithms in the literature. The quality-based sum rule [23] obtained an accuracy rate of 97.39% when used to fuse the face and iris modalities, whereas the fusion of the iris and signature modalities based on the user-score-based weighted 2ν-SVM technique achieves an accuracy rate of 99.6%.

Table 3.

Comparative table of weighted fusion algorithms.

Biometric Modalities Weighted Fusion Algorithm Verification Accuracy (%)
Face + Iris Quality based Sum-rule [23] 97.39
Face + Speech k-NN based fusion [24] 99.72
Face + Iris Quality based [23] 98.91
Iris + Signature User-Score-based Weighted 2ν-SVM 99.6

7. Conclusions

In this paper, an enhanced user-specific weighting technique for integrating a physiological biometric trait, the iris, with a behavioral trait, the signature, is proposed. The proposed user-score-based approach calculates the weight of each biometric trait per user in proportion to the scores of the biometric traits of the same user. This enhanced user-specific weighting improves the accuracy rate of bi-modal biometric systems by reducing the false rejection rate (FRR) at a low false acceptance rate (FAR). Experimental results show that the proposed approach achieves a minimal FRR of 0.08% at a FAR of 0.01%. Further investigation of the proposed approach with other biometric modalities is envisaged.

Acknowledgments

Portions of the research in this paper use the CASIA iris image database collected by the Institute of Automation of the Chinese Academy of Sciences, and the Grupo de Procesado Digital de Señales (GPDS) signature database collected by the Universidad de Las Palmas de Gran Canaria, Spain.

References

  • 1. Ross A., Jain A.K. Information fusion in biometrics. Pattern Recogn. Lett. 2003;24:2115–2125.
  • 2. Hong L., Jain A.K. Can multibiometrics improve performance? Proc. AutoID. 1999;99:59–64.
  • 3. Hong L., Jain A.K. Integrating faces and fingerprints for personal identification. IEEE Trans. Pattern Anal. Mach. Intell. 1998;20:1295–1307.
  • 4. Jain A.K., Ross A. Multibiometric systems. Commun. ACM. 2004;47:34–40.
  • 5. Kittler J., Hatef M., Duin R., Matas J. On combining classifiers. IEEE Trans. Pattern Anal. Mach. Intell. 1998;20:226–239.
  • 6. Verlinde P., Chollet G. Comparing Decision Fusion Paradigms Using k-NN Based Classifiers, Decision Trees and Logistic Regression in a Multi-Modal Identity Verification Application. Proceedings of the International Conference on Audio- and Video-Based Biometric Person Authentication; Washington, DC, USA. 22–24 March 1999; pp. 188–193.
  • 7. Jain A.K., Ross A. Learning User-Specific Parameters in a Multi-Biometric System. Proceedings of the IEEE International Conference on Image Processing; Rochester, NY, USA. 22–25 September 2002; pp. 57–60.
  • 8. Brunelli R., Falavigna D. Person identification using multiple cues. IEEE Trans. Pattern Anal. Mach. Intell. 1995;17:955–966.
  • 9. Bigun E.S., Bigun J., Duc B., Fischer S. Expert Conciliation for Multimodal Person Authentication Systems Using Bayesian Statistics. Proceedings of the International Conference on Audio- and Video-Based Biometric Person Authentication; Crans-Montana, Switzerland. 12–14 March 1997; pp. 291–300.
  • 10. Frischholz R.W., Dieckmann U. BioID: A multimodal biometric identification system. IEEE Comput. 2000;33:64–68.
  • 11. Ross A.A., Nandakumar K., Jain A.K. Handbook of Multibiometrics. Springer; Berlin/Heidelberg, Germany: 2006.
  • 12. Jain A.K., Prabhakar S., Chen S. Combining multiple matchers for a high security fingerprint verification system. Pattern Recogn. Lett. 1999;20:1371–1379.
  • 13. Zuev Y., Ivanov S. The Voting as a Way to Increase the Decision Reliability. Proceedings of the Foundations of Information/Decision Fusion with Applications to Engineering Problems; Washington, DC, USA. 7–9 August 1996; pp. 206–210.
  • 14. Miyazawa K., Ito K., Aoki T., Kobayashi K., Nakajima H. A Phase-Based Iris Recognition Algorithm. Vol. 3832. Springer; Berlin/Heidelberg, Germany: 2005. pp. 356–365.
  • 15. Bowyer K.W., Hollingsworth K., Flynn P.J. Image understanding for iris biometrics: A survey. Comput. Vis. Image Underst. 2008;110:281–307.
  • 16. Viriri S., Tapamo J.-R. Improving Iris-Based Personal Identification Using Maximum Rectangular Region Detection. Proceedings of the 2009 International Conference on Digital Image Processing; Bangkok, Thailand. 7–9 March 2009; pp. 421–425.
  • 17. Ko J.-G., Gil Y.-H., Yoo J.-H., Chung K.-L. A novel and efficient feature extraction method for iris recognition. ETRI J. 2007;29:399–401.
  • 18. Viriri S., Tapamo J.-R. Signature verification based on handwritten text recognition. Commun. Comput. Inf. Sci. 2009;61:98–105.
  • 19. Daugman J.G. High confidence visual recognition of persons by a test of statistical independence. IEEE Trans. Pattern Anal. Mach. Intell. 1993;15:1148–1161.
  • 20. Jain A.K., Nandakumar K., Ross A. Score normalization in multimodal biometric systems. Pattern Recogn. 2005;38:2270–2285.
  • 21. CASIA Iris Image Database. Available online: http://www.sinobiometrics.com/ (accessed on 8 June 2008).
  • 22. GPDS Signature Database. Available online: http://www.gpds.ulpgc.es/download/index.htm/ (accessed on 20 February 2010).
  • 23. Vatsa M., Singh R., Noore A. Integrating image quality in 2ν-SVM biometric match score fusion. Int. J. Neural Syst. 2007;17:343–351. doi: 10.1142/S0129065707001196.
  • 24. Teoh A., Samad S.A., Hussain A. Nearest neighbourhood classifiers in a bimodal biometric verification system fusion decision scheme. J. Res. Pract. Inf. Technol. 2004;36:47–62.
