Scientific Reports
. 2025 Aug 10;15:29237. doi: 10.1038/s41598-025-10075-1

Feature fusion and selection using handcrafted vs. deep learning methods for multimodal hand biometric recognition

Saliha Artabaz 1, Layth Sliman 2
PMCID: PMC12336352  PMID: 40784983

Abstract

Feature fusion is a widely adopted strategy in multi-biometrics to enhance reliability, performance, and real-world applicability. While combining multiple biometric sources can improve recognition accuracy, practical performance depends heavily on feature dependencies, redundancies, and selection methods. This study provides a comprehensive analysis of multimodal hand biometric recognition systems. We aim to guide the design of efficient, high-accuracy biometric systems by evaluating trade-offs between classical and learning-based approaches. For feature extraction, we employ Zernike moments and log-Gabor filters, evaluating multiple selection techniques to optimize performance. While the baseline palmprint and fingerprint systems exhibit varying classification rates, our feature fusion method achieves a consistent 99.29% identification rate across diverse classifiers. Additionally, we explore EfficientNET as an end-to-end feature extractor and classifier, comparing its fusion performance with the traditional approach. Our findings emphasize feature selection as the key to building efficient and stable recognition systems. Using the minimal optimal feature set, we achieve an equal error rate (EER) of 0.71%, demonstrating superior efficiency and accuracy.

Keywords: Multi-biometrics, Identification, Feature level fusion, Stability metrics, EfficientNET

Subject terms: Software, Computer science

Introduction

Hand biometrics, including fingerprints and palmprints, are widely used for secure identification due to their high accuracy and non-intrusiveness, and they are among the most widely deployed modalities. Multibiometrics, and more precisely multimodal biometrics, are increasingly gaining popularity: they have demonstrated their efficiency by delivering superior performance and increased universality. Recent applications focus on ensuring greater convenience and reduced user cooperation in the development of user-friendly biometric systems1. Hence, multimodality becomes essential to achieve user satisfaction and construct ergonomic security applications without compromising the required accuracy. Multimodal systems further enhance performance by fusing complementary features. Developing robust biometric solutions requires carefully balancing interpretability, efficiency, and accuracy. Traditional methods, using well-defined mathematical models, offer high interpretability and can be computationally efficient; however, they might fail to capture complex variations in biometric data. In contrast, deep learning2 methods excel at learning rich, hierarchical representations directly from data, which leads to significant gains in accuracy and adaptability, but at the cost of reduced transparency and increased computational demand.

Deep learning techniques are increasingly applied in palmprint and fingerprint recognition, with distinct methodologies tailored to specific challenges. The authors of3 propose PRENet, a CNN-based ROI extraction method that ensures dimensionally consistent palmprint regions, paired with SYEnet, a lightweight neural network leveraging four parallel learners for efficient feature extraction. Meanwhile, the authors of4 introduce a self-supervised learning framework comprising two phases: (1) pretraining on unlabeled data and (2) self-tuning the model for refinement.

For fingerprint recognition, LSTM is employed to model dynamic ridge flow patterns by analyzing spatial feature sequences5. Another approach combines Siamese networks with SIFT descriptors to isolate discriminative features from partial fingerprints6. Additionally, Gannet Bald Optimization (GBO) is utilized to train Deep CNNs, enhancing the classification of fingerprint patterns7. Table 1 summarizes the studied approaches.

Table 1.

Summary of recent literature approaches.

Traditional methods. Key features: rule-based, mathematical models (minutiae, Gabor filters, Zernike moments). Pros: high interpretability; computationally efficient. Cons: struggles with complex data variations.

PRENet + SYEnet3 (image-based). Key features: CNN for ROI extraction + lightweight NN with 4 parallel learners. Pros: efficient feature extraction. Cons: may lack depth for highly variable data.

Self-supervised learning4 (image-based). Key features: pretraining on unlabeled data followed by self-tuning. Pros: reduced labeled-data dependency; adaptable. Cons: pre-training dataset dependency.

LSTM5 (image-based). Key features: captures dynamic ridge flow via spatial feature sequences. Pros: models temporal ridge patterns. Cons: increased computational cost.

Siamese + SIFT6 (image-based). Key features: combines metric learning (Siamese) with handcrafted descriptors (SIFT). Pros: adapted to partial fingerprints. Cons: Siamese approach not generalizable.

GBO-optimized Deep CNN7 (feature-based). Key features: Gannet Bald Optimization (bio-inspired) trains a CNN to classify HOG and LBP features. Pros: aligns physiological with behavioral information. Cons: tuning complexity.

In multimodal systems, the fusion of processed data can occur at various levels depending on the modalities. Decision and score levels are universally applicable to all modalities810. At the feature level, fusion involves combining extracted features from each modality through data analysis techniques aimed at reducing high dimensionality11,12. Features encapsulate rich and discriminant data extracted from biometric modalities, thereby amplifying the overall effectiveness of the fusion process. Successfully executing fusion at this level holds significant promise for enhancing system performance and accuracy11. The authors of13 study the impact of multimodal feature proportions on matching performance. They find that an unequally proportioned fused vector is more efficient than a 50%-50% split. As they use fingerprint and face, this result is coherent, since the two modalities have dissimilar matching performances. Moreover, unless feature filtering is applied to retain discriminant features, the fused vector will not guarantee improved matching performance. Therefore, in feature fusion, we must consider the accuracy of each uni-biometric system based on the fused features. Each selected feature has an impact on the success of identifying individuals and discriminating between them. Furthermore, feature selection is necessary to preserve only discriminant features; this is the key to improving the identification rate and reducing template storage and processing time costs. This assertion finds support in a study conducted by Santos et al.14, which explores data irregularities and their impact on the overall classification rate.

In this paper, we compare traditional feature extraction (Gabor, Zernike) with deep learning (EfficientNetV2) in a multimodal (fingerprint + palmprint) framework, analyzing fusion strategies and selection robustness. First, we extract features from the two modalities using Gabor and Zernike moments. Through intensive testing, we identify the most suitable method for each modality and select it along with appropriate parameters. Subsequently, we employ various selection and classification methods to objectively assess the impact of fusion and feature selection on the identification rate. Then, we compare with a deep learning extractor using the latest optimized and fastest version of Convolutional Neural Networks: EfficientNetV215. Our main objective is to show and analyze the efficacy and limitations of both techniques when dealing with fusion and selection methods. This can be a challenging task when working with unbalanced performances of the fused baseline systems. In fact, an effective design of the fusion process plays a critical role in boosting system performance, which is why it is important to study features thoroughly using ranking methods. The applied selection methods achieve good results in preserving classification accuracy; therefore, we further analyze the stability of their ranking under sample variations using multiple metrics. The implementation code and associated results are available on GitHub.

Selection and fusion of features

Biometric features have significant influence over recognition or identification rates, potentially contributing positively or negatively. The curse of dimensionality, primarily responsible for limitations in efficiently processing high-dimensional feature spaces16, presents a considerable obstacle. While data richness can be advantageous, redundant features within intraclass and interclass contexts may impede performance. In multibiometric systems, fusion at a single high level (Rank, Decision, Score) does not guarantee accuracy enhancement, as different matchers can yield significantly diverse performances17. Therefore, striking a balance between the quantity and quality of features becomes crucial. Ensuring sufficient high-quality features capable of effectively identifying numerous classes is imperative for optimizing identification rates using classifiers.

Feature extraction

Feature extraction is the first and one of the most important steps in the biometric process. Texture extraction methods and operators, as described by Amrouni et al.18, are well suited to processing rich signals like fingerprints and palmprints. These modalities are known for their uniqueness and universality, especially the palmprint texture18,19. Gabor filters and Zernike moments are commonly used to extract feature vectors from texture images20. While minutiae- and principal-line-based methods extract characteristic points and features21, Gabor filters and Zernike moments compute subspace and statistical information that remains invariant to rotation and translation22.

Fingerprint Gabor features

Gabor filters, a class of structure-based methods, are predominantly employed for edge detection and image enhancement23. Additionally, they serve as effective tools for pattern extraction, capable of capturing both local and global information within image texture24. In this paper, we utilize a 2D log-Gabor filter based on the Fourier transform to compute features derived from phase congruency. The log-Gabor filter offers several advantages25. The absence of a DC component ensures that the features remain unaffected by variations in the mean intensity value. Its large bandwidth facilitates robust feature extraction across diverse datasets. Finally, the log-Gabor filter exhibits a Gaussian frequency response when plotted on a logarithmic frequency axis, a characteristic that contributes to its effectiveness in capturing intricate texture patterns across different scales.
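As a rough sketch of this extraction step, a frequency-domain log-Gabor bank with per-block statistics could look as follows. The center-frequency progression, bandwidth ratio, angular spread, and block size below are illustrative assumptions, not the parameter values used in this paper:

```python
import numpy as np

def log_gabor_filter(size, f0, sigma_ratio, theta0, theta_sigma):
    """Frequency-domain 2D log-Gabor: a Gaussian on a log frequency axis
    multiplied by an angular Gaussian (no DC component by construction)."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                       # avoid log(0); DC is zeroed below
    radial = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0                       # remove the DC component
    theta = np.arctan2(fy, fx)
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-dtheta ** 2 / (2 * theta_sigma ** 2))
    return radial * angular

def block_features(image, scales=5, orientations=6, block=32):
    """Mean amplitude, square energy and entropy of each filter response,
    computed over non-overlapping blocks of the (square) image."""
    F = np.fft.fft2(image.astype(float))
    feats = []
    for s in range(scales):
        f0 = 0.33 / (2.1 ** s)               # assumed center-frequency progression
        for o in range(orientations):
            g = log_gabor_filter(image.shape[0], f0, 0.55,
                                 o * np.pi / orientations, np.pi / (1.2 * orientations))
            amp = np.abs(np.fft.ifft2(F * g))
            for by in range(0, image.shape[0], block):
                for bx in range(0, image.shape[1], block):
                    a = amp[by:by + block, bx:bx + block]
                    p = a / (a.sum() + 1e-12)
                    feats += [a.mean(), (a ** 2).sum(),
                              -(p * np.log(p + 1e-12)).sum()]
    return np.array(feats)
```

With five scales, six orientations, and nine 32x32 blocks of a 96x96 image, this yields 5 x 6 x 9 x 3 = 810 features per image under these assumed settings.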

Palmprint Zernike moments

Zernike moments are widely used for the classification of different modalities such as fingerprint, face, iris, hand vein, finger knuckle, and signature26. Zernike moments are invariant to rotation and can be made invariant to scaling and translation by normalizing their orthogonal polynomials. In addition, they reach near-zero redundancy thanks to their orthogonality27; in fact, independent features are extracted from moments of different orders. However, they are computationally more complex and time-consuming depending on the chosen order, which can be overcome using fast computation algorithms28. The Zernike moment of order n and repetition m is computed from the corresponding Zernike polynomial as follows:

$$Z_{nm} = \frac{n+1}{\pi} \sum_{x} \sum_{y} f(x,y)\, R_{nm}(\rho_{xy})\, e^{-jm\theta_{xy}}, \qquad x^{2}+y^{2} \le 1 \tag{1}$$

where R is the radial polynomial based on factorials and f(x, y) is the image intensity mapped to the unit disk.
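A minimal sketch of this computation over the unit disk follows, using the direct factorial form of the radial polynomial. Note that the paper relies on fast computation algorithms28, which this naive sketch does not implement:

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Radial polynomial R_nm of Eq. (1), built from factorials."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moments(block, order):
    """|Z_nm| for all n <= order with n - m even and m >= 0, over the unit disk."""
    N = block.shape[0]
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]   # map the block onto [-1, 1]^2
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0                            # restrict to the unit disk
    feats = []
    for n in range(order + 1):
        for m in range(n + 1):
            if (n - m) % 2:
                continue
            V = radial_poly(rho, n, m) * np.exp(-1j * m * theta)
            Z = (n + 1) / np.pi * np.sum(block[mask] * V[mask])
            feats.append(abs(Z))
    return np.array(feats)
```

The number of moments depends on the counting convention (here only m >= 0 is kept); the per-block feature count reported later in the paper may follow a different convention.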

Deep learner extractor and classifier EfficientNET

EfficientNET15 is a family of Convolutional Neural Networks with faster training speed and better parameter efficiency. It combines training-aware neural architecture search (NAS) with an adaptive scaling technique, and builds on convolutional blocks such as MBConv. It applies progressive learning and adapts the regularization to the image sizes. The goal of the new version, EfficientNETV2, is to find a good compromise between improving training speed and limiting parameter growth.

Feature selection

Classification models need a comprehensive set of features that are both relevant and non-redundant, enabling effective discrimination between classes in supervised classification scenarios. Conducting feature analysis without class labels can be valuable for evaluating each feature's contribution. On the other hand, prioritizing the acquisition of a subset of features with superior discriminant power is paramount29. In our context, it is important to use both individual feature evaluation and feature subset search, as we are fusing features of different modalities. These features, extracted using two distinct methods, coupled with the observed variability in classifier accuracies, underscore the significance of evaluating each feature's independence and non-redundancy. Consequently, we assess the impact of feature selection on the identification rate, employing a diverse array of methods from various classes, including variable elimination techniques such as filter and wrapper methods, as well as embedded methods30.

Next, we describe the different existing methods for feature selection that we use in our proposed scheme. These methods were carefully chosen to encompass the primary feature selection strategies, each employing distinct measures and criteria to evaluate the relevance of features31.

Filter methods

The filter-based methods evaluate correlations between features and the class label, and feature-to-feature or mutual information. They assess intrinsic properties of features independently of the classifier employed, offering simplicity and success across numerous applications through the utilization of a relevance criterion and a selection threshold30. Here, we introduce several filter methods utilized in our experiments, organized based on the relevance criterion:

  • Correlation-based methods:

These methods utilize correlation measures between features with or without considering class membership. The principal methods selected for our experimentation include:

Multi-Cluster Feature Selection (MCFS): An unsupervised feature selection method that assesses correlations between features independently of the class label29. It operates as a correlation-based feature selection (CFS) method for multi-class problems, leveraging eigenvectors and L1-regularization.

Correlation-based Feature Selection (CFS): A method based on correlation that evaluates the pairwise correlation of features and identifies relevant features that exhibit low dependence on other features. The ranking is accomplished through a heuristic search strategy32.

The Relief-F algorithm: one of the six extensions of the Relief algorithm, applied in multi-class problems, it estimates the relevance of features for separating all pairs of classes. It is considered the best of these extensions; it can provide results whenever stopped, but yields better results with extended time and data33.

  • Graph-based methods:

These methods build a graph model of the features to keep the relevant ones.

Laplacian: is based on Laplacian Eigenmaps and the Locality Preserving Projection34. Utilizing a nearest neighbor graph to model the local geometric structure of the data, it identifies relevant features that preserve the local structure based on their Laplacian Score.

Infinite Latent Feature Selection (ILFS): it performs feature selection using a probabilistic latent graph that considers all the feature subsets35. Each subset is represented by a path that connects the included features. The relevancy is determined as an abstract latent variable evaluated through conditional probability. This approach enables weighting the graph based on the importance of each feature.

  • Local and global Manifold structure:

These methods acknowledge the manifold structure, whether the class membership of the features is known or not.

MUTual INFormation Feature Selection (Mutinffs): is a method predicated on mutual information as a criterion of correlation36. Mutual information serves as an invariant measure of statistical independence, capable of quantifying various relationships between feature-feature or features-class, including non-linear ones, thereby yielding a relevant subset of features.

Unsupervised Discriminative Feature Selection (UDFS): seeks to select the most discriminative features using discriminative analysis and l2,1-norm minimization37. It considers the manifold structure of the data based on local discriminative information, as the class label is not used in training.
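As an illustration of the mutual-information criterion used by MutInfFS, a histogram-based feature ranking could be sketched as follows. The bin count and the plug-in estimator are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def mutual_information(x, labels, bins=16):
    """Histogram (plug-in) estimate of I(X; Y) between one feature and the class."""
    edges = np.histogram(x, bins=bins)[1]
    xb = np.digitize(x, edges[1:-1])             # discretize the feature values
    mi = 0.0
    for c in np.unique(labels):
        for b in np.unique(xb):
            p_xy = np.mean((xb == b) & (labels == c))
            if p_xy > 0:
                mi += p_xy * np.log(p_xy / (np.mean(xb == b) * np.mean(labels == c)))
    return mi

def rank_by_mutinf(X, y):
    """Feature indices sorted by decreasing relevance I(feature; class)."""
    scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(-scores)
```

A feature that tracks the class label receives a score near the label entropy, while an independent feature scores near zero, so informative features rise to the top of the ranking.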

Wrapper methods

A wrapper method explores the feature subset by considering the classification performance as an objective function30. It incrementally constructs the feature subset by iteratively adding and removing features to optimize the objective function and achieve the best classification performance.

Feature Selection with Adaptive Structure Learning (FSASL): is an unsupervised learning approach that seeks to identify the most relevant features while preserving the intrinsic structure of the data. As such, it simultaneously conducts feature selection and data structure learning38. FSASL leverages a matrix of Euclidean Distance induced probabilistic neighborhood for global manifold and induced Laplacian for local manifold to achieve this objective.

Dependence Guided Unsupervised Feature Selection (DGUFS): based on a joint learning framework that uses a projection-free model with the l2,1-norm39. The model, which places heightened emphasis on geometric structure, discriminative information, and the Hilbert-Schmidt Independence Criterion, is solved using an iterative algorithm.

Local Learning Clustering Feature Selection (LLCFS): integrates feature selection into an unsupervised Local Learning Clustering (LLC) framework, employing regression trained with neighborhood information. The nearest neighbors’ selection performed using τ-weighted square Euclidean distance and a kernel learning are applied to overcome LLC limitations against irrelevant features40.

Embedded methods

These methods integrate the feature selection into the training step to build an efficient selection model without increasing the computation time by evaluating different subsets recurrently like wrapper methods30.

Feature Selection concaVe (FSV): it is a feature selection method based on concave minimization. The concave minimization aims to minimize a bilinear function on a polyhedral set of vectors. The feature selection is integrated into the training step using a linear programming technique41.

Support Vector Machine-Recursive Feature Elimination (SVM-RFE): embeds Recursive Feature Elimination within SVM classification, utilizing weight magnitude as the ranking criterion42. A good feature ranking method does not necessarily provide a good subset of features; therefore, it is interesting to evaluate the effect of removing, at each step, the one or more features with the smallest ranking values. This gradually constructs an optimized feature subset. SVM-RFE thus employs an instance of greedy backward selection.

Feature classification

We selected the following methods, widely referenced in the classification literature30, with the aim of identifying the most suitable classifiers for our data distribution and of verifying the stability of the selected features43.

Regularized linear discriminant analysis (RLDA)

This method is based on Linear Discriminant analysis that uses regularization to avoid data training failures44. Discriminant Analysis proves to be straightforward due to the relationship between the number of features and the number of samples. Specifically, for palmprint data, the number of features is less than the number of samples, while for fingerprint data, it is the opposite. Therefore, we utilize both minimal and maximal regularization to assess features’ independence and validate the competitiveness of this method compared to others.

K-Nearest neighbor (KNN)

It is a straightforward and effective supervised machine learning method that employs various distances and retains all training data while performing computations at runtime34,45. We examine the following distances: Euclidean, Cosine, Spearman, and Correlation. In addition, we vary the number of neighbors in the range [2–10] and apply weighting.
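A weighted k-NN with interchangeable distances can be sketched as follows; the inverse-distance weighting scheme is an illustrative assumption:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3, metric="cosine"):
    """Weighted k-NN: all training data is kept in memory and distances
    are computed at query time, as described above."""
    if metric == "cosine":
        a = train_X / np.linalg.norm(train_X, axis=1, keepdims=True)
        b = query / np.linalg.norm(query)
        d = 1.0 - a @ b
    else:  # Euclidean
        d = np.linalg.norm(train_X - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)          # inverse-distance weighting
    votes = {}
    for i, wi in zip(idx, w):
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + wi
    return max(votes, key=votes.get)
```

Spearman and Correlation distances would slot in the same way, each producing a vector d of query-to-template distances.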

Multi-Class support vector machine (MC-SVM)

There are different strategies for multi-class SVM, whose principle is to combine multiple binary SVMs trained using one of the following schemes: One Against One (OAO) and One Against All (OAA)46. We opt for OAO SVM, as it is more suitable for biometric identification, employing a classifier for each pair of classes rather than one per class.

Feature fusion

The concept of fusion has garnered significant attention across various research domains, including biometrics47. In the realm of biometrics, fusion techniques are motivated by the complementary nature of modalities and, conversely, the challenge posed by a lack of discriminant features48. Indeed, leveraging different modalities enables the construction of robust and adaptable biometric systems capable of mitigating the impact of feature scarcity. Nonetheless, fusion alone does not guarantee performance enhancement. The conventional approach of combining features through simple concatenation may lead to increased dimensionality, which can impede computational efficiency48. Recent research efforts have therefore focused on proposing quality-based fusion models49,50. Consequently, there arises a need to reduce feature dimensionality and prioritize the most relevant features for classification. Feature selection emerges as a crucial step in biometric fusion, offering performance stability without compromising classification efficiency. In this study, we aim to implement feature selection on fused vectors using methods outlined in Sect. 2.2, subsequently examining the impact of the resulting feature set on the classification process. Through the application of diverse selection methods, our objective is to evaluate feature ranking based on different criteria and stability metrics. Thus, we consider the impact of the applied selection on the identification and equal error rates.
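The feature-level fusion described above, concatenation followed by rank-based selection, can be sketched as follows. The per-modality z-score normalization is an assumption added for illustration; the paper does not specify a normalization step:

```python
import numpy as np

def fuse_and_select(palm_feats, finger_feats, ranking, n_keep):
    """Feature-level fusion: normalize each modality, concatenate the
    vectors, then keep the n_keep top-ranked columns of the fused vector."""
    def zscore(F):
        return (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)
    fused = np.hstack([zscore(palm_feats), zscore(finger_feats)])
    return fused[:, ranking[:n_keep]]
```

Here `ranking` would come from any of the selection methods of Sect. 2.2, and `n_keep` from the accuracy-peak detection step; varying `n_keep` trades template size against identification rate.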

Materials and methods

Our biometric process starts with the extraction of features from palmprint and fingerprint modalities utilizing Log-Gabor features for fingerprint and Zernike moments for palmprint.

The log-Gabor extraction step results in the derivation of rich features from n×n blocks, spanning six orientations and five scales, selected empirically to ensure coverage across the fingerprint spectrum. The extracted features encompass Mean Magnitude (Absolute Amplitude), Square Energy, and Square Entropy.

To extract discriminative features, we compute Zernike moments across varying orders n and repetitions m within predefined ranges. The optimal parameters are selected by analyzing within-class and between-class scatter for a subset of individuals, ensuring feature stability while avoiding overfitting. Since the method processes palmprint images in small blocks, we prioritize the lowest viable order that maintains a balance between discriminative power and computational efficiency. This approach captures essential local structures without excessive complexity. The results and chosen parameters are discussed in Experiments and Discussion.

Subsequently, we apply feature selection to the extracted features after concatenation, then assess the classification rate achieved with each selection method. To carry out classification, we employ Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN) with various distances, and Multi-Class Support Vector Machine (MC-SVM). We opt for these three methods due to their diverse assumptions regarding the tested dataset. Biometric recognition involves numerous features that are inherently difficult to evaluate individually. As a result, selecting the appropriate classifier requires a thorough examination of the feature concatenation and selection processes, which involve managing variable feature counts to align with the available number of training samples, a critical dataset parameter that significantly influences classifier accuracy.

Figure 1 depicts the entire process from feature extraction to the classification using LDA, KNN and SVM. We perform concatenation and selection of palmprint and fingerprint features. The selection is done using multiple methods cited in Sect. 2.2 to compare and rank features differently. Consequently, this enables the analysis of similarities between these methods according to their estimation of feature relevance. To do that we go through the following steps:

Fig. 1. Biometric process using selection and vector fusion.

  1. Apply ranking on the fused vector.

  2. Test the accuracy of the feature selection methods; the cited methods are analyzed to retain the top-rated ones.

  3. Detect the nearest accuracy peak reached with the fewest features and use the obtained feature counts as a reference for each method.

  4. Perform evaluation with different scenarios using the validation and test accuracies and the EER of the Linear discriminant analysis:

    1. Test ranking of pre-fused vectors of each modality.
    2. Test ranking of fused vector and compare between the feature selection methods.
    3. Combine the pre-fusion ranking and compare the stability metrics according to the proportions of features from each modality.

The ranked features are provided using the default parameters explained in Table 2 below:

Table 2.

Parameters of the used feature selection methods.

RFE: classification training of features using a wrapper.
FSV: alpha = 5 for scale computation; linear solver.
MutInfFS: based on histogram calculation (mutual information).
DGUFS: similarity matrix using Euclidean distance; regularization alpha = 0.5; regularization beta = 0.9.
ReliefF: k neighbors = 20; regression for weight acceptance.
ILFS: order of the token = 6.
CFS: based on the correlation matrix.
UDFS: gammaCandi = 10^-5; lambdaCandi = 10^-5; knnCandi = 1.
LLCFS: k = 5; beta = 10^-1; number of clusters = 2; a combination of parameters can be defined.
Laplacian: W: affinity matrix using Euclidean distance.
FSASL: least-squares loss with L1-norm regularization; number of classes = 2; lambda = 1 to update the global structure; regularization = 0.01.
MCFS: k = 5; number of eigenfunctions = 4; W: affinity matrix by Euclidean distance.

For EfficientNETV2, we initialize with ImageNet weights and fine-tune them so that the network adapts to the newly learned features. We also replace the final stage with our defined classes to perform the classification. We run the architecture in both identification and verification modes to ensure reliability. The parameters defined for our model are described in Table 3:

Table 3.

EfficientNET data, model and training parameters.

Data parameters
  Image size: 128×128, 96×96
  Number of classes: 300, 500, 140
  Number of samples: 12 per class
  Batch size: 16

Model architecture parameters
  Weights: imagenet
  Pooling: average
  Dropout rate: 0.3
  Output activation: softmax

Training parameters
  Epochs: 33
  Additional epochs: 10
  Learning rate: 1e-4 (initial)
  Gradient clipping norm: 1.0
  Loss function: categorical crossentropy
  Metrics: accuracy

Callback parameters
  ReduceLR on validation accuracy: factor = 0.1, patience = 3
  Early stopping: patience = 5, restore best weights, monitor = validation accuracy
  Model checkpoints: max validation accuracy

We define four variants of the given model classification and evaluate them in two modes:

  • Identification: we use the output of the classifier.

  • Verification: we compute cosine distance of the computed features.

The four variants are the following:

Model variant 1: uses learning rate reduction and data generation. The data generation includes transformations such as rotation, shift, zoom, shear, and horizontal flip.

Model variant 2: preserves the search for the best learning rate and works on original data only.

Model variant 3: uses a fixed stable learning rate without data generation.

Model variant 4: uses a fixed stable learning rate with data generation.

We use the first three variants for palmprint and the last one for fingerprint.
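The verification mode above (cosine distance between computed features) and its EER evaluation can be sketched as follows; the threshold-sweep EER estimator is a standard approximation, not necessarily the paper's exact procedure:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def verify(probe, enrolled, threshold):
    """Verification mode: accept the claimed identity when the embedding
    distance falls below the operating threshold."""
    return cosine_distance(probe, enrolled) < threshold

def equal_error_rate(genuine_d, impostor_d):
    """Sweep candidate thresholds; the EER is read off where the
    false-reject rate and false-accept rate are closest."""
    ts = np.sort(np.concatenate([genuine_d, impostor_d]))
    gaps = [(abs(np.mean(genuine_d >= t) - np.mean(impostor_d < t)), t) for t in ts]
    _, t = min(gaps)
    return (np.mean(genuine_d >= t) + np.mean(impostor_d < t)) / 2, t
```

Genuine pairs (same identity) and impostor pairs (different identities) supply the two distance populations; the returned threshold is the operating point used by `verify`.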

Experiments and discussion

For fingerprint experimentation, we test three features extracted with log-Gabor: Amplitude, Energy and Entropy. We apply LDA classifier, KNN with different distances and different parametric SVM models to examine which extracted features to keep for the next experimentation. Additionally, we apply fusion on these features to determine which SVM parameter to utilize in the next experiments. The experiments are conducted on the FVC2006_DB1_A52 fingerprint database, with the samples divided into validation and test sets as described hereafter.

For palmprint experimentation, we test three databases by extracting Zernike moments using parameters identified empirically, so as to obtain sufficient features that fit the number of classes and yield an optimal identification rate using LDA. The details of the three databases are provided in Table 4.

Table 4.

Description of the used databases for fingerprint and palmprint.

Fingerprint FVC2006_DB1_A52: image size 96×96; number of features 30×64 (1920); validation set 10×140; test set 2×140.
Palmprint PolyU NIR53: image size 128×128; number of features 66×16 (1056); validation set 10×300; test set 2×300.
Palmprint KTU DB154: image size 128×128; number of features 66×16 (1056); validation set 1171 (153 classes); test set 268.
Palmprint KTU DB255: image size 128×128; number of features 66×16 (1056); validation set 11,909; test set 3444.

To analyze the enhancement in between-class separation relative to the number of features (Zernike moments), we compute the percentage improvement per additional feature. We then add basic metrics to evaluate the Between-Class and Within-Class scatter matrices, using statistics and their stability to analyze the Within-Class scatter. Table 5 summarizes the obtained metrics:

Table 5.

Zernike order and repetition parameters evaluation.

[Order, Repetition] | Between-Class scatter | Number of features (Zernike moments) | Increase in scatter (Δ) | % increase in scatter | % increase per feature | Within-Class scatter statistics | Stability Std/Mean
[8,6] | 10.97 | 400 (baseline) | - | - | - | Min = 0, Max = 3.34, Std = 0.52 | 0.95
[12,12] | 12.03 | 1056 | 1.06 | 9.66% | 0.016% per feature | Min = 0, Max = 3.34, Std = 0.41 | 1.08
[16,14] | 13.07 | 1872 | 2.10 | 19.14% | 0.013% per feature | Min = 0, Max = 3.34, Std = 0.36 | 1.13
[24,22] | 18.21 | 4368 | 7.24 | 66.00% | 0.018% per feature | Min = 0, Max = 3.34, Std = 0.39 | 1.34
[40,38] | 41.69 | 12,432 | 30.72 | 280.04% | 0.025% per feature | Min = 0, Max = 15.85, Std = 0.98 | 2.39

Next, we evaluate the trade-off between Within-Class scatter and Between-Class scatter using the separation ratio (see Fig. 2). Our goal is to balance class separability and controlled within-class variability. Based on empirical analysis, the least sensitive order and repetition values fall within the ranges [12, 24] and [12, 22], respectively. This helps us eliminate the worst-case scenarios ([8,6] and [40,38]).

Fig. 2. Separation ratio for multiple order and repetition values.

To avoid overfitting, especially given the increasing number of classes and the stable sample size per class, we prioritize the minimum effective values. While adding more features could marginally improve performance (see Table 5), the gain per feature remains comparable for the three middle instances. Thus, instead of pursuing exhaustive feature expansion, we focus on the most discriminative yet compact feature set (order = 12, repetition = 12). This approach ensures robustness in classification and feature selection without overwhelming the model with redundant features that risk overfitting.
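The separability criterion used in this analysis can be sketched as a trace ratio of scatter matrices. This is a simplified diagonal/trace version for illustration; the paper's exact statistics (per-feature scatter, stability Std/Mean) are richer:

```python
import numpy as np

def scatter_ratio(X, y):
    """Trace of the Between-Class scatter over the trace of the
    Within-Class scatter: higher means better class separation."""
    mean_all = X.mean(axis=0)
    Sb = Sw = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        Sb += len(Xc) * np.sum((Xc.mean(axis=0) - mean_all) ** 2)   # between-class
        Sw += np.sum((Xc - Xc.mean(axis=0)) ** 2)                   # within-class
    return Sb / (Sw + 1e-12)
```

Evaluating this ratio for each candidate (order, repetition) pair reproduces the kind of comparison behind Fig. 2: well-separated feature sets score much higher than ones where classes overlap.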

We select twelve as the maximum value for both the order and repetition of the Zernike moments, based on the time evaluation study presented in51 and confirmed by our own evaluation, as illustrated in Fig. 3. We aim primarily to strike a balance between the number of features and computational time. We then compute the absolute normalized amplitude for sixteen blocks.
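A single Zernike moment of a palmprint block can be sketched as below. This is a minimal, direct-summation implementation for illustration (the paper relies on the fast computation of ref. 51; per block, moments for all valid (n, m) pairs up to the chosen order would be concatenated across the sixteen blocks):

```python
import numpy as np
from math import factorial

def zernike_moment(block, n, m):
    """|A_nm| of a square grayscale block mapped onto the unit disk.
    n is the order, m the repetition; requires |m| <= n and n - |m| even."""
    assert abs(m) <= n and (n - abs(m)) % 2 == 0
    N = block.shape[0]
    y, x = np.mgrid[-1:1:complex(0, N), -1:1:complex(0, N)]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = r <= 1.0
    # Radial polynomial R_nm(r)
    R = np.zeros_like(r)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R += c * r ** (n - 2 * s)
    V = R * np.exp(-1j * m * theta)   # conjugated basis function
    dA = (2.0 / (N - 1)) ** 2         # pixel area in normalized coordinates
    A = (n + 1) / np.pi * np.sum(block[mask] * V[mask]) * dA
    return abs(A)

# Sanity check on a constant block: |A_00| is close to 1 and |A_11| close to 0
block = np.ones((64, 64))
a00 = zernike_moment(block, 0, 0)
a11 = zernike_moment(block, 1, 1)
```

The sanity check exploits the orthogonality of the Zernike basis on the unit disk: a constant image projects almost entirely onto the (0, 0) mode.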

Fig. 3.

Fig. 3

Impact of order and repetition on feature count, execution time, and Between-Class scatter.

For the log-Gabor filter, we adopt the recommended values for scale and orientation. The selected scale ensures robustness against noise, while the orientation parameter covers six distinct angles to capture multi-directional features effectively. Table 6 summarizes our chosen parameters for both Zernike moments and the log-Gabor filter:

Table 6.

Zernike and Log-Gabor parameters.

| Method | Parameter | Justification |
|---|---|---|
| Zernike moments for palmprint | Order = 12 | Based on findings in51; balances feature richness and computational cost |
| | Repetition = 12 | Matched to the order to maintain symmetry and balance, as per51 |
| | Number of blocks = 16 | Empirically found to offer good spatial resolution without excessive features |
| Log-Gabor for fingerprint | Scale = 5 | Local minutiae; noise tolerance |
| | Orientation = 6 | Ridge flow; coverage of major ridge angles |
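As an illustration of the fingerprint pipeline, the sketch below builds a frequency-domain log-Gabor bank with 5 scales and 6 orientations and derives the three measures (amplitude, energy, entropy) from each filter response. The center frequencies and bandwidth (`f0`, `sigma_f`) are illustrative defaults, not necessarily the paper's exact settings:

```python
import numpy as np

def log_gabor_bank(size, n_scales=5, n_orients=6, f0=0.25, sigma_f=0.55):
    """Frequency-domain log-Gabor filters (illustrative parameter values)."""
    u = np.fft.fftfreq(size)
    U, V = np.meshgrid(u, u)
    radius = np.hypot(U, V)
    radius[0, 0] = 1.0                      # avoid log(0) at DC
    angle = np.arctan2(V, U)
    filters = []
    for s in range(n_scales):
        fs = f0 / (2.0 ** s)                # center frequency halves each scale
        radial = np.exp(-(np.log(radius / fs) ** 2) / (2 * np.log(sigma_f) ** 2))
        radial[0, 0] = 0.0                  # zero DC response
        for o in range(n_orients):
            theta0 = o * np.pi / n_orients
            dtheta = np.arctan2(np.sin(angle - theta0), np.cos(angle - theta0))
            angular = np.exp(-(dtheta ** 2) / (2 * (np.pi / n_orients / 1.5) ** 2))
            filters.append(radial * angular)
    return filters

def gabor_features(img, filters):
    """Amplitude, energy, and entropy of each filter response magnitude."""
    F = np.fft.fft2(img)
    feats = []
    for h in filters:
        resp = np.abs(np.fft.ifft2(F * h))
        amp = resp.mean()
        energy = np.sum(resp ** 2)
        p = resp.ravel() / (resp.sum() + 1e-12)
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats.extend([amp, energy, entropy])
    return np.array(feats)

bank = log_gabor_bank(64)
feats = gabor_features(np.random.default_rng(0).normal(size=(64, 64)), bank)
```

With 5 scales, 6 orientations, and 3 measures, each block yields a 90-dimensional vector in this sketch.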

Given the considerable number of classes and the limited number of samples per class, we perform 5-fold cross-validation on 80% of the data to obtain the trained model. We then test the model on the remaining samples of each class.
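The evaluation protocol above can be sketched with scikit-learn; the data here is a hypothetical synthetic stand-in for the real galleries (the real sets have far more classes):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical stand-in data: 50 classes, 10 samples each, 40 features
rng = np.random.default_rng(0)
n_classes, per_class = 50, 10
y = np.repeat(np.arange(n_classes), per_class)
means = rng.normal(scale=3.0, size=(n_classes, 40))
X = means[y] + rng.normal(size=(len(y), 40))

# 80/20 stratified hold-out, then 5-fold cross-validation on the 80% portion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = KNeighborsClassifier(n_neighbors=1)
cv_scores = cross_val_score(clf, X_tr, y_tr, cv=cv)
test_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
```

Stratification matters here: with few samples per class, an unstratified split could leave some classes entirely absent from a fold.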

Fingerprint features classification

Figure 4 illustrates three pairwise combinations of the three features: Amplitude, Energy, and Entropy. Classification yields identification rates ranging between 60% and 91%. Among the features, Amplitude yields the highest identification rates. Furthermore, combining Amplitude with Energy and Entropy maintains performance while bolstering weaker classifiers, such as LDA with minimal regularization, as demonstrated in Fig. 4 (a) and (b). Minimal regularization yields inferior performance compared to a diagonal covariance assumption, suggesting that the features are largely independent and more predictive under regularization. The Amplitude and Energy features exhibit similar identification rates and outperform the Entropy features. Nevertheless, all three combinations produce a consistent range of rates, indicating that the chosen classifiers perform well in the presence of predictive features.
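The effect of the LDA regularization level can be probed with scikit-learn's shrinkage parameter. This is a minimal sketch on hypothetical synthetic data, not the paper's exact setup: a small shrinkage value stands in for "minimal regularization" and full shrinkage (toward a scaled identity covariance) approximates the diagonal-covariance assumption.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Hypothetical data with independent features and n_features close to n_samples,
# the regime where the shrinkage level matters most
rng = np.random.default_rng(1)
n_classes, per_class, n_feat = 30, 8, 200
y = np.repeat(np.arange(n_classes), per_class)
X = (rng.normal(scale=0.6, size=(n_classes, n_feat))[y]
     + rng.normal(size=(n_classes * per_class, n_feat)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=1)
accs = {}
for shrink in (1e-3, 1.0):  # near-minimal regularization vs fully shrunk covariance
    lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=shrink)
    accs[shrink] = lda.fit(X_tr, y_tr).score(X_te, y_te)
```

Comparing the two accuracies on data with truly independent features reproduces the tendency discussed above: heavy shrinkage is the safer choice when features outnumber samples.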

Fig. 4.

Fig. 4

Combination of three fingerprint features and classification using LDA, KNN and SVM, (a) Combination of Amplitude and Energy, (b) Combination of Amplitude and Entropy, (c) Combination of Energy and Entropy.

Figure 5 compares the tested classifiers: LDA, KNN, and MC-SVM. In Fig. 5(a), the identification rate progresses with the number of blocks, while Fig. 5(b) illustrates the difference between validation and test rates for the SVM variants. KNN consistently outperforms SVM and LDA. Identification rates for KNN and LDA (with maximal regularization) increase with the number of blocks, which is not observed for the SVM variants and LDA with minimal regularization; this can be attributed to the limited number of samples per class. While Linear SVM exhibits slightly lower performance than the other methods in Fig. 5(a), the difference between validation and test rates in Fig. 5(b) shows that Linear SVM remains steady around zero. Moreover, the disparity between the four SVM methods is insignificant, which can be attributed to the number of features exceeding the number of samples. Consequently, Linear SVM effectively separates the limited samples per class and mitigates overfitting, particularly with maximal regularization, where independent predictors are crucial for small datasets.

Fig. 5.

Fig. 5

(a) Classification rates of amplitude features using LDA, KNN and SVM classifiers, (b) Difference between validation and test rates for SVM variants.

Palmprint features classification

In the palmprint experiments, 1056 features were extracted using Zernike Moments from the three databases used. The identification rates achieved with LDA, KNN, and SVM classifiers are depicted in Fig. 6 (a).

Fig. 6.

Fig. 6

(a) Classification rates of palmprint features of three palmprint databases, (b) Classification rates of palmprint features of the multispectral PolyU database.

The PolyU ROI database comprises multispectral images collected under various illuminations, including NIR, Blue, Green, and Red, providing rich and discriminative information. Figure 6 (b) illustrates the results obtained for all available illuminations. Notably, the PolyU database yields the highest identification rates due to its image richness and efficient extraction method. Consequently, we utilize this database for the subsequent fusion experiments, combining it with fingerprint features that exhibit lower rates. This enables us to examine the fusion properties in scenarios with dissimilar classification rates.

EfficientNET extraction and classification

We evaluate the training process using three configurations for palmprint recognition and one for fingerprint recognition. Figure 7 illustrates the accuracy and loss curves during this evaluation. For Model 1, data generation introduces instability in the first 10 epochs, but adjusting the learning rate helps the model gradually learn effective parameters, ensuring convergence. Model 2 demonstrates the challenges of limited data while maintaining a reduced learning rate: the model fails to sufficiently distinguish between samples and terminates prematurely before reaching full convergence. In contrast, Model 3 exhibits greater stability with a smaller learning rate (LR = 1e-5), achieving slower but steady convergence; notably, its validation loss continues to decrease even after extended training. Finally, Model 4, applied to fingerprint recognition, achieves a more stable loss after 40+ epochs, indicating sufficient optimization. All models exhibit smooth evolution in both training and validation metrics, with loss and accuracy progressing consistently in the same direction. This parallel improvement indicates sufficient learning without divergence, confirming that none of the configurations suffer from overfitting.

Fig. 7.

Fig. 7

Training accuracy and loss for (a) Palmprint NIR model 1, (b) NIR model 2, (c) NIR model 3, (d) Fingerprint.

Figure 8 presents the evaluation metrics and score distributions for Models 3 and 4. Model 3 was selected for its stable training behavior. The palmprint features show good separation between genuine and impostor scores: the score distributions exhibit very little overlap, consistent with the low EER. Additionally, both distributions show limited spread, allowing a wider and more flexible threshold range. In contrast, the fingerprint features display greater overlap and a wider spread of genuine scores, which indicates the need for a more conservative threshold and a narrower decision interval.
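The EER reported throughout can be computed directly from the two score distributions. Below is a minimal sketch (our own helper, assuming similarity scores where higher means more genuine) that sweeps the observed scores as thresholds and returns the operating point where FAR and FRR meet:

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Equal error rate from genuine and impostor similarity scores
    (higher score = more likely genuine)."""
    genuine = np.asarray(genuine, float)
    impostor = np.asarray(impostor, float)
    best_gap, eer = np.inf, 0.5
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < t)    # genuine attempts rejected
        far = np.mean(impostor >= t)  # impostor attempts accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

rng = np.random.default_rng(0)
eer_sep = compute_eer(np.full(50, 0.9), np.full(50, 0.1))                 # well separated
eer_olap = compute_eer(rng.normal(1, 1, 2000), rng.normal(-1, 1, 2000))   # overlapping
```

Well-separated distributions give an EER of zero, while the overlapping pair lands near the theoretical 15.9% for two unit-variance Gaussians two standard deviations apart.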

Fig. 8.

Fig. 8

ROC, FAR/FRR, score distributions and DET curves for (a) Palmprint model 3, (b) Fingerprint.

Table 7 summarizes the performance of the four models, highlighting the superiority of palmprint images and features over their fingerprint counterparts. This result is expected, as palmprints offer richer texture information and support more stable and robust training.

Table 7.

Summary of accuracies and EER for the four models.

| Modality | Model | Validation accuracy | Test accuracy | Test loss | EER (300 classes) | EER (500 classes) |
|---|---|---|---|---|---|---|
| Palmprint | Model 1: data generation, LR reduction | 99.3% | 99.72% | 0.781 | 0.16% | 1.77% |
| Palmprint | Model 2: original images, LR reduction | 100% | 99.67% | 0.057 | 0.30% | 1.17% |
| Palmprint | Model 3: original images, optimal LR | 98.67% | 98.17% | 0.5643 | 0.33% | 1.56% |
| Fingerprint | Siamese-SIFT6 | / | / | / | 4% | / |
| Fingerprint | Model 4: data generation, LR reduction | 92.86% | 89.64% | 0.3962 | 2.86% | / |

Feature fusion and selection

In the fusion experiments, we first concatenate the two discriminative vectors and then apply selection on the resulting vector. Figure 9 shows the identification rates for the palmprint and fingerprint vectors separately and for the fused vector before and after RFE feature selection. The three graphs illustrate the results for the three measures: Amplitude, Entropy and Energy. Fusion improves fingerprint accuracy in most cases by adding the more discriminative features of the palmprint vector. The graphs for the palmprint vector and for the selected post-fusion vector are quite similar. Notably, for the worst fingerprint graph, corresponding to the Entropy features, the enhancement is the strongest: the identification rates exceed the palmprint ones. The RFE selection method retains 70.9% palmprint features and 29.09% fingerprint features.
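The concatenation-then-selection step can be sketched with scikit-learn's RFE. The data below is a small synthetic stand-in for the two modality vectors (the paper's fused vector has 2976 features), with the palmprint columns deliberately cleaner than the fingerprint ones:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Synthetic stand-ins: 20 classes, 10 samples each; 60 clean "palmprint"
# columns and 40 noisier "fingerprint" columns
rng = np.random.default_rng(2)
n_classes, per_class = 20, 10
y = np.repeat(np.arange(n_classes), per_class)
palm = rng.normal(size=(n_classes, 60))[y] + 0.3 * rng.normal(size=(200, 60))
finger = rng.normal(size=(n_classes, 40))[y] + 3.0 * rng.normal(size=(200, 40))
fused = np.hstack([palm, finger])          # prior concatenation (feature-level fusion)

selector = RFE(LinearSVC(max_iter=10000), n_features_to_select=50, step=5)
selector.fit(fused, y)
mask = selector.support_                   # boolean mask over the 100 fused columns
palm_share = mask[:60].sum() / mask.sum()  # fraction of kept features from palmprint
```

On data like this, RFE tends to keep mostly the cleaner modality, mirroring the palmprint-heavy proportions reported above.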

Fig. 9.

Fig. 9

Comparison of classification rates before and after RFE selection of palmprint features fused with fingerprint (a) Amplitude, (b) Entropy, (c) Energy.

As we evaluate multiple feature selection methods, we conduct a comparative analysis based on the degree of similarity among the features they provide and their resulting identification rates using Linear Discriminant Analysis and Support Vector Machine. Figure 10 shows the similarity ratio of features between the evaluated feature selection methods. The methods are organized hierarchically, with those exhibiting the highest similarity ratio positioned at the bottom and those with the lowest at the top. The diagram is constructed by first linking the methods with a similarity ratio of 100%, then gradually attaching the remaining methods in order of decreasing similarity.
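The construction just described can be sketched as follows; the similarity-ratio definition (shared features over the larger subset size) is our assumption for illustration, and the method names are placeholders:

```python
from itertools import combinations

def similarity_ratio(a, b):
    """Fraction of shared features between two selected subsets."""
    a, b = set(a), set(b)
    return len(a & b) / max(len(a), len(b))

def link_by_similarity(selections):
    """selections: dict of method name -> selected feature indices.
    Returns method pairs ordered from most to least similar, mimicking
    the bottom-up construction of the diagram in Fig. 10."""
    pairs = {(m1, m2): similarity_ratio(selections[m1], selections[m2])
             for m1, m2 in combinations(selections, 2)}
    return sorted(pairs.items(), key=lambda kv: -kv[1])

# Hypothetical selections for three methods
sel = {"A": [1, 2, 3, 4], "B": [1, 2, 3, 5], "C": [7, 8, 9, 10]}
ranked = link_by_similarity(sel)
```

Methods A and B share three of four features and are linked first; C, with no overlap, attaches last.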

Fig. 10.

Fig. 10

Similarity features ratio of the used feature selection methods.

Figure 10 illustrates that distinct strategies may yield similar features due to shared criteria or underlying methodologies. For instance, Mutinffs and Dgufs employ the same criterion, resulting in similar feature proportions, while Udfs and Llcfs utilize identical projection and normalization techniques. Additionally, Mcfs and Cfs, both correlation-based methods, provide comparable proportions of fingerprint and palmprint features, with a similarity ratio of 49.73%. Interestingly, although only half of the selected palmprint features are shared, we achieve the maximum identification rate (99.64%) with the selected palmprint features, indicating that different feature subsets can yield equivalent performance levels. Figure 11 presents a comprehensive overview of the feature proportions obtained after selection from the fused vector, showing the percentage of features per modality for each method. Notably, the top-performing methods, such as Mutinffs, SVM-RFE, DGUFS, and FSV, favor palmprint features over fingerprint features. Conversely, MCFS, while achieving the same identification rate, selects a higher proportion of fingerprint features.

Fig. 11.

Fig. 11

Palmprint and fingerprint features proportions obtained by the tested feature selection methods.

Figure 12 illustrates that selection applied solely to the fingerprint features reduces the identification rate. However, when these selected fingerprint features are fused with palmprint features, the identification rate improves in proportion to the number of added palmprint features. Fusion with a negligible number of palmprint features, as provided by Laplacian and LLCFS, does not significantly enhance the identification rate; the incremental enhancement correlates positively with the number of palmprint features. In particular, the CFS method offers a balanced selection of features, attaining the highest identification rate (99.29%) with fusion even with the lowest number of palmprint features (590). The efficacy of CFS underscores its ability to preserve non-correlated features while maintaining strong correlation with class membership. Conversely, the degradation observed with other methods suggests that restricting the selection to a subset of predictive features may inadvertently eliminate features that contribute to multiple small areas within the studied space, thereby impairing overall performance.

Fig. 12.

Fig. 12

Identification rates obtained with the applied feature selection methods.

Figure 12 (a) demonstrates this degradation by displaying the identification rate for four methods (Mutinffs, RFE, FSV, and DGUFS) as the selected features are compared with their extended counterparts, gradually incorporating top-rated features. The Mutinffs and DGUFS methods, despite their different approaches, both leverage the geometrical structure of instances to extract discriminative features. While this strategy may effectively identify common features based on the manifold structure, it can inadvertently overlook features crucial for distinguishing different areas within the instances; this issue becomes more pronounced as the number of areas left uncovered by the selected features grows. Additionally, some methods, such as Laplacian, UDFS, and LLCFS, exhibit a decrease in identification rate after fusion with palmprint features, despite the latter offering a high identification rate independently. These methods prioritize the selection of common features, emphasizing shared characteristics across instances. UDFS and LLCFS leverage local information in an unsupervised setting through various distance metrics and manifold learning techniques, while Laplacian and UDFS are filter methods that capture the local geometric structure using distinct criteria. In summary, the filter methods that effectively select discriminative features, preserving high identification rates after fusion, rely on correlation analysis. Overall, the top-performing methods utilize well-constructed manifold information analysis; whether or not they succeed in fingerprint feature selection, they preserve the performance gains by fusing minimal features from both modalities.

The classification rates in Table 8 indicate that our proposed method exhibits promising performance in biometric identification compared to existing approaches. Across the spectral bands, including Blue, Red, Green, and NIR, our method demonstrates robustness, achieving recognition rates ranging from 0.914 to 1.000 without multi-instance fusion. Particularly noteworthy is its performance on the KTU DB1 and KTU DB2 datasets, where it achieves 0.914 and 0.9445, respectively. These results highlight the effectiveness and versatility of the Zernike moments method in handling a higher number of classes.

Table 8.

Comparison of the Zernike moments with existing and similar approaches on three databases.

| Reference | Method | MS-PolyU Blue | MS-PolyU Red | MS-PolyU Green | MS-PolyU NIR | PolyU | KTU DB1 | KTU DB2 |
|---|---|---|---|---|---|---|---|---|
| (Bounneche et al. 2016)56 | Log-Gabor | 0.9923 | 0.9930 | 0.9910 | 0.9933 | / | / | / |
| (Jinrong 2013)57 | PCA-LDA | 0.9703 | 0.9857 | 0.9643 | 0.9933 | / | / | / |
| (Gumaei et al. 2018)58 | HOG + SGF + AE + RELM | 0.9876 | 0.9960 | 0.9903 | 0.9920 | / | / | / |
| (Karar and Parekh 2012)59 | Zernike moments with four classes | / | / | / | / | 0.9125 | / | / |
| (Zhao and Zhang 2020)60 | DDR | 0.9987 | 1 | 0.9980 | 0.9953 | / | / | / |
| (Poonia and Ajmera 2023)61 | ANN on angular and spatial patterns | / | / | / | / | 0.9852 | / | / |
| Proposal | Zernike moments (300 classes) | 0.9967 | 1 | 0.9983 | 0.9967 | / | 0.914 | 0.9445 |

We compare our proposal to state-of-the-art works. Table 9 presents the identification rates achieved by various methods on different databases. Our proposed method, which combines feature fusion and selection of Log-Gabor and Zernike moments features, achieves an identification rate of 0.9929 on the MS-PolyU and FVC2006-DB1_A databases, which contain 300 and 140 users, respectively. In comparison, methods such as Gabor + PCA wavelet fusion and score fusion achieve lower identification rates of 0.9882 and 0.9372, respectively, on databases with fewer users. Our proposal also outperforms CNN-based feature fusion, which attains a slightly higher rate of 0.9446 on a database with 358 users, as well as full identity recognition using PCA, which achieves 0.8540 on its authors' own database. These results underscore the effectiveness of our proposed method in achieving high identification rates compared to existing methods that consider fewer or a comparable number of classes.

Table 9.

Proposal compared to existing and similar methods.

| Reference | Method | Database | Identification rate |
|---|---|---|---|
| (Gayathri and Ramamoorthy 2012)62 | Gabor + PCA wavelet fusion | PolyU + FSV (40 users) | 0.9882 |
| (Genovese et al. 2019)47 | Score fusion | REgim Sfax Tunisia (358 users) | 0.9372 |
| (Genovese et al. 2019)47 | Feature fusion CNN | REgim Sfax Tunisia (358 users) | 0.9446 |
| (Fei et al. 2023)19 | Full identity recognition using PCA | Their own database | 0.8540 |
| SelZG-Fusion | Feature fusion and selection, Log-Gabor + Zernike moments | MS-PolyU (300 users) + FVC2006-DB1_B (140 users) | 0.9929 |
| Effi-Fusion | EfficientNETV2S | / | 0.9964 |

Effi-Fusion demonstrates strong performance in both classification accuracy and verification metrics with a small EER = 0.07%. However, the current verification evaluation includes only the classes used during training (140 classes). To more effectively assess the system’s robustness, it would be beneficial to incorporate additional classes representing new users or impostors, as was done in the initial SelZG approach. The observed spread in genuine fingerprint scores indicates that the fusion captures and retains a wide range of information and characteristics from the original features (see Fig. 13).

Fig. 13.

Fig. 13

ROC, FAR/FRR, score distributions and DET curves for fused palmprint and fingerprint features.

We focus on a subset of diverse feature selection methods that aim to maintain accuracy. In our analysis, we re-evaluate the provided rankings by progressively integrating features, using only the LDA classifier. These methods are the most balanced and follow different strategies, since they belong to different families. Figure 14 depicts their test and validation accuracies for different subsets of features. This testing identifies the smallest number of features that prevents the validation accuracy from dropping, mirroring the symmetric point at which accuracy first rises. While the validation accuracy declines, the test accuracy is affected only later, after more features are included. This is why we further analyze the stability of the applied feature selection methods. We use the per-method minimum number of features to apply the selection, instead of a single constant value common to all methods.

Fig. 14.

Fig. 14

Accuracies of the selected feature selection methods.

Table 10 compares the rankings produced by each method, at its identified number of features, in terms of EER. The features selected by MCFS achieve the lowest EER, matching the EER of the initial feature set without selection. Furthermore, we report the similarity between the rankings using the Kuncheva similarity indices.

Table 10.

EER and Kuncheva stability for the feature selection methods subgroup.

| Method | Number of features | EER (%) | (1) | (2) | (3) | (4) | (5) | (6) | (7) |
|---|---|---|---|---|---|---|---|---|---|
| RFE (1) | 875 | 2.11 | 1 | 0.06 | 1 | 0.97 | 0.14 | 1 | 0.26 |
| CFS (2) | 840 | 2.11 | 0.06 | 1 | 0.06 | 0.057 | −0.004 | 0.06 | −0.004 |
| DGUFS (3) | 875 | 2.12 | 1 | 0.06 | 1 | 0.97 | 0.14 | 1 | 0.26 |
| FSV (4) | 790 | 3.5 | 0.97 | 0.057 | 0.97 | 1 | 0.135 | 0.97 | 0.27 |
| MCFS (5) | 840 | 0.71 | 0.14 | −0.004 | 0.14 | 0.135 | 1 | 0.14 | 0.009 |
| Muttinffs (6) | 870 | 2.11 | 1 | 0.06 | 1 | 0.97 | 0.14 | 1 | 0.26 |
| Relieff (7) | 900 | 1.41 | 0.26 | −0.004 | 0.26 | 0.27 | 0.09 | 0.26 | 1 |
| No selection | 2976 | 0.71 | / | / | / | / | / | / | / |

Columns (1) through (7) report the Kuncheva stability metric between the correspondingly numbered methods.
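The Kuncheva index used in Table 10 corrects the raw overlap between two selected subsets for the overlap expected by chance. A minimal sketch follows; the classical index assumes equal-size subsets, and we use one common generalization (replacing k² with |A||B| in the chance term) so subsets of slightly different sizes can also be compared:

```python
from itertools import combinations

def kuncheva_index(a, b, n_features):
    """Kuncheva consistency index between two feature subsets drawn from
    n_features candidates; 1 = identical, <= 0 = chance-level overlap or worse.
    (Generalized chance term |a|*|b|/n; undefined when a subset is the full set.)"""
    a, b = set(a), set(b)
    expected = len(a) * len(b) / n_features
    return (len(a & b) - expected) / (min(len(a), len(b)) - expected)

def kuncheva_stability(subsets, n_features):
    """Average pairwise index over subsets selected on resampled data."""
    pairs = list(combinations(subsets, 2))
    return sum(kuncheva_index(a, b, n_features) for a, b in pairs) / len(pairs)

# Identical subsets score 1; disjoint subsets score below 0
hi = kuncheva_index([0, 1, 2], [0, 1, 2], 10)
lo = kuncheva_index([0, 1, 2], [3, 4, 5], 10)
avg = kuncheva_stability([[0, 1, 2], [0, 1, 2], [0, 1, 3]], 10)
```

This is why near-identical rankings (e.g., RFE, DGUFS, and Muttinffs in Table 10) score 1 against each other, while weakly overlapping ones can even dip slightly below zero.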

Hence, the best selection is provided by the MCFS method, which uses fewer features while achieving the same performance as the full feature set. The ROC and DET curves in Fig. 15 show the gap between the different methods.

Fig. 15.

Fig. 15

ROC and DET curves for the top-rated feature selection methods excluding the overlapping ones.

On one hand, we analyze the stability of the feature ranking applied before fusion. The selection methods are more stable on the palmprint features than on the fingerprint ones. Stability is affected by the modality proportions and their performance quality, as we obtain better accuracies on the palmprint samples used alone. On the other hand, we apply each method to subgroups of samples and compute the Kuncheva stability metric illustrated in Fig. 16. The best selection methods are among the most stable, even when dealing with only two samples per class.

Fig. 16.

Fig. 16

Different stability metrics across variable proportion of palmprint-fingerprint features and subgroups of training samples.

Conclusion

This paper presents a novel fusion approach employing feature selection through various established methods from the literature, together with an evaluation of their stability. The proposed method integrates palmprint and fingerprint features extracted using efficient techniques: Zernike moments and the log-Gabor filter applied to textural images. The study investigates the impact of fusion and feature selection on identification rates, employing different feature classification methods that account for the number of extracted features and classes. The results offer insights into classification efficacy based on the chosen extractor, the applied classifier and the relevance of the features, and demonstrate the practical utility of feature selection versus deep learners in maintaining high classification rates. The effectiveness of selection varies depending on the criterion used and the efficiency of the manifold structure analysis. Several methods retain the fusion improvements with a reduced feature set, mitigating the curse of dimensionality. The study also emphasizes the relative nature of the methods' ranking accuracy and stability.

Abbreviations

CFS: Correlation-based Feature Selection
CNN: Convolutional Neural Network
DGUFS: Dependence Guided Unsupervised Feature Selection
EER: Equal Error Rate
FSV: Feature Selection concaVe
FSASL: Feature Selection with Adaptive Structure Learning
GBO: Gannet Bald Optimization
HOG: Histogram of Oriented Gradients
ILFS: Infinite Latent Feature Selection
KNN: K Nearest Neighbors
LBP: Local Binary Pattern
LDA: Linear Discriminant Analysis
LLC: Local Learning Clustering
LLCFS: Local Learning Clustering Feature Selection
LR: Learning Rate
LSTM: Long Short-Term Memory
MBConv: Mobile Inverted Bottleneck Convolution
MC: Multi-Class
MCFS: Multi-Class Feature Selection
Muttinffs: MUTual INFormation Feature Selection
NAS: Neural Architecture Search
NN: Neural Network
OAA: One Against All
OAO: One Against One
RFE: Recursive Feature Elimination
ROI: Region Of Interest
SIFT: Scale-Invariant Feature Transform
SVM: Support Vector Machine
UDFS: Unsupervised Discriminative Feature Selection

Author contributions

S. Artabaz wrote the manuscript; L. Sliman completed and reviewed it.

Data availability

All datasets used in this study are covered in Experiments and Methods, and corresponding public access websites are provided in the references. Our source code is available at https://github.com/sartabaz/biometric-fusion.git.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Yang, W. et al. Biometrics for internet-of-things security: A review. Sens. (Basel). 21 (18), 6163 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Ghilom, M. & Latifi, S. The role of machine learning in advanced biometric systems. Electronics13 (13), 2667 (2024). [Google Scholar]
  • 3.Siyuan, M., Qintai, H., Zhao Siyuan, S., Lin, C. & SYEnet, J. Simple yet effective network for palmprint recognition. Information Sciences669, 120518 (2024).
  • 4.Fan, R. & Han, X. Deep palmprint recognition algorithm based on self-supervised learning and uncertainty loss. SIViP18, 4661–4673 (2024). [Google Scholar]
  • 5.Abdulnabee Sameer, N. & Nema, B. M. LSTM and CNN hybrid model for enhanced fingerprint recognition. Emerging Trends Drugs Addictions Health5, 100174 (2025).
  • 6.Chen, Z. S. et al. J.-C. A hybrid deep learning and feature descriptor approach for partial fingerprint recognition. Electronics14 (9), 1807 (2025). [Google Scholar]
  • 7.Bhimrao Mokal, A. & Gupta, B. A deep learning approach for effective classification of fingerprint patterns and human behavior analysis. Pattern Recogn. 163, C, (2025).
  • 8.Dwivedi, R. & Dey, S. A novel hybrid score level and decision level fusion scheme for cancelable multi-biometric verification. Appl. Intell.49 (4), 1016 (2019). [Google Scholar]
  • 9.El-Latif, A. A. A., Hossain, M. S. & Wang, N. Score level multibiometrics fusion approach for healthcare. Cluster Comput.22 (Suppl 1), 2425–2436 (2019). [Google Scholar]
  • 10.Walia, G. S., Singh, T., Singh, K. & Verma, N. Robust multimodal biometric system based on optimal score level fusion model. Expert Syst. Appl.116, 364–376 (2019). [Google Scholar]
  • 11.Kamlaskar, C., Deshmukh, S., Gosavi, S. & Abhyankar, A. Novel canonical correlation analysis based feature level fusion algorithm for multimodal recognition in biometric sensor systems. Sensor Letters17, 75–86 (2019).
  • 12.Viswanatham, P., Venkata Krishna, P., Saritha, V. & Obaidat, M. S. Multimodal biometric invariant fusion techniques. In M. Obaidat, I. Traore, & I. Woungang (Eds.), Biometric-Based Physical and Cybersecurity Systems. Springer. 321–336 (2019).
  • 13.Yang, W., Wang, S., Zheng, G. & Valli, C. Impact of feature proportion on matching performance of multi-biometric systems. ICT Express. 5, 37–40 (2019). [Google Scholar]
  • 14.Santos, M. S. et al. A unifying view of class overlap and imbalance: key concepts, multi-view panorama, and open avenues for research. Information Fusion.89, 228–253 (2023).
  • 15.Tan, M. & Le, Q. EfficientNetV2: Smaller models and faster training. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research. (2021).
  • 16.Saha, A., Saha, J. & Sen, B. An expert multi-modal person authentication system based on feature level fusion of iris and retina recognition. In 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE). Cox’sBazar, Bangladesh, 1–5 (2019).
  • 17.Kabir, W., Ahmad, M. O. & Swamy, M. N. S. A multi-biometric system based on feature and score level fusions. IEEE Access.7, 59437–59450 (2019). [Google Scholar]
  • 18.Amrouni, N., Benzaoui, A. & Zeroual, A. Palmprint recognition: extensive exploration of databases, methodologies, comparative assessment, and future directions. Appl. Sci.14 (1), 153 (2024). [Google Scholar]
  • 19.Fei, F., Jia, Z., Gu, C., Yang, R. & Wu, C. Biometric identification based on PCA for palmprint feature extraction. In 2023 IEEE 13th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). Qinhuangdao, China, 475–479 (2023).
  • 20.Alsubari, A., Lonkhande, P. & Ramteke, R. J. Fuzzy-based classification for fusion of palmprint and iris biometric traits. In S. Bhattacharyya, S. Pal, I. Pan, & A. Das (Eds.), Recent Trends in Signal and Image Processing. Springer. 113–123 (2019).
  • 21.Dai, J. F. & Zhou, J. Multifeature-based high-resolution palmprint recognition. IEEE Trans. Pattern Anal. Mach. Intell.33 (9), 945–957 (2011). [DOI] [PubMed] [Google Scholar]
  • 22.Zhong, D., Du, X. & Zhong, K. Decade progress of palmprint recognition: A brief survey. Neurocomputing328, 16–28 (2019). [Google Scholar]
  • 23.Garg, B., Chaudhary, A., Mendiratta, K. & Kumar, V. Fingerprint recognition using Gabor Filter. In 2014 International Conference on Computing for Sustainable Global Development (INDIACom). New Delhi 953–958 (2014).
  • 24.Zhang, L., Zhang, L., Zhang, D. & Zhu, H. Ensemble of local and global information for finger-knuckle-print recognition. Pattern Recogn.44 (9), 1990–1998 (2011). [Google Scholar]
  • 25.Liu, X., Zhao, H., Ma, H. & Li, J. Image Matching Using Phase Congruency and Log-Gabor Filters in the SAR Images and Visible Images. In J.S. Pan, J.W. Lin, Y. Liang, S.C. Chu (Eds.), Genetic and Evolutionary Computing: ICGEC 2019. Springer. 1107, 270–278 (2019). (2020).
  • 26.Kar, A. et al. LMZMPM: local modified Zernike moment Per-Unit mass for robust human face recognition. IEEE Trans. Inf. Forensics Secur.16, 495–509 (2021). [Google Scholar]
  • 27.Chong, C. W., Raveendran, P. & Mukundan, R. A comparative analysis of algorithms for fast computation of Zernike moments. Pattern Recogn.36 (3), 731–742 (2003). [Google Scholar]
  • 28.Zhao, Z. et al. Combined kernel for fast GPU computation of Zernike moments. J. Real-Time Image Proc.18, 431–444 (2021). [Google Scholar]
  • 29.Cai, D., Zhang, C. & He, X. Unsupervised feature selection for multi-cluster data. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ‘10). New York, NY, USA: Association for Computing Machinery. 333–342 (2010).
  • 30.Chandrashekar, G. & Sahin, F. A survey on feature selection methods. Comput. Electr. Eng.40 (1), 16–28 (2014). [Google Scholar]
  • 31.Giorgio Feature Selection Library (2024). https://www.mathworks.com/matlabcentral/fileexchange/56937-feature-selection-library), MATLAB Central File Exchange. Retrieved April 24.
  • 32.Hall, M. A. Correlation-based feature selection for machine learning. (2000).
  • 33.Urbanowicz, R. J., Meeker, M., La Cava, W., Olson, R. S. & Moore, J. H. Relief-based feature selection: introduction and review. J. Biomed. Inform.85, 189–203 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.He, X., Cai, D. & Niyogi, P. Laplacian score for feature selection. In Advances in Neural Information Processing Systems (2006).
  • 35.Roffo, G., Melzi, S., Castellani, U. & Vinciarelli, A. Infinite latent feature selection: A probabilistic latent graph-based ranking approach (2017).
  • 36. Vergara, J. R. & Estévez, P. A. A review of feature selection methods based on mutual information. Neural Comput. Appl. 24 (1), 175–186 (2014).
  • 37. Yang, Y., Shen, H. T., Ma, Z., Huang, Z. & Zhou, X. ℓ2,1-norm regularized discriminative feature selection for unsupervised learning. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 1589 (2011).
  • 38. Du, L. & Shen, Y. D. Unsupervised feature selection with adaptive structure learning. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 209–218 (2015).
  • 39. Guo, J. & Zhu, W. Dependence guided unsupervised feature selection. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2232–2239 (2018).
  • 40. Zeng, H. & Cheung, Y. M. Feature selection and kernel learning for local learning-based clustering. IEEE Trans. Pattern Anal. Mach. Intell. 33 (8), 1532–1547 (2011).
  • 41. Bradley, P. S. & Mangasarian, O. L. Feature selection via concave minimization and support vector machines. In Proceedings of the Fifteenth International Conference on Machine Learning (ICML), 82–90 (1998).
  • 42. Guyon, I., Weston, J., Barnhill, S. & Vapnik, V. Gene selection for cancer classification using support vector machines. Mach. Learn. 46 (1–3), 389–422 (2002).
  • 43. Khaire, U. M. & Dhanalakshmi, R. Stability of feature selection algorithm: a review. J. King Saud Univ. Comput. Inf. Sci. 34 (4), 1060–1073 (2022).
  • 44. Guo, Y., Hastie, T. & Tibshirani, R. Regularized linear discriminant analysis and its application in microarrays. Biostatistics 8 (1), 86–100 (2007).
  • 45. Zhang, Z. Introduction to machine learning: k-nearest neighbors. Ann. Transl. Med. 4 (11), 218 (2016).
  • 46. Liu, Y., Wang, R. & Zeng, Y. S. An improvement of one-against-one method for multi-class support vector machine. In 2007 International Conference on Machine Learning and Cybernetics, 2915–2920 (2007).
  • 47. Genovese, A., Piuri, V., Scotti, F. & Vishwakarma, S. Touchless palmprint and finger texture recognition: a deep learning fusion approach. In IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 1–6 (2019).
  • 48. Ross, A. & Jain, A. Information fusion in biometrics. Pattern Recognit. Lett. 24 (13), 2115–2125 (2003).
  • 49. Huang, Z., Liu, Y., Li, X. & Li, J. An adaptive bimodal recognition framework using sparse coding for face and ear. Pattern Recognit. Lett. 53, 69–76 (2015).
  • 50. Muramatsu, D., Makihara, Y. & Yagi, Y. View transformation model incorporating quality measures for cross-view gait recognition. IEEE Trans. Cybern. 46 (7), 1602–1615 (2016).
  • 51. Hwang, S. K. & Kim, W. Y. A novel approach to the fast computation of Zernike moments. Pattern Recognit. 39 (11), 2065–2076 (2006).
  • 52. Cappelli, R., Ferrara, M., Franco, A. & Maltoni, D. Fingerprint verification competition 2006. Biometric Technol. Today 15 (7–8), 7–9 (2007).
  • 53. Zhang, D., Guo, Z., Lu, G., Zhang, L. & Zuo, W. An online system of multispectral palmprint verification. IEEE Trans. Instrum. Meas. 59 (2), 480–490 (2010).
  • 54. Ekinci, M. & Aykut, M. Kernel Fisher discriminant analysis of Gabor features for online palmprint verification. Turk. J. Electr. Eng. Comput. Sci. 24, 355–369 (2016).
  • 55. KTU CVPR Lab. Palmprint Database-2. http://ceng2.ktu.edu.tr/~cvpr/palmDB2.htm
  • 56. Bounneche, M. D., Boubchir, L., Bouridane, A., Nekhoul, B. & Ali-Chérif, A. Multi-spectral palmprint recognition based on oriented multiscale log-Gabor filters. Neurocomputing 205, 274–286 (2016).
  • 57. Cui, J. Multispectral fusion for palmprint recognition. Optik Int. J. Light Electron Opt. 124 (17), 3067–3071 (2013).
  • 58. Gumaei, A., Sammouda, R., Al-Salman, A. M. & Alsanad, A. An effective palmprint recognition approach for visible and multispectral sensor images. Sensors 18 (5), 1575 (2018).
  • 59. Karar, S. & Parekh, R. Palm print recognition using Zernike moments. Int. J. Comput. Appl. 55 (16), 15–19 (2012).
  • 60. Zhao, S. & Zhang, B. Deep discriminative representation for generic palmprint recognition. Pattern Recognit. 98, 107071 (2020).
  • 61. Poonia, P. & Ajmera, P. K. Robust palm-print recognition using multi-resolution texture patterns with artificial neural network. Wirel. Pers. Commun. 133, 1305–1323 (2023).
  • 62. Gayathri, R. & Ramamoorthy, P. A fingerprint and palmprint recognition approach based on multiple feature extraction. Eur. J. Sci. Res. 76 (4), 514–526 (2012).

Data Availability Statement

All datasets used in this study are described in the Experiments and methods section, and the corresponding public-access websites are provided in the references. Our source code is available at https://github.com/sartabaz/biometric-fusion.git.

