Article

Contactless Palmprint Recognition Using Binarized Statistical Image Features-Based Multiresolution Analysis

1 LIST Laboratory, University of M’Hamed Bougara Boumerdes, Avenue of Independence, Boumerdes 35000, Algeria
2 Electrical Engineering Department, University of Skikda, BP 26, El Hadaiek, Skikda 21000, Algeria
3 PIMIS Laboratory, Electronics and Telecommunications Department, Université du 8 Mai 1945 Guelma, Guelma 24000, Algeria
4 LIMPAF Laboratory, Department of Computer Science, University of Bouira, Bouira 10000, Algeria
5 Higher School of Computer Science and Technology (ESTIN), Bejaia 06300, Algeria
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9814; https://doi.org/10.3390/s22249814
Submission received: 20 October 2022 / Revised: 3 December 2022 / Accepted: 12 December 2022 / Published: 14 December 2022
(This article belongs to the Section Sensing and Imaging)

Abstract: In recent years, palmprint recognition has gained increased interest and has been a focus of significant research as a trustworthy personal identification method. The performance of any palmprint recognition system mainly depends on the effectiveness of the utilized feature extraction approach. In this paper, we propose a three-step approach to address the challenging problem of contactless palmprint recognition: (1) a pre-processing step, based on median filtering and contrast limited adaptive histogram equalization (CLAHE), is used to remove potential noise and equalize the images’ lighting; (2) a multiresolution analysis is applied to extract binarized statistical image features (BSIF) at several discrete wavelet transform (DWT) resolutions; (3) a classification stage is performed to categorize the extracted features into the corresponding class using a K-nearest neighbors (K-NN)-based classifier. The feature extraction strategy is the main contribution of this work; we used the multiresolution analysis to extract the pertinent information from several image resolutions as an alternative to the classical method based on multi-patch decomposition. The proposed approach was thoroughly assessed using two contactless palmprint databases: the Indian Institute of Technology—Delhi (IITD) and the Chinese Academy of Sciences Institute of Automation (CASIA). The results are impressive compared to current state-of-the-art methods: the Rank-1 recognition rates are 98.77% and 98.10% for the IITD and CASIA databases, respectively.

1. Introduction

Biometric recognition methods are now the most widely used means of identifying or verifying people. Biometrics has supplanted traditional authentication techniques such as passwords or badges, which may be easily forgotten or lost [1]. Biometric traits, in contrast, cannot be stolen or forgotten since they are unique to each individual [2]. Biometric modalities can be classified into behavioral, chemical, and physical traits [3].
The palmprint as a biometric modality has recently gained more popularity than other biometric characteristics; it is considered among the most effective tools for strengthening the security of personal authentication [4,5]. The palmprint is the inner surface of the hand, which carries unique characteristics such as principal lines, wrinkles, ridges, and textures whose patterns differ even between twins [6]. Palmprint features remain stable and permanent throughout human life [7]. Biometric recognition based on palmprints is regarded as among the most practical and reliable authentication approaches compared to other physiological traits [8]. It has several advantages: low cost, ease of access, acceptability by users, and high distinctiveness [9].
Images of palmprints can be acquired in contact-based and contactless modes. In a contact-based acquisition technique, the subject must place their hand on a sensor fitted with pegs that ensure the hand is correctly positioned before the images are taken. In contrast, contactless acquisition is possible with commercial off-the-shelf cameras and under unconstrained conditions. The latter mode provides several advantages over contact-based methods, such as increased user-friendliness and greater confidentiality, and it avoids hygiene risks [10].
The feature extraction stage is the most critical and delicate process in a biometric recognition system. If the algorithm used in this phase does not perform adequately, the whole system is automatically flawed; this is why the feature extraction stage is the focus of the present work.
Local texture descriptors have gained significant research interest in the last few years. They describe images through small local patches and have proven more efficient than global or geometric image descriptors; this is why we have chosen the local texture descriptor binarized statistical image features (BSIF) [11] in this research. In many computer vision applications, the BSIF descriptor has outperformed several comparable descriptors, such as local binary patterns (LBP) [12], histograms of oriented gradients (HOG) [13], local phase quantization (LPQ) [14], and patterns of oriented edge magnitudes (POEM) [15]. Its principle consists of using independent component analysis (ICA) [16] to learn a collection of filters from real-world images.
The learned filters can be utilized to represent each pixel of a given image as a binary code string by simply convolving the pixel’s neighborhood with each filter. The binary code corresponding to the pixel may be considered a local descriptor of the image intensity pattern in the pixel’s neighborhood. Finally, the histogram of pixel code values can be used to characterize texture features within image sub-regions (i.e., multi-patch).
This study proposes a feature extraction approach for contactless palmprint recognition based on discrete wavelet transform (DWT) [17] multiresolution analysis and multi-scale BSIF description. Wavelet-based multiresolution analysis is a helpful tool that permits analyzing the image and extracting pertinent information at multiple scales or resolutions. The BSIF descriptor is applied to the original image as well as to the approximation coefficients (i.e., sub-images) extracted at each DWT level. It is well-known that the application of the BSIF descriptor generates a feature vector in the form of a histogram representing the image.
Since the BSIF descriptor is applied to the original image and to several DWT levels, each level generates a corresponding histogram. In our approach, we concatenate all histograms extracted from the same image (i.e., multi-level histograms) into a global histogram representing the image at several scales. In this way, we perform a multiresolution image analysis and extract the pertinent information from several resolutions.
For the classification stage, we implemented and tested the performance of our approach with two classifiers, namely the K-nearest neighbors (K-NN) and the centroid displacement-based K-nearest neighbors (CDNN). In addition, we conducted a deeper experimental analysis to find the best-performing parameters, i.e., those giving the highest recognition accuracy, using two benchmark databases: the Indian Institute of Technology—Delhi (IITD) [18] and the Chinese Academy of Sciences Institute of Automation (CASIA) [19]. Compared against recently published state-of-the-art approaches, our method delivers competitive performance, outperforming most contemporary approaches.
In addition, it has lower algorithmic complexity, i.e., a lower computational cost, than deep-learning-based approaches, which are time-consuming and require high-performance hardware.
In summary, the main contributions of our paper can be synthesized as follows:
- We proposed a feature extraction approach for contactless palmprint recognition based on DWT multiresolution analysis and multi-scale BSIF description.
- We substituted the classical approach of multi-block (i.e., multi-patch) decomposition with DWT multiresolution analysis.
- We conducted in-depth experiments and analyses on the pre-processing, feature extraction, and classification stages, testing several configurations to find the best-performing parameters that maximize the recognition rate.
The remainder of this paper is structured as follows: Section 2 provides a synopsis of the related work. Section 3 explains the proposed methodology and each technique used in this work. Section 4 presents our experimental results and a comparative study. The conclusion is given in Section 5.

2. Related Work

The following methodologies are the most commonly employed in palmprint recognition research: coding-based methods, texture-descriptor-based methods, and deep-learning-based methods.

2.1. Coding-Based Methods

These methods use filters to extract characteristics from palmprint images, which are subsequently encoded into digital codes. More precisely, they convert the responses of a bank of filters into bitwise codes and employ various distance metrics to compare and match palmprint images.
Kong and Zhang [20] proposed the competitive code, a method for palmprint identification based on a competitive coding scheme and angular matching. The authors applied the real part of neurophysiology-based 2D Gabor filters to derive orientation information from the palmprint. In a framework proposed by Sun et al. [19], ordinal measures were used to solve the representation problem. The authors introduced a different representation strategy, called orthogonal line ordinal features, that unifies several existing palmprint algorithms within the proposed framework.
The method produces a one-bit feature code by qualitatively comparing two elongated, orthogonal line-like image regions. Tens of thousands of ordinal feature codes make up a palmprint pattern. Zhang et al. [21] analyzed the fragile-bits phenomenon in the palmprint’s binary orientation co-occurrence vector (BOCV) representation. If the value of a bit fluctuates across code maps made from many images of the same palmprint, that bit is said to be fragile. They then extended BOCV to E-BOCV by appropriately integrating information about fragile bits.
Fei et al. [22] tackled the problems of orientation feature extraction and effective matching of palmprint images. A new double-orientation code (DOC) scheme was developed to describe the palmprint’s orientation features, and a nonlinear angular matching score was designed to evaluate the similarity of DOCs. Xu et al. [23] proposed a new approach for palmprint authentication based on discriminative and robust competitive coding, using a more precise dominant orientation representation of palmprint images. The authors suggested weighting the orientation data of a nearby region to enhance the correctness and stability of the robust and dominant discriminating orientation code.

2.2. Texture-Descriptors-Based Methods

These methods decompose the encoded palmprint image into several blocks, apply the texture descriptor to each block, and extract their histograms. Then, they concatenate the extracted histograms of each block to obtain the feature vector representing the palmprint image. Finally, they compare and match palmprint images using various distance metrics.
Motivated by the public desire for clean and non-intrusive biometric technologies, Michael et al. [24] proposed a touchless palmprint recognition system that uses a low-resolution web camera to acquire images. The authors used a skin-color thresholding approach to derive the region of interest (ROI). A valley-detection process was then utilized to locate the valleys of the fingers as the reference points for finding the palmprint area. The discriminative palmprint features are obtained by applying the LBP texture descriptor to the directional gradient responses of the palmprint.
Morales et al. [25] analyzed the difficulties of two palmprint techniques used for contactless biometric authentication, namely orthogonal line ordinal features (OLOF) and scale-invariant feature transform (SIFT) features. The authors evaluated their performance in the presence of large scale, rotation, occlusion, and translation variations.
Hammami et al. [26] focused on the areas of the image containing the most discriminating characteristics for identification. The authors located, extracted, and preprocessed the ROI; the ROI was then divided into sub-regions before applying the LBP texture descriptor for feature extraction, and the resulting LBP histograms were concatenated into a global histogram. Wu et al. [27] suggested a SIFT-based method to extract and match features for contactless palmprint identification. Palmprint images are first preprocessed with an isotropic filter before SIFT points are detected and matched; the matching stage then eliminates mismatched SIFT points in two steps: points that do not satisfy the topological relations are removed with the iterative RANSAC (I-RANSAC, i.e., iterative random sample consensus) algorithm, and local palmprint descriptors (LPDs) are used to discard the remaining mismatches. Luo et al. [28] proposed a variation of the LBP texture descriptor in the local line-geometry space for palmprint recognition, called local line directional patterns (LLDP). LLDP encodes the structure of a neighborhood by examining directional line information; the line responses in the neighborhood are computed in 12 different directions using the modified finite Radon transform (MFRAT) or Gabor filters.

2.3. Deep Learning-Based Methods

These approaches often rely on convolutional neural networks (CNN). CNNs are composed of convolutional, pooling, and fully connected layers that can be trained end-to-end and used simultaneously for feature extraction and classification.
Wu et al. [29] proposed a palmprint identification approach that extracts three types of palmprint features from the image. First, they extracted texture, gradient, and direction features and encoded them into triple-type feature codes. Then, they created triple-type feature descriptors for palmprint representation using block-wise histograms of these codes. The similarity between two palmprint images was finally determined using a weighted matching-score-level fusion of the triple-type feature descriptors.
In a recent study, Fei et al. [5] summarized feature extraction and recognition for different palmprint images, using a unified framework that categorizes palmprints into four categories. They then analyzed the motivations and theories of the representative extraction and matching methods, including the effectiveness of deep-learning-based models for palmprint recognition.
Zhao and Zhang [31] suggested a new deep discriminative representation (DDR) strategy. DDR is a palmprint recognition technique based on learning discriminative deep convolutional networks (DDCN). DDCNs are trained using constrained data to derive deep discriminative patterns (global and local) from palmprint images. For the classification stage, they utilized a collaborative representation-based classifier (CRC).
Table 1 recapitulates the works synthesized in this section by approach, employed database, and experimental protocol. Motivated by the successes of the second class, texture-descriptor-based methods, we propose in this work a practical approach called binarized statistical image features-based multiresolution analysis to handle the problem of palmprint recognition. Most methods of this class rely on multi-block image decomposition to extract pertinent information from several levels. In this work, we propose an alternative to the classical multi-block decomposition strategy: a multiresolution analysis based on the DWT, which effectively extracts pertinent information from several levels.

3. Proposed Approach

This paper proposes a three-step approach to address the challenging problem of contactless palmprint recognition: (1) a pre-processing step, based on median filtering and contrast limited adaptive histogram equalization (CLAHE), is used to remove potential noise and equalize the images’ lighting; (2) a multiresolution analysis is applied to extract BSIF features at several DWT scales; (3) a classification stage is performed to categorize the extracted features into the corresponding class using the K-NN or CDNN classifiers. This section details the fundamental concepts of these techniques.

3.1. Pre-Processing

We used the pre-processing phase before the feature extraction and classification processes, which significantly influences the performance and robustness of the recognition rate. We used median filtering and CLAHE for the pre-processing stage.
Median filtering is a simple nonlinear filter that is essential in image processing, as it is widely recognized for preserving image edges during noise reduction, particularly for “Salt & Pepper” noise [32]. The median filter replaces each pixel’s gray level with the median of the gray levels in the pixel’s neighborhood [33].
CLAHE is a local contrast enhancement pre-processing tool. It performs histogram equalization on non-overlapping image sub-areas and applies interpolation to repair irregularities across their boundaries [34].
The technique includes the following steps: (1) CLAHE divides the input picture into non-overlapping sub-blocks, (2) it generates the histograms for each sub-block, (3) the clip limit notion is then used; the clip limit is a multiple of the histogram’s average content (each histogram is redistributed to ensure its height does not exceed a predetermined “clip limit”), (4) the cumulative histogram is calculated to perform the equalization, and (5) finally, bilinear interpolation is used between the blocks to eliminate block distortions [35].
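The following is a minimal sketch of this pre-processing stage, assuming the OpenCV library; the 3 × 3 median kernel, the clip limit, and the tile grid size are illustrative choices, not values prescribed by the paper.

```python
import cv2

def preprocess(gray_img):
    """Median filtering followed by CLAHE on an 8-bit grayscale ROI."""
    # Step 1: remove impulsive ("Salt & Pepper") noise while preserving edges.
    denoised = cv2.medianBlur(gray_img, 3)
    # Step 2: equalize lighting with contrast limited adaptive histogram
    # equalization (tiles = sub-blocks, clipLimit = histogram clip limit).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)

# Example usage on a cropped palmprint ROI (hypothetical file name):
# roi = cv2.imread("palm_roi.png", cv2.IMREAD_GRAYSCALE)
# roi = preprocess(roi)
```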

3.2. Feature Extraction

In this section, we first describe the theoretical foundations of the methods used in this study and then present the working procedure of our proposed feature extraction approach.

3.2.1. Theoretical Principle of the Employed Methods

A. Discrete Wavelet Transform (DWT)
Wavelets have been used in many applications, including feature extraction, compression, denoising, and contour detection. DWT divides a given signal into several frames, each of which is a time series of coefficients characterizing the signal’s temporal evolution in the appropriate frequency band [36,37]. The DWT works with many mother wavelets, including Haar, Daubechies, Coiflets, symlet, etc. [36,38].
The theoretical framework to apply a DWT on a 2D image is expressed mathematically with Equations (1) to (7) and explained as follows:
Suppose $\psi(t)$ is the mother wavelet function. The wavelet function family $\psi_{(s,p)}(t)$ can be obtained as [39,40]:

$$\psi_{(s,p)}(t) = \frac{1}{\sqrt{s}}\,\psi\!\left(\frac{t-p}{s}\right) \tag{1}$$

where $s$ is the scale parameter, $t$ is a time instant, and $p$ is the position parameter.
Let $f(x,y)$ be an image of size $M \times N$. The 2D-DWT is expressed as follows:

$$W_{\varphi}(j_0,m,n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\,\varphi_{j_0,m,n}(x,y) \tag{2}$$

$$W_{\psi}^{i}(j,m,n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\,\psi_{j,m,n}^{i}(x,y) \tag{3}$$

where $i \in \{\mathrm{horizontal}, \mathrm{vertical}, \mathrm{diagonal}\}$, $W_{\varphi}(j_0,m,n)$ are the coefficients that define an approximation of $f(x,y)$ at scale $j_0$, $W_{\psi}^{i}(j,m,n)$ are the coefficients that add horizontal, vertical, and diagonal details for scales $j \geq j_0$, $\varphi$ is the scaling function, and $\psi$ is the wavelet function.
In a picture, several resolutions are represented by repeating cycles of scaling (low pass) and wavelet transform (high pass), as depicted in Figure 1. The scaling catches the image’s low-frequency information, while the wavelet collects the image’s high-frequency information. At each cycle of the wavelet transform, a low-resolution picture and a fine details image, each half the size of the original image, are generated. As the information in fine details images is frequently limited, the lower-resolution version captures most of the information in the original image, resulting in high image representation efficiency [41,42].
The 2D wavelet divides a picture into four sub-band images: Low-Low (LL), Low-High (LH), High-Low (HL), and High-High (HH), as displayed in Figure 1. Low frequency indicates approximation coefficients, while high frequency represents detail coefficients (horizontal, vertical, and diagonal). Mathematically, this procedure can be presented as follows.
Let $\varphi$ define the scaling function and $\psi$ the wavelet function. As indicated in Equations (4)–(7), the DWT produces four quarter-sized images at each level of decomposition: an approximation image $\varphi(x,y)$, a horizontal details image $\psi^{H}(x,y)$, a vertical details image $\psi^{V}(x,y)$, and a diagonal details image $\psi^{D}(x,y)$ [39,41,42]:

$$\varphi(x,y) = \varphi(x)\,\varphi(y) \;\rightarrow\; \mathrm{LL} \tag{4}$$

$$\psi^{H}(x,y) = \varphi(x)\,\psi(y) \;\rightarrow\; \mathrm{HL} \tag{5}$$

$$\psi^{V}(x,y) = \psi(x)\,\varphi(y) \;\rightarrow\; \mathrm{LH} \tag{6}$$

$$\psi^{D}(x,y) = \psi(x)\,\psi(y) \;\rightarrow\; \mathrm{HH} \tag{7}$$
In the example of Figure 1, DWT frequency components are generated for two distinct scale values, and only the low-frequency sub-bands (LL, LLLL) of the obtained components are used in subsequent processing.
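As a concrete illustration, the following sketch extracts the LL sub-band at successive DWT levels, assuming the PyWavelets library; the Haar wavelet and the two-level depth are illustrative defaults.

```python
import pywt

def ll_pyramid(image, wavelet="haar", levels=2):
    """Return the approximation (LL) sub-band of each decomposition level."""
    lls, current = [], image
    for _ in range(levels):
        # dwt2 returns the approximation (LL) and the three detail sub-bands.
        current, details = pywt.dwt2(current, wavelet)
        lls.append(current)
    return lls
```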
B. Binarized Statistical Image Features (BSIF)
The binarized statistical image features (BSIF) descriptor, introduced by Kannala and Rahtu [11] for texture description and classification, is inspired by local binary patterns (LBP) [12] and local phase quantization (LPQ) [14]. The fundamental concept behind this descriptor is to employ filters learned with independent component analysis (ICA) [16], rather than handcrafted filters, to compute a binary code string for each pixel in an image, from which an efficient histogram representation can be built.
The BSIF approach generates a binary code string for each image pixel, with the code value serving as a local descriptor of the image intensity pattern in the pixel’s neighborhood. Histograms of pixel code values can then quantify texture qualities within image sub-regions.
The value of each bit in the binary code string is determined by binarizing the response of a linear filter with a threshold of zero [43]. Each bit is associated with a unique filter, and the number of filters is determined by the desired length of the bit string. The filters are trained on a set of natural image patches, and the learning process is carried out by maximizing the statistical independence of the filter responses [44,45].
For a given image patch $X$ of size $l \times l$ pixels and a linear filter $W_i$ of the same size, the filter response $s_i$ is determined as follows [11]:

$$s_i = \sum_{u,v} W_i(u,v)\,X(u,v) = \mathbf{w}_i^{T}\mathbf{x} \tag{8}$$

where $s_i$ denotes the filter response, $W_i$ denotes a linear filter, $(u,v)$ are the spatial coordinates, $i$ indexes the filter, and the vectors $\mathbf{w}_i$ and $\mathbf{x}$ contain the pixels of $W_i$ and $X$, respectively.
The binarized feature $b_i$ is produced by:

$$b_i = \begin{cases} 1 & \text{if } s_i > 0 \\ 0 & \text{otherwise} \end{cases} \tag{9}$$
Given $n$ linear filters $W_i$, which can be stacked into a matrix $W$ of size $n \times l^2$, all responses can be computed at once as $\mathbf{s} = W\mathbf{x}$, and the bit string $\mathbf{b}$ is obtained by binarizing each element $s_i$ of $\mathbf{s}$ as described above.
Lastly, the BSIF features are obtained by treating each pixel’s binary values from the $n$ linear filters as an $n$-bit number. The BSIF-encoded features $\beta$ are obtained as follows:

$$\beta = \sum_{i=0}^{n-1} b_i\,2^{i} \tag{10}$$
Figure 2 depicts the entire BSIF extraction technique using an 11 × 11 BSIF filter with a bit string length of 8. Figure 2a illustrates the input ROI of a palmprint image, Figure 2b depicts pre-learned filters with a dimension of 11 × 11 and a bit string length of 8, Figure 2c represents the BSIF features derived by convolving the ROI image with the BSIF filters, and the final BSIF-encoded features are shown in Figure 2d. Figure 3 depicts several BSIF conversion results with varying filter sizes and bit string lengths.
The filters $W_i$ are learned by maximizing the statistical independence of the $s_i$ using ICA. The BSIF descriptor has two parameters: the filter size $l$ and the bit string length $n$ [43]. The original filters $W_i$ presented in [11,46] were learned by randomly sampling 50,000 image patches from 13 distinct natural photographs.
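The encoding above can be summarized in a short numpy/scipy sketch. It assumes a pre-learned ICA filter bank `filters` of shape (n, l, l); in practice, the filter banks released with [11] would be loaded from file rather than defined here.

```python
import numpy as np
from scipy.signal import convolve2d

def bsif_encode(image, filters):
    """BSIF histogram of an image, given n pre-learned l-by-l filters."""
    n = filters.shape[0]
    code = np.zeros(image.shape, dtype=np.int64)
    for i in range(n):
        s_i = convolve2d(image, filters[i], mode="same")  # filter response s_i
        code += (s_i > 0).astype(np.int64) << i           # bit b_i weighted by 2^i
    # The normalized histogram of the 2^n code values is the feature vector.
    hist, _ = np.histogram(code, bins=2 ** n, range=(0, 2 ** n))
    return hist / hist.sum()
```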

3.2.2. Proposed Feature Extraction Approach—Multiresolution Analysis

The feature extraction stage, which is the main contribution of this study, is based on a multiresolution analysis of the entered palmprint image. As shown in the graphical flowchart of Figure 4, the concept consists firstly of applying the BSIF descriptor on the original image and extracting its corresponding histogram, which defines the feature vector of the original image. Secondly, we apply the DWT decomposition on the original image, which will decompose this image into four sub-band images: Low-Low (LL), Low-High (LH), High-Low (HL), and High-High (HH) images, as explained in Section 3.2.1.
This decomposition represents the first level (or resolution) of the DWT decomposition. It is well-known that the pertinent information is concentrated in the approximation coefficients (i.e., the LL sub-band), while the detail coefficients, which correspond to high frequencies, generally carry little useful information (mostly noise and contours). In addition, the generated LL image replaces the original image in compressed form and gives access to additional representations of it.
For the first level of decomposition, we also apply the BSIF descriptor on the L1-LL (i.e., Level 1 Low-Low) sub-band and generate the corresponding histogram. Similarly, the DWT decomposition is applied to the L1-LL sub-band to obtain the L2-LL (i.e., Level 2 Low-Low) representation containing more useful and compressed information, i.e., the approximation coefficients at resolution 2.
This operation is followed by a BSIF application to generate the histogram of the second resolution. The process of DWT decomposition is repeated for N decomposition levels depending on the size of the image. Finally, the histograms generated from each level are merged to create a global feature vector representing the palmprint at several levels. In other words, we have applied a multiresolution analysis on the palmprint image, as shown in Figure 4.
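Putting the pieces together, the following sketch reproduces the flow of Figure 4 by reusing the hypothetical `preprocess`, `ll_pyramid`, and `bsif_encode` helpers from the earlier sketches; the coif4 wavelet and the two decomposition levels anticipate the best settings found in Section 4.

```python
import numpy as np

def extract_features(roi, filters, wavelet="coif4", levels=2):
    """Concatenated multi-level BSIF histograms of a palmprint ROI."""
    roi = preprocess(roi)                            # median filter + CLAHE
    histograms = [bsif_encode(roi, filters)]         # level 0: original image
    for ll in ll_pyramid(roi, wavelet, levels):      # levels 1..N: LL sub-bands
        histograms.append(bsif_encode(ll, filters))
    return np.concatenate(histograms)                # global feature vector
```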

3.3. Classification

For the classification stage, we have implemented, tested, and compared the performance of two classifiers, namely K-NN and CDNN.

3.3.1. K-Nearest Neighbors (K-NN) Classifier

The K-nearest neighbors (K-NN) classifier is an easy-to-understand and simple-to-implement classification approach that has been widely used in classification, pattern recognition, and image processing. The method has three main components: a set of labeled items, a distance function that assesses the difference or similarity between two items, and the value of K, i.e., the number of nearest neighbors. To categorize an unlabeled object, the distances between it and the labeled items are first calculated to determine its K nearest neighbors, and the object is then assigned to the majority class of those neighbors [47,48]. We employed two distance metrics in this study: Euclidean and City Block.
The Euclidean distance $D(A,B)$ between two samples $A$ and $B$ is formally defined as [49]:

$$D(A,B) = \sqrt{\sum_{i=1}^{N} (a_i - b_i)^2} \tag{11}$$

The City Block distance is calculated as follows:

$$D(A,B) = \sum_{i=1}^{N} |a_i - b_i| \tag{12}$$

where $D(A,B)$ is the distance between test sample $A$ and a given training sample $B$ with features $(1, 2, 3, \ldots, N)$, $a_i$ and $b_i$ are the features of $A$ and $B$, respectively, and $N$ is the total number of features.
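A minimal 1-NN matcher under these two distances might look as follows; `gallery` (one feature vector per row), `labels`, and `query` are hypothetical numpy arrays holding the feature vectors produced above.

```python
import numpy as np

def nn_classify(query, gallery, labels, metric="cityblock"):
    """Label of the gallery sample nearest to the query feature vector."""
    diff = gallery - query                         # broadcast over gallery rows
    if metric == "euclidean":
        dists = np.sqrt((diff ** 2).sum(axis=1))   # Equation (11)
    else:
        dists = np.abs(diff).sum(axis=1)           # Equation (12), City Block
    return labels[np.argmin(dists)]
```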

3.3.2. Centroid Displacement-Based K-Nearest Neighbor (CDNN) Classifier

We utilized CDNN for classification, which is adaptable to noise and class distribution. CDNN is a modified K-NN method published by Nguyen et al. [50] that addresses the issue of the majority vote in the K-NN algorithm: when the distances between the test sample and its neighbors vary greatly, the closer neighbors predict the class label more reliably. The CDNN algorithm works on a basic principle: once the list $D_x^k$ of the K-nearest neighbors of the test sample $x$ is obtained, the nearest neighbors are organized into sets sharing the same class label, $S_x^j = \{(x_t, c_t) \in D_x^k \mid c_t = c_j,\ c_j \in C\}$, where $C$ is the set of class labels, $x_t = (x_{t1}, x_{t2}, \ldots, x_{tN})$ is the feature vector of instance $t$, and $c_t$ is the class label of $x_t$.
The centroid of each set, and its displacement if the test sample $x$ were added to the set, are computed as follows.
The centroid of $S_x^j$ is calculated as:

$$p_x^j = \frac{\sum_{x_t \in S_x^j} x_t}{\left| S_x^j \right|} \tag{13}$$

The new centroid if $x$ is inserted into $S_x^j$ is calculated as:

$$q_x^j = \frac{\left(\sum_{x_t \in S_x^j} x_t\right) + x}{\left| S_x^j \right| + 1} \tag{14}$$

The centroid displacement is then:

$$disp_x^j = \sqrt{\left(p_x^j - q_x^j\right)^2} \tag{15}$$

where $p_x^j$ is the centroid of $S_x^j$, $q_x^j$ is the new centroid when $x$ is inserted into $S_x^j$, and $disp_x^j$ is the resulting centroid displacement.
Finally, the test sample is assigned to the class whose set has the lowest centroid displacement.
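The decision rule can be sketched in a few lines of numpy; the City Block distance for the neighbor search and K = 5 are illustrative choices, and `gallery` and `labels` are the same hypothetical arrays as in the K-NN sketch.

```python
import numpy as np

def cdnn_classify(query, gallery, labels, k=5):
    """Assign query to the class whose neighbor-set centroid moves least."""
    dists = np.abs(gallery - query).sum(axis=1)     # City Block distances
    nn_idx = np.argsort(dists)[:k]                  # K nearest neighbors
    best_label, best_disp = None, np.inf
    for c in np.unique(labels[nn_idx]):
        members = gallery[nn_idx][labels[nn_idx] == c]          # set S_x^j
        p = members.mean(axis=0)                                # Equation (13)
        q = (members.sum(axis=0) + query) / (len(members) + 1)  # Equation (14)
        disp = np.linalg.norm(p - q)                            # Equation (15)
        if disp < best_disp:
            best_label, best_disp = c, disp
    return best_label
```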

4. Experimental Analysis

The proposed method was assessed using the IITD touchless palmprint (version 1.0) [18] and CASIA palmprint [19] datasets. This section discusses the specifications of each employed dataset and its assessment protocol. Moreover, we examine the results achieved from applying our suggested method and compare the Rank-1 recognition rates with other recent state-of-the-art approaches.
More precisely, we have examined the results achieved by our suggested method and compared the Rank-1 recognition rates with those of recently published state-of-the-art methods: five coding-based methods, including CompCode [20], OrdinalCode [19], E-BOCV [21], DOC [22], and DRCC [23]; seven texture-based methods, including DGLBP [24], SIFT_OLOF [25], LBP [26], SIFT_IRANSAC_OLOF [27], LLDP_MFRAT [28], LLDP_Gabor [28], and TFD [29]; and six deep-learning-based methods, including FEM [30], AlexNet [5], VGG-16 [5], Inception-V3 [5], ResNet-50 [5], and DDR [31].

4.1. Datasets

4.1.1. IITD Touchless Palmprint Database (Version 1.0)

The IITD (i.e., Indian Institute of Technology—Delhi) touchless palmprint database [18] was created in the institute’s biometrics research laboratory between January 2006 and July 2007. It was collected from 230 users, with several images of both hands per user, whose ages range from 12 to 57 years. The acquisition setup did not employ user pegs, which results in hand-pose and image-scale variations. The original images have a resolution of 800 × 600 pixels, and the cropped and normalized palmprint images are 150 × 150 pixels. Figure 5 shows some sample images from the IITD database.

4.1.2. CASIA Palmprint Database

The CASIA palmprint database [19] was released by the Chinese Academy of Sciences Institute of Automation. It contains 5502 palmprint images acquired from 312 persons, each of whom furnished 8 images of the right hand and 8 of the left hand. All palmprint images were saved as 8-bit gray-level JPEG files. The capture device provides uniform lighting but uses no pegs to constrain the postures and positions of the palms. Some image examples are depicted in Figure 6.

4.2. Setups

In this study, all our experiments are conducted using the identification mode. Biometric identification attempts to identify the label of a query sample by comparing its feature vector to a collection of labeled samples (i.e., feature vectors) stored in a referential database. As reported in many recently published state-of-the-art papers, the evaluation protocol used in the identification experiments consists of selecting the first N images for each person to create the gallery set. The remaining examples of each person are used as query samples. Then, the Rank-1 identification rate is calculated to measure the performance of our approach and make a comparison against the related approaches.
The identification rate is calculated using the following formula:
$$Identification\_Rate = \frac{Number\ of\ correct\ predictions}{Total\ number\ of\ testing\ samples} \times 100 \tag{16}$$
A total of 230 persons participated in the creation of the IITD database using the left and right hands, i.e., the database includes 460 classes because each hand is considered an independent class. As the number of images per class varies between 5 and 6, the N value considered in this experiment is 3, i.e., the first 3 images are used for training, and the remaining images (2–3) are used for testing.
Similarly, the CASIA dataset was generated with the participation of 312 persons who used both hands, i.e., the database holds 624 classes because each subject has 8 left-hand images and 8 right-hand images. The number of images per class in this database is therefore constant and equal to 8. The N value considered in this experiment is 4, i.e., the first 4 images of each class are used for training, and the remaining 4 images are used for testing.
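This protocol can be summarized in a short sketch that reuses the hypothetical `nn_classify` matcher from Section 3.3; `features_by_class` is an assumed mapping from a class label to that class’s feature vectors in acquisition order.

```python
import numpy as np

def rank1_rate(features_by_class, n_train):
    """Rank-1 identification rate with the first n_train images as gallery."""
    gallery, g_labels, queries, q_labels = [], [], [], []
    for label, feats in features_by_class.items():
        gallery.extend(feats[:n_train])
        g_labels.extend([label] * len(feats[:n_train]))
        queries.extend(feats[n_train:])
        q_labels.extend([label] * len(feats[n_train:]))
    gallery, g_labels = np.asarray(gallery), np.asarray(g_labels)
    correct = sum(nn_classify(q, gallery, g_labels) == t
                  for q, t in zip(queries, q_labels))
    return 100.0 * correct / len(queries)    # Equation (16)

# e.g., rank1_rate(iitd_features, n_train=3) or rank1_rate(casia_features, 4)
```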

4.3. Experiment #1 (Effects of the BSIF Parameters)

In the first experiment, we evaluated the suggested approach with different BSIF settings to find the optimum configuration, that is, the one that produces the highest recognition accuracy. The BSIF operator depends on two parameters: the bit string length n and the filter kernel size l × l, as described in Section 3.2.1. In this experiment, no pre-processing was applied, the BSIF descriptor was applied to the original image (i.e., without any image decomposition or DWT multiresolution analysis), and the classical K-NN with the Euclidean distance was used as the classifier. Table 2 and Table 3 show the detailed results of this first experiment on both databases, with the highest results marked in bold.
Table 2 and Table 3 show that the recognition rates increase as the filter kernel size l × l or the bit string length n grows. The configuration l × l = 17 × 17 with n = 12 performs best; the highest recognition rates achieved with this configuration are 96.35% and 95.49% for CASIA and IITD, respectively.

4.4. Experiment #2 (Effects of the Multiresolution Analysis)

In the previous experiment, the BSIF feature extractor was applied only to the original image, without any image decomposition or multiresolution analysis. This second experiment aims to check whether exploiting multiresolution information improves recognition performance. To answer this question, we tested and assessed the recognition performance at several levels of the multiresolution analysis, i.e., at different DWT decomposition levels.
The BSIF configuration considered in this experiment is a bit string length of n = 12 bits and a filter kernel size of l × l = 17 × 17 pixels (the best-performing configuration found in the previous experiment); the Haar wavelet was used as the DWT family, no image pre-processing was applied, and the classical K-NN with the Euclidean distance was used as the classifier. Figure 7 shows the detailed results of this experiment on both databases.
Figure 7 shows that going deeper with the DWT decomposition yields higher results until convergence at level 2. We can conclude that the DWT decomposition plays a prominent role in extracting useful information that improves recognition accuracy. We consider level 2 the best-performing configuration because it achieves the best recognition rates for both databases with lower algorithmic complexity than the 3rd level (due to less computation). Compared to the previous experiment, the recognition rates improved from 95.49% to 96.23% and from 96.35% to 96.92% on the IITD and CASIA databases, respectively.

4.5. Experiment #3 (Effects of the Wavelet Family)

Starting from the optimal parameters determined in the previous experiments, i.e.,   l × l = 17 × 17 , n = 12 , and 2nd DWT level, we assessed in this 3rd experiment the performance of recognition by testing and comparing several wavelet families. The most famous wavelet families are Haar, Daubechies (db), Biorthogonal (bior), Coiflets (coif), Symlets (sym), Fejer-Korovkin filters (fk), Reverse Biorthogonal (rbio), and Discrete Meyer. For more details on wavelet families, see [51]. Table 4 and Table 5 show the detailed results of this third experiment on both tested databases. Note that this experiment was applied using the classical K-NN with the Euclidean distance as a classifier and without any image pre-processing.
We can observe from Table 4 that the Coiflets wavelets coif2 and coif4, as well as the Daubechies wavelet db4, achieve the best performance with a recognition rate of 96.39%, an improvement from 96.23% on the IITD database. On the other hand, Table 5 shows that the Reverse Biorthogonal wavelet performs best on the CASIA database, with a recognition rate of 97.12%, an improvement from 96.92%. For the following experiments, we adopt the Coiflets wavelet coif4 as the optimal parameter, as it offers a good compromise between the two databases in terms of recognition rate.

4.6. Experiment #4 (Effects of the Pre-Processing)

The objective of this experiment is to show whether image pre-processing can improve the performance of our palmprint identification system. It is well-known in the image processing field that pre-processing covers two types of operations: filtering (i.e., eliminating noise or irrelevant information) and restoration (i.e., enhancing or equilibrating the image’s lighting). In the first attempt, we applied median filtering before the feature extraction phase; median filtering is considered among the best image filtering operators, eliminating impulse noise while preserving the image’s content. In the second attempt, we applied CLAHE to improve and restore the lighting of the images (see Section 3.1 for more details about CLAHE pre-processing).
The identification results are presented in Table 6, with the best parameters found in the previous experiments taken into account. We can observe from Table 6 that using median filtering as pre-processing harms the identification rate, which decreased from 96.39% to 95.99% on the IITD database and from 97.12% to 96.60% on the CASIA database. Hence, median filtering is not a suitable pre-processing operation for our system.
Table 6 also reports the results of applying CLAHE pre-processing to the IITD and CASIA databases. With CLAHE, the recognition rates improved from 96.39% to 97.13% and from 97.12% to 97.21% on the IITD and CASIA databases, respectively. In contrast to median filtering, CLAHE pre-processing positively impacts the identification performance.

4.7. Experiment #5 (Effects of Classification)

In the last experiment of this work, we adjusted and assessed the classification parameters. More precisely, we considered two classifiers: K-NN and CDNN (see Section 3.3 for more details about these classifiers). Since K-NN requires a distance measure, we tested two: Euclidean and City Block (explained in Section 3.3). Finally, choosing the K-value is essential, as it determines how the majority class is found among the nearest neighbors; for this reason, several K-values were tested, considered, and compared.
In Table 7, we recorded the results of this experiment by varying the classifier, the distance measure, and the K-value for both databases, in addition to using the best-performing parameters found in the previous experiments.
From Table 7, it can be observed that increasing the K-value causes a decrease in the recognition rates; K = 1 is the best choice for both classifiers and both distances. In addition, the classification results of K-NN surpass those of CDNN for K > 1, but both classifiers give the same (best) result with K = 1. Finally, the City Block distance performs better than the Euclidean distance on both databases. In summary, adjusting the classification parameters improved the recognition rates from 97.13% to 98.77% and from 97.21% to 98.10% on the IITD and CASIA databases, respectively.

4.8. Comparison

Table 8 presents the final classification accuracies on the IITD and CASIA datasets using our proposed BSIF + DWT approach, together with a comparison to recently published approaches. The same experimental protocol was used for all tested techniques to ensure a fair comparison. Our approach outperforms existing deep-learning-based methods (e.g., FEM [30] and ResNet-50 [5]), texture-based methods (e.g., SIFT_IRANSAC_OLOF [27]), and the other approaches, achieving the highest accuracies of 98.77% and 98.10% on the IITD and CASIA palmprint datasets, respectively, with the exception of the DDR method of Zhao and Zhang [31], which achieved 99.41% on the CASIA palmprint dataset.
The best results obtained using the BSIF-based multiresolution analysis on both the IITD and CASIA datasets can be explained by the application of the powerful BSIF feature extractor at various DWT decomposition levels and by the tuning of the classification parameters.
In summary, the superiority of our approach is justified by the following points:
- The texture extractor analyzes the image pixel by pixel, i.e., we exploit the advantages of local information.
- The image is analyzed at several levels, i.e., we exploit multi-level information.
- The occurrences extracted from each level are collected in a histogram, i.e., we operate with global information.

5. Conclusions

Palmprint-based biometric recognition is a highly challenging problem that we addressed by introducing a feature extraction strategy based on multiresolution analysis, an alternative to the classical multi-patch decomposition strategy. More precisely, we first applied the DWT to the original palmprint image, which generates several image representations (i.e., sub-bands) at different resolutions. Then, we applied the local texture descriptor BSIF to the original image as well as to the low-low (LL) sub-bands generated at each level. Finally, the histograms extracted at each level were merged to form the final feature vector, which represents the input image at several resolutions.
We validated the proposed feature extraction approach through in-depth experiments on the IITD and CASIA palmprint databases, using the recognition rate as the performance measure. The best Rank-1 recognition rates achieved by our approach are 98.77% and 98.10% for the IITD and CASIA databases, respectively; these results are very competitive and surpass the performance of many recently published state-of-the-art approaches.
In the future, we intend to (1) implement the multiresolution analysis with a deep learning model as an alternative to the local texture descriptor BSIF and (2) improve the quality of training and render it more semantic by using the deep unsupervised active learning strategy.

Author Contributions

Conceptualization, N.A. and A.B.; methodology, N.A., R.B. and Y.K.; software, R.B. and O.B.; validation, A.B., Y.K. and I.A.; formal analysis, N.A. and R.B.; investigation, N.A.; resources, A.B.; data curation, A.B.; writing—original draft preparation, N.A. and Y.K.; writing—review and editing, A.B. and I.A.; visualization, O.B.; supervision, A.B.; project administration, A.B.; funding acquisition, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The employed data for the different experiments are taken from two public datasets and can be obtained from [18,19].

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gomez-Barrero, M.; Drozdowski, P.; Rathgeb, C.; Patino, J.; Todisco, M.; Nautsch, A.; Damer, N.; Priesnitz, J.; Evans, N.; et al. Biometrics in the Era of COVID-19: Challenges and Opportunities. IEEE Trans. Technol. Soc. 2022, 1.
2. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188.
3. Jain, A.K.; Ross, A.; Pankanti, S. Biometrics: A Tool for Information Security. IEEE Trans. Inf. Forensics Secur. 2006, 1, 125–143.
4. Zhong, D.; Du, X.; Zhong, K. Decade Progress of Palmprint Recognition: A Brief Survey. Neurocomputing 2019, 328, 16–28.
5. Fei, L.; Lu, G.; Jia, W.; Teng, S.; Zhang, D. Feature Extraction Methods for Palmprint Recognition: A Survey and Evaluation. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 346–363.
6. Jain, A.K.; Feng, J. Latent Palmprint Matching. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 1032–1047.
7. Zhang, D.; Zuo, W.; Yue, F. A Comparative Study of Palmprint Recognition Algorithms. ACM Comput. Surv. 2012, 44, 1–37.
8. Genovese, A.; Piuri, V.; Plataniotis, K.N.; Scotti, F. PalmNet: Gabor-PCA Convolutional Networks for Touchless Palmprint Recognition. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3160–3174.
9. Li, W.; Zhang, B.; Zhang, L.; Yan, J. Principal Line-Based Alignment Refinement for Palmprint Recognition. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2012, 42, 1491–1499.
10. Herbadji, A.; Guermat, N.; Ziet, L.; Akhtar, Z.; Cheniti, M.; Herbadji, D. Contactless Multi-Biometric System Using Fingerprint and Palmprint Selfies. Trait. Signal 2020, 37, 889–897.
11. Kannala, J.; Rahtu, E. BSIF: Binarized Statistical Image Features. In Proceedings of the 21st IEEE International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 1363–1366.
12. Ahonen, T.; Hadid, A.; Pietikäinen, M. Face Recognition with Local Binary Patterns. In Proceedings of the European Conference on Computer Vision (ECCV), Prague, Czech Republic, 11–14 May 2004; pp. 469–481.
13. Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 886–893.
14. Ojansivu, V.; Heikkilä, J. Blur Insensitive Texture Classification Using Local Phase Quantization. In Proceedings of the International Conference on Image and Signal Processing (ICISP), Cherbourg-Octeville, France, 1–3 July 2008; pp. 236–243.
15. Vu, N.-S.; Caplier, A. Face Recognition with Patterns of Oriented Edge Magnitudes. In Proceedings of the European Conference on Computer Vision (ECCV), Heraklion, Greece, 5–11 September 2010; pp. 313–326.
16. Stone, J.V. Independent Component Analysis: An Introduction. Trends Cogn. Sci. 2002, 6, 59–64.
17. Shensa, M.J. The Discrete Wavelet Transform: Wedding the À Trous and Mallat Algorithms. IEEE Trans. Signal Process. 1992, 40, 2464–2482.
18. Kumar, A. Incorporating Cohort Information for Reliable Palmprint Authentication. In Proceedings of the Sixth Indian Conference on Computer Vision, Graphics & Image Processing (ICVGIP), Bhubaneswar, India, 16–19 December 2008; pp. 583–590.
19. Sun, Z.; Tan, T.; Wang, Y.; Li, S.Z. Ordinal Palmprint Representation for Personal Identification. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 279–284.
20. Kong, A.W.K.; Zhang, D. Competitive Coding Scheme for Palmprint Verification. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR), Cambridge, UK, 23–26 August 2004; pp. 520–523.
21. Zhang, L.; Li, H.; Niu, J. Fragile Bits in Palmprint Recognition. IEEE Signal Process. Lett. 2012, 19, 663–666.
22. Fei, L.; Xu, Y.; Tang, W.; Zhang, D. Double-Orientation Code and Nonlinear Matching Scheme for Palmprint Recognition. Pattern Recognit. 2016, 49, 89–101.
23. Xu, Y.; Fei, L.; Wen, J.; Zhang, D. Discriminative and Robust Competitive Code for Palmprint Recognition. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 232–241.
24. Michael, G.K.O.; Connie, T.; Teoh, A.B.J. Touchless Palm Print Biometrics: Novel Design and Implementation. Image Vis. Comput. 2008, 26, 1551–1560.
25. Morales, A.; Ferrer, M.A.; Kumar, A. Towards Contactless Palmprint Authentication. IET Comput. Vis. 2011, 5, 407–416.
26. Hammami, M.; Ben Jemaa, S.; Ben-Abdallah, H. Selection of Discriminative Sub-Regions for Palmprint Recognition. Multimed. Tools Appl. 2014, 68, 1023–1050.
27. Wu, X.; Zhao, Q.; Bu, W. A SIFT-Based Contactless Palmprint Verification Approach Using Iterative RANSAC and Local Palmprint Descriptors. Pattern Recognit. 2014, 47, 3314–3326.
28. Luo, Y.T.; Zhao, L.Y.; Zhang, B.; Jia, W.; Xu, F.; Lu, J.T.; Zhu, Y.H.; Xu, B.X. Local Line Directional Pattern for Palmprint Recognition. Pattern Recognit. 2016, 50, 26–44.
29. Wu, L.; Xu, Y.; Cui, Z.; Zuo, Y.; Zhao, S.; Fei, L. Triple-Type Feature Extraction for Palmprint Recognition. Sensors 2021, 21, 4896.
30. Izadpanahkakhk, M.; Razavi, S.M.; Taghipour-Gorjikolaie, M.; Zahiri, S.H.; Uncini, A. Deep Region of Interest and Feature Extraction Models for Palmprint Verification Using Convolutional Neural Networks Transfer Learning. Appl. Sci. 2018, 8, 1210.
31. Zhao, S.; Zhang, B. Deep Discriminative Representation for Generic Palmprint Recognition. Pattern Recognit. 2020, 98, 107071.
32. George, G.; Oommen, R.M.; Shelly, S.; Philipose, S.S.; Varghese, A.M. A Survey on Various Median Filtering Techniques for Removal of Impulse Noise from Digital Image. In Proceedings of the 2018 Conference on Emerging Devices and Smart Systems (ICEDSS), Tiruchengode, India, 2–3 March 2018; pp. 235–238.
33. Tang, H.; Ni, R.; Zhao, Y.; Li, X. Median Filtering Detection of Small-Size Image Based on CNN. J. Vis. Commun. Image Represent. 2018, 51, 162–168.
34. Yadav, G.; Maheshwari, S.; Agarwal, A. Contrast Limited Adaptive Histogram Equalization Based Enhancement for Real Time Video System. In Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Delhi, India, 24–27 September 2014; pp. 2392–2397.
35. Chang, Y.; Jung, C.; Ke, P.; Song, H.; Hwang, J. Automatic Contrast-Limited Adaptive Histogram Equalization with Dual Gamma Correction. IEEE Access 2018, 6, 11782–11792.
36. Bendjillali, R.I.; Beladgham, M.; Merit, K.; Taleb-Ahmed, A. Improved Facial Expression Recognition Based on DWT Feature for Deep CNN. Electronics 2019, 8, 324.
37. Hosseinzadeh, M. Robust Control Applications in Biomedical Engineering: Control of Depth of Hypnosis. In Control Applications for Biomedical Engineering Systems; Azar, A.T., Ed.; Elsevier: Amsterdam, The Netherlands, 2020; pp. 89–125.
38. Parida, P.; Bhoi, N. Wavelet Based Transition Region Extraction for Image Segmentation. Future Comput. Inform. J. 2017, 2, 65–78.
39. Hardalac, F.; Yaşar, H.; Akyel, A.; Kutbay, U. A Novel Comparative Study Using Multi-Resolution Transforms and Convolutional Neural Network (CNN) for Contactless Palm Print Verification and Identification. Multimed. Tools Appl. 2020, 79, 22929–22963.
40. Le, N.T.; Wang, J.W.; Le, D.H.; Wang, C.C.; Nguyen, T.N. Fingerprint Enhancement Based on Tensor of Wavelet Subbands for Classification. IEEE Access 2020, 8, 6602–6615.
41. Arbaoui, A.; Ouahabi, A.; Jacques, S.; Hamiane, M. Concrete Cracks Detection and Monitoring Using Deep Learning-Based Multiresolution Analysis. Electronics 2021, 10, 1772.
42. Zhang, D. Wavelet Transform. In Fundamentals of Image Data Mining: Analysis, Features, Classification and Retrieval; Zhang, D., Ed.; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 35–44.
43. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Jacques, S. Multi-Block Color-Binarized Statistical Images for Single-Sample Face Recognition. Sensors 2021, 21, 728.
44. Benzaoui, A.; Hadid, A.; Boukrouche, A. Ear Biometric Recognition Using Local Texture Descriptors. J. Electron. Imaging 2014, 23, 053008.
45. Khaldi, Y.; Benzaoui, A. Region of Interest Synthesis Using Image-to-Image Translation for Ear Recognition. In Proceedings of the International Conference on Advanced Aspects of Software Engineering (ICAASE), Constantine, Algeria, 28–30 November 2020; pp. 1–6.
46. Benzaoui, A.; Adjabi, I.; Boukrouche, A. Experiments and Improvements of Ear Recognition Based on Local Texture Descriptors. Opt. Eng. 2017, 56, 043109.
47. Jiang, L.; Cai, Z.; Wang, D.; Jiang, S. Survey of Improving K-Nearest-Neighbor for Classification. In Proceedings of the Fourth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Haikou, China, 24–27 August 2007; pp. 679–683.
48. Atia, N.; Benzaoui, A.; Jacques, S.; Hamiane, M.; Kourd, K.E.; Bouakaz, A.; Ouahabi, A. Particle Swarm Optimization and Two-Way Fixed-Effects Analysis of Variance for Efficient Brain Tumor Segmentation. Cancers 2022, 14, 4399.
49. Saleh, A.I.; Shehata, S.A.; Labeeb, L.M. A Fuzzy-Based Classification Strategy (FBCS) Based on Brain–Computer Interface. Soft Comput. 2019, 23, 2343–2367.
50. Nguyen, B.P.; Tay, W.L.; Chui, C.K. Robust Biometric Recognition from Palm Depth Images for Gloved Hands. IEEE Trans. Hum.-Mach. Syst. 2015, 45, 799–804.
51. Ouahabi, A. Image Denoising Using Wavelets: Application in Medical Imaging. In Advances in Heuristic Signal Processing and Applications; Chatterjee, A., Nobahari, H., Siarry, P., Eds.; Springer: Basel, Switzerland, 2013; pp. 287–313.
Figure 1. The 2D wavelet decomposition. (a) Concept of one-level decomposition. (b) Concept of two-level decomposition. (c) Original palmprint image from the IITD database. (d) The 2D-DWT at the first level. (e) The 2D-DWT at the second level.
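For readers who want to reproduce the decomposition illustrated in Figure 1, the following minimal sketch computes a two-level 2D-DWT with the PyWavelets library; the Haar wavelet and the random stand-in image are illustrative assumptions, not the exact configuration reported in the paper.

```python
# Minimal two-level 2D-DWT sketch (PyWavelets); the Haar wavelet and the
# random stand-in image are illustrative assumptions.
import numpy as np
import pywt

image = np.random.rand(128, 128)  # stand-in for a pre-processed palmprint ROI

# Level 1: approximation (LL1) plus horizontal/vertical/diagonal details.
LL1, (LH1, HL1, HH1) = pywt.dwt2(image, 'haar')

# Level 2: decompose the level-1 approximation again, as in Figure 1b.
LL2, (LH2, HL2, HH2) = pywt.dwt2(LL1, 'haar')

print(LL1.shape, LL2.shape)  # each level halves the spatial resolution
```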
Figure 2. (a) Palmprint image, (b) pre-learned filter with a size of l × l = 11 × 11 and a bit string length of n = 8, (c) BSIF features, and (d) final BSIF-encoded features.
Figure 3. Example of BSIF conversion results for (a) the original image, (b) l × l = 17 × 17, n = 8, (c) l × l = 9 × 9, n = 12, and (d) l × l = 3 × 3, n = 8.
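As a rough illustration of the BSIF encoding shown in Figures 2 and 3, the sketch below convolves the image with n linear filters, thresholds each response at zero, and packs the resulting bits into one integer code per pixel. Random filters stand in here for the pre-learned ICA filters used by BSIF, so the output is only structurally representative; the bsif_encode() helper is ours, not the authors' code.

```python
# Hedged BSIF-style encoding sketch: n filter responses are binarized at
# zero and packed into an n-bit code per pixel. Random filters stand in
# for the pre-learned ICA filters of the original BSIF method.
import numpy as np
from scipy.signal import convolve2d

def bsif_encode(image, filters):
    """filters: array of shape (n, l, l) -> integer code image in [0, 2^n)."""
    code = np.zeros(image.shape, dtype=np.uint32)
    for i, f in enumerate(filters):
        response = convolve2d(image, f, mode='same', boundary='symm')
        code += (response > 0).astype(np.uint32) << i  # set bit i
    return code

rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 11, 11))      # n = 8 bits, l x l = 11 x 11
codes = bsif_encode(rng.random((64, 64)), filters)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))  # descriptor histogram
```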
Figure 4. Graphical flowchart of the proposed feature extraction approach.
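To make the flow of Figure 4 concrete, here is a sketch of the three stages under stated assumptions: median filtering and CLAHE for pre-processing, a multi-level DWT, and a BSIF histogram per sub-band concatenated into the final feature vector. It reuses the illustrative bsif_encode() defined after Figure 3; the exact filter sizes, wavelet, and sub-band selection are assumptions rather than the authors' reported settings.

```python
# Sketch of the overall feature-extraction pipeline of Figure 4, under the
# assumptions stated above (reuses the illustrative bsif_encode()).
import cv2
import numpy as np
import pywt

def extract_features(gray_u8, filters, levels=2, n_bits=8, wavelet='haar'):
    """gray_u8: 8-bit grayscale palmprint ROI -> concatenated BSIF histograms."""
    gray_u8 = cv2.medianBlur(gray_u8, 3)                 # step 1a: denoising
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    band = clahe.apply(gray_u8).astype(np.float64)       # step 1b: lighting
    feats = []
    for _ in range(levels):                              # step 2: DWT levels
        LL, (LH, HL, HH) = pywt.dwt2(band, wavelet)
        for sub in (LL, LH, HL, HH):                     # BSIF per sub-band
            h, _ = np.histogram(bsif_encode(sub, filters),
                                bins=2**n_bits, range=(0, 2**n_bits))
            feats.append(h / max(h.sum(), 1))            # normalized histogram
        band = LL                                        # recurse on LL
    return np.concatenate(feats)                         # final descriptor
```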
Figure 5. Sample images from the IITD database.
Figure 6. Sample images from the CASIA palmprint database.
Figure 7. Recognition rates using several DWT decompositions applied to the CASIA and IITD databases.
Table 1. Comprehensive summary of related work studies.

| Approach | Publication | Method | Dataset | #Sub. | #Img. | Evaluation Protocol |
|---|---|---|---|---|---|---|
| Coding-based methods | Kong and Zhang [20] | CompCode | IITD | 230 | 2600 | 4 img/sub randomly selected for training; remaining for testing |
| | | | CASIA | 312 | 5500 | |
| | Sun et al. [19] | OrdinalCode | IITD | 230 | 2601 | 4/5 img/sub for training; remaining for testing |
| | | | CASIA | 312 | 5502 | |
| | Zhang et al. [21] | E-BOCV | IITD | 230 | 2601 | 2 img/sub for training; remaining for testing |
| | | | CASIA | 312 | 5502 | 3 img/sub for training; remaining for testing |
| | Fei et al. [22] | DOC | IITD | 230 | 2601 | 4/5 img/sub for training; remaining for testing |
| | | | CASIA | 312 | 5502 | |
| | Xu et al. [23] | DRCC | IITD | 230 | 2600 | 4 img/sub for training; remaining for testing |
| | | | PolyU II | 193 | 7752 | |
| | | | MPolyU | 250 | 6000 | |
| Texture-based methods | Michael et al. [24] | DGLBP | CASIA | 312 | 5500 | 4 img/sub for training; remaining for testing |
| | | | IITD | 230 | 2601 | |
| | Morales et al. [25] | SIFT_OLOF | CASIA | 312 | 5500 | 4 img/sub for training; 4 img/sub for testing |
| | | | IITD | 230 | 2601 | |
| | Hammami et al. [26] | LBP | CASIA | 282 | 5412 | 4 img/sub for training; 4 img/sub for testing |
| | | | PolyU | 193 | 7752 | |
| | Wu et al. [27] | SIFT_IRANSAC_OLOF | CASIA | 312 | 5500 | 4 img/sub for training; 4 img/sub for testing |
| | | | IITD | 230 | 2601 | |
| | Luo et al. [28] | LLDP_MFRAT | CASIA | 312 | 5500 | 4 img/sub for training; 4 img/sub for testing |
| | | | IITD | 230 | 2601 | |
| | | LLDP_Gabor | CASIA | 312 | 5500 | |
| | | | IITD | 230 | 2601 | |
| | Wu et al. [29] | TFD | CASIA | 312 | 5502 | 4 img/sub for training; remaining for testing |
| | | | IITD | 230 | 2601 | |
| Deep learning-based methods | Izadpanahkakhk et al. [30] | FEM | IITD | 230 | 2600 | 4 img/sub for training; remaining for testing |
| | | | HKPU | 193 | 7752 | First-session images for training; second-session images for testing |
| | Fei et al. [5] | AlexNet, VGG-16, Inception-V3, ResNet-50 | IITD | 230 | 2600 | 4 img/sub randomly selected for training; remaining for testing |
| | | | GPDS | 100 | 1000 | |
| | | | CASIA | 312 | 5500 | |
| | Zhao and Zhang [31] | DDR | IITD | 230 | 2600 | 4 img/sub randomly selected for training; remaining for testing |
| | | | CASIA | 312 | 5500 | |
Table 2. Identification accuracies (%) using all BSIF parameters applied to the CASIA database.

| Filter Size | 5 Bits | 6 Bits | 7 Bits | 8 Bits | 9 Bits | 10 Bits | 11 Bits | 12 Bits |
|---|---|---|---|---|---|---|---|---|
| 3 × 3 | 49.15 | 60.11 | 62.37 | 63.99 | / | / | / | / |
| 5 × 5 | 56.99 | 70.26 | 74.47 | 77.02 | 78.51 | 78.76 | 78.76 | 77.10 |
| 7 × 7 | 67.07 | 76.53 | 82.92 | 87.05 | 85.47 | 87.62 | 89.03 | 90.69 |
| 9 × 9 | 70.55 | 80.09 | 85.59 | 86.40 | 89.64 | 92.96 | 93.85 | 94.25 |
| 11 × 11 | 71.88 | 82.76 | 88.83 | 88.47 | 91.30 | 93.97 | 94.61 | 95.63 |
| 13 × 13 | 73.70 | 81.39 | 88.87 | 89.23 | 92.47 | 95.34 | 94.98 | 96.03 |
| 15 × 15 | 75.20 | 83.09 | 89.40 | 90.77 | 93.32 | 94.29 | 95.67 | 96.60 |
| 17 × 17 | 75.44 | 83.37 | 89.07 | 91.70 | 93.68 | 94.53 | 95.95 | 96.35 |
Table 3. Identification accuracies (%) using all BSIF parameters applied to the IITD database.

| Filter Size | 5 Bits | 6 Bits | 7 Bits | 8 Bits | 9 Bits | 10 Bits | 11 Bits | 12 Bits |
|---|---|---|---|---|---|---|---|---|
| 3 × 3 | 25.88 | 34.64 | 38.49 | 37.59 | / | / | / | / |
| 5 × 5 | 35.87 | 46.35 | 53.56 | 53.56 | 51.51 | 51.67 | 52.17 | 52.66 |
| 7 × 7 | 45.53 | 58.72 | 58.72 | 66.09 | 67.15 | 72.48 | 72.48 | 75.18 |
| 9 × 9 | 48.81 | 62.65 | 72.89 | 75.51 | 76.65 | 82.47 | 82.88 | 84.93 |
| 11 × 11 | 53.48 | 67.97 | 78.70 | 79.93 | 84.93 | 88.20 | 89.84 | 90.58 |
| 13 × 13 | 57.08 | 69.20 | 79.52 | 82.96 | 86.81 | 91.31 | 92.38 | 93.77 |
| 15 × 15 | 57.73 | 71.99 | 80.34 | 83.94 | 89.02 | 91.15 | 93.85 | 94.67 |
| 17 × 17 | 57.08 | 71.90 | 80.83 | 85.58 | 89.68 | 91.64 | 94.34 | 95.49 |
Table 4. Identification accuracies (%) using several DWT families applied to the IITD database.

| Haar | Daubechies | Biorthogonal | Coiflets | Symlets | Fejer-Korovkin Filters | Reverse Biorthogonal | Discrete Meyer |
|---|---|---|---|---|---|---|---|
| 96.23 | db1: 96.23 | bior1.1: 96.23 | coif1: 96.15 | sym2: 96.15 | fk4: 96.06 | rbio1.1: 96.23 | 96.15 |
| | db2: 96.15 | bior2.2: 96.23 | coif2: 96.39 | sym3: 96.06 | fk6: 96.15 | rbio2.2: 96.15 | |
| | db3: 96.06 | bior3.1: 96.15 | coif3: 96.15 | sym4: 96.15 | fk8: 96.31 | rbio3.1: 96.15 | |
| | db4: 96.39 | bior3.9: 96.31 | coif4: 96.39 | sym5: 96.15 | fk14: 96.31 | rbio3.9: 96.31 | |
Table 5. Identification accuracies (%) using several DWT families applied to the CASIA database.

| Haar | Daubechies | Biorthogonal | Coiflets | Symlets | Fejer-Korovkin Filters | Reverse Biorthogonal | Discrete Meyer |
|---|---|---|---|---|---|---|---|
| 96.92 | db1: 96.92 | bior1.1: 96.92 | coif1: 96.88 | sym2: 96.92 | fk4: 96.96 | rbio1.1: 96.92 | 96.19 |
| | db2: 96.92 | bior2.2: 96.92 | coif2: 97.04 | sym3: 96.88 | fk6: 96.92 | rbio2.2: 96.88 | |
| | db3: 96.88 | bior3.1: 96.96 | coif3: 97.08 | sym4: 96.92 | fk8: 97.00 | rbio3.1: 96.92 | |
| | db4: 97.04 | bior3.9: 97.04 | coif4: 97.08 | sym5: 97.04 | fk14: 97.00 | rbio3.9: 97.12 | |
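The family sweep of Tables 4 and 5 can be scripted directly, since the wavelet names in the tables are standard PyWavelets identifiers; the minimal loop below only instantiates each wavelet and omits the scoring logic, which is assumed to be the pipeline sketched earlier.

```python
# Iterate over the DWT families of Tables 4 and 5 via their standard
# PyWavelets identifiers; the per-wavelet evaluation is left out here.
import pywt

WAVELETS = ["haar",
            "db1", "db2", "db3", "db4",
            "bior1.1", "bior2.2", "bior3.1", "bior3.9",
            "coif1", "coif2", "coif3", "coif4",
            "sym2", "sym3", "sym4", "sym5",
            "fk4", "fk6", "fk8", "fk14",
            "rbio1.1", "rbio2.2", "rbio3.1", "rbio3.9",
            "dmey"]

for name in WAVELETS:
    w = pywt.Wavelet(name)
    print(f"{name}: decomposition filter length {w.dec_len}")
```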
Table 6. The performance of our system without and with pre-processing (median filtering or CLAHE) using the IITD and CASIA databases.

| Database | Without Pre-Processing (%) | Median Filtering (%) | CLAHE (%) |
|---|---|---|---|
| IITD | 96.39 | 95.99 | 97.13 |
| CASIA | 97.12 | 96.60 | 97.21 |
Table 7. The performance of our system using K-NN vs. CDNN applied to the IITD and CASIA databases.

| K | CASIA K-NN (Euclidean) | CASIA K-NN (City Block) | CASIA CDNN (Euclidean) | CASIA CDNN (City Block) | IITD K-NN (Euclidean) | IITD K-NN (City Block) | IITD CDNN (Euclidean) | IITD CDNN (City Block) |
|---|---|---|---|---|---|---|---|---|
| 1 | 97.21 | 98.10 | 97.21 | 98.10 | 97.13 | 98.77 | 97.13 | 98.77 |
| 3 | 96.68 | 97.73 | 96.68 | 97.69 | 95.91 | 98.28 | 95.91 | 98.12 |
| 5 | 96.08 | 97.29 | 96.60 | 97.41 | 93.78 | 97.13 | 94.02 | 97.13 |
| 7 | 95.15 | 96.68 | 95.71 | 97.01 | 92.63 | 96.48 | 93.12 | 96.64 |
| 9 | 94.50 | 96.08 | 95.50 | 96.76 | 90.91 | 95.33 | 91.89 | 95.74 |
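The matching stage compared in Table 7 can be prototyped with scikit-learn, which exposes both distance metrics from the table ("euclidean" and "manhattan", i.e., city block). The random features and labels below are placeholders for the BSIF + DWT descriptors, and the CDNN variant is not sketched here.

```python
# K-NN matching sketch for the comparison of Table 7; random data stand in
# for BSIF + DWT feature vectors, and 'manhattan' is the city-block metric.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 256))           # placeholder gallery descriptors
y_train = rng.integers(0, 20, 200)         # placeholder subject labels
X_test = rng.random((5, 256))              # placeholder probe descriptors

for metric in ("euclidean", "manhattan"):  # Euclidean vs. city-block distance
    knn = KNeighborsClassifier(n_neighbors=1, metric=metric)
    knn.fit(X_train, y_train)
    print(metric, knn.predict(X_test))
```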
Table 8. Comparison of recognition rates with different recent approaches.

| Approach | Publication | Year | Method | IITD (%) | CASIA (%) |
|---|---|---|---|---|---|
| Coding-based methods | Kong and Zhang [20] | 2004 | CompCode | 77.79 | 79.27 |
| | Sun et al. [19] | 2005 | OrdinalCode | 73.26 | 73.32 |
| | Zhang et al. [21] | 2012 | E-BOCV | 85.93 | 84.06 |
| | Fei et al. [22] | 2016 | DOC | 89.99 | 78.51 |
| | Xu et al. [23] | 2018 | DRCC | 88.82 | / |
| Texture-based methods | Michael et al. [24] | 2008 | DGLBP | 76.44 | 78.86 |
| | Morales et al. [25] | 2011 | SIFT_OLOF | 89.44 | 89.99 |
| | Hammami et al. [26] | 2014 | LBP | / | 96.66 |
| | Wu et al. [27] | 2014 | SIFT_IRANSAC_OLOF | 93.28 | 91.46 |
| | Luo et al. [28] | 2016 | LLDP_MFRAT | 92.75 | 90.77 |
| | | | LLDP_Gabor | 95.17 | 93.00 |
| | Wu et al. [29] | 2021 | TFD | 97.47 | 96.88 |
| Deep learning-based methods | Izadpanahkakhk et al. [30] | 2018 | FEM | 94.70 | / |
| | Fei et al. [5] | 2019 | AlexNet | 88.18 | 94.91 |
| | | | VGG-16 | 92.12 | 94.01 |
| | | | Inception-V3 | 96.22 | 93.85 |
| | | | ResNet-50 | 95.57 | 95.21 |
| | Zhao and Zhang [31] | 2020 | DDR | 98.70 | 99.41 |
| | Our Approach | 2022 | BSIF + DWT | 98.77 | 98.10 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
