Open Access. Published by De Gruyter, June 13, 2022, under a CC BY 4.0 license.

A novel fingerprint recognition method based on a Siamese neural network

Zihao Li, Yizhi Wang (corresponding author), Zhong Yang, Xiaomin Tian, Lixin Zhai, Xiao Wu, Jianpeng Yu, Shanshan Gu, Lingyi Huang (corresponding author), and Yang Zhang

Abstract

Fingerprint recognition is the most widely used identification method at present. However, it still falls short in terms of cross-platform use and algorithmic complexity, which hampers the migration of fingerprint data and the development of fingerprint systems. Conventional image recognition methods require offline standard databases constructed in advance for efficient image access. Such a database provides images pre-processed by a specific method that is often compatible only with one specific recognition algorithm; that algorithm then retrieves these specific pre-processed images for recognition and is inevitably blocked from other datasets. The method proposed in this research embeds an image processing algorithm based on a Siamese neural network into the recognition method, which allows it to recognize images from any source without constructing a database for image storage in advance. In this research, the proposed method was applied to fingerprint recognition and evaluated. The results showed that the accuracy of the proposed algorithm was up to 92% and its F1 score was up to 0.87. Compared with conventional fingerprint matching methods, its significant advantage in the FRR, FAR, and CR jointly indicated the remarkable correct recognition rate of the proposed method.

1 Introduction

1.1 Background

Fingerprints are patterns made up of ridge lines on the pads of human fingers. Although they cover only a small part of the skin, the information fingerprints contain is highly instrumental. Among biometric recognition technologies, fingerprint recognition boasts the highest reliability [1,2]. Fingerprint recognition has now come to everyone's side: mobile phone fingerprint unlocking, fingerprint payment, fingerprint punch-in, and other applications have already become part of daily life [3].

Extracting Galton minutiae points is the core of traditional fingerprint recognition [4]. In addition to preprocessing, binarization, and thinning, minutiae are extracted by skeletonization [5]. The matching algorithm is another cornerstone of fingerprint recognition; classical algorithms include the minutiae number-and-angle matching method, the multiple reference point matching method, and the vector triangle matching method [6]. As for the weaknesses of fingerprint recognition technology, on the one hand, conventional fingerprint recognition mainly adopts image processing and other fixed algorithms and usually requires manual supervision and parameter debugging [7]; it is time-consuming and can hardly adapt to massive fingerprint databases. On the other hand, the fingerprint image matching method varies from system to system, including the coordinates, directions, and accuracy of minutiae [8]. Given different coordinate conventions, one fingerprint image is stored in different "formats" in many databases and cannot be recognized mutually, which creates redundancy and makes image migration difficult.

To this end, a novel fingerprint recognition method is proposed and realized in this study, which uses the Siamese neural network to compare two fingerprint images and complete the task of fingerprint recognition. After the test, the proposed technique works best in mobile phone fingerprint recognition, electronic locks, and other fingerprint devices. The contributions of this study are as follows:

  1. Compared with conventional recognition methods, the proposed method saves the step of feature extraction, making fingerprint recognition faster and easier to develop.

  2. The fingerprint images stored in the proposed database can be directly input into other fingerprint recognition systems for use, which greatly enhances the compatibility of fingerprint system data.

  3. The proposed method exhibits stronger robustness, and the algorithm requires no human intervention.

  4. Compared with other neural-network-based methods, steps of fingerprint database construction with fingerprint images are reduced via the proposed method, as it provides a comparison between two fingerprint images in real time.

1.2 Literature review

The most widely accepted method in image processing, as well as in other data processing tasks such as sound recognition [9], is the convolutional neural network (CNN) [10]. As an extension of the CNN, the Siamese neural network is a coupling framework established between two artificial neural networks [11]. The Siamese network includes two CNNs that share weights; after two images pass through the CNNs and the comparative network with its loss function, the relevancy between the images is output. As early as 2002, there were studies combining neural networks with fingerprint recognition [12]. Recent applications of Siamese networks are mainly associated with image processing, such as image forensics [13], diagnosis of COVID-19 patients from chest X-ray (CXR) images [14], and geochemical anomalies [15], and these applications are limited to comparing the differences between collected images.

Although the fingerprint recognition algorithm based on an improved CNN [16] investigated by Li achieved progress in speed and accuracy, minutiae still needed to be extracted and the compatibility was poor. The method based on the back-propagation (BP) neural network realized by Sun mainly applied the BP network to the minutiae matching algorithm [17]; fingerprint minutiae extraction remained a difficult step in debugging and developing such conventional methods. Yuan et al. adopted FPN (feature pyramid networks)–SE (squeeze and excitation)–Capsule (capsule networks) to realize fingerprint recognition [18]. Although the accuracy of the algorithm was enhanced, the complexity of fingerprint recognition also increased. In addition, the recognition method was to input the fingerprints into a pre-trained network, and the network would directly give the recognition result. This meant that each time a new fingerprint was added, the network had to be retrained, and inputting fingerprints was time-consuming. Moreover, the data information was stored inside the network, so the compatibility and scalability were low, and the role it could play in real-time fingerprint recognition applications was limited. Deshpande's team [19] employed a CNN model based on residual learning to enhance images and extract minutiae. Ma et al. brought forward a deep convolutional neural network and used it to match fingerprint minutiae in parallel [20]. Both adopted neural networks to extract fingerprint minutiae from images. These kinds of recognition methods did improve the recognition of unclear fingerprint images; however, they were not compatible with other recognition systems. By contrast, the method presented in this study can directly use the images to obtain matching results, while fingerprint recognition can still be achieved in other systems by extracting minutiae from the stored data images with this or other methods. Therefore, the discussion leads to the following two research questions:

  1. How should the complexity of the algorithm in fingerprint recognition be simplified?

  2. How should the same fingerprint database be compatible with multiple fingerprint systems?

2 Methodologies

As mentioned above, the conventional fingerprint recognition system relied on the comparison of minutiae. In this study, however, the step of minutia point extraction was omitted, and two binary fingerprint images were input into the Siamese neural network to obtain the similarity between two fingerprints and get the fingerprint recognition result. The whole system needed to complete such steps as the standardization of fingerprint image, image enhancement, binarization, input into the Siamese network, and output of the matching result. It can be divided into two parts: “fingerprint image preprocessing” and “fingerprint image matching based on Siamese network,” as shown in Figure 1.

Figure 1: Flowchart of fingerprint image processing.

The method proposed in this study can compare fingerprint images directly, so it is more flexible in practical applications. The application workflow is shown in Figure 2: the binarized image is stored in the database and compared with the input fingerprint in order to control the subsequent programs.

Figure 2: Flowchart of application.

2.1 Fingerprint image preprocessing

The fingerprint images obtained in fingerprint collection had the following four defects: (1) different pressing forces and directions of fingers can cause the fingerprint image to vary in shade; (2) different dryness and wetness levels of fingers can make the fingerprint images too dry or too wet and lack corresponding features; (3) the fingerprint image was intermittent due to wrinkles and scars on the fingers; and (4) there was irregular noise or stains in the image collected. Fingerprint image preprocessing is a technology that can improve the quality of fingerprint images, and fingerprint images processed with this technology can be better processed later.

The fingerprint image processing method applied in this study was divided into three steps: (1) standardization, (2) fingerprint enhancement, and (3) fingerprint binarization.

2.2 Fingerprint image matching based on Siamese network

The structure diagram of the Siamese network is shown in Figure 3. It was composed of three networks, of which two (network 1 and network 2) were a pair of convolutional neural networks with shared weights. The function of this pair of networks was to generate a pair of feature vectors by extracting image features. After these vectors went through the fully connected network (network 3), the final output was obtained.

Figure 3: Structure diagram of the Siamese network.

There were many options for the master neural network model, such as the AlexNet model, which dramatically broke the recognition accuracy record of convolutional networks in the 2012 ImageNet Challenge [21]; GoogLeNet, a more outstanding model on the ImageNet platform in 2014 [22] that applied multi-scale convolutional feature fusion; the excellent VGGNet; and the deep residual network model ResNet designed by He et al. [23]. Nowadays, the recognition accuracy of convolutional neural networks is even higher than that of human beings, the network layers are increasingly deep, and the technology is increasingly sophisticated.

This study employed an improved version of the VGG16 network model, a convolutional network presented by Simonyan and Zisserman [24] that performed very well in the 2014 ImageNet Challenge. Its applications to face recognition and object classification in the field of image recognition have been verified and have achieved outstanding results. However, this model had not yet been applied to fingerprint recognition; in this study, we applied it to fingerprint images and expanded its application scope.

The fingerprint image matching part fulfilled the function of outputting matching degrees. Two pre-processed fingerprint images were input and, through network calculations, the matching degree m was computed. If m was biased toward 1, the two fingerprints were similar and came from the same finger; if m was biased toward 0, the two fingerprints were different and came from different fingers.

2.3 Obtaining matching results

After comparison based on the Siamese network, a threshold d in (0, 1) was used for classification. If m was greater than d, the two fingerprints were judged to be from the same finger; if m was less than d, they were judged to be different. The threshold value adopted in this study was d = 0.9.

3 Pre-processing methods of fingerprint images

3.1 Standardization of images

Fingerprint images were standardized to eliminate the problems of inconsistent clarity, gray scale, and the number of channels between different fingerprint images. The steps were as follows:

  1. Grayscale conversion. Color images were converted into grayscale images.

  2. Equalization. Due to the influence of the collection device or the pressing force of the finger, the gray value of the effective part of a fingerprint image fluctuated greatly. It was therefore necessary to equalize the gray values of the image so that the mean, variance, and contrast of its gray-level features stayed within a small range.

Assuming that the gray value at (x, y) of the original image was gray (x, y) and the image size was M × N, the gray mean (mean) and gray variance (var) can be obtained. The calculation method is as follows:

(1) $\mathrm{mean} = \dfrac{1}{M \times N}\displaystyle\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} \mathrm{gray}(i + x \times M,\ j + y \times N)$,

(2) $\mathrm{var} = \dfrac{1}{M \times N}\displaystyle\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} \left[\mathrm{gray}(i + x \times M,\ j + y \times N) - \mathrm{mean}\right]^{2}$,

(3) $\mathrm{GRAY}(x, y) = \begin{cases} \mathrm{mean}_0 + \sqrt{\dfrac{\mathrm{var}_0 \times [\mathrm{gray}(x, y) - \mathrm{mean}]^{2}}{\mathrm{var}}}, & \mathrm{gray}(x, y) > \mathrm{mean} \\ \mathrm{mean}_0 - \sqrt{\dfrac{\mathrm{var}_0 \times [\mathrm{gray}(x, y) - \mathrm{mean}]^{2}}{\mathrm{var}}}, & \text{otherwise}, \end{cases}$

where var0 is the expected variance, mean0 is the expected mean, and GRAY(x, y) is the gray value that was output. In this study, the empirical values of mean0 = 80 and var0 = 200 are used, and the equalized result is shown in Figure 4.
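To make the equalization step concrete, the following minimal NumPy sketch (an illustrative implementation, not the authors' code) applies Eqs. (1)–(3) with the empirical targets mean0 = 80 and var0 = 200:

```python
import numpy as np

def equalize_gray(gray: np.ndarray, mean0: float = 80.0, var0: float = 200.0) -> np.ndarray:
    """Adjust a grayscale fingerprint image toward a target mean and variance, per Eqs. (1)-(3)."""
    gray = gray.astype(np.float64)
    mean = gray.mean()                                  # Eq. (1)
    var = gray.var()                                    # Eq. (2)
    delta = np.sqrt(var0 * (gray - mean) ** 2 / var)    # Eq. (3): offset from the target mean
    out = np.where(gray > mean, mean0 + delta, mean0 - delta)
    return np.clip(out, 0, 255)
```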

  3. Normalization. For ease of further processing and calculation, the image was normalized so that individual pixel values lay in [0, 1]. After normalization, the gray value of each pixel was

     (4) $y = \dfrac{x - \min}{\max - \min}$.

  4. Low-pass filtering and smoothing. The image noise generated during fingerprint collection was removed and the fingerprint images were smoothed. The fast Fourier transform (FFT) was applied to the images; after the high-frequency part was eliminated, the low-pass filtered images were obtained through the inverse Fourier transform.

Figure 4: Left: original image. Right: equalized image.

In this study, an ideal low-pass filter H(x, y) was adopted:

(5) $H(x, y) = \begin{cases} 1, & D(u, v) \le D_0 \\ 0, & D(u, v) > D_0, \end{cases}$

where $D_0$ stands for the radius of the passband ($D_0 = 60$ was used in this study) and $D(u, v)$ is the Euclidean distance from a point in the spectrum to the center of the spectrum:

(6) $D(u, v) = \sqrt{(u - M/2)^{2} + (v - N/2)^{2}}$.
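As an illustration of how Eqs. (5) and (6) can be applied, the sketch below (a NumPy-based assumption of the implementation, with D0 = 60 as stated in the text) builds the ideal low-pass mask and applies it in the frequency domain:

```python
import numpy as np

def ideal_lowpass(img: np.ndarray, d0: float = 60.0) -> np.ndarray:
    """Smooth a fingerprint image with an ideal low-pass filter, per Eqs. (5)-(6)."""
    m, n = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))              # centered FFT of the image
    u, v = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
    dist = np.sqrt((u - m / 2) ** 2 + (v - n / 2) ** 2)       # Eq. (6): distance to spectrum center
    mask = (dist <= d0).astype(float)                         # Eq. (5): keep only low frequencies
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)                                  # inverse FFT back to the spatial domain
```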

Filtered results are shown in Figure 5.

Figure 5: Left: equalized image. Right: filtered image.

3.2 Fingerprint image enhancement

To remove adhesion, breaks, and other defects of fingerprint ridges, it was necessary to enhance the fingerprints. Because the orientation of fingerprint ridges is very conspicuous, the orientation field of the fingerprint can be built from the ridge lines. Our study drew on the fingerprint enhancement solution based on the orientation field and ridge frequency proposed by Hong et al. [25]. This algorithm can adaptively improve the clarity of the ridge and valley structures of input fingerprint images according to the estimated local ridge orientation and frequency.

First, the gradients ∂x(u, v) and ∂y(u, v) of each pixel in the two directions were calculated, and then the local orientation was estimated:

(7) $v_x(i, j) = \displaystyle\sum_{u=i-w/2}^{i+w/2}\ \sum_{v=j-w/2}^{j+w/2} 2\,\partial_x(u, v)\,\partial_y(u, v)$,

(8) $v_y(i, j) = \displaystyle\sum_{u=i-w/2}^{i+w/2}\ \sum_{v=j-w/2}^{j+w/2} \left[\partial_x^{2}(u, v) - \partial_y^{2}(u, v)\right]$,

(9) $\theta(i, j) = \dfrac{1}{2}\tan^{-1}\dfrac{v_y(i, j)}{v_x(i, j)}$,

where θ(i, j) is the least-squares estimate of the local ridge orientation in the block of size w centered at (i, j). It stands for the orientation orthogonal to the principal direction of the Fourier spectrum of the corresponding window. A discrete sine waveform composed of fingerprint ridges can be obtained along a line orthogonal to θ in each local window, and the valley and ridge frequency can be estimated from this feature.
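A minimal sketch of this block-wise orientation estimate is given below; the Sobel gradient operator and the block size w = 16 are illustrative assumptions, not values stated in the paper:

```python
import numpy as np
from scipy import ndimage

def local_orientation(img: np.ndarray, w: int = 16) -> np.ndarray:
    """Estimate the ridge orientation of each w-by-w block, per Eqs. (7)-(9)."""
    gx = ndimage.sobel(img.astype(np.float64), axis=1)        # gradient in the x direction
    gy = ndimage.sobel(img.astype(np.float64), axis=0)        # gradient in the y direction
    rows, cols = img.shape
    theta = np.zeros((rows // w, cols // w))
    for bi in range(rows // w):
        for bj in range(cols // w):
            blk = (slice(bi * w, (bi + 1) * w), slice(bj * w, (bj + 1) * w))
            vx = np.sum(2.0 * gx[blk] * gy[blk])              # Eq. (7)
            vy = np.sum(gx[blk] ** 2 - gy[blk] ** 2)          # Eq. (8)
            theta[bi, bj] = 0.5 * np.arctan2(vy, vx)          # Eq. (9)
    return theta
```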

Once the local orientation and the valley and ridge frequency of the fingerprint were obtained, the Gabor filter could be adopted to filter each local window. The Gabor filter features both frequency selectivity and orientation selectivity and achieves good joint resolution in the spatial and frequency domains; on this account, it is very suitable as a band-pass filter for enhancing fingerprint images. After filtering, the local windows were spliced together to obtain the enhanced fingerprint image.

The Gabor operator is calculated using the following equation, and the result is shown in Figure 6:

(10) $g(x, y, \lambda, \theta, \psi, \sigma, \gamma) = \exp\left(-\dfrac{x^{2} + \gamma^{2} y^{2}}{2\sigma^{2}}\right)\exp\left(i\left(2\pi\dfrac{x}{\lambda} + \psi\right)\right)$.
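The Gabor kernel of Eq. (10) can be generated as in the following sketch; the kernel size and the default parameter values are assumptions, and the coordinates are rotated by the local orientation θ before being fed into the formula:

```python
import numpy as np

def gabor_kernel(ksize: int = 21, lam: float = 8.0, theta: float = 0.0,
                 psi: float = 0.0, sigma: float = 4.0, gamma: float = 0.5) -> np.ndarray:
    """Build the complex Gabor operator of Eq. (10), oriented along the local ridge direction."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * (2 * np.pi * xr / lam + psi))
    return envelope * carrier
```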

Figure 6: Left: filtered image. Right: enhanced fingerprint image.

3.3 Binarization

In this study, an iterative method was employed to perform binary segmentation on the enhanced image:

  1. An initial threshold T was selected;

  2. The image was divided into two parts, R1 and R2 by using T;

  3. Means m1 and m2 of R1 and R2 were calculated;

  4. A new threshold T was selected. Let T = (m1 + m2)/2;

  5. Steps 2–4 were repeated until the difference between successive values of T was less than a preset value.
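A compact sketch of this iterative thresholding, assuming NumPy and a convergence tolerance of 0.5 gray levels (the preset value is not given in the paper):

```python
import numpy as np

def iterative_threshold(img: np.ndarray, tol: float = 0.5) -> np.ndarray:
    """Binarize an enhanced fingerprint image by iteratively updating the threshold T."""
    t = img.mean()                                   # step 1: initial threshold
    while True:
        r1, r2 = img[img <= t], img[img > t]         # step 2: split into regions R1 and R2
        m1 = r1.mean() if r1.size else t             # step 3: region means m1 and m2
        m2 = r2.mean() if r2.size else t
        t_new = 0.5 * (m1 + m2)                      # step 4: updated threshold
        if abs(t_new - t) < tol:                     # step 5: stop once T stabilizes
            break
        t = t_new
    return (img > t).astype(np.uint8)                # binary ridge/valley map
```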

Figure 7 shows the result after binarization of the fingerprint image.

Figure 7: Left: enhanced fingerprint image. Right: binarized fingerprint image.

4 Building and training of the Siamese network

4.1 Network building

In this study, the PyTorch platform was used to build the main network, and the input layer channel was set to 128 × 128 × 3. Fingerprint images with a size of 128 × 128 were used as input. The main network was structured as follows (Table 1):

Table 1: Structure of the main network

| S/N | Layer type | Kernel size | Features | Max pooling size | Output size |
|---|---|---|---|---|---|
| 1 | Input | — | — | — | 128 × 128 × 3 |
| 2 | Two convolution layers | 3 × 3 | 64 | 2 × 2 | 64 × 64 × 64 |
| 3 | Two convolution layers | 3 × 3 | 128 | 2 × 2 | 32 × 32 × 64 |
| 4 | Three convolution layers | 3 × 3 | 256 | 2 × 2 | 16 × 16 × 256 |
| 5 | Three convolution layers | 3 × 3 | 512 | 2 × 2 | 8 × 8 × 512 |
| 6 | Three convolution layers | 3 × 3 | 512 | 2 × 2 | 4 × 4 × 512 |
| 7 | Fully connected | — | — | — | 1 × 1 × 4,096 |
| 8 | Fully connected layer simulated by convolution | — | — | — | 1 × 1 × 1,000 |
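As a rough PyTorch sketch of the main network, the shared branch can be assembled from torchvision's VGG16 with ImageNet-pretrained weights (used as the starting point, as noted in Section 4.2). This is only an approximation of Table 1: the final "fully connected layer simulated by convolution" is replaced here by the standard VGG classifier.

```python
import torch
import torch.nn as nn
from torchvision import models

class MainNetwork(nn.Module):
    """VGG16-style feature extractor shared (with identical weights) by both Siamese branches."""

    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features        # convolutional blocks of Table 1 (rows 2-6)
        self.avgpool = vgg.avgpool          # adaptive pooling so 128 x 128 inputs fit the FC layers
        self.classifier = vgg.classifier    # fully connected layers ending in a 1,000-dim vector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                # 3 x 128 x 128 -> 512 x 4 x 4
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)           # one feature vector per fingerprint image
```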

See Figure 3 for the structure diagram showing where the comparative network sits in the Siamese network: the outputs of the two main networks were fed into the loss function. The loss function used in this study was the contrastive loss [26]; this kind of loss function is effective in handling the relationship between the outputs of the two main networks in the Siamese network.

Its loss over a batch of N pairs is

(11) $L = \dfrac{1}{2N}\displaystyle\sum_{n=1}^{N}\left[y\,d^{2} + (1 - y)\max(\mathrm{margin} - d,\ 0)^{2}\right],$

where d is the distance between the two output feature vectors, y is 1 for a same-finger pair and 0 otherwise, and margin is a preset constant.

After the loss-function stage, the output vector distance was passed through two fully connected layers, and the output layer used the sigmoid function to normalize the resulting value. The network in this segment was the comparative network (Figure 8). If the relevancy between the two images was high, the output was biased toward 1; otherwise, the output was biased toward 0.

(12) $\mathrm{Sigmoid}(z) = \dfrac{1}{1 + e^{-z}}$.

Figure 8: Structure diagram of the comparative network.
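The comparison stage can be sketched as follows; this is an assumed implementation in which the pairwise distance d between the two feature vectors feeds the contrastive loss of Eq. (11) during training, while two fully connected layers followed by the sigmoid of Eq. (12) produce the matching degree m (the margin value and the hidden-layer width are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveLoss(nn.Module):
    """Contrastive loss of Eq. (11); y = 1 for same-finger pairs, y = 0 for different fingers."""

    def __init__(self, margin: float = 2.0):       # margin value is an assumption
        super().__init__()
        self.margin = margin

    def forward(self, f1, f2, y):
        d = F.pairwise_distance(f1, f2)             # Euclidean distance between feature vectors
        loss = y * d.pow(2) + (1 - y) * torch.clamp(self.margin - d, min=0).pow(2)
        return 0.5 * loss.mean()

class ComparativeHead(nn.Module):
    """Two fully connected layers and a sigmoid mapping a feature pair to m in (0, 1)."""

    def __init__(self, feature_dim: int = 1000, hidden: int = 256):   # hidden width assumed
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(2 * feature_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, 1))

    def forward(self, f1, f2):
        m = torch.sigmoid(self.fc(torch.cat([f1, f2], dim=1)))        # Eq. (12)
        return m.squeeze(1)                         # biased toward 1 for matching fingerprints
```

At matching time, m is compared against the decision threshold d = 0.9 described in Section 2.3.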

4.2 Training method

In this study, we trained the above Siamese network with 63 different fingerprints, using fingerprint images with a size of 128 × 128. The fingerprint images of the same finger in the training set were stored in the same folder. For the main network, the pre-trained VGG16 weights were used as the starting point for the subsequent training.

There are 66 kinds of fingerprint images belonging to different fingers in the training set, collected with an AS60x fingerprint collector; for each fingerprint, 10 images were sampled on average. All images in the training set underwent the fingerprint preprocessing described above. To increase the adaptability of the training set, each fingerprint image was rotated five times, giving a total of six images per original (Figure 9); through this operation, the number of images per fingerprint increased to about 60 on average.
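The rotation-based augmentation can be reproduced with a small sketch like the following; the specific rotation angles are an assumption, since the paper only states that each image was rotated five times:

```python
from PIL import Image

def augment_by_rotation(path: str, angles=(-20, -10, 10, 20, 30)):
    """Return the original fingerprint image plus five rotated copies (six images in total)."""
    img = Image.open(path).convert("L")                             # load as a grayscale image
    return [img] + [img.rotate(a, fillcolor=255) for a in angles]   # rotate with white fill
```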

Figure 9: Binarized fingerprint image database.

Regarding the training of the comparative network, two images of the same finger were taken from the training set and the target output was calibrated to 1. An image of a different finger was then taken and, paired with each of the first two images, the target output was calibrated to 0. Further images of different fingers were taken and the previous step was repeated to calibrate the dataset. After training in this way, when two fingerprint images of the same finger were input, the network output was biased toward 1; when two fingerprint images of different fingers were input, the output was biased toward 0.

An RTX 2060 graphics processing unit (GPU) was used for training and testing, and the training parameters and results were as follows:

  1. Batch size = 32;

  2. Learning rate = 0.001;

  3. Epoch = 1,000;

  4. Total loss = 0.1149;

  5. Training time: 3 days;

  6. Test time per match: 600 ms.
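Under these hyperparameters, one possible training step is sketched below; the optimizer (Adam) and the data-loader interface are assumptions, since the paper does not specify them:

```python
import torch

BATCH_SIZE, LEARNING_RATE, EPOCHS = 32, 0.001, 1000   # values listed above

def train(backbone, loss_fn, loader, device="cuda"):
    """Train the shared backbone on pre-processed fingerprint pairs (img1, img2, label)."""
    backbone.to(device).train()
    optimizer = torch.optim.Adam(backbone.parameters(), lr=LEARNING_RATE)
    for epoch in range(EPOCHS):
        for img1, img2, label in loader:
            img1, img2, label = img1.to(device), img2.to(device), label.to(device)
            f1, f2 = backbone(img1), backbone(img2)     # same weights applied to both images
            loss = loss_fn(f1, f2, label.float())       # contrastive loss of Eq. (11)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```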

5 Results and analysis

5.1 Compatibility test

The conventional Galton algorithm was applied to the preprocessed image database proposed in this research. The results showed that the proposed database was compatible with the conventional system, and the feature database required by the target system was successfully generated.

5.2 Accuracy test

A total of 3,826 fingerprint images of 66 different fingerprints were used to train the network, and the network was trained five times in a row. The accuracy would be greatly improved if more data were included in the database (Figure 10).

Figure 10: Fingerprint matching experiment; the two images at the top are from the same fingerprint, and the two images at the bottom are from different fingerprints.

Figure 11: Accuracy of the method proposed in this study under different thresholds.

The fingerprint images used in the detection program were not in the training database. A total of 82 fingerprint images of 10 different fingerprints were used for fingerprint detection. The images had been preprocessed and binarized as described above in order to improve the testing efficiency. Fingerprint images of the same kind in the database were placed in the same folder. The implementation of the detection program consisted of three steps:

  1. Read all fingerprint images and mark fingerprints by fingerprint category.

  2. Calculate the accuracy rate (as shown in Figure 11) and recall rate of the marked fingerprint images after pairwise matching with different thresholds.

  3. Calculate the harmonic mean of the accuracy rate and recall rate to obtain the F1 score (also known as balanced F score).

Figure 12: Detailed minutiae successfully extracted after inputting a database image into the conventional fingerprint algorithm.

Figure 13: Accuracy rate and recall rate curves for different threshold values.

Figure 14: F1 score for different threshold values.

Figure 15: FRR and FAR depending on the decision threshold.

Table 2: Comparison of experimental results

| Method | FRR (%) | FAR (%) | CR (%) |
|---|---|---|---|
| Galton-based method | 7.7 | 0.5 | 91.8 |
| Multiple reference points | 7.85 | 0.06 | 92.09 |
| Probabilist method | 4.7 | 0.7 | 94.6 |
| Vector triangle matching | 2.72 | 0.05 | 97.23 |
| Proposed method | 1.11 | 1.41 | 97.48 |

6 Results

  1. Images from the proposed storage database were input directly into the Galton-based minutiae extraction algorithm, and the detail points were successfully found (as shown in Figure 12), indicating that the fingerprint storage database proposed by this method is usable by other algorithms and highly compatible.

  2. A total of 82 marked test fingerprint images were selected for the matching test, yielding 3,321 different pair combinations. The accuracy rate and recall rate curves were then calculated (Figure 13), and by combining the accuracy rate and recall rate, the F1 score was obtained (Figure 14). The F1 score reaches its highest value of 0.874 when the similarity threshold is 91%.

  3. Many researchers have utilized the FRR (false rejection rate) and FAR (false acceptance rate) to measure the performance of fingerprint recognition algorithms. These metrics were computed as follows:

precision(i) = simNum(i).TP / (simNum(i).TP + simNum(i).FP);   % precision rate

recall(i) = simNum(i).TP / (simNum(i).TP + simNum(i).FN);      % recall rate

F1(i) = (2 * precision(i) * recall(i)) / (precision(i) + recall(i));   % harmonic mean of precision and recall

FRR(i) = simNum(i).FP / num;   % false rejection rate at threshold i

FAR(i) = simNum(i).FN / num;   % false acceptance rate at threshold i

CR(i) = 1 - FRR(i) - FAR(i);   % correct recognition rate

Figure 15 gives the FRR and FAR curves of this method as a function of the threshold. It can be seen from the figure that when the threshold is 91%, there is little difference between the FRR and FAR values. At this point, FRR = 1.11%, FAR = 1.41%, and CR (correct recognition rate) = 100% − FRR − FAR = 97.48%. Table 2 shows the comparison between this method and other fingerprint identification algorithms as a performance reference.

Compared with the vector triangle matching method, the proposed method returned a slightly higher CR and a significantly smaller FRR, although a significantly higher FAR; compared with the other classic methods, such as multiple reference points, the probabilist method, and the Galton-based method, the proposed method showed a remarkable advantage in FRR, FAR, and CR.

7 Conclusions and future work

7.1 Conclusions

In this research, a novel image matching method based on an embedded Siamese neural network is proposed and applied to fingerprint matching. It can recognize fingerprints from any source (databases, photographs, and pictures) with an embedded image processing algorithm, so the step of constructing a fingerprint image database in advance can be omitted and online recognition is realized. In addition, compared with other neural-network methods, the proposed method achieved, after training, an F1 score of 0.874 at a similarity threshold of 91%, indicating excellent precision and recall; its high CR (97.48%) and low FRR (1.11%), together with a FAR of 1.41%, jointly indicate its accuracy in identifying the desired fingerprints.

The research work is summarized as follows:

  1. Compared with the conventional fingerprint recognition method, which requires extracting minutiae and performing corresponding matching, our study innovates by directly comparing the fingerprint images with the Siamese neural network and then outputting a similarity value.

  2. This study takes into account that different fingerprint recognition systems have different data storage methods and that their data are not interoperable. Compared with the conventional fingerprint recognition methods, the proposed one is simpler, easier to understand, more compatible, and more convenient.

7.2 Future work

Although the design and research of the innovative fingerprint matching system were performed and realized, several promising research directions remain to be developed further, including the following:

  1. Due to hardware limitations in the fingerprint recognition system, not much training data were used for machine learning. If more datasets were used in future research, the accuracy would be further improved.

  2. Our study lays more emphasis on the accuracy of the fingerprint matching algorithm, and there is still much room for improvement in running time. Further studies therefore need to fully consider the requirements of real-time operation and rapidity.

  3. Although this study proposes a fingerprint recognition method, the functions implemented for the newly proposed matching algorithm are still limited, and there is still a gap with respect to practical applications. Databases, networking, other biometric methods, single-chip microcomputers, and other modules and development methods can be combined with the proposed matching method. In that case, and given that the dataset retrieved in this research was open source, data security, attack methods, and related issues should be further considered in future applications.

  4. Despite the improved compatibility, storing images directly in the database will, in practical applications, increase the storage burden of the database and require hardware support.

  5. In this research, the VGG16 network is used for fingerprint identification. If a network more suitable for fingerprint processing were used, the results would be better.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (No. 61803188), the Fujian Key Laboratory of Functional Marine Sensing Materials, Minjiang University (Grant No. MJUKF-FMSM202103), the University-Industry Collaboration Education Foundation of Ministry of Education (No. 202102053008), the High Level Talent Foundation of Jinling Institute of Technology (No. jit-b-201713, No. jit-b-201816, and No. jit-b-202029).

  1. Funding information: This work is partially supported by the National Natural Science Foundation of China (No. 61803188), the Fujian Key Laboratory of Functional Marine Sensing Materials, Minjiang University (Grant No. MJUKF-FMSM202103), the University-Industry Collaboration Education Foundation of Ministry of Education (No. 202102053008), the High Level Talent Foundation of Jinling Institute of Technology (No. jit-b-201713, No. jit-b-201816, and No. jit-b-202029).

  2. Conflict of interest: The authors state no conflict of interest.

References

[1] Khademi AF, Zulkernine M, Weldemariam K. An empirical evaluation of web-based fingerprinting. IEEE Softw. 2015;32:46–52. doi:10.1109/MS.2015.77.

[2] Takano A. The history of practical application of fingerprinting: networks of the British Empire and the "problem" of controlling human mobilities. JAMA Intern Med. 2015;175:257–60.

[3] Li X. The past and present of fingerprint identification technology. China: Chinese Government General Services; 2021. p. 64–6 [Chinese].

[4] Luo Y, Guo W. Footprinting Tutorial. China: People's Public Security University of China Press; 2010.

[5] Krish RP, Fierrez J, Ramos D, Ortega-Garcia J, Bigun J. Pre-registration for improved latent fingerprint identification. Proceedings of the International Conference on Pattern Recognition; 2014 Aug 1–3. p. 696–701. doi:10.1109/ICPR.2014.130.

[6] Satheesh KP. SVM-BDT based intelligent fingerprint authentication system using geometry approach. Int J Comput Netw Inf Secur. 2021;4:1.

[7] Fang B, Wen H, Liu RZ, Tang YY. A new fingerprint thinning algorithm. Chinese Conference on Pattern Recognition (CCPR); 2010. p. 1–4. doi:10.1109/CCPR.2010.5659273.

[8] Wang S. Overview of fingerprint identification technology. J Inf Secur Res. 2016;2(7):343–55 [Chinese].

[9] Wang Z, Li N, Wu T, Zhang H, Feng T. Simulation of human ear recognition sound direction based on convolutional neural network. J Intell Syst. 2021;30(1):209–23. doi:10.1515/jisys-2019-0250.

[10] Zhu L, Zhang H, Ali S, Yang B, Li C. Crowd counting via multi-scale adversarial convolutional neural networks. J Intell Syst. 2021;30(1):180–91. doi:10.1515/jisys-2019-0157.

[11] Bromley J, Guyon I, LeCun Y, Säckinger E, Shah R. Signature verification using a "Siamese" time delay neural network. Int J Pattern Recognit Artif Intell. 1993;11:737–44. doi:10.1142/9789812797926_0003.

[12] Kamijo M. Classifying fingerprint images using neural network: deriving the classification state. IEEE International Conference on Neural Networks. Vol. 3, 2002 Aug 6. p. 1932–7. doi:10.1109/ICNN.1993.298852.

[13] Mazumdar A, Bora PK. Siamese convolutional neural network-based approach towards universal image forensics. IET Image Process. 2020;14(13):3105–16. doi:10.1049/iet-ipr.2019.1114.

[14] Wang L, Lin ZQ, Wong A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci Rep. 2020;10(1):19549. doi:10.1038/s41598-020-76550-z.

[15] Wu B, Li X, Yuan F, Li H, Zhang M. Transfer learning and Siamese neural network based identification of geochemical anomalies for mineral exploration: a case study from the Cu–Au deposit in the NW Junggar area of northern Xinjiang Province, China. J Geochem Explor. 2022;232:106904. doi:10.1016/j.gexplo.2021.106904.

[16] Li H. Feature extraction, recognition, and matching of damaged fingerprint: application of deep learning network. Concurr Comput Pract Exp. 2020;33(6):1–9. doi:10.1002/cpe.6057.

[17] Sun H. Fingerprint recognition based on BP neural network. Chin Comput Commun. 2011;5:32–3 [Chinese].

[18] Yuan Y, Li L, Yang Y. Fingerprint image recognition algorithm based on FPN-SE-Capsule network. Ind Control Comput. 2021;34:45–7, 50 [Chinese].

[19] Deshpande UU, Malemath VS, Patil SM, Chaugule SV. CNNAI: a convolution neural network-based latent fingerprint matching using the combination of nearest neighbor arrangement indexing. Front Robot AI. 2020;7:113. doi:10.3389/frobt.2020.00113.

[20] Ma ZQ, Sun XX, Cheng MJ, Wang SH. Research on the application of convolutional-deep neural networks in parallel fingerprint minutiae matching. Int J Biometrics. 2021;13:96. doi:10.1504/IJBM.2021.112220.

[21] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (NIPS); 2012. https://web.cs.ucdavis.edu/~yjlee/teaching/ecs289g-winter2018/alexnet.pdf (last retrieved 18/03/2022).

[22] Arora S, Bhaskara A, Ge R, Ma T. Provable bounds for learning some deep representations. arXiv preprint arXiv:1310.6343; 2013. https://arxiv.org/pdf/1310.6343v1.pdf (last retrieved 18/03/2022).

[23] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016 June 27–30. p. 770–8. doi:10.1109/CVPR.2016.90.

[24] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR); 2015 May 7–9. p. 1–14.

[25] Hong L, Wan Y, Jain A. Fingerprint image enhancement: algorithm and performance evaluation. IEEE Trans Pattern Anal Mach Intell. 1998;20(8):777–89. doi:10.1109/34.709565.

[26] He K, Fan H, Wu Y, et al. Momentum contrast for unsupervised visual representation learning. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020 June 13–19. p. 9726–35. doi:10.1109/CVPR42600.2020.00975.

Received: 2021-12-16
Revised: 2022-03-21
Accepted: 2022-03-24
Published Online: 2022-06-13

© 2022 Zihao Li et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
