Paper
16 July 2021
Personal authentication and recognition of aerial input Hiragana using deep neural network
Hideyuki Mimura, Momoyo Ito, Shin-ichi Ito, and Minoru Fukumi
Proceedings Volume 11794, Fifteenth International Conference on Quality Control by Artificial Vision; 1179411 (2021) https://doi.org/10.1117/12.2585333
Event: Fifteenth International Conference on Quality Control by Artificial Vision, 2021, Tokushima, Japan
Abstract
We use Leap Motion and a deep neural network to perform personal authentication and character recognition on all hiragana characters entered in the air. Leap Motion detects the index finger, and its trajectory is stored as time-series data. The input data are pre-processed by linear interpolation to unify their length. For both tasks, the accuracy of Long Short-Term Memory (LSTM) was compared with that of a Support Vector Machine (SVM). In character recognition, the SVM and the LSTM achieved F-measures of 97.25% and 98.18%, respectively. In personal authentication, the SVM achieved an accuracy of 92.45%, a False Acceptance Rate (FAR) of 0.73%, and a False Rejection Rate (FRR) of 41.59%, whereas the LSTM achieved an accuracy of 96.13%, an FAR of 1.73%, and an FRR of 14.55%. Overall, the LSTM performed better than the SVM.
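The length-unification step lends itself to a short illustration. The following is a minimal sketch, not the authors' code: it assumes the Leap Motion index-fingertip samples have been collected into an (N, 3) NumPy array of (x, y, z) positions, and it uses a hypothetical target length of 64 frames; the abstract does not state the target length the paper uses.

```python
import numpy as np

def resample_trajectory(points: np.ndarray, target_len: int = 64) -> np.ndarray:
    """Linearly interpolate an (N, 3) fingertip trajectory onto
    target_len evenly spaced time steps (target_len is an assumption)."""
    n = len(points)
    src_t = np.linspace(0.0, 1.0, n)           # normalized original time base
    dst_t = np.linspace(0.0, 1.0, target_len)  # unified time base
    # Interpolate each coordinate axis independently.
    return np.stack(
        [np.interp(dst_t, src_t, points[:, k]) for k in range(3)], axis=1
    )
```

After this step, every input sample has the same shape, so fixed-length classifiers such as an SVM can be applied directly and batches can be formed for an LSTM.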
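For the LSTM side of the comparison, the sketch below shows one plausible shape of such a sequence classifier. The framework (PyTorch), hidden size, single recurrent layer, and last-hidden-state readout are all illustrative assumptions; the abstract does not give the paper's actual model configuration. The 46-class output corresponds to the basic hiragana set, also an assumption about the exact class inventory.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Single-layer LSTM over (x, y, z) sequences with a linear
    classification head. Sizes are illustrative, not from the paper."""
    def __init__(self, n_classes: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 3) resampled trajectories
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden)
        return self.head(h_n[-1])    # class logits, (batch, n_classes)

model = TrajectoryLSTM(n_classes=46)        # 46 basic hiragana (assumed)
logits = model(torch.randn(8, 64, 3))       # batch of 8 resampled inputs
```

The same architecture with a per-user output layer could serve the authentication task, though the paper's exact formulation of authentication is not described in the abstract.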
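FAR and FRR, the two authentication error rates quoted above, are standard biometric metrics. As a reference, here is a small sketch of how they are typically computed from match scores under a single accept threshold; the paper's actual decision rule is not given in the abstract, so this is an assumption.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold: float):
    """FAR: fraction of impostor attempts accepted.
       FRR: fraction of genuine attempts rejected."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = float(np.mean(impostor >= threshold))
    frr = float(np.mean(genuine < threshold))
    return far, frr
```

The two rates trade off against each other as the threshold moves, which is consistent with the reported pattern: the SVM's very low FAR (0.73%) comes with a high FRR (41.59%), while the LSTM balances the two better.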
© 2021 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Hideyuki Mimura, Momoyo Ito, Shin-ichi Ito, and Minoru Fukumi "Personal authentication and recognition of aerial input Hiragana using deep neural network", Proc. SPIE 11794, Fifteenth International Conference on Quality Control by Artificial Vision, 1179411 (16 July 2021); https://doi.org/10.1117/12.2585333
KEYWORDS
Neural networks
Optical character recognition
Data storage
Motion detection