DOI: 10.1145/3487664.3487750

NasalBreathInput: A Hands-Free Input Method by Nasal Breath Gestures using a Glasses Type Device

Published: 30 December 2021

ABSTRACT

Research on hands-free input methods has been actively conducted. However, most previous methods are difficult to use at arbitrary times in daily life because they rely on speech sounds or body movements. In this study, in order to realize a hands-free input method based on nasal breathing with a wearable device, we propose a method for recognizing nasal breath gestures using piezoelectric elements placed on the nosepiece of a glasses-type device. In the proposed method, nasal vibrations generated by nasal breathing are acquired as sound data from the device. The breath pattern is then recognized based on three factors: breath count, time interval, and intensity. We implemented a prototype system for initial evaluation. The evaluation results for eight subjects showed that the proposed method can recognize eight types of nasal breath gestures with an F-value of 0.89. Our study provides the first wearable sensing technology that uses nasal breathing for hands-free input.
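The abstract names three classification factors (breath count, time interval, intensity) without publishing the classifier or its parameters. The sketch below is a minimal illustration of how such a rule-based mapping could look, assuming breath events have already been detected from the piezoelectric signal; the threshold values, the `Breath` structure, and the gesture label set are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Breath:
    onset: float      # seconds since recording start
    intensity: float  # normalized vibration amplitude in [0, 1]

# Hypothetical thresholds -- the paper does not publish its parameters.
LONG_GAP = 0.8    # seconds separating "short" from "long" gaps
STRONG = 0.5      # intensity boundary between weak and strong breaths

def classify_gesture(breaths: List[Breath]) -> str:
    """Map a sequence of detected nasal breaths to a gesture label using
    the three factors named in the abstract: count, interval, intensity."""
    count = len(breaths)
    if count == 0:
        return "none"
    strength = "strong" if breaths[0].intensity >= STRONG else "weak"
    if count == 1:
        return f"single-{strength}"
    # For multi-breath gestures, use the gap between the first two breaths.
    gap = breaths[1].onset - breaths[0].onset
    spacing = "long" if gap >= LONG_GAP else "short"
    return f"double-{spacing}-{strength}"
```

For example, `classify_gesture([Breath(0.0, 0.7)])` yields `"single-strong"`, while two weak breaths one second apart yield `"double-long-weak"`. The paper distinguishes eight gesture types, but since the abstract does not enumerate them, this label set is illustrative only.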


Published in

iiWAS2021: The 23rd International Conference on Information Integration and Web Intelligence
November 2021, 658 pages

Copyright © 2021 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers

• Short paper
• Research
• Refereed limited
