
Blind-environment interaction through voice augmented objects

Original Paper · Journal on Multimodal User Interfaces

Abstract

This article presents a Java-based mobile service that enables blind-environment interaction through voice-augmented objects. To make this possible, each object is tagged with a radio frequency identification (RFID) tag and a voice-based description of the object is recorded. Blind users can later use the service to scan surrounding augmented objects and hear their identity and characteristics verbalized. We followed a user-centred design process to guarantee the accessibility of the service for visually impaired and blind people. The required hardware is a near field communication (NFC)-enabled mobile phone with a built-in accelerometer. The client-side application does not require pushing any buttons, browsing any menus, or touching any screens to select and activate any of the supported modes: registration, calibration, voice recording, physical object identification, deletion of voice recording(s), and cloud-based file sync and share. Twelve visually impaired individuals (aged 31–84; six men and six women) tested the service in two scenarios: (1) a comparison test against a PenFriend labeling unit, and (2) a user experience test. The results show that the selected tangible, multimodal interface (object touching, phone shaking and tilting, voice output) can be used very easily (58 %) or easily (33 %) by blind and visually impaired users who had no previous experience with other mobile services. Most participants in the test group agreed that the service could be useful in their daily activities. The service can be used both at home and in public buildings to give voice descriptions of objects such as food, medicines, books, clothes, cosmetics, CDs/DVDs, rooms, etc.
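The paper does not include source code, but the core interaction the abstract describes — touch a tagged object, read the tag's unique ID over NFC, then play back the voice description recorded for that ID — can be sketched in a few lines of Android Java. Everything below is an illustrative assumption (the class name, the <uid>.3gp file naming, the local-file lookup), not the authors' implementation:

```java
import android.app.Activity;
import android.content.Intent;
import android.media.MediaPlayer;
import android.nfc.NfcAdapter;
import android.nfc.Tag;

import java.io.File;
import java.io.IOException;

public class ObjectReaderActivity extends Activity {

    // Delivered when the phone touches an NFC tag while this activity is in
    // the foreground (assumes NFC foreground dispatch, or a manifest intent
    // filter such as ACTION_TAG_DISCOVERED plus singleTop launch mode).
    @Override
    protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
        if (tag != null) {
            playDescription(bytesToHex(tag.getId()));
        }
    }

    // Plays the voice note previously recorded for this tag UID. The paper
    // stores recordings on the phone and syncs them through a cloud file
    // service; keying them as <uid>.3gp in local storage is an assumption.
    private void playDescription(String uid) {
        File note = new File(getFilesDir(), uid + ".3gp");
        if (!note.exists()) {
            return; // unregistered object: the real service would say so aloud
        }
        MediaPlayer player = new MediaPlayer();
        try {
            player.setDataSource(note.getAbsolutePath());
            player.setOnCompletionListener(MediaPlayer::release);
            player.prepare();
            player.start();
        } catch (IOException e) {
            player.release();
        }
    }

    // Renders the tag UID as a hex string, e.g. 04A2B35F.
    private static String bytesToHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }
}
```

The same hands-free principle would extend to mode selection: the shake and tilt gestures mentioned in the abstract can be detected from the built-in accelerometer, so the user never has to find a button or a menu on screen.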

Author information

Corresponding author

Correspondence to Rosen Ivanov.

About this article

Cite this article

Ivanov, R. Blind-environment interaction through voice augmented objects. J Multimodal User Interfaces 8, 345–365 (2014). https://doi.org/10.1007/s12193-014-0166-z
