Abstract
In this paper, we propose a first-person localization and navigation system to help blind and visually impaired people navigate indoor environments. The system consists of a mobile vision front end, with a portable panoramic lens mounted on a smartphone, and a remote GPU-enabled server. Compact and effective omnidirectional video features are extracted on the smartphone front end and transmitted to the server, where the features of an input image or a short video clip are matched against a database of the indoor environment via image-based indexing to recover both the location and the orientation of the current view. To cope with the high computational cost of searching a large database in a realistic navigation application, we identify data-parallel and task-parallel structure in the database indexing step and accelerate it using multi-core CPUs and GPUs. Experiments on synthetic and real data demonstrate the real-time response and robustness of the proposed system.
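The data-parallel database search described in the abstract can be sketched as follows: the query descriptor is compared against every database record, with the comparisons split across parallel workers and the per-worker winners reduced to a global best. This is a minimal illustrative sketch, not the paper's implementation; the record layout (`location`, `orientation`, `feature_vector`) and the names `l2_distance`, `search_chunk`, and `best_match` are assumptions for illustration.

```python
# Hypothetical sketch of data-parallel image-based indexing: split the
# feature database into chunks, search each chunk in a worker, then
# reduce the per-chunk winners to the globally closest record.
from concurrent.futures import ThreadPoolExecutor
from math import sqrt
from typing import List, Tuple

def l2_distance(a: List[float], b: List[float]) -> float:
    # Euclidean distance between two feature vectors.
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search_chunk(query, chunk):
    # Each worker scans one slice of the database (data parallelism).
    return min(chunk, key=lambda rec: l2_distance(query, rec[2]))

def best_match(query, database, workers=4):
    # Split the database into contiguous chunks, search them in
    # parallel, then pick the overall nearest record.
    size = max(1, len(database) // workers)
    chunks = [database[i:i + size] for i in range(0, len(database), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        winners = list(pool.map(lambda c: search_chunk(query, c), chunks))
    loc, orient, _ = min(winners, key=lambda rec: l2_distance(query, rec[2]))
    return loc, orient

# Toy database records: (location, orientation in degrees, feature vector).
db = [("hall", 0.0, [0.1, 0.9]), ("room-101", 90.0, [0.8, 0.2])]
print(best_match([0.75, 0.25], db))  # → ('room-101', 90.0)
```

On a GPU, the same chunk-wise distance computations would map naturally onto parallel threads, which is the kind of acceleration the paper reports.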
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Hu, F., Zhu, Z., Zhang, J. (2015). Mobile Panoramic Vision for Assisting the Blind via Indexing and Localization. In: Agapito, L., Bronstein, M., Rother, C. (eds) Computer Vision - ECCV 2014 Workshops. ECCV 2014. Lecture Notes in Computer Science(), vol 8927. Springer, Cham. https://doi.org/10.1007/978-3-319-16199-0_42
Print ISBN: 978-3-319-16198-3
Online ISBN: 978-3-319-16199-0