A Pseudo-3D Vision-Based Dual Approach for Machine-Awareness in Indoor Environment Combining Multi-resolution Visual Information

  • Conference paper

Advances in Computational Intelligence (IWANN 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10306)

Abstract

In this paper we describe a pseudo-3D vision-based dual approach for machine-awareness in indoor environments. The duality comes from the color and depth cameras of the Kinect system, which offers appealing potential for 3D robot vision. With human-machine (including human-robot) interaction as the primary outcome of the intended visual machine-awareness, the approach aims to give the machine autonomous awareness of its surrounding environment. By combining pseudo-3D vision with salient-object detection algorithms, it seeks autonomous detection of relevant items in the 3D environment. The pseudo-3D perception reduces the computational complexity inherent to 3D vision to a 2D task by processing the 3D visual information within a 2D-image framework. The statistical foundation of the approach gives it a solid theoretical basis and a bottom-up character that leaves the resulting system unconstrained by prior hypotheses. We provide experimental results validating the proposed system.
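The abstract describes fusing the Kinect's color and depth streams and treating both as ordinary 2D images when searching for salient objects. The sketch below is an illustration only, not the authors' algorithm: it combines a simple frequency-tuned color saliency cue (distance of each pixel to the mean Lab color of a blurred image) with a depth-nearness cue, entirely in 2D. The function names (color_saliency, depth_cue, fuse_color_depth_saliency), the default 50/50 weighting, the placeholder file names, and the OpenCV/NumPy pipeline are all assumptions made for the example.

```python
# Illustrative pseudo-3D saliency sketch (NOT the paper's method):
# fuse a 2D color-contrast cue with a 2D depth cue from a Kinect-style sensor.
import cv2
import numpy as np

def color_saliency(bgr):
    """Frequency-tuned style cue: per-pixel distance of the blurred Lab image
    to its mean Lab color, normalised to [0, 1]."""
    lab = cv2.cvtColor(cv2.GaussianBlur(bgr, (5, 5), 0), cv2.COLOR_BGR2LAB).astype(np.float32)
    mean = lab.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(lab - mean, axis=2)
    return sal / (sal.max() + 1e-6)

def depth_cue(depth_mm):
    """Nearness cue: closer pixels get higher weight; invalid readings (0) get zero."""
    d = depth_mm.astype(np.float32)
    cue = np.zeros_like(d)
    valid = d > 0
    if valid.any():
        dmin, dmax = d[valid].min(), d[valid].max()
        cue[valid] = 1.0 - (d[valid] - dmin) / (dmax - dmin + 1e-6)
    return cue

def fuse_color_depth_saliency(bgr, depth_mm, depth_weight=0.5):
    """Blend the two 2D cues into one pseudo-3D saliency map (uint8).
    Assumes the depth map is already registered to the color frame."""
    sal = (1.0 - depth_weight) * color_saliency(bgr) + depth_weight * depth_cue(depth_mm)
    return (255 * sal / (sal.max() + 1e-6)).astype(np.uint8)

if __name__ == "__main__":
    # Placeholder paths for an aligned Kinect color frame and 16-bit depth map (mm).
    rgb = cv2.imread("scene_rgb.png")
    depth = cv2.imread("scene_depth.png", cv2.IMREAD_UNCHANGED)
    cv2.imwrite("saliency.png", fuse_color_depth_saliency(rgb, depth))
```

In practice the Kinect depth map must first be registered to the color image and its holes filled; the sketch simply assumes aligned, mostly valid inputs so that the fusion stays a purely 2D operation, as the abstract's pseudo-3D framing suggests.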

Author information

Correspondence to Kurosh Madani.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Fraihat, H., Madani, K., Sabourin, C. (2017). A Pseudo-3D Vision-Based Dual Approach for Machine-Awareness in Indoor Environment Combining Multi-resolution Visual Information. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2017. Lecture Notes in Computer Science, vol. 10306. Springer, Cham. https://doi.org/10.1007/978-3-319-59147-6_55

  • DOI: https://doi.org/10.1007/978-3-319-59147-6_55

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59146-9

  • Online ISBN: 978-3-319-59147-6

  • eBook Packages: Computer Science, Computer Science (R0)
