
Vision-Depth Landmarks and Inertial Fusion for Navigation in Degraded Visual Environments

  • Conference paper
  • In: Advances in Visual Computing (ISVC 2018)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11241)

Included in the conference series: International Symposium on Visual Computing (ISVC)

Abstract

This paper proposes a method for the tight fusion of visual, depth, and inertial data in order to extend robotic navigation capabilities in GPS-denied, poorly illuminated, and textureless environments. Visual and depth information are fused at the feature detection and descriptor extraction levels so that each sensing modality augments the other. The resulting multimodal features are then integrated with inertial sensor cues using an extended Kalman filter that estimates the robot pose, sensor bias terms, and landmark positions simultaneously as part of the filter state. Hand-held and Micro Aerial Vehicle experiments demonstrate that the proposed algorithm performs reliably in challenging visually-degraded environments using RGB-D information from a lightweight, low-cost sensor together with data from an IMU.
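The abstract describes a filter whose state jointly carries the robot pose, the IMU bias terms, and the observed landmark positions. The sketch below is a minimal, illustrative EKF in Python/NumPy showing only that state layout and a standard predict/update cycle; it is not the authors' implementation. The simplified motion model (position and velocity only, no orientation block, gravity compensation omitted), the state dimensions, the noise values, and the function names are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the paper's code) of an EKF whose state holds
# robot motion, IMU biases, and landmark positions jointly, as the abstract
# describes. Orientation propagation is omitted for brevity, so the gyro-bias
# block is only a placeholder.
import numpy as np

DT = 0.01                 # IMU sample period [s] (assumed)
N_LM = 3                  # number of landmarks kept in the state (assumed)
DIM = 12 + 3 * N_LM       # p(3) + v(3) + accel bias(3) + gyro bias(3) + 3 per landmark


def predict(x, P, accel_meas, Q):
    """Propagate the state with a bias-corrected acceleration measurement."""
    p, v, ba = x[0:3], x[3:6], x[6:9]
    a = accel_meas - ba                        # remove estimated accel bias
    x = x.copy()
    x[0:3] = p + v * DT + 0.5 * a * DT ** 2    # position
    x[3:6] = v + a * DT                        # velocity
    # biases and landmarks follow (near-)constant models, so F is identity there
    F = np.eye(DIM)
    F[0:3, 3:6] = np.eye(3) * DT               # d(position)/d(velocity)
    F[0:3, 6:9] = -0.5 * np.eye(3) * DT ** 2   # d(position)/d(accel bias)
    F[3:6, 6:9] = -np.eye(3) * DT              # d(velocity)/d(accel bias)
    P = F @ P @ F.T + Q
    return x, P


def update_landmark(x, P, i, z, R):
    """EKF update with a 3-D observation of landmark i relative to the robot."""
    lm = slice(12 + 3 * i, 15 + 3 * i)
    h = x[lm] - x[0:3]                         # predicted relative position
    H = np.zeros((3, DIM))
    H[:, 0:3] = -np.eye(3)
    H[:, lm] = np.eye(3)
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - h)
    P = (np.eye(DIM) - K @ H) @ P
    return x, P


# Tiny usage example with synthetic numbers.
x = np.zeros(DIM)
P = np.eye(DIM) * 0.1
Q = np.eye(DIM) * 1e-4
R = np.eye(3) * 1e-2
x, P = predict(x, P, accel_meas=np.array([0.2, 0.0, 0.0]), Q=Q)
x, P = update_landmark(x, P, i=0, z=np.array([1.0, 0.5, 0.2]), R=R)
print("position estimate:", x[0:3])
```

Because the landmarks live inside the same state vector as the pose and biases, each landmark observation corrects all of them through the cross-covariances, which is what distinguishes this tightly-coupled formulation from a loosely-coupled pipeline that fuses a finished odometry estimate with the IMU.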



Author information

Corresponding author

Correspondence to Shehryar Khattak.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Khattak, S., Papachristos, C., Alexis, K. (2018). Vision-Depth Landmarks and Inertial Fusion for Navigation in Degraded Visual Environments. In: Bebis, G., et al. (eds.) Advances in Visual Computing. ISVC 2018. Lecture Notes in Computer Science, vol. 11241. Springer, Cham. https://doi.org/10.1007/978-3-030-03801-4_46


  • DOI: https://doi.org/10.1007/978-3-030-03801-4_46


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03800-7

  • Online ISBN: 978-3-030-03801-4

  • eBook Packages: Computer Science, Computer Science (R0)
