Abstract
This paper proposes a method for tight fusion of visual, depth and inertial data in order to extend robotic capabilities for navigation in GPS-denied, poorly illuminated, and textureless environments. Visual and depth information are fused at the feature detection and descriptor extraction levels to augment one sensing modality with the other. These multimodal features are then further integrated with inertial sensor cues using an extended Kalman filter to estimate the robot pose, sensor bias terms, and landmark positions simultaneously as part of the filter state. As demonstrated through a set of hand-held and Micro Aerial Vehicle experiments, the proposed algorithm is shown to perform reliably in challenging visually-degraded environments using RGB-D information from a lightweight and low-cost sensor and data from an IMU.
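The abstract describes an EKF whose state jointly contains the robot pose, IMU bias terms, and landmark positions. As a rough planar sketch of such an augmented-state filter (not the paper's actual 3D vision-depth-inertial formulation; all dimensions, noise values, and the `LandmarkEKF` class itself are illustrative assumptions):

```python
import numpy as np


class LandmarkEKF:
    """Toy planar EKF whose state stacks robot pose, a gyro bias,
    and landmark positions -- a simplified stand-in for the 3D
    vision-depth-inertial filter described in the abstract.

    State vector: [x, y, theta, gyro_bias, l1x, l1y, l2x, l2y, ...]
    """

    def __init__(self, n_landmarks):
        self.dim = 4 + 2 * n_landmarks
        self.x = np.zeros(self.dim)
        self.P = np.eye(self.dim)

    def predict(self, v, omega_meas, dt):
        """Propagate with a speed/yaw-rate input; the measured yaw rate
        is corrected by the estimated gyro bias, as with an IMU."""
        th, b = self.x[2], self.x[3]
        omega = omega_meas - b
        self.x[0] += v * np.cos(th) * dt
        self.x[1] += v * np.sin(th) * dt
        self.x[2] += omega * dt
        F = np.eye(self.dim)                  # motion-model Jacobian
        F[0, 2] = -v * np.sin(th) * dt
        F[1, 2] = v * np.cos(th) * dt
        F[2, 3] = -dt                         # bias enters through yaw
        Q = np.zeros((self.dim, self.dim))    # process noise (made up)
        Q[:4, :4] = np.diag([1e-3, 1e-3, 1e-4, 1e-6]) * dt
        self.P = F @ self.P @ F.T + Q

    def update_landmark(self, i, z, r=1e-3):
        """EKF update with landmark i observed in the robot frame,
        z = R(theta)^T (l - p); in the real system this observation
        would come from a matched vision-depth feature."""
        px, py, th = self.x[0], self.x[1], self.x[2]
        j = 4 + 2 * i
        lx, ly = self.x[j], self.x[j + 1]
        c, s = np.cos(th), np.sin(th)
        dx, dy = lx - px, ly - py
        h = np.array([c * dx + s * dy, -s * dx + c * dy])  # predicted obs
        H = np.zeros((2, self.dim))           # measurement Jacobian
        H[0, 0], H[0, 1], H[0, 2] = -c, -s, -s * dx + c * dy
        H[1, 0], H[1, 1], H[1, 2] = s, -c, -c * dx - s * dy
        H[0, j], H[0, j + 1] = c, s
        H[1, j], H[1, j + 1] = -s, c
        S = H @ self.P @ H.T + r * np.eye(2)  # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ (z - h)
        self.P = (np.eye(self.dim) - K @ H) @ self.P
```

Because the bias is part of the state, repeated landmark updates also correct an unmodeled constant gyro offset over time, which is the benefit of estimating sensor biases alongside the pose rather than calibrating them offline.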
Cite this paper
Khattak, S., Papachristos, C., Alexis, K. (2018). Vision-Depth Landmarks and Inertial Fusion for Navigation in Degraded Visual Environments. In: Bebis, G., et al. (eds.) Advances in Visual Computing. ISVC 2018. Lecture Notes in Computer Science, vol. 11241. Springer, Cham. https://doi.org/10.1007/978-3-030-03801-4_46
Print ISBN: 978-3-030-03800-7
Online ISBN: 978-3-030-03801-4