Abstract
Since an RGB-D sensor provides rich information about the scene, various object recognition schemes and low-level image descriptors can be used to improve SLAM performance. However, a cleaning robot is usually flat, so its camera sits close to the floor and typically captures only a partial view of the objects in front of it; conventional object recognition schemes, which assume a complete view of the object, are therefore unsuitable. To address this problem, we introduce a novel object surface recognition algorithm based on the proposed surface component ratio histogram (SCRH). SCRH is a surface descriptor that describes the geometric shape of the partial view of an object. Because it requires no pre-trained object models, SCRH can handle unexpected objects that the robot encounters during navigation. SCRH was evaluated on several objects lying on the floor whose identities were not known in advance. The experimental results show that objects are successfully discriminated based on their surfaces and that SCRH is robust for object surface recognition.
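The paper defines SCRH precisely; as a loose illustration of the general idea only (not the authors' actual formulation), a descriptor of this family can be sketched as a normalized histogram over the relative sizes of the surface components segmented from a partial-view point cloud. The function name, binning scheme, and the use of component areas below are all assumptions made for the sketch.

```python
import numpy as np

def surface_component_ratio_histogram(component_areas, num_bins=10):
    """Illustrative sketch of a component-ratio descriptor.

    component_areas: sizes (e.g., point counts) of the surface patches
    segmented from a partial view of an object. The exact segmentation
    and binning used by SCRH are defined in the paper, not here.
    """
    areas = np.asarray(component_areas, dtype=float)
    # Each component's share of the visible surface, in [0, 1].
    ratios = areas / areas.sum()
    # Histogram the ratios so the descriptor is invariant to the
    # absolute number of points in the partial view.
    hist, _ = np.histogram(ratios, bins=num_bins, range=(0.0, 1.0))
    # Normalize so descriptors from different views are comparable.
    return hist / hist.sum()
```

Two such descriptors could then be compared with a standard histogram distance (e.g., histogram intersection or chi-square) to decide whether two partial views belong to the same surface.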
Author information
Additional information
Recommended by Associate Editor Huaping Liu under the direction of Editor Euntai Kim. This research was supported by the Ministry of Trade, Industry & Energy (MOTIE) under the Industrial Foundation Technology Development Program, supervised by the Korea Evaluation Institute of Industrial Technology (KEIT) (No. 10084589).
Hee-Won Chae received the B.S. degree in Mechanical Engineering from Korea University in 2013. He is currently a combined M.S./Ph.D. candidate in the School of Mechanical Engineering at Korea University. His research interests include robot navigation, computer vision, and software engineering.
Hyejun Yu received the B.S. degree from the School of Robotics at Kwangwoon University in 2015 and the M.S. degree in Mechatronics from Korea University in 2017. His research interests include robot navigation, computer vision, and robot operating systems.
Jae-Bok Song received the B.S. and M.S. degrees in Mechanical Engineering from Seoul National University, Seoul, Korea, in 1983 and 1985, respectively, and the Ph.D. degree in Mechanical Engineering from MIT, Cambridge, MA, in 1992. He joined the faculty of the Department of Mechanical Engineering, Korea University, Seoul, Korea, in 1993. His research interests include robot navigation and the design and control of robotic systems.
Cite this article
Chae, HW., Yu, H. & Song, JB. Onboard Real-time Object Surface Recognition for a Small Indoor Mobile Platform Based on Surface Component Ratio Histogram. Int. J. Control Autom. Syst. 17, 765–772 (2019). https://doi.org/10.1007/s12555-018-0084-z