ABSTRACT
Humans are remarkably adept at categorizing objects and event outcomes from auditory signals. For robots, however, sensing limitations make this type of categorization, which is required in many robotic applications, challenging. In this paper, we propose auditory scene analysis methods that enable robots to monitor events, detect failures and learn from their experiences. Audio data are well suited to these purposes: they capture environmental changes around a robot and, in particular, complement visual data. We investigate supervised learning methods that use informative features of sound data for efficient categorization in manipulation scenarios. We further use these data to let robots detect execution failures at runtime and thereby prevent potential damage to their environment, the objects of interest and even themselves. First, the most discriminative features for categorizing object materials (glass, metal, porcelain, cardboard and plastic) are determined; the performance of two supervised learning methods on these features is then evaluated. In our experimental framework, the learning methods are also evaluated on categorizing failed action outcomes with a mobile robot and a robotic arm. Drop and hit events are selected for this analysis since they are the failure outcomes most likely to occur during object manipulation. Using the proposed techniques, both material categories and interaction events can be determined with high success rates.
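The pipeline the abstract describes (extract informative features from an impact sound, then categorize the material with a supervised learner) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic damped-sinusoid "impact sounds", the two features (spectral centroid and energy decay) and the 1-nearest-neighbour classifier are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

def impact_sound(freq, decay, sr=16000, dur=0.5, rng=None):
    # Synthetic damped sinusoid standing in for a recorded impact sound.
    t = np.arange(int(sr * dur)) / sr
    noise = rng.normal(0.0, 0.01, t.size) if rng is not None else 0.0
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq * t) + noise

def features(x, sr=16000):
    # Two simple descriptors: spectral centroid (brightness) and
    # log energy ratio between the first and second half (decay rate).
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1.0 / sr)
    centroid = (freqs * mag).sum() / mag.sum()
    half = x.size // 2
    decay = np.log(np.mean(x[:half] ** 2) / (np.mean(x[half:] ** 2) + 1e-12))
    return np.array([centroid, decay])

def knn_predict(train_X, train_y, x):
    # 1-nearest-neighbour on per-feature-normalized distances.
    scale = train_X.std(axis=0) + 1e-12
    d = np.linalg.norm((train_X - x) / scale, axis=1)
    return train_y[np.argmin(d)]

rng = np.random.default_rng(0)
# Hypothetical material models: metal rings high and long,
# cardboard thuds low and dies quickly.
protos = {"metal": (3000.0, 4.0), "cardboard": (300.0, 25.0)}
train_X, train_y = [], []
for label, (f, d) in protos.items():
    for _ in range(5):
        x = impact_sound(f * rng.uniform(0.9, 1.1),
                         d * rng.uniform(0.9, 1.1), rng=rng)
        train_X.append(features(x))
        train_y.append(label)
train_X, train_y = np.array(train_X), np.array(train_y)

test = impact_sound(2900.0, 4.2, rng=rng)  # unseen bright, slowly decaying sound
print(knn_predict(train_X, train_y, features(test)))
```

In the paper's setting the classifier would instead be trained on features of real recordings (the reference to LIBSVM suggests support vector machines were among the learners evaluated), but the structure — feature extraction followed by supervised categorization — is the same.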
Index Terms
- Scene analysis through auditory event monitoring