Robotics and Autonomous Systems

Volume 54, Issue 2, 28 February 2006, Pages 150–158

Quantifying patterns of agent–environment interaction

https://doi.org/10.1016/j.robot.2005.09.024

Abstract

This article explores the assumption that a deeper (quantitative) understanding of the information-theoretic implications of sensory–motor coordination can help endow robots not only with better sensory morphologies, but also with better exploration strategies. Specifically, we investigate by means of statistical and information-theoretic measures to what extent sensory–motor coordinated activity can generate and structure information in the sensory channels of a simulated agent interacting with its surrounding environment. The results show how the usage of correlation, entropy, and mutual information can be employed (a) to segment an observed behavior into distinct behavioral states; (b) to analyze the informational relationship between the different components of the sensory–motor apparatus; and (c) to identify patterns (or fingerprints) in the sensory–motor interaction between the agent and its local environment.

Introduction

Manual haptic perception is the ability to gather information about objects by using the hands. Haptic exploration is a task-dependent activity, and when people seek information about a particular object property, such as size, temperature, hardness, or texture, they perform stereotyped exploratory hand movements [1]. The same holds for visual exploration. Eye movements, for instance, depend on the perceptual judgement that people are asked to make, and the eyes are typically directed toward areas of a visual scene or an image that deliver useful and essential perceptual information [2]. To get a better grasp on the organization of saccadic eye movements, Lee and Yu [3] proposed a theoretical framework based on information maximization. The basic assumption of their theory is that, due to the small size of our foveas (the high-resolution part of the eye), our eyes have to move continuously to maximize the information intake from the world. Differences between tasks obviously influence the statistics of visual and tactile inputs, as well as the way people acquire information for object discrimination, recognition, and categorization.

Clearly, the common denominator underlying our perceptual abilities seems to be a process of sensory–motor coordination coupling perception and action. It follows that coordinated movements must be considered an essential part of the perceptual system [4] and, whether the sensory stimulation is visual, tactile, or auditory, perception always includes associated movements of the eyes, hands, arms, head, and neck [5], [6]. Sensory–motor coordination is important because (a) it induces correlations between various sensory modalities (such as vision and haptics) that can be exploited to form cross-modal associations, and (b) it generates structure in the sensory data that facilitates the subsequent processing of that data [7], [8], [9], [10]. In other words, humans, robots, and other agents are not passively exposed to sensory information, but can actively shape such information. Our long-term goal is to understand quantitatively what sort of coordinated motor activities lead to what sort of information. We also aim to identify “fingerprints” (or patterns of sensory or sensory–motor activation) characterizing the agent–environment interaction. Our approach builds on previous studies on category learning [11], [12], as well as on work on the information-theoretic and statistical analysis of sensory and motor data [7], [10], [13].

The experimental tool in the study presented in this article is a simulated robotic agent, programmed to search its local environment for red objects, approach them, and explore them for a while. The analysis of the recorded sensory and motor data shows that different types of sensory–motor activities display distinct fingerprints that are reproducible across many experimental runs. In the two following sections, we give a detailed overview of our experimental setup and describe the actual experiments. Then, in Section 4, we present our methods of analysis. In Section 5, we present and discuss our results. Finally, in Section 6, we conclude and point to some future research directions.

Section snippets

Experimental setup

The study was conducted in simulation. The experimental setup consisted of a two-wheeled robot and a closed environment cluttered with randomly distributed, colored cylindrical objects. A bird’s eye view of the robot and its ecological niche is shown in Fig. 1(a). The robot was equipped with eleven proximity sensors (d0–d10), which measured the distance to surrounding objects, and a pan-controlled camera unit (image sensor) (see Fig. 1(b)). The pan-angle of the camera was constrained to vary in an angular
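For reference, the sensory layout just described can be summarized as a small configuration structure. The following Python sketch is illustrative only: the class and field names are our own, and the camera pan limit is left unspecified because the snippet above is truncated at that point.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimulatedRobot:
    """Sensor layout of the simulated two-wheeled robot (illustrative, not the authors' code)."""
    n_proximity_sensors: int = 11                 # d0-d10, each measuring distance to nearby objects
    has_pan_camera: bool = True                   # pan-controlled camera unit (image sensor)
    pan_angle_limit_deg: Optional[float] = None   # constrained pan range; value not given in the snippet
```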

Experiments

A top view of a typical experiment is shown in Fig. 1(a). At the outset of each experimental run, the robot’s initial position was set to the final position of the previous experiment (except for the first experiment, where the robot was placed at the origin of the xy plane), and the behavioral state was reset to “exploring”. In this particular state, the robot randomly explored its environment while avoiding obstacles. Concurrently, the robot’s camera panned from side to side (by 60 degrees
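The snippet above names only the “exploring” state; the abstract and the discussion also mention approaching red objects and a “circling” state. Purely as an illustration, a behavioral controller of this kind could be organized as a small state machine like the sketch below; the transition conditions are our assumptions, not the authors' controller.

```python
from enum import Enum, auto

class State(Enum):
    EXPLORING = auto()    # random exploration with obstacle avoidance, camera panning
    APPROACHING = auto()  # a red object has been detected by the camera
    CIRCLING = auto()     # exploring the object at close range

def update_state(state, red_object_in_view, object_close):
    """Illustrative transition logic only; conditions and thresholds are assumptions."""
    if state is State.EXPLORING and red_object_in_view:
        return State.APPROACHING
    if state is State.APPROACHING and object_close:
        return State.CIRCLING
    if state is State.CIRCLING and not red_object_in_view:
        return State.EXPLORING
    return state
```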

Methods

First, we introduce some notation. Correlation quantifies the amount of linear dependency between two random variables X and Y, and is given by $\mathrm{Corr}(X,Y) = \big(\sum_{x \in X} \sum_{y \in Y} p(x,y)\,(x - m_X)(y - m_Y)\big) / (\sigma_X \sigma_Y)$, where $p(x,y)$ is the joint probability density function, $m_X$ and $m_Y$ are the means, and $\sigma_X$ and $\sigma_Y$ are the standard deviations of $x$ and $y$ computed over X and Y (note that the analyses were performed by fixing the time lag between the two time series to zero). The entropy of a random variable X is a measure of its
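To make these measures concrete, the following minimal Python sketch estimates zero-lag correlation, entropy, and mutual information from discretized time series. It is not the authors' code; the bin count and function names are our own choices.

```python
import numpy as np

def correlation(x, y):
    """Zero-lag linear correlation (Pearson's r) between two time series."""
    return np.corrcoef(x, y)[0, 1]

def entropy(x, bins=16):
    """Shannon entropy (in bits) of a time series discretized into `bins` levels."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                        # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=16):
    """Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y), in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return h(px) + h(py) - h(pxy)
```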

Data analysis and results

We analyzed the collected datasets by means of three measures: correlation, mutual information, and entropy (which is a particular instance of mutual information). In this section we describe, and in part discuss, the results of our analyses.
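In practice, such an analysis can be applied pairwise over all recorded sensor and motor channels to obtain correlation and mutual-information matrices. The sketch below builds on the functions given in the Methods section above; the file name and channel layout are hypothetical.

```python
import numpy as np

def pairwise_matrix(data, measure):
    """Apply a two-argument measure to every pair of columns (channels) in `data`."""
    n_channels = data.shape[1]
    m = np.zeros((n_channels, n_channels))
    for i in range(n_channels):
        for j in range(n_channels):
            m[i, j] = measure(data[:, i], data[:, j])
    return m

# Hypothetical usage: rows are time steps, columns are channels (e.g. d0-d10, camera pan, motors).
# log = np.loadtxt("sensor_log.txt")
# corr_matrix = pairwise_matrix(log, correlation)
# mi_matrix = pairwise_matrix(log, mutual_information)
```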

Further discussion and conclusion

To summarize, coordinated motor activity leads to informational structure in the sensory data that can be used to characterize the robot–environment interaction. Statistical measures, such as correlation and mutual information, can be effectively employed to extract and quantify patterns induced by the coupling of robot and environment. In the “circling” behavioral state, for instance, the average correlation (evaluated over 16 experimental runs) normalized by the number of distance sensors or

Acknowledgments

Max Lungarella would like to thank the University of Tokyo for funding and Yasuo Kuniyoshi for support and encouragement. For Gabriel Gómez, funding has been provided by the EU-Project ADAPT (IST-2001-37173).

Danesh Tarapore is a graduate student at the Autonomous Systems Laboratory of the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. Before joining EPFL, he completed his M.Tech. degree at the Indian Institute of Technology Bombay in India. He has been working on various projects in the areas of computer vision, artificial intelligence, and evolutionary robotics. His main research interests are embodied artificial intelligence, artificial evolution, and cooperative robotics.

Max Lungarella received his Ph.D. at the Artificial Intelligence Laboratory of the University of Zurich (Switzerland) in 2004. From 2002 to 2004, he was an invited researcher at the Neuroscience Research Institute of AIST (Japan). Currently, he is a postdoctoral researcher at the Department of Mechano-Informatics of the University of Tokyo. He has been involved in various projects related to artificial intelligence, robotics, and electrical engineering. His research interests include, but are not limited to, embodied artificial intelligence, developmental robotics, motor control, and electronics for perception and action.

Gabriel Gómez is a Ph.D. candidate at the Artificial Intelligence Laboratory of the University of Zurich (Switzerland). He received his master's degree in Computer Science from EAFIT University (Colombia) in 2000, and his bachelor's degree from the University of Antioquia (Colombia) in 1996. From 1999 until 2004, he held a scholarship from the “Federal Commission for Scholarships for foreign students” of the Swiss government. His research interests include embodied cognitive science, adaptive learning mechanisms, sensory–motor coordination, computer vision, tactile sensing, and multimodal perception.
