Quantifying patterns of agent–environment interaction
Introduction
Manual haptic perception is the ability to gather information about objects by using the hands. Haptic exploration is a task-dependent activity: when people seek information about a particular object property, such as size, temperature, hardness, or texture, they perform stereotyped exploratory hand movements [1]. The same holds for visual exploration. Eye movements, for instance, depend on the perceptual judgement that people are asked to make, and the eyes are typically directed toward areas of a visual scene or an image that deliver useful and essential perceptual information [2]. To get a better grasp on the organization of saccadic eye movements, Lee and Yu [3] proposed a theoretical framework based on information maximization. The basic assumption of their theory is that, because of the small size of our foveas (the high-resolution part of the retina), our eyes have to move continuously to maximize the information intake from the world. Differences between tasks obviously influence the statistics of visual and tactile inputs, as well as the way people acquire information for object discrimination, recognition, and categorization.
The common denominator underlying our perceptual abilities thus appears to be a process of sensory–motor coordination coupling perception and action. It follows that coordinated movements must be considered an essential part of the perceptual system [4]: whether the sensory stimulation is visual, tactile, or auditory, perception always includes associated movements of the eyes, hands, arms, head, and neck [5], [6]. Sensory–motor coordination is important because (a) it induces correlations between various sensory modalities (such as vision and haptics) that can be exploited to form cross-modal associations, and (b) it generates structure in the sensory data that facilitates the subsequent processing of that data [7], [8], [9], [10]. In other words, humans, robots, and other agents are not passively exposed to sensory information; they actively shape it. Our long-term goal is to understand quantitatively what sorts of coordinated motor activities lead to what sorts of information. We also aim to identify “fingerprints” (patterns of sensory or sensory–motor activation) characterizing the agent–environment interaction. Our approach builds on previous studies on category learning [11], [12], as well as on work on the information-theoretic and statistical analysis of sensory and motor data [7], [10], [13].
The experimental tool in the study presented in this article is a simulated robotic agent programmed to search its local environment for red objects, approach them, and explore them for a while. The analysis of the recorded sensory and motor data shows that different types of sensory–motor activity display distinct fingerprints that are reproducible across many experimental runs. In the two following sections, we give a detailed overview of our experimental setup and describe the actual experiments. In Section 4, we present our methods of analysis. In Section 5, we report and discuss our results. Finally, in Section 6, we conclude and point to some future research directions.
Section snippets
Experimental setup
The study was conducted in simulation. The experimental setup consisted of a two-wheeled robot and a closed environment cluttered with randomly distributed, colored cylindrical objects. A bird’s-eye view of the robot and its ecological niche is shown in Fig. 1(a). The robot was equipped with eleven proximity sensors, which measured the distance to the objects, and a pan-controlled camera unit (image sensor) (see Fig. 1(b)). The pan-angle of the camera was constrained to vary in an angular
Experiments
A top view of a typical experiment is shown in Fig. 1(a). At the outset of each experimental run, the robot’s initial position was set to the final position of the previous experiment (except for the first experiment, where the robot was placed at the origin of the plane), and the behavioral state was reset to “exploring”. In this particular state, the robot randomly explored its environment while avoiding obstacles. Concurrently, the robot’s camera panned from side to side (by 60 degrees
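The behavioral logic described above can be sketched as a small finite-state machine. The state names match those used in the article (“exploring”, followed by approach and exploration of a detected red object), but the transition conditions, thresholds, and wheel commands below are illustrative assumptions, not the authors' actual controller:

```python
# Hypothetical sketch of the robot's behavioral state machine.
# State names follow the article; conditions and commands are assumed.
EXPLORING, APPROACHING, CIRCLING = "exploring", "approaching", "circling"

def step(state, proximity, red_in_view, near_object):
    """Return (new_state, wheel_command) for one control step.

    proximity    : list of 11 normalized proximity-sensor readings
    red_in_view  : True if the camera currently sees a red object
    near_object  : True if the robot has reached the object
    """
    if state == EXPLORING:
        if red_in_view:
            return APPROACHING, "steer_toward_object"
        if min(proximity) < 0.1:          # obstacle too close: avoid it
            return EXPLORING, "turn_away"
        return EXPLORING, "wander"        # random exploration
    if state == APPROACHING:
        if near_object:
            return CIRCLING, "orbit_object"
        return APPROACHING, "steer_toward_object"
    # CIRCLING: keep exploring the object
    return CIRCLING, "orbit_object"
```

A controller of this shape makes the "behavioral state" an explicit variable, which is what allows the recorded sensory–motor data to be segmented and analyzed per state later on.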
Methods
First, we introduce some notation. Correlation quantifies the amount of linear dependency between two random variables $X$ and $Y$, and is given by $\rho(X,Y)=\frac{1}{\sigma_X \sigma_Y}\sum_{x,y}(x-\mu_X)(y-\mu_Y)\,p(x,y)$, where $p(x,y)$ is the joint probability density function, $\mu_X$ and $\mu_Y$ are the means, and $\sigma_X$ and $\sigma_Y$ are the standard deviations of $X$ and $Y$, computed over $x$ and $y$ (note that the analyses were performed by fixing the time lag between the two time series to zero). The entropy $H(X)=-\sum_{x} p(x)\log p(x)$ of a random variable $X$ is a measure of its
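The three measures used in the analysis can be estimated directly from recorded time series. The sketch below is a minimal, generic implementation (zero-lag correlation, and histogram-based estimators for entropy and mutual information); the bin count and the base-2 logarithm are assumptions, not details taken from the article:

```python
import numpy as np

def correlation(x, y):
    """Zero-lag Pearson correlation between two time series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

def entropy(x, bins=16):
    """Entropy (in bits) of a time series, estimated from a histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=16):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), from a 2-D joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
    return hx + hy + (-1.0) * hxy
```

Histogram estimators of this kind are biased for short time series, which is why error estimation (as in the Physica D reference cited below) matters in practice.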
Data analysis and results
We analyzed the collected datasets by means of three measures: correlation, mutual information, and entropy (a special case of mutual information, since $I(X;X)=H(X)$). In this section we describe, and in part discuss, the results of our analyses.
Further discussion and conclusion
To summarize, coordinated motor activity leads to informational structure in the sensory data that can be used to characterize the robot–environment interaction. Statistical measures, such as correlation and mutual information, can be effectively employed to extract and quantify patterns induced by the coupling of robot and environment. In the “circling” behavioral state, for instance, the average correlation (evaluated over 16 experimental runs) normalized by the number of distance sensors or
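One concrete way to obtain such a "fingerprint" is to compute the matrix of pairwise correlations among the sensor time series recorded in one behavioral state, and summarize it by its mean off-diagonal magnitude. This is an illustrative sketch of that idea, not the authors' exact pipeline:

```python
import numpy as np

def correlation_fingerprint(sensor_data):
    """Pairwise zero-lag correlation matrix for an (T, N) array of
    N sensor time series: a candidate fingerprint of one behavioral
    state (illustrative sketch)."""
    return np.corrcoef(sensor_data.T)     # (N, N) matrix

def mean_off_diagonal(c):
    """Average absolute correlation, excluding each sensor's trivial
    self-correlation, i.e. normalized by the number of sensor pairs."""
    n = c.shape[0]
    mask = ~np.eye(n, dtype=bool)
    return np.abs(c[mask]).mean()
```

Averaging such a summary over many experimental runs, as done for the 16 runs mentioned above, then yields a scalar signature per behavioral state that can be compared across states.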
Acknowledgments
Max Lungarella would like to thank the University of Tokyo for funding and Yasuo Kuniyoshi for support and encouragement. For Gabriel Gómez, funding has been provided by the EU-Project ADAPT (IST-2001-37173).
References (17)
Animate vision, Artificial Intelligence (1991)
Power and limit of reactive agents, Neurocomputing (2002)
et al., Sensory-motor coordination: the metaphor and beyond, Robotics and Autonomous Systems (1997)
Estimating the errors on measured entropy and mutual information, Physica D (1999)
et al., Haptic exploration and object representation
Eye Movements and Vision (1967)
et al., An information-theoretic framework for understanding saccadic behaviors
et al., A Dynamic Systems Approach to the Development of Cognition and Action (1994)
Cited by (6)
Complexity Measures: Open Questions and Novel Opportunities in the Automatic Design and Analysis of Robot Swarms, Frontiers in Robotics and AI (2019)
The Measure of All Minds: Evaluating Natural and Artificial Intelligence (2017)
How universal can an intelligence test be?, Adaptive Behavior (2014)
Information dynamics of evolved agents, Lecture Notes in Computer Science (2010)
Evolution and analysis of a robot controller based on a gene regulatory network, Lecture Notes in Computer Science (2010)
Evolving coordinated group behaviours through maximisation of mean mutual information, Swarm Intelligence (2008)
Danesh Tarapore is a graduate student at the Autonomous Systems Laboratory of the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. Before joining EPFL, he completed his M.Tech. degree at the Indian Institute of Technology Bombay in India. He has been working on various projects in the areas of computer vision, artificial intelligence, and evolutionary robotics. His main research interests are embodied artificial intelligence, artificial evolution, and cooperative robotics.
Max Lungarella received his Ph.D. at the Artificial Intelligence Laboratory of the University of Zurich (Switzerland) in 2004. From 2002 to 2004, he was an invited researcher at the Neuroscience Research Institute of AIST (Japan). Currently, he is a postdoctoral researcher at the Department of Mechano-Informatics of the University of Tokyo. He has been involved in various projects related to artificial intelligence, robotics, and electrical engineering. His research interests include, but are not limited to, embodied artificial intelligence, developmental robotics, motor control, and electronics for perception and action.
Gabriel Gómez is a Ph.D. candidate at the Artificial Intelligence Laboratory of the University of Zurich (Switzerland). He received his master's degree in Computer Science at EAFIT University (Colombia) in 2000, and his bachelor's degree at the University of Antioquia (Colombia) in 1996. He was a scholarship holder of the “Federal Commission for Scholarships for Foreign Students” of the Swiss government from 1999 until 2004. His research interests include embodied cognitive science, adaptive learning mechanisms, sensory–motor coordination, computer vision, tactile sensing, and multimodal perception.