Article

Design and Evaluation of a Mixed-Reality Playground for Child-Robot Games

by Maria Luce Lupetti 1,*, Giovanni Piumatti 2, Claudio Germak 1 and Fabrizio Lamberti 2
1 DAD, Politecnico di Torino, 39-10125 Torino, Italy
2 DAUIN, Politecnico di Torino, 24-10129 Torino, Italy
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2018, 2(4), 69; https://doi.org/10.3390/mti2040069
Submission received: 16 August 2018 / Revised: 20 September 2018 / Accepted: 2 October 2018 / Published: 6 October 2018
(This article belongs to the Special Issue Mixed Reality Interfaces)

Abstract

In this article we present the Phygital Play project, a mixed-reality game platform in which children can play with or against a robot. The project was developed by adopting a human-centered design approach, characterized by the engagement of both children and parents in the design process and by situating the game platform in a real context, an educational center for children. We report the results of both the preliminary studies and the final testing session, which focused on the evaluation of usability factors. By providing a detailed description of the process and the results, this work aims to share the findings and the lessons learned about both the implications of adopting a human-centered approach across the whole design process and the specific challenges of developing a mixed-reality playground.

1. Introduction

Today, technological developments are increasing the multiplicity of scenarios in which people can interact with robots. From production to service environments, from technical applications to gaming, human-robot interaction (HRI) is becoming more and more complex, involving more than just the technical aspects of a robot’s abilities. In daily life, HRI can raise psychological, moral, and ethical issues. As a consequence, user and social studies are becoming crucial. Research in HRI is already addressing these issues by applying principles of Human-Centered Design (HCD). In 2007, Schaal [1] emphasized this phenomenon even more by claiming that the new robotics was going to be human-centered. An HCD approach can contribute to the development of acceptable robotic solutions by exploring the possibility of applying user acceptance theories throughout the development of projects [2]. Robots’ acceptability, indeed, is not solely determined by functional aspects such as the perceived usefulness and ease of use [3] of the solution, or its appearance [4]. A perfectly functioning robot that is meant to answer a real need and is well designed in terms of affordability and appeal can still be unsuccessful in terms of acceptance [5], because in many cases the economic, social, and ethical preconditions are missing. HCD can contribute in this sense by understanding and creating the conditions for short-term touch points between robotics and society. To this end, two actions appear to be fundamental: involving [5,6] and situating [5].
Participatory design methods can be applied to gather information from people, as well as to meet their needs and expectations. People should not be considered as research subjects who merely provide data [6], but rather as active participants who contribute to projects, as in co-design practices, where both end-users and all other stakeholders are involved [7]. Co-design sessions, interviews, and focus groups are just some of the possible ways of involving people during the design process. Regarding the importance of situating a solution, e.g., moving experiments from the lab to the wild, as discussed by Baxter et al. [8], it is crucial to obtain results on the demonstrable applicability of the projects [9]. Furthermore, situating a project in a real environment allows us to deal with the complexity of a social context [10]. Despite the acknowledged importance of engaging relevant actors throughout projects and situating them in real environments, these practices are still infrequently adopted. The vast majority of HRI studies still appear to be focused mainly on a robot’s abilities, and some are literally robot-centric [11].
These reflections are the basis of our project Phygital Play, a mixed-reality game platform in which children can play with or against robots, which was developed by adopting an HCD approach. We previously presented a preliminary and partially functioning version of the platform [12] as a demo at the INTETAIN conference in 2015. In this article we provide a detailed documentation of the project, emphasizing how relevant actors were involved during the process and how the platform was situated in a real context. Opportunities and challenges that emerged during the project are reported with the aim of sharing the findings and the lessons learned about both the implications of adopting an HCD approach across the whole design process and the specific challenges of developing a mixed-reality playground.

2. Related Works

As mentioned in the introduction, HCD methodologies are more and more frequently adopted in HRI through the participation of representative samples of people and the contextualization of the studies in real environments. At the Carnegie Museum of Natural History, for instance, a robot called Sage [13] was introduced as a member of the museum staff, and there the authors had the chance to observe the interactions and reactions of a very diverse sample of participants with the robot. Similar studies were carried out in other museums, such as the Museum für Kommunikation in Berlin [14], the Japanese Overseas Migration Museum [15], and Racconigi Castle in Italy [16]. In other case studies, the specificity of participant samples is emphasized over diversity, as in the case of Etnodroid [17], a child-sized cardboard robot equipped with two tablets and designed to interact with children at an expo. This study allowed the authors to investigate the main requirements for the robot to be social and engaging in relation to a specific user group.
The importance of involving specific user groups is emphasized even more when direct observations are complemented by interviews and discussions. Osada et al. [18], for instance, conducted a six-month study at the Aichi Expo in 2005. The demonstration consisted of short playful interactions with PaPeRo robots, which were set up to perform seven different play scenarios with children. In this project, the interactions with the five robots were observed, and both the children and the staff on site were interviewed. These actions were crucial for the improvement of the various interaction scenarios. They also brought to light a wealth of unforeseen information, such as the attribution of personalities to the robots on the basis of their different colors.
Performing user studies is useful to go beyond the personal assumptions of both roboticists and users [5]. Robotics research can also go beyond its own domain, revealing itself as a powerful tool for creating knowledge about human behavior. An example is a three-month study carried out at Kashinokien, an elderly care center in Japan [19]. There, a Robovie2 robot was introduced to interact and engage with elderly people who were visiting the site once or twice a week for various activities. The project aimed to get a better understanding of the behaviors that the introduction of a robot in this kind of context can generate, both for the elderly and for the staff. From this research, the authors obtained two interesting types of information: on the one hand, they had feedback about the acceptability of the robot; on the other hand, they received information about the impact that simple positive actions, such as daily greetings, could have on an elderly person. Even though the study revealed an aspect not directly related to the robot (daily greetings), it proved to be an effective way to unveil the peculiarities of participant behavior.
These examples show how involving people in studies is important to obtain information that can go beyond the primary goal of the research. However, involving people in the design process is crucial not only for getting data, but also to evaluate and generate design ideas [20], to get a better understanding of both physical and social contexts [10], and to let designers and final users together identify new opportunities for robotics applications [20]. Furthermore, participatory practices foster a sense of ownership in the people involved in the design process [21], which also affects the level of acceptability of the robotic solutions. To this end, people should also be involved at a creative level and considered as experts. Forlizzi [22], for instance, conducted an ethnographic study on robotic vacuum cleaners in the homes of six families. The family members were involved in the study through different actions, from interviews to the composition of visual story diaries. An interesting aspect of this study is that the different family members were entrusted with different tasks. Each of them thus assumed an active role in the study as an expert on house and cleaning habits. Similarly, Sung et al. [23] carried out a study in 30 households by giving each a Roomba and recording their experience of its use over a six-month period. This process was aimed at bringing out the desirable characteristics of the robots in terms of functions, aesthetics, and interactivity, as well as at visually describing behaviors and attitudes towards the robots.
People’s creativity can be further involved in projects through co-design actions. In the ALIZ-E project [24], for example, 16 children with diabetes in a healthcare camp were involved in three different activities, one of which was a participatory design session. Given the aim of designing a robot for supporting therapy, the authors performed group interviews with the children to co-define the requirements, qualities, and modalities in which the robot might support them. This generated two main positive results: the development of a more effective robot and the facilitation of a tighter collaboration between the researchers and the stakeholders. Another interesting project is Dewey [5]. In this project, the authors engaged the employees of an office in their iterative design process, from exploratory studies to final tests. Their aim was to evaluate multiple design alternatives to understand how different robot characteristics are perceived by a representative sample of users in a real context. This work shows how physical prototyping platforms represent a powerful tool that, together with testing in the wild, is crucial for communicating ideas, understanding the interaction experience, and getting evaluations, even in the early stages of design.
The adoption of HCD methodologies, then, contributes by bridging robot development and the needs emerging from a specific context, also fostering a reflection on whether and how a robotic solution might be preferable to other solutions. For instance, in the Directions Robot project [25] a study in the wild was carried out with a Nao robot placed in a public area of a university, close to an elevator. There, it was responsible for giving directions to people working in the building as well as to visitors not familiar with it. In this case, an effort was made to design an interaction scenario based on an actual need, although it is questionable whether the solution would be preferred over existing alternatives.
To sum up, good practices for human-centered projects in HRI are already being adopted across the whole design process. However, these practices are still present in a minority of studies. The work by Baxter et al. [9], which analyzed the last three years of HRI conference publications, shows that around 46% of the studies in this field are still performed with university samples, and 75% are still carried out in the lab. It has to be noted, however, that performing tests in the wild requires finding a balance between control and ecological validity [9], which is fairly complex. A real environment involves various constraints and issues, including legal issues, the non-neutrality of the environment, and the subjective peculiarities of the people involved [8]. Thus, this work aims to apply human-centered practices, emphasizing the potential and the limitations of situating a study in the wild and exploring ways to overcome the related issues.

3. Methodology

We developed this project as an extension of the Phygital Play concept, which was initially presented only as a partially functioning demonstrator, without the robot integrated yet. From that preliminary prototype, the project evolved through a series of phases, from participatory activities to the actual design and development of the platform and games, with the aim of validating the project’s purpose and acceptability, as well as testing the usability and attractiveness of the proposed experience. To this end, we involved a representative sample of people, namely parents and children, and at a later stage we situated the platform in a real context, an educational center for children, where we adapted and redesigned a custom game to fit existing educational activities. More specifically, the process was characterized by three main stages: explorative, operative, and evaluative. In the explorative stage, we involved adults through a questionnaire and a focus group, with the intent of getting a deeper understanding of the theme and of children’s habits, and of collecting preliminary feedback about the project proposal. The operative stage, instead, focused on the design and implementation of the platform and the games, including all the related iterations. Finally, in the evaluative stage we carried out a five-day experiment in a real environment with school groups.

4. Exploration

The explorative stage aimed to build an understanding of the current scenario of children’s play in Italy. To this end, we conducted an ethnographic study: a qualitative analysis of people’s everyday life, desires, and concerns for informing and inspiring the next phases of the design process [26]. This analysis was characterized by three main actions: a review of the literature and online resources, a questionnaire, and a focus group. In the literature review we focused both on specific studies on child-robot interaction and on studies on general aspects of the relationship between children and technology. We then described the state of the art of edutainment robotics through a benchmarking of products as a way to understand what experiences are offered by current technology. Furthermore, we addressed emerging trends and changes in children’s habits through the review of statistical reports and other public documentation. We conducted the questionnaire and the focus group by engaging adults, because of the crucial role that they assume both as experts on children’s habits and needs and as final accepting users [27]. In this regard, we asked them to discuss their perceptions, opinions, and concerns about the world of games, technology, and children’s play habits.

4.1. Survey

As reported by several case studies, a survey represents a useful tool for building knowledge about people’s resistance, perceptions, and attitudes towards the subject of investigation [28]. Thus, we conducted a survey with the aim of validating the relevance of the issue addressed by the project: the rise of sedentary behaviors (SB) and its relationship with technology. Secondly, we used the survey as a way to investigate preferences about games and play typologies, both for adults and children, and to get a deeper understanding of general ideas about robots and whether there are concerns about their use in children’s play. The survey was composed of 35 questions, divided into general information, technology and its relationship with sedentary behaviors, games and play, and edutainment technologies, including robots. It was distributed as an online survey through the employee mailing list of Politecnico di Torino. The sample was composed of 511 people, with a prevalence of males (60% vs. 40% female), and around 80% of them held a degree. In terms of age distribution, however, the sample was diverse: almost 30% were aged 19 to 30 years, 25% 31 to 40 years, 26% 41 to 50 years, 15% 51 to 60 years, and the rest were older than 61. Sixty percent of the respondents had children. Diversity was also observable in the number and age of children: some participants had a single child, while others had up to four, and the age of the children varied from less than one year to over 40 years.

4.2. Focus Group

With this activity, as with the survey, we aimed to validate the issue addressed by the project and to investigate the scenario of children’s play as well as ideas and concerns regarding robots. However, this activity allowed us to go deeper in our understanding of the theme and to gather more qualitative, rather than quantitative, data. Furthermore, we presented an early-stage prototype of the system and discussed the main concept of the project with parents. The participants were six parents (two fathers and four mothers) of children aged between six and eight years. The activity, which lasted two hours, was coordinated by a psychologist and observed by two designers. The coordination consisted of introducing questions and arguments, moderating the discussion, and assigning tasks. The observers, instead, sat at the same table as the participants without taking part in the discussions; they were asked to observe and take notes about the participants’ behavior. In addition to the observers’ notes, we also recorded the activity for subsequent transcription of the entire discussion. Two mothers had a humanities background; one mother and one father had a technical/engineering background; and one mother and one father had an academic/design background. In the first part of the focus group, the parents talked about the habits of their children regarding their spare time. In the second part, they were asked to describe what a robot is to them, whether they own one, and what they think about the diffusion of robotic products, especially for entertainment. Finally, the last part of the focus group focused on the idea of a mixed-reality game platform.

4.3. Key Findings

The exploratory stage highlighted a mismatch between the general trend of Italian children’s habits and the desires and concerns of parents in this regard. On the one hand, statistical reports show a great change in children’s habits regarding technology. For instance, the use of mobile phones is constantly increasing, and at least 20% of children aged 6–10 years already have one [29]. The phone is used not only for calling or messaging, but also as a multimedia platform. The use of the Internet and computers for gaming is also increasing [29]. In parallel, the free time that children spend outdoors is decreasing; according to research conducted by Save the Children, 62% of parents stated that their children spend most of their free time at home [30]. A change was also observed in the sports activities performed by children in their free time: participation decreased from 83% in 2015 to 77% in 2016 [30]. Compared to other European countries, the levels of physical activity in Italy are lower: less than 10% of children aged 11 to 15 years perform at least one hour of physical activity every day, while the average in other countries is over 15% [31]. Italy is also one of the countries with the highest percentage of overweight children (15%) [31]. On the other hand, adults, both the parents who attended the focus group and the questionnaire participants, underlined the importance of outdoor play and sports, and the fact that they encourage these activities. Regarding play, they expressed a preference for traditional games over technological ones. They also limit their children’s use of digital devices, confirming concerns about the possible consequences of children’s exposure to technology, including sedentary behaviors. From this analysis, then, emerges a scenario in which children’s exposure to technology is increasing together with sedentary behaviors and related issues, while parents are worried about this phenomenon and still prefer traditional games and physical play. A design challenge therefore emerged: can a robot’s physicality help reduce sedentary behaviors?

5. Design

In the operative stage we focused on the development of the playground and on the improvement of the setup presented and used as an explorative tool in the first stage. As mentioned before, the Phygital Play project consists of a mixed-reality game platform in which children can play with or against a robot. Depending on the robot available, the platform allows for selecting a game from a series designed specifically for that robot’s features. The aim of the proposed system is to increase child-robot interaction possibilities through natural interaction, thereby promoting a reduction in sedentary behaviors. Prototyping the platform and the games, as well as integrating the robots into the system, required the development of a specific software architecture. Therefore, this stage was characterized by two interrelated actions: platform development and game design.

5.1. Platform

We developed the Phygital Play gaming platform to facilitate the design and rapid prototyping of mixed-reality video games [12]. The main hardware components are a projector, two depth cameras (Microsoft Kinect v2.0, Microsoft Corporation, Redmond, WA, USA), and the robots. The projector displays a virtual playground on the floor, on which most interactions take place. The depth cameras are used to track the position of human players and robots. The robots themselves are controlled by the computer according to the game logic.
From a high-level perspective, the software architecture (Figure 1) is split into two main components: the back end and the front end. The back end is responsible for three fundamental tasks: people tracking, robot tracking, and robot control. The front end is implemented by the games themselves. Each game uses a lightweight library developed for the Unity game engine, which masks communication with the back end and provides high-level tools for the game developer. Games built on this library can therefore interact with the back-end system. The people-tracking module was realized by exploiting Microsoft’s Kinect for Windows v2.0 Software Development Kit (SDK), which is able to track up to six people in front of the camera, providing for each the estimated pose of 25 body joints. The body pose is then converted to a game coordinate system, from which relevant information, such as the position of the player within the playground, can be extracted.
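To make the coordinate-conversion step concrete, the following C# sketch shows how a camera-space joint position could be mapped onto the 2D playground projected on the floor. It is a minimal illustration under stated assumptions: the class name, method names, and calibration offsets are hypothetical and do not reflect the actual implementation, and in the real system the camera-space point would come from the Kinect SDK body joints rather than hard-coded values.

```csharp
using System;

// Minimal sketch: mapping a Kinect camera-space joint (meters, camera origin)
// onto 2D playground coordinates (meters, playground origin on the floor).
// The calibration constants below are hypothetical placeholders.
public static class PlaygroundMapper
{
    // Offset of the playground origin with respect to the camera, in meters (assumed).
    const float OffsetX = -2.0f;
    const float OffsetZ = 1.0f;

    // Converts a camera-space position (x: lateral, z: depth) to playground coordinates.
    public static (float px, float py) ToPlayground(float camX, float camZ)
    {
        // The vertical axis (y) is ignored: interactions happen on the floor plane.
        float px = camX - OffsetX;   // lateral position within the playground
        float py = camZ - OffsetZ;   // position along the projection direction
        return (px, py);
    }

    public static void Main()
    {
        var (px, py) = ToPlayground(0.5f, 2.5f);
        Console.WriteLine($"Player at playground coordinates ({px:F2}, {py:F2}) m");
    }
}
```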
The robot-tracking module’s purpose is to continuously output an estimate of the robot’s current odometry (i.e., position, orientation, velocity, and acceleration) by using all available data sources. Specifically, the module attempts to track the robot’s position in both the color and depth streams from the Kinect camera. This information is fused with the robot’s own odometry, if available, to provide a final estimate. The robot’s orientation is inferred from its motion and odometry. Finally, the robot-control module is designed to interpret high-level motion commands (such as “go to target position” or “follow path”) and convert them into low-level commands that implement that behavior. Knowing the robot’s current odometry (as estimated by the tracking module) and the required behavior, the controller module outputs the correct velocity commands. This module is needed only when the game itself controls the robot; if the player controls the robot using an external controller, the control module is not required.
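As a rough illustration of these two back-end ideas, the C# sketch below blends a camera-based position estimate with the robot’s own odometry and converts a “go to target position” command into a capped velocity command. The weighting, gain, and speed limit are assumptions for illustration; the actual module could rely on a more sophisticated filter.

```csharp
using System;

// Hypothetical sketch of (1) blending the camera estimate with the robot's odometry
// and (2) turning a "go to target" behavior into a velocity command.
public class RobotTrackerController
{
    const float CameraWeight = 0.7f; // weight of the camera estimate in the blend (assumed)
    const float Gain = 1.2f;         // proportional gain for the go-to-target behavior (assumed)
    const float MaxSpeed = 0.8f;     // m/s, safety cap on the commanded speed (assumed)

    // Simple weighted blend; the real module might use a Kalman-style filter instead.
    public (float x, float y) Fuse((float x, float y) camera, (float x, float y) odometry)
    {
        return (CameraWeight * camera.x + (1 - CameraWeight) * odometry.x,
                CameraWeight * camera.y + (1 - CameraWeight) * odometry.y);
    }

    // Proportional controller: velocity points from the current position to the target.
    public (float vx, float vy) GoToTarget((float x, float y) current, (float x, float y) target)
    {
        float vx = Gain * (target.x - current.x);
        float vy = Gain * (target.y - current.y);
        float speed = (float)Math.Sqrt(vx * vx + vy * vy);
        if (speed > MaxSpeed) { vx *= MaxSpeed / speed; vy *= MaxSpeed / speed; }
        return (vx, vy);
    }
}
```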
The front end (i.e., the game) is constantly updated by the back end regarding the position of the various physical entities. The game keeps an internal representation of the whole scene: each physical entity (i.e., player, robot) has an associated virtual counterpart in the scene, whose position is reported by the tracking modules. For example, the people-tracking module allows the game to know the position of a player in the game world. It is possible to map the position of a game object (a virtual object) to the position of a player, in such a way that as the player moves so does the game object. Consequently, the physics engine will be able to simulate collisions between the player and other virtual objects. The net result is that physical entities appear to be interacting (e.g., colliding) with virtual objects. Finally, the game controls the robot simply by moving a virtual entity. Its position within the game is mapped to the target position for the robot to follow. The robot control module will issue velocity commands to the robot in order to move it on target. The robot’s intelligence (e.g., what to do, where to go) is therefore completely defined by the game. Using these tools, it is possible to quickly develop prototypes of mixed-reality games that are able to interact with the physical world in both the input (via the tracking modules) and output (by controlling a robot) directions.
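On the Unity side, the mapping between physical entities and their virtual counterparts could look like the following MonoBehaviour sketch. The script and its fields are hypothetical, not the platform’s actual library API; the tracking call is faked with a placeholder, since in the real front end the position would be supplied by the back-end library.

```csharp
using UnityEngine;

// Hypothetical Unity-side sketch: a virtual counterpart follows the tracked player,
// and the robot target is simply another virtual entity moved by the game.
public class PlayerMirror : MonoBehaviour
{
    public Transform playerAvatar;  // virtual counterpart of the human player
    public Transform robotTarget;   // position that the robot-control module will chase

    void Update()
    {
        // Placeholder for the position reported by the people-tracking module.
        Vector2 tracked = GetTrackedPlayerPosition();

        // Keep the avatar on the playground plane so the physics engine can
        // resolve collisions between the player and virtual objects.
        playerAvatar.position = new Vector3(tracked.x, 0f, tracked.y);

        // Example game logic: the robot is asked to stay one meter behind the player.
        robotTarget.position = playerAvatar.position + new Vector3(0f, 0f, -1f);
    }

    Vector2 GetTrackedPlayerPosition()
    {
        // Stand-in for the back-end data; a real game would query the platform's library.
        return new Vector2(Mathf.Sin(Time.time), Mathf.Cos(Time.time));
    }
}
```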

5.2. Games and Robots

The system is designed to support different kinds of commercial robots, leveraging the characteristics of each available robot to create different game experiences. The concept is shown in Figure 2. For instance, we considered two commercial robots available at the lab, Jumping Sumo and Sphero, characterized by different types of movement. One has directional movement, since it moves on two wheels, while the other has a rolling movement. For this reason, Jumping Sumo appears to be more suitable for games where precise directional movement is needed, such as Pong, while Sphero is more suitable for games where the motion is directed by the body movements of the player. Therefore, we designed two different kinds of games: a pong-like game and a catching game.
In the first game, the robot assumes the role of the opponent. Like traditional Pong, the playground consists of a rectangular field divided into two areas: the human plays on one side and the robot on the other. The aim is to make the projected figures bounce into the other side of the playground. In the second game, the player controls, through his or her body movements, the robot located in front of him or her. The aim is to catch projected figures that regularly appear in the playground. Both games last one minute, for two reasons: these types of games are typically designed as short matches in which the player obtains a score, and short matches are more suitable for tests with a large number of children, as in the case of school groups. Regarding control, the depth camera allows players to use their body as a “joystick”. Using positions and gestures to control the robot, as well as other elements of the games, is crucial to obtain active engagement in the game. However, the games currently use only the position of the player, to control a projected bar in one game and the robot in the other, as sketched below. Posture and gesture tracking will be further explored in future developments.
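The “body as joystick” idea for the pong-like game can be sketched as follows: the projected bar follows the player’s lateral position, clamped to the playground width and slightly smoothed. This is a minimal sketch under assumptions; the field size, smoothing factor, and class name are hypothetical and not taken from the actual games.

```csharp
using System;

// Minimal "body as joystick" sketch for the pong-like game: the projected bar
// follows the player's lateral position, clamped to the playground width.
// Field dimensions and smoothing factor are assumptions.
public static class PongPaddle
{
    const float HalfFieldWidth = 1.5f; // meters from the center line (assumed)
    const float Smoothing = 0.2f;      // 0..1, higher = snappier response (assumed)

    // Updates the paddle position given the player's lateral position (meters).
    public static float UpdatePaddle(float currentPaddleX, float playerX)
    {
        // Clamp the target to the playable area.
        float target = Math.Max(-HalfFieldWidth, Math.Min(HalfFieldWidth, playerX));
        // Low-pass the movement so tracking jitter does not make the bar shake.
        return currentPaddleX + Smoothing * (target - currentPaddleX);
    }
}
```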
From a visual design point of view, mixed-reality games call for special attention to visibility issues. The unpredictable nature of the support, namely the floor, and the lighting conditions can greatly interfere with game usability [31]. Thus, the use of bright colors and high contrast is crucial. Accordingly, we designed the games using mainly primary and secondary colors, and avoided colored backgrounds. Besides the figure-ground relationship [32], visibility is also affected by compositional factors such as simplicity [33]. Thus, we kept the number of game elements limited and opted for a simple visual design to reduce the player’s cognitive load [33] and increase the game’s intuitiveness. The simplicity principle also matches our intention to create familiar games. In the case of the pong-like game especially, the visual design is based on the original Pong game. This transposition of games from a virtual to a blended reality was also appreciated and highlighted by the parents involved in the focus group.
At the moment, the robot implemented in the system is Sphero, by Orbotix. We used it in the catching game, customized around the theme of coordinates: letters, numbers, and geometrical figures. This customization emerged from a collaboration with experts from the educational center in which the platform was subsequently tested. Indeed, we chose the theme of coordinates to create a common topic around which different experiences could be organized. The main rule of the game is to catch the projected figures that appear in the playground. The figures, which move around, can only be caught by making the robot roll over them; if the child steps on a figure with a foot, for instance, it does not count. Thus, since the robot follows the position of the player, the child has to understand how to move around to make the robot roll over the figures. Every time a figure is caught, another one appears in a different part of the playground. The game does not report a score, in order to avoid the sense of incompetence or competition that some children might feel. The only information visualized in the playground is the remaining time; every match lasts one minute. As shown in Figure 3, every match consists of four main phases: entry, start, play, and ending. In the entry phase, the playground shows a flashing circle with footprints at its center. The child has to stand at the center of the circle and wait until it disappears. Next is the start phase, in which the playground is empty but the child can already try to control the robot by moving around. This is crucial to allow children to familiarize themselves with how the game works. Then, the figures start to appear in the playground and the game begins. The child can capture as many figures as he or she can until the time is up. At the end, the game stops and a thank-you screen appears in the playground.
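The match flow just described can be summarized as a small state machine, sketched below in C#. The phase names follow the text (entry, start, play, ending), while the timings, catch radius, and method signatures are assumptions for illustration, not the game’s actual code.

```csharp
using System;

// Hypothetical sketch of the match flow: entry (wait for the child on the footprints),
// start (free control, no figures), play (one-minute catching phase), ending (thank-you screen).
public class CoordinatesGame
{
    enum Phase { Entry, Start, Play, Ending }

    Phase phase = Phase.Entry;
    float timeLeft = 60f;            // seconds of play, matching the one-minute matches
    const float CatchRadius = 0.25f; // meters: the robot must roll over the figure (assumed)

    public void Tick(float dt, (float x, float y) robot, (float x, float y) figure,
                     bool childOnFootprints, bool startDelayElapsed)
    {
        switch (phase)
        {
            case Phase.Entry:
                if (childOnFootprints) phase = Phase.Start;  // flashing circle disappears
                break;
            case Phase.Start:
                if (startDelayElapsed) phase = Phase.Play;   // figures start appearing
                break;
            case Phase.Play:
                timeLeft -= dt;
                float dx = robot.x - figure.x, dy = robot.y - figure.y;
                if (dx * dx + dy * dy < CatchRadius * CatchRadius)
                    Console.WriteLine("Figure caught: spawn the next one elsewhere.");
                if (timeLeft <= 0f) phase = Phase.Ending;    // show the thank-you screen
                break;
            case Phase.Ending:
                break;
        }
    }
}
```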

6. Evaluation

We carried out the tests with the platform as part of a larger educational experience, in which children were introduced to the themes of coordinates, communication, and control through different activities. We involved school groups, in morning and afternoon sessions, for five days. We conducted the test in a large room subdivided into two main areas (see Figure 4). In the first area, the children were welcomed by two experts from the center and introduced to the first activities related to coordinates, such as the Cartesian Cube, built specifically for this purpose. These activities were carried out with the whole school group. In parallel, two children at a time were invited by two conductors into the second area to try two other games.
These consisted of two different setups for controlling the Sphero, a robotic ball. In the first, non-immersive setup (NIS), the ball was controlled through a smartphone app, while in the second, immersive setup (IS), the ball was controlled through body tracking. The NIS was not relevant to the project per se; rather, it was used as a simple comparison to help children articulate the reasons for their appreciation, or lack thereof, of the IS. We tested the IS in two modalities: one with the robotic ball and the other with a projected ball. With this, we aimed to observe whether the robotic ball was able to increase the attractiveness and enjoyment of the immersive setup. In addition to the conductors, four people from the research team were present in the second area, performing two different activities: two were observing the children, and two were providing technical support, one for each setup. In particular, the observers, provided with observational forms, were seated at the margins of the test area, recording children’s behaviors and their answers to the semi-structured interview questions. Each observer was paired with a conductor.
With this experimental application we aimed to observe both whether the developed solution was able to achieve the intended purpose and whether its features were appropriate for it. We integrated the initial purpose of promoting active behaviors in play with the need to communicate content through the game, a goal that emerged from the co-design sessions with the experts of the educational center for children. In the evaluation, we tried to assess the ability of the solution to meet these two purposes by considering different aspects: the quantity of body movement, the efficacy in supporting a week-long activity as a real application, and the persistence of the educational center experts’ interest in the solution. Other, more specific aspects were addressed to evaluate both the usability and the pleasurability of the solution, which should be two prerequisites of a fun experience [34]. Accordingly, at the level of its features, the platform was evaluated by considering the following aspects: likeability, learnability, enjoyment, and engagement.
Regarding data collection, we adopted two different strategies: direct observations and self-reported data from the children. As is common for studies in the wild, this real context posed a series of constraints, such as confidentiality issues [8] and a lower level of control over the study [9]. Indeed, by involving real school groups we did not have parents’ consent to record the experience. To address this issue, we performed direct observations with the support of observational forms. We designed the forms by referring to existing observational tools used in the behavioral, developmental, and nutritional sciences to observe children, especially in play and physical activities. In particular, the existing tools taken as references for observing physical activity during play in real contexts were SOFIT [35] and SOCARP [36]. These two tools provide forms that observers can use to record children’s activities. In addition to general information about the children observed, these forms ask observers to estimate aspects such as the level of activity, engagement, and enjoyment by giving a value on a five-point scale. Self-reported data about the experience were instead collected from the children through a semi-structured interview carried out by the conductors as an informal conversation. Children were asked how much they enjoyed playing; how difficult it was to understand how the game works and to play it; and, finally, which setup they preferred and why. They were asked to give a value from 1 to 5 for their enjoyment or difficulty; to help them visualize these values, the conductors were provided with a paper sheet showing a five-value bar chart. The forms created for this study were thus composed of four main sections: the child’s personal data; 5-point Likert-scale questions about observed physical activity, engagement, enjoyment, and concentration; the semi-structured interview; and a free-comments area.
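For clarity, the four-section structure of the form could be modeled as a simple data record, as in the sketch below. The field names are illustrative assumptions and do not reproduce the authors’ actual form.

```csharp
using System.Collections.Generic;

// Hypothetical model of the observational form: personal data, 5-point observer
// ratings, the semi-structured interview answers, and free comments.
public class ObservationForm
{
    // Section 1: child's personal data (kept minimal here).
    public string ChildId;
    public int Age;

    // Section 2: observer ratings on 5-point scales (1 = lowest, 5 = highest).
    public int PhysicalActivity;
    public int Engagement;
    public int Enjoyment;
    public int Concentration;

    // Section 3: semi-structured interview (self-reported, 1-5 where applicable).
    public int ReportedEnjoyment;
    public int ReportedDifficulty;
    public string PreferredSetup;    // e.g., "IS" or "NIS"
    public string PreferenceReason;

    // Section 4: free comments from the observer.
    public List<string> Comments = new List<string>();
}
```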

Participants

The participant sample was composed of 17 classes of third-, fourth-, and fifth-graders from different primary schools, for a total of 366 children aged between six and ten years. However, the data analyzed refer to a sample of 270 children, 135 of whom experienced the game platform with the robot and the rest without it; the majority were male (54%). The remaining 96 forms were excluded from the analysis due to incomplete data.

7. Results

The first aspect that we addressed during the test was enjoyment. The results of the semi-structured interviews revealed a general appreciation of the game platform (see Figure 5). Almost all the participants declared that they enjoyed playing with the platform, and on average about 80% of them were totally amused. In particular, children who experienced the platform with the robot declared a higher level of enjoyment.
These data, however, were not completely confirmed by the reports of the observations, which focused on children’s facial expressions to estimate the level of concentration and enjoyment during play. In the forms, the level of enjoyment during play was associated with five main facial expressions: boredom, when the child never smiles and the face looks apathetic; not much amused, when the child gives a few hints of smiles; quite amused, when the child has a serene face; very amused, when the child smiles; and enthusiastic, when the child has a cheerful face and laughs. According to the observers (Figure 6), less than 40% of the children appeared to be enjoying themselves, while another 40% looked neutral and around 17% did not seem to be enjoying the activity. This discrepancy suggests that the experience was able to foster a positive valence, although without generating excitement, except in a few cases. However, looking at the data regarding concentration (Figure 7), this low level of observed enjoyment may be explained by the high level of attention required by the game. During play with the platform, children appeared mostly concentrated, and around 30% of them were totally concentrated. Among the children who appeared not much amused, 70% were considered to be totally concentrated and another 20% very concentrated. Therefore, it is possible to presume that a lack of smiling was mostly determined by a high level of concentration and not necessarily by a low level of enjoyment. This is somewhat consistent with the generally enthusiastic feedback given by children about the experience and reveals a high level of engagement.
The high level of concentration is also related to the game difficulty. Looking at the answers given by the children, almost 70% of them stated that controlling the ball with the body in the IS was not very difficult or not difficult at all. Catching the projected figures was also considered easy: just 17% of participants stated that it was difficult or very difficult. On the contrary, the initial phase of the game, which revealed the learnability of the game experience, was more challenging for children. Although 40% did not find it difficult, 30% said that it was quite difficult and another 30% stated that it was initially very difficult to understand the game logic. In this regard, 14 children pointed out specific reasons for these difficulties, such as the tendency to try to “catch” the projected figures by walking over them rather than controlling the robot to catch them; tilting the chest instead of moving around to control the ball; thinking that it was necessary to remain in the location indicated at the beginning of the game; and inattention during the tutor’s explanation of the game. In some cases, in fact, the explanation had to be repeated twice.
The results of the NIS were then used as a comparison, to gain a deeper understanding of the results of the IS and insights for future improvements. The overall preference for one setup over the other is not relevant to the evaluation of the platform because of the different nature of the two experiences; rather, it was used as a way to get more insights from children about positive and negative aspects of the platform. For instance, game difficulty was pointed out as a factor determining the preference for one setup over the other, and was treated as an indicator of likeability. Looking at the children’s answers and comments, a significantly higher preference for the NIS (60%) over the IS was noted, and difficulty was pointed out as a preference factor by more than 10% of the children. Some of them stated that they preferred the immersive setup because it was easier, and in some cases the ease was attributed to the intuitiveness of the solution. The other setup, however, was preferred by some because it was easier and by others because it was more difficult. These results appear conflicting, and it is not possible to conclude that the easier the game is, the more fun it is, or vice versa. However, these comments highlight the crucial relationship between difficulty and game enjoyment.
The preference for one setup over the other, however, was strongly influenced by a series of other factors (see Figure 8). The most influential was probably the presence of the robot. In the case of the IS, in fact, the feedback was significantly more positive when the game was played with the robot: 70% of children expressed maximum appreciation for the game when interacting with a projected ball, against 86% of those playing with the robotic ball.
Although the immersive setup was preferred by only 40% of participants, the preference for this setup was higher when it was experienced with the robot (+4%). Furthermore, in 7% of the children’s comments, the preference for the NIS was motivated by the presence of the robot; in these cases, the children had experienced the IS without the robot.
The comments also revealed other recurring factors. Regarding the preference for the NIS, many children highlighted the possibility of controlling the robot through the smartphone, a play modality that one kid described as “more technological”. Almost one-fifth of the participants greatly enjoyed striking the cubes and making the Sphero jump on the ramps. Nevertheless, the comments also highlighted positive features that made the IS preferable. In most cases, the preference was determined by the possibility of controlling the game through the body: this feature was pointed out by 35% of the children, and 14% of them positively remarked on the fact that the platform requires them to move.
This factor introduces the last aspect addressed in the observations: movement during play. The data reported by the observers reveal that the platform and the game tested did not produce a great amount of movement in the players. According to the observations, almost half of the participants made constant but small movements, and 28% of them moved only minimally. Some children showed more active and energetic behaviors, characterized by constant movement and body gestures (shaking arms, jumping), but they represented only around 30%. No significant differences were found between the data regarding the sample that played with the robot and the one that played without it. On the contrary, a considerable correlation was found between the quantity of movement and the level of concentration, both reported by the observers.
When we focus on the data from the 78 children who appeared to be performing a minimum amount of movement, the vast majority of them also appeared to be very (41%) or totally concentrated (25%). The rest also appeared to be quite concentrated. These data reveal that the platform tested with the “coordinates” game was unable to foster a great amount of movement. This was probably due to the high level of concentration required by the game and some of the learnability issues mentioned earlier. Nevertheless, a lower level of arousal and higher concentration appear to be more suitable conditions for the playful learning activities carried out at the educational center. The activity carried out on the platform, in fact, was easily managed in parallel with the others, without negatively affecting children or distracting them from the general topic. This represents positive feedback in terms of the compatibility of the solution with the context and existing practices.

8. Discussion and Conclusions

Human-centered design methodologies are not a novelty in human-robot interaction studies. From running tests in the wild to involving representative samples of the population, there is a growing effort to situate HRI studies in real contexts. However, despite the acknowledged importance of these practices, the vast majority of HRI studies are still carried out in labs with non-representative samples of participants, typically students or researchers. Moreover, even when these practices are applied, the studies are meant to test robotic applications that are developed on the basis of technological opportunities rather than on an understanding of people’s needs, behaviors, and expectations. Thus, in this study we explored the design opportunities emerging from the analysis of the current childhood scenario and the ways in which edutainment robotics can be a valuable tool. In particular, we developed a mixed-reality game platform by adopting an HCD approach, which entailed the involvement of both parents and children and the collaboration with an educational center for children.
We identified the need to promote physically active play in response to the rise in sedentary behavior. Then, we carried out a questionnaire and a focus group with adults to get a better understanding of children’s play scenarios and of parents’ desires and concerns regarding technology. Children, instead, were involved in a preliminary test in the lab and, most of all, in the experimental application of the platform. In fact, we carried out the tests in the wild, situating our platform as part of a broader experience for school groups in an educational center. Here, through collaboration with the expert staff, we developed a customized version of the catching game, which was then tested with a large number of children who attended with their school groups. This choice of testing the platform in a real context as part of a broader educational experience allowed us to gain insights regarding the platform, the developed game experience, and the design process.
The choice of designing a game platform rather than a specific game product for fostering active play was positive because it resulted in two key values for the project: the customizability and the easy iterability of games. The characteristic of potentially supporting different kinds of robots and enabling the design of different games, on the one hand, enabled a collaborative conversation with the experts from the educational center for children. Instead of perceiving the project as a closed solution that the center was only asked to host, customizability allowed us to co-design a game specifically meant to fit one of its educational programs. On the other hand, it enabled easy iterability in the development of the games, which emerged as a key feature during the project, especially because only through prototyping and preliminary testing was it possible to notice that certain game typologies were not appropriate for the behaviors that could be obtained from certain robots.
In addition to these two key aspects, during the testing, the developed platform proved to be suitable for this kind of application in a real context. Despite being a prototype, in fact, the platform was robust enough to ensure the smooth flow of the activity, which was rarely interrupted by technical issues, and the timing was appropriate.
Regarding the game experience, the results of the test at the center show general appreciation of and excitement about the platform and the game. In particular, children enjoyed the possibility of controlling the robot through their own body movements. Nevertheless, some limitations affecting the gameplay emerged. First of all, some usability issues arose, such as elements disappearing from the playground, functioning that was not easily intelligible, and the fact that the game required great concentration. In particular, the level of concentration that a game requires appeared to be a crucial element affecting the quantity of children’s movement during the game. Secondly, even if most of the children appreciated playing with the platform, the majority preferred to play with the robotic ball via the mobile app. Some children specifically stated that they preferred this modality because of the smartphone, while many others highlighted that hitting the cubes was the reason for their preference. Another interesting aspect, though, is that the preference for and appreciation of the experiences were highly related to the presence of the robot. On the one hand, this confirms the robot’s engaging potential; on the other hand, it points to a need for further studies to understand how much this affects children’s preferences for, enjoyment of, and engagement with the games. These results point out how the success of a mixed-reality game that children can play with robots is related to at least two main aspects. On the one hand, embodied elements (both robots and other game elements) represent attractors in the game experience and are able to affect its appreciation by children. On the other hand, difficulty and concentration are factors that need to be carefully calibrated, because they can affect not only the game’s appreciation but also children’s behavior during play. For instance, the high level of concentration required by the platform resulted in a limited amount of body movement, in contrast with the initial aim of the platform.
Finally, regarding the process, the main finding that emerged from this experiment is that a human-centered design approach may lead to a reframing of the project’s purpose. This may have two different effects on a project. On the one hand, it can improve its relevance and appropriateness by allowing the designers to adapt to the findings of the studies during the process. On the other hand, it may lead to conflicting purposes and objectives. For instance, although the collaboration with the educational context resulted in a positive and enriching experience, the initial aim of promoting physically active play was in conflict with the aim of supporting the educational activities introduced by the experts of the educational center. At a practical level, this conflict translated into a game experience that was easily integrated into a broader educational experience but limited in terms of supporting physically active play.

Author Contributions

Conceptualization, M.L.L., G.P., C.G. and F.L.; Software, G.P.; Investigation, M.L.L. and G.P.; Writing—Original Draft Preparation, M.L.L.; Writing—Review & Editing, M.L.L., C.G. and F.L.

Funding

This research received no external funding.

Acknowledgments

This research project is developed and supported by TIM Jol CRAB lab, in collaboration with Xké? Il Laboratorio della curiosità, in Turin, which hosted, co-organized, and supported the test phase of the project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schaal, S. The new robotics: Towards human-centred machines. HFSP J. 2007, 1, 115–126.
  2. Fan, Y. User Acceptance Model Driven Design. Ph.D. Thesis, Monash University, Melbourne, Australia, 2008.
  3. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340.
  4. Beer, J.M.; Prakash, A.; Mitzner, T.L.; Rogers, W.A. Understanding Robot Acceptance; Technical Report HFA-TR-1103; Georgia Institute of Technology: Atlanta, GA, USA, 2011.
  5. Sabanovic, S. Robots in society, society in robots. Int. J. Soc. Robot. 2010, 2, 439–450.
  6. Lie, S.; Liu, D.; Bongers, B. A cooperative approach to the design of an operator control unit for a semi-autonomous grit-blasting robot. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA), Wellington, New Zealand, 3–5 December 2012.
  7. Huijnen, C.; Badii, A.; van den Heuvel, H.; Caleb-Solly, P.; Thiemert, D. Maybe it becomes a buddy, but do not call it a robot—Seamless cooperation between companion robotics and smart homes. In Proceedings of the International Joint Conference on Ambient Intelligence, Amsterdam, The Netherlands, 16–18 November 2011; pp. 324–329.
  8. Ros, R.; Baxter, P.; Nalin, M.; Looije, R.; Wood, R.; Demiris, Y. Child-robot interaction in the wild: Advice to the aspiring experimenter. In Proceedings of the 13th International Conference on Multimodal Interfaces (ICMI’11), Alicante, Spain, 14–18 November 2011; pp. 335–342.
  9. Baxter, P.; Kennedy, J.; Senft, E.; Lemaignan, S.; Belpaeme, T. From characterising three years of HRI to methodology and reporting recommendations. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand, 7–10 March 2016; pp. 391–398.
  10. Xu, Q.; Ng, J.; Cheong, Y.L.; Tan, O.; Wong, J.B.; Tay, T.C.; Park, T. The role of social context in human-robot interaction. In Proceedings of the Southeast Asian Network of Ergonomics Societies Conference (SEANES), Langkawi, Malaysia, 9–12 July 2012; pp. 1–5.
  11. Ryoo, M.S.; Fuchs, T.J.; Xia, L.; Aggarwal, J.K.; Matthies, L. Robot-centric activity prediction from first-person videos: What will they do to me? In Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI ’15), Portland, OR, USA, 2–5 March 2015; pp. 295–302.
  12. Lupetti, M.L.; Piumatti, G.; Rossetto, F. Phygital play: HRI in a new gaming scenario. In Proceedings of the 7th International Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN), Turin, Italy, 10–12 June 2015; pp. 17–21.
  13. Nourbakhsh, I.R.; Bobenage, J.; Grange, S.; Lutz, R.; Meyer, R.; Soto, A. An affective mobile robot educator with a full-time job. Artif. Intell. 1999, 114, 95–124.
  14. Schraft, D.R.; Graf, B.; Traub, A.; John, D. A mobile robot platform for assistance and entertainment. Ind. Robot Int. J. 2001, 28, 29–35.
  15. Yamazaki, A.; Yamazaki, K.; Ohyama, T.; Kobayashi, Y.; Kuno, Y. A techno-sociological solution for designing a museum guide robot: Regarding choosing an appropriate visitor. In Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA, USA, 5–8 March 2012; pp. 309–316.
  16. Lupetti, M.L.; Giuliano, L.; Germak, C. Virgil robot at Racconigi Castle: A design challenge. In Proceedings of the Seventh International Workshop on Human-Computer Interaction, Tourism and Cultural Heritage (HCITOCH 2016), Turin, Italy, 7–9 September 2016.
  17. Wiles, J.; Worthy, P.; Hensby, K.; Boden, M.; Heath, S.; Pounds, P.; Rybak, N.; Smith, M.; Taufotofua, J.; Weigel, J. Social cardboard: Pretotyping a social ethnodroid in the wild. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand, 7–10 March 2016; pp. 531–532.
  18. Osada, J.; Ohnaka, S.; Sato, M. The scenario and design process of childcare robot, PaPeRo. In Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology, Hollywood, CA, USA, 14–16 June 2006; p. 80.
  19. Sabelli, A.M.; Kanda, T.; Hagita, N. A conversational robot in an elderly care center: An ethnographic study. In Proceedings of the 6th International Conference on Human-Robot Interaction, Lausanne, Switzerland, 6–9 March 2011; pp. 37–44.
  20. Šabanović, S.; Reeder, S.; Kechavarzi, B. Designing robots in the wild: In situ prototype evaluation for a break management robot. J. Hum.-Robot Interact. 2014, 3, 70–88.
  21. Van Rijn, H.; Stappers, P.J. Expressions of ownership: Motivating users in a co-design process. In Proceedings of the Tenth Anniversary Conference on Participatory Design 2008, Bloomington, IN, USA, 1–4 October 2008.
  22. Forlizzi, J. How robotic products become social products: An ethnographic study of cleaning in the home. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Arlington, VA, USA, 10–12 March 2007; pp. 129–136.
  23. Sung, J.; Christensen, H.I.; Grinter, R.E. Robots in the wild: Understanding long-term use. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), San Diego, CA, USA, 11–13 March 2009; pp. 45–52.
  24. De Greeff, J.; Blanson, O.; Fraaije, A.; Solms, L.; Wigdor, N.; Bierman, B. Child-robot interaction in the wild: Field testing activities of the ALIZ-E project. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; pp. 148–149.
  25. Bohus, D.; Saw, C.W.; Horvitz, E. Directions robot: In-the-wild experiences and lessons learned. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems, Paris, France, 5–9 May 2014; pp. 637–644.
  26. Design Ethnography: Taking Inspiration from Everyday Life. Available online: https://www.stby.eu/wp/wp-content/uploads/2011/01/designet.pdf (accessed on 17 November 2017).
  27. Dillon, A. User acceptance of information technology. In Encyclopedia of Human Factors and Ergonomics; Karwowski, W., Ed.; Taylor and Francis: London, UK, 2001.
  28. Dautenhahn, K.; Woods, S.; Kaouri, C.; Walters, M.L.; Koay, K.L.; Werry, I. What is a robot companion-friend, assistant or butler? In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 1192–1197.
  29. Istat. Infanzia e Vita Quotidiana; Report Istat; Ministero del Lavoro e delle Politiche Sociali: Rome, Italy, 18 November 2011.
  30. Ipsos for Save the Children and Mondelez International Foundation. Lo Stile di Vita dei Bambini e Ragazzi Italiani. Available online: https://www.savethechildren.it/press/stili-di-vita-dei-bambini-italia-1-minore-su-5-non-svolge-attività-motorie-nel-tempo-libero (accessed on 17 November 2017).
  31. Beardsley, P.; Van Baar, J.; Raskar, R.; Forlines, C. Interaction using a handheld projector. IEEE Comput. Graph. Appl. 2005, 25, 39–43.
  32. Graham, L. Gestalt theory in interactive media design. J. Humanit. Soc. Sci. 2008, 2, 571.
  33. Kultima, A. Casual game design values. In Proceedings of the 13th International MindTrek Conference: Everyday Life in the Ubiquitous Era, New York, NY, USA, 30 September–2 October 2009; pp. 58–65.
  34. Vieira, L.C.; da Silva, F.S.C. Assessment of fun in interactive systems: A survey. Cogn. Syst. Res. 2017, 41, 130–143.
  35. McKenzie, T.L.; Sallis, J.F.; Nader, P.R. System for observing fitness instruction time. J. Teach. Phys. Educ. 1991, 11, 195–205.
  36. Ridgers, N.D.; Stratton, G.; McKenzie, T.L. Reliability and validity of the system for observing children’s activity and relationships during play (SOCARP). J. Phys. Act. Health 2010, 7, 17–25.
Figure 1. System architecture.
Figure 2. Phygital Play, a game platform concept.
Figure 3. The game of coordinates. From the left, the four phases of the game.
Figure 4. Schema of the test setups at the Children’s Museum.
Figure 5. Enjoyment reported by children about the Immersive Setup (IS), both with and without the robot.
Figure 6. Enjoyment reported by the observers about the Immersive Setup (IS), both with and without the robot.
Figure 7. Concentration reported by the observers for the Immersive Setup, both with and without the robot.
Figure 8. Preference of the two setups stated by children and some recurring motivations found in children’s comments.
