Work in Progress
DOI: 10.1145/3613905.3650855

Should I Help a Delivery Robot? Cultivating Prosocial Norms through Observations

Published: 11 May 2024

Abstract

We propose leveraging prosocial observations to cultivate new social norms that encourage prosocial behavior toward delivery robots. In an online experiment, we quantitatively assess how norm beliefs about human-robot prosocial behavior update through observational learning. Results demonstrate that the initially perceived normativity of helping robots is influenced by familiarity with delivery robots and perceptions of robots' social intelligence. Observing human-robot prosocial interactions notably shifts people's normative beliefs about prosocial actions, thereby changing their perceived obligation to offer help to delivery robots. Additionally, we found that observing robots offering help to humans, rather than receiving it, more strongly increased participants' felt obligation to help robots. Our findings provide insights into prosocial design for future mobility systems. Improving familiarity with robot capabilities and portraying robots as desirable social partners can help foster wider acceptance. Furthermore, robots should be designed to exhibit higher levels of interactivity and the capability to reciprocate prosocial behavior.


Figure 1: A view of the futuristic environment presented to participants in the online study. The city depicts the coexistence of human and AI agents and their prosocial interactions.


1 INTRODUCTION AND BACKGROUND

Autonomous mobility systems such as delivery robots are increasingly present in social spaces such as hotels, restaurants, hospitals, and public roads. Beyond their primary function of transporting items from one location to another, these mobility robots are emerging as active participants in social interactions, with both intended users and bystanders [13, 30, 33, 37]. Emerging evidence indicates that people perceive these robots as more than mere tools, often anthropomorphizing them and subjecting them to social expectations [43]. This evolving dynamic suggests that mobility robots represent a unique social category, necessitating thoughtful integration into the fabric of human society.

The introduction of mobility robots on roads brings potential benefits. Yet their widespread adoption is hindered by a lack of transparency and understanding, as well as limited public acceptance compared to traditional modes of mobility [2, 43, 45]. As highlighted by [7, 23, 41], the deployment of delivery robots in urban areas has caused noticeable tensions. To fully harness the potential of autonomous mobility and foster a harmonious relationship between human road users and autonomous agents on roads, it is crucial to develop strategies that encourage prosocial interactions between humans and robots [16, 38].

Prosocial behaviors, defined as voluntary behaviors intended to benefit others without guaranteed rewards to the helper [17, 21, 31], are prevalent in mobility contexts. They include actions such as yielding and signaling, through which road users commonly offer assistance to, and expect to receive it from, one another [25, 42]. Recruiting help from human road users is crucial for addressing the challenges faced by delivery robots deployed in dynamic environments [15].

Prior research in Human-Computer Interaction (HCI) has demonstrated success in eliciting human help in certain controlled and real-life environments [24, 27, 32, 34, 35, 40, 44], with decisions to help influenced by factors such as the situational context, the robot’s physical design (ranging from anthropomorphic to purely functional) [26], the affective response elicited by the robot (such as the psychology of ‘kawaii’, [29]), and the robot’s signaling for help [10, 28, 39].

However, the prosocial behaviors observed towards robots in one-off interactions may still be influenced by the novelty effect [15]. The reliability of situation- and robot-related factors in consistently eliciting prosocial norms across people who may hold different perceptions of interactions with robots remains questionable [4]. In response, we propose to mobilize social observation [12] and conformity to instigate new human-robot intergroup prosocial norms. Social norms, which are upheld through the monitoring, punishing, and rewarding of human actions [8, 18], are pivotal in promoting prosocial behavior [3, 6, 11, 22] and can lead to enduring behavioral changes [9, 19, 36].

This paper reports on a randomized controlled experiment in a high visual-fidelity simulation environment to investigate how observational learning can promote prosocial behavior towards delivery robots. We examine two types of observations—humans helping robots and robots helping humans—to compare their effectiveness in instigating these prosocial norms.

This paper makes several key contributions to the field. Firstly, it demonstrates the feasibility of using social observation and conformity to establish new prosocial norms between humans and robots. Secondly, it identifies the impact of different prosocial observation scenarios, specifically comparing instances where a robot acts as a helper versus scenarios where the robot is being helped, on the development of prosocial norm beliefs. Thirdly, this research adds to the literature on human-delivery robot interactions by introducing a quantitative, randomized controlled experiment anchored in psychological theories of social norms. Lastly, it provides insights for the interaction design of future mobility robots, highlighting the significance of the prosocial behaviors displayed by these robots in promoting prosocial norms in their interactions.


2 RESEARCH OBJECTIVES

As a first step towards exploring the use of social observation as a tool to foster new prosocial norms in human-robot interactions, we conducted an online experiment on a high visual-fidelity simulation platform to answer the following research questions (RQs):

RQ1: What are the prevailing normative beliefs about prosocial interactions with robots?

RQ2: What factors influence people’s perceptions of the normativity of assisting robots?

RQ3: Can observations lead to a change in beliefs about prosocial norms, and how do these changes impact the perceived obligation to act prosocially?

RQ4: Which observation type (robots acting as helpers or as beneficiaries) more effectively fosters prosocial behavioral norms towards robots?


3 METHODS

3.1 Study Design

This study utilized a mixed design, incorporating both between-subject and within-subject factors. Specifically, it featured three between-subject observation conditions: 1) human helping human, 2) human helping robot, and 3) robot helping human. The within-subject factor involved three scenarios: 1) warning of oncoming car, 2) notification of road closure, and 3) picking up misplaced trash. Each participant was exposed to all three scenarios. Participants were randomly assigned to one of the observation conditions and were subjected to repeated observations within their assigned condition.
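As an illustration of this design, the following minimal Python sketch shows how a participant could be assigned to a condition and to shuffled scenario orders. The condition and scenario labels come from the paper; the function, seeding, and data layout are hypothetical, not the authors' implementation.

```python
# Minimal sketch of the mixed design: one between-subject observation
# condition per participant, all three within-subject scenarios per round.
# Condition and scenario names follow the paper; everything else is illustrative.
import random

CONDITIONS = ["human helping human", "human helping robot", "robot helping human"]
SCENARIOS = ["warning of oncoming car", "notification of road closure",
             "picking up misplaced trash"]

def assign_participant(pid: int, seed: int = 0) -> dict:
    rng = random.Random(seed + pid)       # reproducible per-participant draw
    condition = rng.choice(CONDITIONS)    # between-subject factor
    round_1 = rng.sample(SCENARIOS, k=3)  # first observation round
    round_2 = rng.sample(SCENARIOS, k=3)  # second round, varied sequence
    return {"participant": pid, "condition": condition,
            "rounds": [round_1, round_2]}

print(assign_participant(42))
```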

3.2 Participants

The online study sample comprised 210 native English speakers in the United States, all recruited on Prolific; 47.1% were female-identifying. Participants' ages ranged from 19 to 75, with a median of 38.5. 31.9% of the participants lived in urban areas, 53.8% in suburbs, and 12.9% in rural areas. The mean rating of participants' self-reported familiarity with robots was 2.8 on a scale from 1 to 7. Research protocols and procedures were approved by the bioethics committee (anonymized).

3.3 Materials

The online study was conducted using video recordings from a custom high visual-fidelity simulator. The simulator environment was rendered using Unreal Engine 5.1.1 [1]. The environment represented an urban setting in which multiple types of road users share the space, with no areas dedicated to any single type. The virtual environment aims to recreate the shared-space smart cities planned and encouraged in several cities across the EU and North America [14]. To this effect, participants were shown walking in a park-like environment among different types of road actors, including pedestrians (of all age groups and genders), food cart vendors, delivery robots, and small cart-like self-driving cars. The simulator environment is shown in Figure 1.


Figure 2: Snippets of Scenario 1: warning of oncoming car. (a) Pedestrians walk on a park-like road. (b) A pedestrian warns a group of pedestrians about a fast-approaching car. (c) The group of pedestrians stops and lets the car go past. (d) The group expresses gratitude for the pedestrian's prosocial behavior.

To assess people's attitudes toward prosocial behaviors directed at other road users, we placed participants in specific scenarios. Reflecting the challenges discussed at the beginning of this paper, we designed scenarios in a future mobility context where delivery robots are widely present and share roads with pedestrians. We specifically targeted prosocial behaviors that 1) incur a small cost to the helper, 2) are not already deemed a requirement, leaving room for learning through observation, and 3) are within a delivery robot's capability to reciprocate.

Through a collaborative iterative design process, we crafted three prosocial interaction scenarios: 1) warning of oncoming car, 2) notification of road closure, and 3) picking up misplaced trash. Each scenario involves a 'helper' (the actor performing the prosocial behavior) and a 'beneficiary' (the recipient of the help), either of whom could be a human pedestrian or a delivery robot. Videos of the pedestrian walking in the simulation environment were recorded from a third-person view for observation trials (where participants watch the scenarios as bystanders) and a first-person view for decision trials (where participants assume the role of a potential helper). The videos also included visual cues to aid participants' understanding of the scenarios. Figure 2 illustrates the oncoming-car warning scenario. Full videos of the scenarios are included in the supplementary materials.

3.4 Measures

At the outset of the study, before any exposure to experimental stimuli or manipulations, we gauged people's prior experience with and understanding of delivery robots. We measured self-reported robot familiarity ("How familiar are you with delivery robots?" on a 7-point scale from Not at all to Very familiar) and perceived social intelligence of delivery robots using the short-form Perceived Social Intelligence (PSI) Scale [5]. Perceived social intelligence was measured along two dimensions: social presentation (a robot's appeal as a social partner) and social information processing (a robot's capability to work alongside humans). Furthermore, we employed the Prosociality Scale [20] to measure participants' inherent tendencies towards prosocial behavior, anticipated to be a predictive factor for prosocial norm beliefs.

In the main experiment, we first measured participants' baseline normative beliefs regarding the expected prosocial actions of either a human pedestrian or a delivery robot in the three mobility scenarios. This was done by presenting participants with normative statements such as "A human pedestrian [observed helper] is to inform a delivery robot [observed beneficiary] of the road closure." Participants expressed the perceived degree of normativity by moving a slider on a scale anchored at -10 (Prohibited), -5 (Discouraged), 0 (Allowed), 5 (Encouraged), and 10 (Required). Following two rounds of observations in their assigned condition, we measured participants' updated normative beliefs.

In the decision trials that followed, participants were presented with normative statements such as “I am to inform a delivery robot [potential beneficiary] of the road closure.” They used the same slider bar and scale to indicate their sense of obligation to perform prosocial actions in each scenario.

3.5 Procedure

Upon obtaining informed consent, we administered a pre-experiment survey to gather demographic data, assess participants' self-reported familiarity with delivery robots, and evaluate their current perceptions of these robots using the 20-item PSI scale [5].

Participants were then introduced to the experimental context through a brief text. This text aimed to immerse them in a hypothetical urban environment set in the year 2050, characterized by the widespread use of autonomously operated delivery robots that navigate alongside human pedestrians.

Subsequently, they viewed a one-minute narrated introduction video, designed to be representative of the series of video stimuli they would encounter throughout the experiment. The videos depicted a futuristic city center from a first-person perspective, highlighting shared street scenes with equal numbers of human pedestrians and mobility robots. Participants were instructed to view these videos as if they were experiencing the scene through Augmented Reality glasses while walking through the city center. The videos are included in the supplementary materials.

The main experiment commenced with an initial series of observation trials. Midway through these videos, right before the prosocial acts took place, we assessed participants’ baseline normative beliefs for each of the three scenarios. This was followed by the second round of observation trials, featuring the same scenarios in a varied sequence. After these repeated observations, we measured the participants’ updated normative beliefs.

Subsequently, participants engaged in decision trials. In these trials, the videos depicted similar scenarios, but occurring in closer proximity to the participant's own viewpoint, thereby prompting them to gauge their own inclination to act prosocially. Participants rated the extent to which they felt normatively compelled to take action.

Finally, at the end of the main experiment, we evaluated participants’ prosocial inclinations using a survey developed by [20] and solicited general feedback for the study through a post-experiment survey.


4 RESULTS

In this section, we present our analyses of participants’ normative beliefs measured at three distinct time points: baseline normative beliefs prior to prosocial observations, post-observation normative beliefs, and the perceived normativity of helping humans and robots during decision trials. Descriptive statistics are summarized in Table 1.

Table 1: Descriptive statistics of norm beliefs measured for the three observation conditions at various time points, on a scale from -10 (Prohibited) to 10 (Required)

Observation condition | Baseline normative belief | Post-observation normative belief | Decision-trial normative rating, robot beneficiary | Decision-trial normative rating, human beneficiary
robot helping human (RH) | M = 2.58, SD = 3.65 | M = 4.18, SD = 3.09 | M = 4.31, SD = 3.18 | M = 4.51, SD = 3.44
human helping robot (HR) | M = 3.19, SD = 4.69 | M = 4.72, SD = 3.13 | M = 2.23, SD = 3.56 | M = 4.58, SD = 3.41
human helping human (HH) | M = 4.01, SD = 4.96 | M = 6.99, SD = 3.47 | M = 1.58, SD = 4.42 | M = 4.49, SD = 3.64

4.1 Baseline normative belief

First, we examined participants' initial normative beliefs about helping human pedestrians versus delivery robots in realistic mobility contexts (RQ1). To do this, we used a mixed-effects model with participant and scenario as random effects, and observation condition (robot-helping-human, human-helping-robot, or human-helping-human; see Table 1), participant prosociality (measured with the Prosociality Scale [20]), and their interactions as fixed effects. Reverse Helmert contrasts were used to compare the intergroup helping conditions [robot-helping-human (RH) vs. human-helping-robot (HR)] against the within-group helping condition [human-helping-human (HH)].
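To make the model concrete, here is a minimal sketch in Python with statsmodels of how such an analysis could be set up. The file name, column names, and the variance-component encoding of the crossed random effects are our assumptions, not the authors' analysis code; ordering the condition levels as RH, HR, HH lets patsy's Helmert coding (each level compared against the mean of the preceding levels, i.e., reverse Helmert) reproduce the two reported contrasts.

```python
# Sketch of the baseline-belief mixed-effects model (assumed data layout:
# one row per participant x scenario, long format).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("baseline_beliefs.csv")  # hypothetical file with columns:
# rating, condition (RH/HR/HH), prosociality, participant, scenario

# Level order chosen so Helmert contrast 1 = HR vs. RH and
# contrast 2 = HH vs. the mean of the two intergroup conditions.
df["condition"] = pd.Categorical(df["condition"],
                                 categories=["RH", "HR", "HH"], ordered=True)

# Crossed random intercepts for participant and scenario, expressed as
# variance components over a single all-encompassing group.
df["all"] = 1
model = smf.mixedlm(
    "rating ~ C(condition, Helmert) * prosociality",
    data=df,
    groups="all",
    re_formula="0",  # no random intercept for the dummy group itself
    vc_formula={"participant": "0 + C(participant)",
                "scenario": "0 + C(scenario)"},
)
print(model.fit().summary())
```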

Before any experimental exposure, intergroup helping [human-helping-robot(HR) or robot-helping-human(RH)] was perceived as less normative (t = −2.41, p = .01) compared to within-group helping [human-helping-human(HH)]. However, no significant baseline differences were observed between the two treatment groups (t = 1.57, p = .1). As expected, participants’ prosocial inclinations positively correlated with higher normative ratings of prosocial behaviors (t = 2.21, p = .02). Those with lower prosocial inclinations viewed intergroup (Human-Robot) helping as less normative compared to within-group (Human-Human) helping (t = −1.90, p = .05).

However, a participant's general prosociality was not the strongest predictor of their beliefs about the normativity of humans helping delivery robots. To address RQ2, we modified the mixed-effects model to include participants' familiarity with delivery robots and their perceptions of the robots' social information processing abilities and social presentation characteristics, along with their interactions, as predictors of the baseline normative belief that humans should help robots. The analysis revealed that perceived social presentation characteristics (t = 2.66, p = .01) and social information processing abilities (t = −3.04, p < .005) had a greater impact than prosociality (t = 1.09, p = .28) in shaping these beliefs. Higher ratings of robots' social presentation traits correlated with stronger normative beliefs in favor of intergroup prosocial interactions. Furthermore, perceived social information processing abilities interacted with familiarity: lower perceived abilities led to stronger normative beliefs in favor of helping robots, especially among participants less familiar with delivery robots (t = 2.93, p < .005). Further analysis showed that the perceived social intelligence of robots mediates the effect of familiarity on the normative belief of helping robots (ACME = .19, p < .001; ADE = .27, p = .14). In essence, greater familiarity with delivery robots led to higher perceptions of their social intelligence, which in turn reduced the perceived obligation to assist them.
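The reported ACME (average causal mediation effect) and ADE (average direct effect) correspond to a standard causal mediation decomposition. A hedged sketch of such an analysis with statsmodels' Mediation class follows; all file and column names are hypothetical.

```python
# Sketch: does perceived social intelligence (psi) mediate the effect of
# familiarity on the normative belief of helping robots?
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.mediation import Mediation

df = pd.read_csv("hr_baseline.csv")  # hypothetical per-participant data

# Mediator model: perceived social intelligence predicted by familiarity.
mediator_model = smf.ols("psi ~ familiarity", data=df)
# Outcome model: normative belief predicted by familiarity and the mediator.
outcome_model = smf.ols("belief ~ familiarity + psi", data=df)

med = Mediation(outcome_model, mediator_model,
                exposure="familiarity", mediator="psi")
print(med.fit(n_rep=1000).summary())  # reports ACME, ADE, total effect
```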

4.2 Post-observation normative belief change

After the two rounds of prosocial observation trials, we assessed changes in participants' normative beliefs. A repeated measures ANOVA confirmed a significant update (F(1) = 4.06, p < .05) in participants' norm beliefs between the two measurement points (before and after the observations), indicating the effectiveness of the observations in altering people's prosocial norm beliefs (RQ3). A pair of simple-effect analyses for the two intergroup prosocial observation conditions [robot-helping-human (RH) vs. human-helping-robot (HR)] revealed that changes in the belief that humans should assist robots (HR) were influenced by participants' prior familiarity with delivery robots (t = 2.65, p = .01) and their perceived social intelligence of these robots (measured by the PSI scale [5]; t = 2.61, p = .01). Additionally, there was a significant interaction between these two factors (t = −2.9, p < .005), indicating that the effect of familiarity on norm belief change was moderated by the level of perceived social intelligence in the robots. In contrast, these factors did not affect changes in norm beliefs regarding whether delivery robots should assist human pedestrians (RH; ts < .65, ps > .5).
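For illustration, the pre/post comparison could be run as follows with statsmodels' AnovaRM. The data layout and column names are assumed (one aggregated rating per participant per time point), and note that AnovaRM handles only within-subject factors, so the condition-level simple effects reported above would require separate models.

```python
# Sketch of the repeated measures ANOVA on norm beliefs before vs. after
# the observation trials (column names assumed).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("norm_beliefs_long.csv")
# columns: participant, time ("baseline" or "post"), rating

res = AnovaRM(df, depvar="rating", subject="participant",
              within=["time"], aggregate_func="mean").fit()
print(res)  # F-test for the baseline -> post-observation belief update
```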

4.3 Decision trial normative rating

Finally, we wanted to understand the impact of previous observations on individuals' perceived obligation to act prosocially in similar situations. We built a mixed-effects model predicting people's normative rating in decision trials (i.e., their own sense of obligation to assist a delivery robot in a given situation), with participant and scenario as random effects and fixed effects of the observation condition (see Table 1), beneficiary (potential recipient of help), and their interaction. The results displayed a general trend: participants across all observation conditions felt a stronger obligation to assist human pedestrians than robots (t = 11.17, p < .001). Notably, observations of robots helping humans (RH) were significantly more influential (t = 5.33, p < .001) in fostering a sense of obligation to help delivery robots than observations of humans helping robots (HR). This finding, illustrated in Figure 3, answers our fourth research question (RQ4): observations of robots acting as helpers, rather than beneficiaries, more effectively promote prosocial behavioral norms towards robots.


Figure 3: Distribution of the normative behavior rating across the three observation conditions
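The decision-trial model can be sketched in the same way as the baseline model, now with the potential beneficiary and its interaction with observation condition as fixed effects. As before, the file and column names and the crossed-random-effects encoding are our assumptions.

```python
# Sketch of the decision-trial mixed-effects model (assumed long format:
# one row per participant x scenario x beneficiary).
import pandas as pd
import statsmodels.formula.api as smf

ddf = pd.read_csv("decision_trials.csv")  # hypothetical file with columns:
# rating, condition (RH/HR/HH), beneficiary (human/robot), participant, scenario

ddf["all"] = 1  # single group; crossed random effects as variance components
model = smf.mixedlm(
    "rating ~ C(condition) * C(beneficiary)",
    data=ddf,
    groups="all",
    re_formula="0",
    vc_formula={"participant": "0 + C(participant)",
                "scenario": "0 + C(scenario)"},
)
print(model.fit().summary())
```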

To examine the impact of changes in normative beliefs (described in section 4.2) on participants’ expressed obligation to assist robots during decision trials (addressing the second part of RQ3), we extended the previous model to incorporate post-observation norm belief changes as a predictive factor. Findings revealed a notable distinction between the two treatment observation groups (RH and HR, outlined in Table 1). Specifically, positive changes in the belief that robots should help humans (RH) significantly increased participants’ feelings of obligation to help robots (t = 2.79, p = .005). This suggests that participants who observed robots helping humans not only learned but also began to internalize the robot-helping-human norm, leading to a heightened sense of reciprocal obligation towards robots.

Contrary to what might be expected, observing humans helping robots (HR) did not yield a similar effect. This outcome challenges the assumption that direct observation of human-helping-robot norms would more directly influence learning. Learning about human-helping-robot norms from a third-person viewpoint may not be as compelling for norm internalization, particularly in the absence of norm enforcement mechanisms. Our interpretation highlights the role of reciprocal expectations, rooted in the robot-helping-human norm, in fostering a self-motivated drive for prosocial behavior.


5 DISCUSSION AND CONCLUSION

Drawing on psychological theories of social norms, we proposed leveraging prosocial observations to cultivate new prosocial norms toward delivery robots. Our randomized controlled online experiment quantitatively evaluated changes in the perceived normativity of human-robot prosocial behaviors at three stages: baseline (Section 4.1), post-observation (Section 4.2), and the subsequent decision trials (Section 4.3), in which participants assumed the role of potential helpers.

The study results address the four research questions outlined in Section 2. First, addressing RQ1 and RQ2, we found that people's initial norm beliefs about helping robots are influenced by their familiarity with delivery robots and their perceptions of these robots' social intelligence. This suggests that educating community members about mobility robots' capabilities to improve familiarity, and portraying the robots as desirable social partners, can enhance the acceptance of mobility robots in public spaces. Next, in response to RQ3, our results indicate that the observations notably shifted normative beliefs about prosocial actions and subsequently influenced people's perceived obligation to offer help to delivery robots. This illustrates the effectiveness of leveraging observational learning to induce norm belief changes. Lastly, addressing RQ4, our experiment, which assigned participants to one of three observation conditions (Robot-helping-Human, Human-helping-Robot, Human-helping-Human), revealed that observing robots assisting humans (rather than being assisted) more significantly increased participants' feelings of obligation to help robots. Our interpretation of this result highlights the role of reciprocal expectations in human-robot interactions. To encourage prosocial human behavior towards robots in real-world settings, it is crucial to design robots that exhibit higher levels of interactivity and the ability to reciprocate assistance.

The presented study is subject to several limitations. First, the study was conducted online, presenting scenarios through videos and relying exclusively on self-reported measures in response to these stimuli. Such an approach, while accessible and broad in reach, may not fully capture the complexity of real-world interactions or accurately predict actual behavior toward robots. Second, by situating the study in a futuristic context, we aimed to shift focus from the safety and performance of delivery robots to the possibility of engaging with them socially. However, it remains uncertain whether these findings can instigate real-life behavioral changes or whether such changes would persist beyond the experimental session. Finally, our research utilized a single generic model of a delivery robot, leaving the applicability of our results to other robot types within and beyond the mobility context unexplored. To mitigate some of these limitations, we plan to conduct an in-person study using virtual reality. This method will enhance realism and immersion, allowing for direct measurement of prosocial behaviors via eye tracking, motor responses, and physiological data. Furthermore, incorporating a qualitative study will deepen our understanding of prosociality through triangulated measures, providing a richer analysis of human-robot interactions.

Overall, our research contributes to the field by identifying individual prosocial inclination, familiarity with a specific type of robot, and the perceived social intelligence of a robot as key factors shaping prevailing normative beliefs in human-robot prosocial interactions. We demonstrate how observational learning from robot-to-human prosocial interactions can promote human prosocial behavior towards delivery robots, fostering new norms that enhance the acceptance and integration of mobility robots in society and advancing the harmonious coexistence of humans and mobility robots in public spaces.

Supplemental Material

3613905.3650855-talk-video.mp4 (Talk Video, mp4, 7.1 MB)

References

[1] 2022. Unreal Engine 5. https://www.unrealengine.com/en-US/unreal-engine-5
[2] Anna M. H. Abrams, Pia S. C. Dautzenberg, Carla Jakobowsky, Stefan Ladwig, and Astrid M. Rosenthal-von der Pütten. 2021. A Theoretical and Empirical Reflection on Technology Acceptance Models for Autonomous Delivery Robots. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI '21). Association for Computing Machinery, New York, NY, USA, 272–280. https://doi.org/10.1145/3434073.3444662
[3] Per A. Andersson. 2022. Norms in Prosocial Decisions: The Role of Observability, Avoidance, and Conditionality. Linköping Studies in Behavioural Science, Vol. 241. Linköping University Electronic Press, Linköping. https://doi.org/10.3384/9789179293291
[4] Markus Bajones, Astrid Weiss, and Markus Vincze. 2017. Investigating the Influence of Culture on Helping Behavior Towards Service Robots. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17). Association for Computing Machinery, New York, NY, USA, 75–76. https://doi.org/10.1145/3029798.3038318
[5] Kimberly A. Barchard, Leiszle Lapping-Carr, R. Shane Westfall, Santosh Balajee Banisetty, and David Feil-Seifer. 2018. Perceived Social Intelligence (PSI) Scales Test Manual (August 2018).
[6] C. Daniel Batson, Nadia Ahmad, and E. L. Stocks. 2011. Four forms of prosocial motivation: Egoism, altruism, collectivism, and principlism. In Social Motivation. Psychology Press, New York, NY, USA, 103–126.
[7] Cynthia Bennett, Emily Ackerman, Bonnie Fan, Jeffrey Bigham, Patrick Carrington, and Sarah Fox. 2021. Accessibility and The Crowded Sidewalk: Micromobility's Impact on Public Space. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (DIS '21). Association for Computing Machinery, New York, NY, USA, 365–380. https://doi.org/10.1145/3461778.3462065
[8] Cristina Bicchieri. 2006. The Grammar of Society: The Nature and Dynamics of Social Norms. Cambridge University Press, New York.
[9] Cristina Bicchieri. 2017. Norms in the Wild: How to Diagnose, Measure, and Change Social Norms. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190622046.001.0001
[10] Annika Boos, Markus Zimmermann, Monika Zych, and Klaus Bengler. 2022. Polite and Unambiguous Requests Facilitate Willingness to Help an Autonomous Delivery Robot and Favourable Social Attributions. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE. https://doi.org/10.1109/ro-man53752.2022.9900870
[11] Islam Borinca, Luca Andrighetto, Giulia Valsecchi, and Jacques Berent. 2022. Ingroup norms shape understanding of outgroup prosocial behaviors. Group Processes & Intergroup Relations 25, 4 (June 2022), 1084–1106. https://doi.org/10.1177/1368430220987604
[12] Hilmar Brohmer, Andreas Fauler, Caroline Floto, Ursula Athenstaedt, Gayannée Kedia, Lisa V. Eckerstorfer, and Katja Corcoran. 2019. Inspired to Lend a Hand? Attempts to Elicit Prosocial Behavior Through Goal Contagion. Frontiers in Psychology 10 (March 2019), 545. https://doi.org/10.3389/fpsyg.2019.00545
[13] Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda. 2015. Escaping from Children's Abuse of Social Robots. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15). Association for Computing Machinery, New York, NY, USA, 59–66. https://doi.org/10.1145/2696454.2696468
[14] Federico Cugurullo, Ransford A. Acheampong, Maxime Gueriau, and Ivana Dusparic. 2021. The transition to autonomous cars, the redesign of cities and the future of urban sustainability. Urban Geography 42, 6 (2021), 833–859.
[15] Anna Dobrosovestnova, Isabel Schwaninger, and Astrid Weiss. 2022. With a Little Help of Humans. An Exploratory Study of Delivery Robots Stuck in Snow. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). 1023–1029. https://doi.org/10.1109/RO-MAN53752.2022.9900588
[16] Judith Dörrenbächer, Marc Hassenzahl, Robin Neuhaus, and Ronda Ringfort-Felner. 2022. Towards Designing Meaningful Relationships with Robots. 3–29. https://doi.org/10.1201/9781003287445-1
[17] Nancy Eisenberg and Paul A. Miller. 1987. The relation of empathy to prosocial and related behaviors. Psychological Bulletin 101, 1 (1987), 91–119. https://doi.org/10.1037/0033-2909.101.1.91
[18] Ernst Fehr and Urs Fischbacher. 2004. Social norms and human cooperation. Trends in Cognitive Sciences 8, 4 (April 2004), 185–190. https://doi.org/10.1016/j.tics.2004.02.007
[19] Sergey Gavrilets and Peter J. Richerson. 2017. Collective action and the evolution of social norm internalization. Proceedings of the National Academy of Sciences 114, 23 (June 2017), 6068–6073. https://doi.org/10.1073/pnas.1703857114
[20] Gian Vittorio Caprara, Cristina Capanna, Patrizia Steca, and Marinella Paciello. 2005. Misura e determinanti personali della prosocialità. Un approccio sociale cognitivo. Giornale Italiano di Psicologia 2 (2005), 287–308. https://doi.org/10.1421/20313
[21] J. E. Grusec, Paul Hastings, and A. Almas. 2011. Helping and prosocial behavior. Handbook of Childhood Social Development (Jan. 2011), 549–566.
[22] Simon Gächter, Daniele Nosenzo, and Martin Sefton. 2013. Peer Effects in Pro-Social Behavior: Social Norms or Social Preferences? Journal of the European Economic Association 11, 3 (June 2013), 548–573. https://doi.org/10.1111/jeea.12015
[23] Howard Han, Franklin Mingzhe Li, Nikolas Martelaro, Daragh Byrne, and Sarah E. Fox. 2023. The Robot in Our Path: Investigating the Perceptions of People with Motor Disabilities on Navigating Public Space Alongside Sidewalk Robots. In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '23). Association for Computing Machinery, New York, NY, USA, Article 58, 6 pages. https://doi.org/10.1145/3597638.3614508
[24] Chenlin Hang, Tetsuo Ono, and Seiji Yamada. 2022. Perspective-taking of Virtual Agents for Promoting Prosocial Behaviors. In Proceedings of the 10th International Conference on Human-Agent Interaction (HAI '22). ACM. https://doi.org/10.1145/3527188.3563932
[25] Sherrie-Anne Kaye, David Rodwell, Natalie Watson-Brown, Chae Rose, and Lisa Buckley. 2022. Road users' engagement in prosocial and altruistic behaviors: A systematic review. Journal of Safety Research 82 (Sept. 2022), 342–351. https://doi.org/10.1016/j.jsr.2022.06.010
[26] Ran Hee Kim, Yeop Moon, Jung Ju Choi, and Sonya S. Kwak. 2014. The effect of robot appearance types on motivating donation. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI '14). Association for Computing Machinery, New York, NY, USA, 210–211. https://doi.org/10.1145/2559636.2563685
[27] Risa Maeda, Dražen Brščić, and Takayuki Kanda. 2021. Influencing Moral Behavior Through Mere Observation of Robot Work: Video-based Survey on Littering Behavior. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI '21). Association for Computing Machinery, New York, NY, USA, 83–91. https://doi.org/10.1145/3434073.3444680
[28] Dorothea Ulrike Martin, Conrad Perry, Madeline Isabel MacIntyre, Luisa Varcoe, Sonja Pedell, and Jordy Kaufman. 2020. Investigating the nature of children's altruism using a social humanoid robot. Computers in Human Behavior 104 (2020), 106149. https://doi.org/10.1016/j.chb.2019.09.025
[29] Hiroshi Nittono. 2022. The Psychology of "Kawaii" and Its Implications for Human-Robot Interaction. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE. https://doi.org/10.1109/hri53351.2022.9889591
[30] Tatsuya Nomura, Takayuki Kanda, Hiroyoshi Kidokoro, Yoshitaka Suehiro, and Sachie Yamada. 2016. Why do children abuse robots? Interaction Studies / Social Behaviour and Communication in Biological and Artificial Systems 17, 3 (Dec. 2016), 347–369. https://doi.org/10.1075/is.17.3.02nom
[31] Raquel Oliveira, Patrícia Arriaga, and Ana Paiva. 2021. Human-Robot Interaction in Groups: Methodological and Research Practices. Multimodal Technologies and Interaction 5, 10 (Sept. 2021), 59. https://doi.org/10.3390/mti5100059
[32] Ana Paiva, Fernando Santos, and Francisco Santos. 2018. Engineering Pro-Sociality With Autonomous Agents. Proceedings of the AAAI Conference on Artificial Intelligence 32, 1 (April 2018). https://doi.org/10.1609/aaai.v32i1.12215
[33] Hannah R. M. Pelikan, Stuart Reeves, and Marina N. Cantarutti. 2024. Encountering Autonomous Robots on Public Streets. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24). ACM. https://doi.org/10.1145/3610977.3634936
[34] Jochen Peter, Rinaldo Kühne, and Alex Barco. 2021. Can social robots affect children's prosocial behavior? An experimental study on prosocial robot models. Computers in Human Behavior 120 (July 2021), 106712. https://doi.org/10.1016/j.chb.2021.106712
[35] Andreea-Elena Potinteu, Nadia Said, Georg Jahn, and Markus Huff. 2023. An Insight into Humans Helping Robots: The Role of Knowledge, Attitudes, Anthropomorphic Cues, and Context of Use. https://doi.org/10.31234/osf.io/kuh8z
[36] Deborah Prentice and Elizabeth Levy Paluck. 2020. Engineering social change using social norms: lessons from the study of collective action. Current Opinion in Psychology 35 (Oct. 2020), 138–142. https://doi.org/10.1016/j.copsyc.2020.06.012
[37] Astrid Rosenthal-von der Pütten, David Sirkin, Anna Abrams, and Laura Platte. 2020. The Forgotten in HRI: Incidental Encounters with Robots in Public Spaces. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20). Association for Computing Machinery, New York, NY, USA, 656–657. https://doi.org/10.1145/3371382.3374852
[38] Hatice Sahin, Heiko Mueller, Shadan Sadeghian, Debargha Dey, Andreas Löcken, Andrii Matviienko, Mark Colley, Azra Habibovic, and Philipp Wintersberger. 2021. Workshop on Prosocial Behavior in Future Mixed Traffic. In 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, Leeds, United Kingdom, 167–170. https://doi.org/10.1145/3473682.3477438
[39] Francisco C. Santos, Jorge M. Pacheco, and Brian Skyrms. 2011. Co-evolution of pre-play signaling and cooperation. Journal of Theoretical Biology 274, 1 (2011), 30–35. https://doi.org/10.1016/j.jtbi.2011.01.004
[40] Vasant Srinivasan and Leila Takayama. 2016. Help Me Please: Robot Politeness Strategies for Soliciting Help From Humans. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). Association for Computing Machinery, New York, NY, USA, 4945–4955. https://doi.org/10.1145/2858036.2858217
[41] Miguel Valdez and Matthew Cook. 2023. Humans, robots and artificial intelligences reconfiguring urban life in a crisis. Frontiers in Sustainable Cities 5 (2023). https://www.frontiersin.org/articles/10.3389/frsc.2023.1081821
[42] Nicholas J. Ward, Kari Finley, Jay Otto, David Kack, Rebecca Gleason, and T. Lonsdale. 2020. Traffic safety culture and prosocial driver behavior for safer vehicle-bicyclist interactions. Journal of Safety Research 75 (Dec. 2020), 24–31. https://doi.org/10.1016/j.jsr.2020.07.003
[43] David Weinberg, Healy Dwyer, Sarah E. Fox, and Nikolas Martelaro. 2023. Sharing the Sidewalk: Observing Delivery Robot Interactions with Pedestrians during a Pilot in Pittsburgh, PA. Multimodal Technologies and Interaction 7, 5 (May 2023), 53. https://doi.org/10.3390/mti7050053
[44] Astrid Weiss, Judith Igelsböck, Manfred Tscheligi, Andrea Bauer, Kolja Kühnlenz, Dirk Wollherr, and Martin Buss. 2010. Robots asking for directions: the willingness of passers-by to support robots. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI '10). IEEE Press, Osaka, Japan, 23–30.
[45] Jochen Wirtz, Paul G. Patterson, Werner H. Kunz, Thorsten Gruber, Vinh Nhat Lu, Stefanie Paluch, and Antje Martins. 2018. Brave new world: service robots in the frontline. Journal of Service Management 29, 5 (Sept. 2018), 907–931. https://doi.org/10.1108/josm-04-2018-0119
