ABSTRACT
Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people's moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a "utilitarian" choice), and they were blamed more than their human counterparts when they did not make that choice. Though the utilitarian sacrifice was generally seen as permissible for human agents, they were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.
Sacrifice One For the Good of Many?: People Apply Different Moral Norms to Human and Robot Agents