DOI: 10.1145/2696454.2696458
HRI '15 Conference Proceedings
research-article

Sacrifice One For the Good of Many?: People Apply Different Moral Norms to Human and Robot Agents

Published: 02 March 2015

ABSTRACT

Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people's moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a "utilitarian" choice), and they were blamed more than their human counterparts when they did not make that choice. Though the utilitarian sacrifice was generally seen as permissible for human agents, they were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.


Published in

    HRI '15: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction
    March 2015
    368 pages
    ISBN: 9781450328838
    DOI: 10.1145/2696454

            Copyright © 2015 ACM


            Publisher

            Association for Computing Machinery

            New York, NY, United States


            Acceptance Rates

            HRI '15 Paper Acceptance Rate: 43 of 169 submissions, 25%
            Overall Acceptance Rate: 242 of 1,000 submissions, 24%
