
The Limits of Logic-Based Inherent Safety of Social Robots

  • Chapter
Philosophy and Engineering

Part of the book series: Philosophy of Engineering and Technology (POET, volume 26)


Abstract

Social robots can reason and act while taking social and cultural structures into account, for instance by complying with social or ethical norms or values. As social robots are likely to become more common and more advanced, and thus to interact with human beings in increasingly complex situations, ensuring safety in such situations will become very important. In this chapter, I investigate the safety of social robots, focusing on the idea that robots should be logically guaranteed to act in a certain way, here called logic-based inherent safety. I first show a meta-logical limitation of a particular program for the logic-based safety of ethical robots. I then draw on an empirical study to show that there is a clash between human deontic reasoning and most formal deontic logics, and I give an example of how this clash can cause problems in human-robot interaction. I conclude that deontic logics closer to natural-language reasoning are needed and that logic should play only a limited part in the overall safety architecture of a social robot, which should also rest on other principles of safe design.
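
To make the kind of clash concrete, consider a standard illustration from the deontic-logic literature, the so-called Ross paradox; this is a generic sketch in standard deontic logic (SDL), not necessarily the example used in the chapter itself. SDL validates the inheritance rule RM (from a provable implication, infer the corresponding implication between obligations), so:

    \vdash p \rightarrow (p \vee q)          % propositional tautology
    \vdash O\,p \rightarrow O\,(p \vee q)    % by rule RM, valid in SDL

Reading p as "the robot posts the letter" and q as "the robot burns the letter", SDL licenses "the robot ought to post the letter or burn it" from the instruction "post the letter", an inference that people reject in ordinary deontic reasoning. A robot that treated the derived disjunctive obligation as a goal it could discharge by either disjunct might then satisfy its formalized duties while frustrating the instruction giver's intent, illustrating how such a clash could cause problems in human-robot interaction.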

Author information

Correspondence to Martin Mose Bentzen.

Copyright information

© 2017 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Bentzen, M.M. (2017). The Limits of Logic-Based Inherent Safety of Social Robots. In: Michelfelder, D., Newberry, B., Zhu, Q. (eds) Philosophy and Engineering. Philosophy of Engineering and Technology, vol 26. Springer, Cham. https://doi.org/10.1007/978-3-319-45193-0_17
