Abstract
Social robots can reason and act while taking social and cultural structures into account, for instance by complying with social or ethical norms or values. As social robots become more common and advanced, and thus likely to interact with human beings in increasingly complex situations, ensuring safety in those situations becomes very important. In this chapter, I investigate the safety of social robots, focusing on the idea that robots should be logically guaranteed to act in a certain way, here called logic-based inherent safety. I first show a meta-logical limitation of a particular program for logic-based safety of ethical robots. I then draw on an empirical study to show that there is a clash between human deontic reasoning and most formal deontic logics, and I give an example of how this clash can cause problems in human-robot interaction. I conclude that deontic logics closer to natural-language reasoning are needed, and that logic should play only a limited part in the overall safety architecture of a social robot, which should also rest on other principles of safe design.
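The clash the abstract mentions can be illustrated with Ross's paradox: in standard deontic logics, obligation is closed under logical consequence, so an obligation to mail a letter entails an obligation to mail it or burn it, while human deontic reasoning rejects that inference. The following toy Python sketch (a hypothetical illustration, not from the chapter) shows how a naive robot reasoner built on classical consequence derives the disjunctive obligation, which the robot could then "discharge" by choosing the unintended disjunct.

```python
from itertools import product

# Toy propositional model: worlds are truth assignments to atomic actions.
ATOMS = ["mail", "burn"]

def worlds():
    """All truth assignments over the atomic actions."""
    return [dict(zip(ATOMS, vals)) for vals in product([False, True], repeat=len(ATOMS))]

def entails(p, q):
    """Classical consequence: q holds in every world where p holds."""
    return all(q(w) for w in worlds() if p(w))

def derived_obligations(obligation, candidates):
    """Standard deontic logics close obligation under consequence:
    if O(p) and p entails q, then O(q)."""
    return [name for name, q in candidates if entails(obligation, q)]

mail = lambda w: w["mail"]
mail_or_burn = lambda w: w["mail"] or w["burn"]

candidates = [("mail the letter", mail),
              ("mail or burn the letter", mail_or_burn)]

# From O(mail) the closure principle derives O(mail or burn) -- Ross's paradox.
print(derived_obligations(mail, candidates))
# → ['mail the letter', 'mail or burn the letter']
```

A robot that treats the derived disjunctive obligation as satisfiable by either disjunct could fulfil it by burning the letter, which is exactly the kind of divergence from natural-language deontic reasoning that causes problems in human-robot interaction.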
Copyright information
© 2017 Springer International Publishing Switzerland
Cite this chapter
Bentzen, M.M. (2017). The Limits of Logic-Based Inherent Safety of Social Robots. In: Michelfelder, D., Newberry, B., Zhu, Q. (eds) Philosophy and Engineering. Philosophy of Engineering and Technology, vol 26. Springer, Cham. https://doi.org/10.1007/978-3-319-45193-0_17
Print ISBN: 978-3-319-45191-6
Online ISBN: 978-3-319-45193-0