
9 - When Is a Robot a Moral Agent?

from PART III - ISSUES CONCERNING MACHINE ETHICS

Published online by Cambridge University Press:  01 June 2011

Michael Anderson
Affiliation: University of Hartford, Connecticut
Susan Leigh Anderson
Affiliation: University of Connecticut

Summary

Introduction

Robots have been a part of our work environment for the past few decades, but they are no longer limited to factory automation. The range of activities for which they are used is growing: robots now automate aspects of the health-care industry, white-collar office work, search and rescue operations, warfare, and the service industries.

A subtle but far more personal revolution has begun in home automation as robot vacuums and toys are becoming more common in homes around the world. As these machines increase in capability and ubiquity, it is inevitable that they will impact our lives ethically as well as physically and emotionally. These impacts will be both positive and negative, and in this paper I will address the moral status of robots and how that status, both real and potential, should affect the way we design and use these technologies.

Morality and Human-Robot Interactions

As robotics technology becomes more ubiquitous, the scope of human-robot interactions will grow. At the present time, these interactions are no different from the interactions one might have with any piece of technology, but as these machines become more interactive, they will become involved in situations that have a moral character that may be uncomfortably similar to the interactions we have with other sentient animals.

Type: Chapter
In: Machine Ethics, pp. 151–161
Publisher: Cambridge University Press
Print publication year: 2011


