Introduction

In previous works I have argued in favor of a robust, proactive framework for incorporating ethics into the design and implementation of robots: the Care Centered Value Sensitive Design (CCVSD) approach (van Wynsberghe 2012, 2013, 2015). To date, this approach has targeted the design and implementation of care robots—robots used in a healthcare context to assist the nurse in his/her role—with the goal of explicitly and systematically including ethics in the design process of these robots. The approach is intended to help robot designers and ethicists both in the retrospective ethical evaluation of care robot designs and in the prospective design of future care robots. Through a series of steps for analyzing care practices (involving data collection, analysis, and comparisons), the researcher is able to make an ethically grounded judgment concerning the design of a care robot that has the potential to contribute to good care.

Robots in healthcare are a main area of development in robotics at the time of this paper (2016), but they are far from the only one: according to the International Federation of Robotics (IFR), more than 4.7 million service robots were sold for personal and domestic use in 2014, including a 542 percent increase in assistive robots for the elderly and disabled (Footnote 1). There will be approximately 35 million service robots at work by 2018. A service robot is a robot which “performs useful tasks for humans or equipment excluding industrial automation application” (Footnote 2). Although healthcare robots fall within this definition, robots in other sectors like education, agriculture, policing, etc. have been and are currently being designed and implemented. Given these numbers one might expect that a framework for the ethical evaluation of robots in these sectors already exists and is being tested. One would be wrong.

The CCVSD approach was originally, and intentionally, designed for the healthcare domain, a natural fit given the role that care ethics already plays in healthcare (Tronto 2010) and the inherently ethical context of nurses and doctors caring for patients. However, given the lack of a robust approach for robots in other domains, one may wonder whether the same, or a similar, comprehensive and systematic approach for incorporating ethics in robot design can be applied to robots in other sectors. Such an approach is surely needed given the increasing role robots will play in our daily lives—at home, at work, and for play—and it would be a welcome addition to the fields of robotics and robot ethics. I take the first steps in addressing this challenge in this paper and set the stage for the use of the CCVSD approach for service robots, both personal and professional. This approach fits into the study of robot ethics in that it appeals not to the ethics of the robots themselves (as machine ethics would have us consider) but to the ethics of the designers, users, regulators, and policy makers.

I begin by discussing what service robots are and why they deserve, and even demand, ethical attention. Following this I present the CCVSD approach briefly for the reader. Afterwards I explore why the CCVSD approach can be used for service robots. In short, I suggest that the CCVSD approach can be used when a service robot is integrated into a care practice, and I present two necessary conditions for labeling a practice a care practice. Given that the CCVSD approach relies on the concept of a care practice for its normative grounding, if these two conditions are met then the CCVSD approach ought to be used for evaluating the robot. The two conditions are as follows: (1) that the care practice is a response to the needs of another, and (2) that the needs of the other are met through a reciprocal interaction between care giver and care receiver. If these two conditions are not met then the robot should not be normatively evaluated using the CCVSD approach. Using examples of current robot prototypes I show why certain robots should or should not be evaluated using the CCVSD approach. To be clear, this paper is not meant as an exhaustive evaluation of different service robots; that will come in future work. Rather, it is meant to present a case for the use of the CCVSD approach outside of the healthcare domain.

Service robots: what and why?

According to the IFR, between the years 2014 and 2018 there will be approximately 35 million personal service robots in use worldwide (Footnote 3). This is no small number, and it adds weight to the idea that robots in the homes and workplaces of people are no longer fuel only for our imaginations or for science fiction writers; rather, robots will soon greet us at the store, direct us to what we want to buy, check us out at the register, clean our floors while we are away, and read our kids a bedtime story.

As there is no universal definition of a robot, it is also difficult to give a universal definition of a service robot. Service robots can have a range of capabilities (e.g. locomotion, infrared sensing), degrees of autonomy (e.g. the amount of input required from a human operator for the robot to fulfill its task), and appearances (e.g. creature-like, machine-like, humanoid) (Engelberger 1989). What the IFR, scholars, and roboticists can all agree on is that service robots function outside of the factory setting, and in this way they are distinguished from industrial robots (Engelberger 1989; Lin et al. 2011; Veruggio and Operto 2006). For the IFR a service robot is “a robot that performs useful tasks for humans or equipment excluding industrial automation application” (Footnote 4). Service robots are then further classified as personal service robots, functioning in the homes of people, or professional service robots, functioning in a professional context for commercial use. Examples of the former include home cleaning robots (e.g. Roomba) and the latest Jibo (Footnote 5), advertised to be the newest member of the family; under the latter fall surveillance robots (e.g. the Knightscope robot, Footnote 6), delivery robots (also referred to as logistics robots, Footnote 7), robots to greet customers (e.g. OshBot), and cleaning and disinfection robots (e.g. the UV disinfection robot, Footnote 8). An often neglected area of professional service robots is those in farming and agriculture. Sales in these areas are rising with little attention to the ways in which these robots change farming practices. Examples of such robots include, but are not limited to, robots for milking cows (Driessen and Heutinck 2015), for crop maintenance (Evert et al. 2006), and for weeding (Pedersen et al. 2006).

This new context of functioning means that the robots must be able to co-exist, and even cooperate, with humans in the unpredictable, unstructured environments in which humans live, work, and play. To do this the robots must most often be embedded with a certain degree of artificial intelligence, e.g. machine learning, autonomy, advanced sensing, etc. It is precisely this elevated level of intelligence, combined with the inclusion of robots in ethical contexts, that presents society with a need to address the ethical issues related to these robots. In other words, ethics with regard to these robots is not only about what happens when the robot is present and we must co-exist with such a technology—these are questions we face whenever a new device enters our personal domain. Rather, it is about what happens when a technology can make decisions on our behalf, as these robots most certainly will. What does it mean to delegate such a role to the robot? What does it mean to be in a relationship with such a technology (note that by relationship I do not mean a romantic one, but rather a relationship in which the technology takes on a vital role in our lives)?

There are many different layers of ethical issues to be discussed (Asaro 2006; Capurro 2009; Lin et al. 2011). There are fundamental issues such as robot responsibility (Lokhorst and van den Hoven 2011; Sullins 2011; Wallach 2010), human responsibility (Allen and Wallach 2011; Asaro 2006; Wallach and Allen 2008), liability (Asaro 2011), agency (Sullins 2011; Wallach and Allen 2008), and well-being (Sharkey 2014; Sharkey and Sharkey 2012; Sparrow and Sparrow 2006). There are also more applied ethical issues, such as the potential impact on privacy (Calo 2011; Denning et al. 2009), security (Denning et al. 2009), and so on. All of these issues must be dealt with; the only questions are when, how, and by whom.

Service robots touch on all of the ethical issues specified above. They are being designed to take over roles and responsibilities in a wide range of practices in our lives. There are ongoing studies attempting to understand what makes people trust, become emotionally attached to, and socialize with robots. Trusting, emotional attachment, and socializing are all ethically charged words—words we usually use in connection with other moral agents. Although I have doubts about whether a robot can be a moral agent, the appearance of moral agency is enough to warrant ethical evaluation. Furthermore, these robots are being placed into contexts in which they are responsible for children (educational robots and social robots) (Sharkey 2016), our grandparents (Sharkey and Sharkey 2012), and other vulnerable demographics. These contexts are, like healthcare, inherently ethical. For these two reasons (that robots are being built with at least the appearance of moral agency and that they are being placed into inherently ethical contexts), service robots demand ethical evaluation and reflection.

We do not want a situation in which society must deal with these issues in a piecemeal manner after the technology has become pervasive, much like smart phones or online social networking sites today. Rather, we should put ourselves in a position in which ethical issues are identified and dealt with at an earlier stage in development, during which time there may even be an opportunity to mitigate these concerns. The important question becomes: how can this be achieved? I propose that it is the robot ethicist who is tasked with the responsibility of finding a solution, and the CCVSD approach, as will be argued in this paper, is a great place for a robot ethicist to start.

The care centered value sensitive design approach

The CCVSD approach brings together two traditions: Value Sensitive Design (VSD) and the care ethics perspective. These two traditions are brought together to answer the question: how can values be included in the design of (care) robots, and which values ought to be included?

VSD is an approach to the design of computer systems that takes values of ethical and social import into consideration throughout the design process (Friedman 1996; Friedman et al. 2002, 2003, 2015; Friedman and Kahn 2003; Spiekermann 2015). The CCVSD approach does not contradict VSD; it is an answer to the criticism that the values in VSD lack a normative foundation (Manders-Huits 2011). Care ethics, with its care values, is used to answer this challenge by providing a normative grounding to VSD theory and methods. More importantly, the CCVSD approach provides a systematic manner in which the methods of VSD can be carried out (i.e. through the steps of data collection and analysis described later). CCVSD is just one way to provide normative grounding to VSD; one could, in principle, provide such grounding using another ethical theory like deontology or consequentialism. VSD therefore remains an important theory on its own. However, as argued elsewhere (Manders-Huits 2011), in ethical contexts there must be a normative grounding for the values which VSD is being used to promote.

For the purpose of this paper it is important to describe a focal point of the approach, the Care Centered framework (see Fig. 1), and its method of use. For an extensive theoretical account and justification of this framework along with the CCVSD approach please see previous publications (van Wynsberghe 2012, 2013, 2015).

Fig. 1 The care centered framework for the ethical evaluation of robots

I speak of the framework and its method of use as two separate things because the framework can be used in either a retrospective or a prospective manner. The way in which it is used is similar in either case, but with differences in the target audience, the stage of development, and the impact on the future design and/or implementation. For instance, if the ethical evaluation is done at a later stage in development, only small tweaks to the hardware or software may be possible, whereas if the evaluation is done earlier on, the entire interface or the level of robot autonomy may be altered.

To use the framework, i.e. the methodology of the CCVSD approach, the ethicist and roboticist engage in a series of steps: data collection, value analyses, scenario comparisons, and recommendations based on the scenario comparisons. An outline of these steps is presented in Table 1. Note, however, that although these steps are presented in a linear fashion, they may be iterative depending on the stage of development and the needs of the design team; for example, a team may complete steps 1–6, then return and complete steps 3–6 again with a new prototype. I must acknowledge that the design process is organic and thus there must be flexibility to go back to a previous step if necessary. In due time, one would imagine that engaging in these steps and envisioning the future robot in its context of use before the prototype is made will ultimately reduce the number of prototypes needed to meet the needs of the care practice in an ethical manner.

Table 1 Steps of the CCVSD methodology
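To make the iterative character of the methodology concrete, the sketch below expresses the workflow in Python. It is a minimal illustration only: the exact wording of Table 1's steps is not reproduced in this text, so the step descriptions in the list are hypothetical placeholders assembled from the phases named in the prose (data collection, value analyses, scenario comparisons, recommendations).

```python
# Illustrative sketch of the iterative CCVSD workflow described above.
# The step wording is hypothetical; only the phase names and the
# "steps 1-6, then 3-6 again" iteration pattern come from the text.

CCVSD_STEPS = [
    "1. Collect data on the current care practice",
    "2. Sketch the distribution of roles and responsibilities",
    "3. Analyze how the moral elements are expressed in the practice",
    "4. Envision the practice with the robot prototype integrated",
    "5. Compare the original and robot-assisted scenarios",
    "6. Formulate design/implementation recommendations",
]

def evaluate_prototype(prototype: str, start_step: int = 1) -> None:
    """Walk through the CCVSD steps for one prototype.

    The process is linear in presentation but iterative in use: a team
    may finish steps 1-6, then re-enter at step 3 with a revised
    prototype (start_step=3), as the text notes.
    """
    for step in CCVSD_STEPS[start_step - 1:]:
        print(f"[{prototype}] {step}")

# First pass over the full methodology, then a second pass
# re-running steps 3-6 with a new prototype.
evaluate_prototype("prototype-v1")
evaluate_prototype("prototype-v2", start_step=3)
```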

The ethics of the CCVSD approach

I focus now on the ethical underpinning of the CCVSD approach, as it plays a crucial role in the conditions for using the approach which I present later in this paper. As stated earlier, the ethical tradition that serves as the normative foundation for this approach is care ethics. This tradition is neither consequentialist (i.e. the consequences of an action determine whether it is right/wrong) nor deontological (i.e. adherence to a duty determines whether an action is right/wrong). In fact, care ethicists claim that the goal of care ethics is not to fit the traditional mold of either of these theories (Little 1998; Noddings 2002; Tronto 1993, 2010; Verkerk 2001). Instead, care ethics presents different elements that act as a starting point for uncovering and exploring a moral dilemma. Central among these elements are relationships, roles, and responsibilities (Tronto 2010). In particular, the reciprocal nature of a relationship is highlighted, which facilitates an active, rather than a passive, role for the care receiver (ibid.). From this standpoint, the ethical dimension of the CC framework, and of the CCVSD approach overall, is not to focus entirely on the consequences of the robot’s actions, nor on certain duties that the engineer must abide by or the robot must adhere to; rather, the approach echoes the care ethics perspective in that it focuses on promoting the values inherent in the relationships, roles, and responsibilities of the practice at hand. Most importantly, it focuses on the relational nature of care activities.

What’s more, care ethics argues that roles, relationships and responsibilities mark the starting point for the ethical analysis (rather than providing an equation or the like for solving an ethical dilemma). Accordingly, I suggest that the CC framework act as a starting point for identifying the ethical issues relevant to the robot in question. From the starting point of roles, relationships and responsibilities, both the consequences and duties must be weighed to come to an answer about the right thing to do, i.e. what is right/good.

Also of interest is that care ethics shares a likeness with the virtue ethics tradition in its focus on character development. Of course this point may be contested if one considers that care ethics is relational at its roots rather than individual agent-based like virtue ethics (Noddings 2002); however, when we consider that the caring agent must couple the caring action with a caring disposition, we may suggest that care ethics is concerned not only with actions but also with character development (Tronto 1993; Vallor 2011). As such, care ethics attends to how a good care giver comes to be known as such: what is a good care giver, what are the characteristics he/she must embody, and how does one become a good care giver? Consequently, the ethical nature of a situation in which a robot is involved is not entirely action based but must also attend to how the robot contributes to, or hinders, the development of an individual as a good care giver. Thus, the ethical dimension of the robot, its goodness or badness, is not based entirely on how it may increase efficiency (i.e. consequence driven) or protect the privacy of users (i.e. duty driven) but also on how it will impact the ethical character development of users (e.g. will it have the long-term effect of causing users to objectify other humans?).

Care ethics, and with it the CCVSD approach, in relying on care values and the relationship between care giver and care receiver, was obviously suited to the healthcare context; however, I will show below that caring extends beyond the hospital—and, therefore, so does the CCVSD approach. Saying that the CCVSD approach can be used beyond the context of healthcare is unhelpful if there are no clear signs that a given context or practice involves care. In the next section I bring to light the conditions that must be met in order for a robot to be considered part of a care practice, and in so doing delineate under what conditions the CCVSD approach is applicable.

The applicability of the CCVSD approach outside of healthcare

As mentioned earlier, CCVSD brings together two approaches: the design approach of VSD for incorporating values in the design process, and care ethics for normatively grounding the values used to evaluate the robot. The hurdle for this paper is not to show how VSD can be used outside of healthcare, as it is in fact intended for use in multiple and varying contexts. Nor is the hurdle to show that care ethics can be used outside of healthcare, as care ethicist Joan Tronto has already dedicated an entire book to this question (Tronto 1993). The aim is to show that the CCVSD approach can be used for the design and evaluation of robots outside of healthcare contexts.

I will show that the CCVSD approach can be used in such cases if the robot in question meets certain conditions. The conditions I will arrive at are inherent to the normative foundation of the approach, namely that of care ethics. These conditions are also necessary within healthcare contexts and were implicit in my previous evaluations of care robots; however, because healthcare contexts so often meet these conditions, there was no need to spell them out explicitly. Given that the CCVSD approach revolves around the concept of a care practice for its normative grounding (i.e. for structuring the selection of values and the way in which values can be analyzed), the conditions for using CCVSD result from the conditions for labeling a care practice as such. To arrive at further conditions I take a closer look at the details and requirements of a care practice.

Condition #1: Care practices as a response to needs

The care practice is a concept used in the care ethics tradition to distinguish caring activities from other types of activities. To label a practice a care practice demands that one recognize the interrelatedness of: (1) the compassion and empathy required to care for another, (2) the thought that goes into understanding what can be done to care for one in need, and (3) the action taken to realize the practice of caring. By conceptualizing care practices in this way, the aim is to view care as a complex and multi-layered activity rather than exclusively as an emotion or a principle (Tronto 1993).

The most comprehensive articulation of a care practice, in this author’s opinion, is that provided by Tronto, in which she identifies stages and corresponding moral elements. The stages are as follows: “caring about” (recognizing a need for care), “caring for” (taking responsibility to meet that need), “care giving” (the actual physical work of providing care), and “care receiving” (the evaluation of how well the care provided has met the caring need) (Tronto 1993). The moral elements serve as a means for evaluating each of the stages and correspond to the four stages respectively: attentiveness, responsibility, competence, and reciprocity. For more information on care practices please see Tronto (1993) and van Wynsberghe (2012, 2015).

The CCVSD approach relies on the concept of a care practice for its normative grounding in two ways: (1) it uses the stages of a care practice as the means of sketching the distribution of roles and responsibilities within the caring activity, and (2) it uses the moral elements of a care practice as the values of ethical importance (according to the requirements of VSD) to be included in the design of future care robots or used to evaluate the ethical nature of a current robot.
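For readers who think in data structures, the pairing of stages and moral elements can be captured in a few lines of Python. This is only an illustrative restatement of Tronto's articulation as summarized above, not part of the CCVSD methodology itself.

```python
# The four stages of a care practice and their corresponding moral
# elements, as articulated by Tronto (1993) and summarized in the text.
# The keys sketch the distribution of roles and responsibilities; the
# values are the values of ethical import for design and evaluation.

CARE_PRACTICE = {
    "caring about":   "attentiveness",   # recognizing a need for care
    "caring for":     "responsibility",  # taking responsibility to meet that need
    "care giving":    "competence",      # the actual physical work of providing care
    "care receiving": "reciprocity",     # evaluating how well the need was met
}

for stage, moral_element in CARE_PRACTICE.items():
    print(f"{stage:>14} -> evaluated by {moral_element}")
```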

There are certain conditions necessary for labeling a practice a care practice rather than just a practice/activity (e.g. playing a sport or cooking a meal). First, care practices are a response to the needs of another. It is not the goal of this paper to restrict myself to one definition of a need, but rather to highlight central features which serve as necessary conditions of the concept of ‘need’ from the care ethics perspective. It is helpful to distinguish needs from wants. Although we desire both, a need is something which is necessary for survival or well-being (Tronto 1993; van Wynsberghe 2015). One whose need goes unmet lacks something that prevents them from surviving or being well, whereas a want that goes unfulfilled will not prevent one from surviving or being well. For example, one needs water in order to survive, but one merely wants beer and will survive and be well without it. Although a thorough conceptual analysis of need is outside the scope of this paper, we can say, at the very least, that a need is something without which one cannot be well or cannot survive. This leaves a lot of room for argument regarding what people need in order to be well—an argument to be had among competing theories of well-being—and this is precisely something for roboticists and robot ethicists to argue about.

Many scholars have attempted to classify needs in different ways. The psychologist Abraham Maslow developed a hierarchy of needs that typified, categorized, and prioritized needs (Maslow 1970). According to this categorization, people move from one type of need to another as their needs are met. This presents a linear way of understanding how needs can be met, in which the more fundamental needs for survival must be met before any others. Many care ethicists would argue instead for a more particularistic vision of needs (Mol et al. 2010; Tronto 2010; Vanlaere and Gastmans 2011). Accordingly, needs ought to be considered individualistic insofar as they are related to the person as a unique, multi-dimensional being, e.g. the high school student Sarah will have different needs from the university student Laura (Vanlaere and Gastmans 2011). Moreover, needs are not only specific to the individual but can change from one moment (year, month, day, hour, minute) to the next, giving needs a dynamic character, e.g. the elementary school student Sarah will have different needs from the high school student Sarah (Mol et al. 2010). In fact, for Tronto, “any agency or institution that presumes that needs are fixed is likely to be mistaken and to inflict harm in trying to meet such needs” (Tronto 2010, p. 164). Recognizing this individualistic and dynamic character of needs is pivotal for care ethics, as it brings to light the active role of the care receiver in the care process—he/she must convey his/her needs to the care giver. This active role adds weight to the relational nature of care. In other words, because needs are particular and changing, care is not an activity bestowed on an individual without consideration for how it is received (uni-directional); rather, it is an activity (or series of activities) that two parties participate in with intention (bi-directional).

For Tronto (1993), every being (human and animal) is in a state of need at some point in its life, and more often than not at multiple points. Think of children or elderly persons in need of assistance with bathing, dressing, or mobility to function in their daily lives. Consider animals such as cows in need of being milked. This universality of needs lends itself to the idea that care is not restricted to the healthcare domain but is an inevitable fact of life; being in need and being cared for are conditions that, for humans at least, are necessary for being alive.

The first condition for labeling a practice a care practice, then, is that the practice must be a response to the needs of another. This is not to say that a condition for using CCVSD is that the (service) robot itself responds to the needs of a care receiver; rather, the robot may contribute to how a human or another technology responds to needs. Examples of robots contributing to the meeting of needs include a feeding robot that responds to the need of an individual to eat, or a disinfecting robot responding to the need of a pharmaceutical factory to be sterile, and so on. An example of a robot that does not meet this description is a bartending robot.

Condition #2: The reciprocal nature of needs

Because we have not restricted ourselves to one definition of need (in that I have not taken a stand regarding a theory of well-being), needs may be construed broadly to include things such as entertainment or sexual intercourse—provided that these are included by one’s preferred theory of well-being. The sex robot or the entertainment robot can then be considered a device responding to needs. But as we have seen above, it is the way in which these needs are met that bears significance. Because needs are individual and dynamic, the care practice too is individual and dynamic, which in turn reinforces the need for reciprocity within the care practice. Thus, needs must be met through a reciprocal interaction between the one providing relief (the care giver) and the one in need (the care receiver). The dancing robot that merely dances for an audience is not engaged in a care practice: it is not paying attention or responding to the needs of its audience in a reciprocal manner. Instead, the dancing robot performs its activity without concern for the particular needs of audience members (i.e. in a uni-directional manner).

In contrast to the dancing robot, a personal service robot for entertaining in the home which plays games with the human user is a different case. Such a robot interacts and responds to the actions of its user. Of greater interest is that such a robot could even be programmed to learn the behaviors of its users and ultimately become a partner in a reciprocal interaction. We will return to this idea later on (“Enter the robot” section).

What can we make of the disinfection robot mentioned above (in the “Service robots: what and why” section) with this vision of reciprocity in mind? It is true that the robot responds to a general need for sterility in the factory, but it does so by taking cues from its environment (the air quality) in a passive manner; that is, the environment does not act with intention to indicate its state. I would suggest that this is not in fact a reciprocal interaction, and further that intention on the part of the care receiver is a necessary condition for considering that a reciprocal interaction has taken, or is taking, place (Footnote 9).

To expand further on this idea of intentionality on the part of the care receiver, think about a robot for picking vegetables. While it is true that the robot interacts with its environment, we cannot say that the cucumber or tomato to be picked acts with intentionality to indicate to the robot that it is ripe and ready to be picked, or that it must be left in the ground or on the vine. A second condition for labeling a practice a care practice, in addition to its being a response to needs, is therefore that the entity in need of care acts with intentionality to engage in the reciprocal relationship.

A consequence of this is that although a technology may be considered in need of repair, the activity of repairing the technology cannot be considered a care practice. It may indeed be considered a practice in which skill is exercised and a change of function occurs (i.e. the technology goes from broken to fixed/functioning); however, the practice is not one of care because the technology does not act with intention toward its care giver.

What does this condition mean for the person in a coma or the infant? Both of these individuals are in need of care, and are of course deserving of care, but cannot interact with their care giver with intention to indicate whether their needs are met. This is not to say that such individuals were not at one time capable of this, or will not be capable of it in the future; rather, at that moment in time they are not capable of this kind of interaction. True, the person in a coma may exhibit a change in function and the infant may cease crying when fed, but these reactions are quite different from those of an active care receiver who will knowingly tell their care giver that they have had enough to eat or that they need to go to the bathroom and would like assistance. Again, this does not mean that such individuals are not in need or deserving of care, but rather that they are not capable at that moment in their life of a reciprocal interaction as I am defining it here.

If we consider what such an interaction might look like with a robot, consider the way in which cows interact with the milkbot: the cow walks to where the milkbot resides to have itself milked by the robot. The cow, the care receiver, acts with intentionality to have its need (e.g. relief from swelling in its milk glands) met. The robot, the care giver, responds to the actions of the cow and in so doing meets its need.

Thus the second condition of a care practice is that the care receiver and care giver engage with intention in a reciprocal interaction to meet the needs of the care receiver. This reciprocity is essential for understanding the multi-layered needs of the care receiver as well as their changing, dynamic character. It requires that the care receiver play an active role in the meeting of his/her own needs.

From this we may then ascertain certain practices for which the CCVSD approach may not be applicable. Consider the practices of weeding on a farm, vacuuming the floor, or assembling parts on an assembly line in a factory. The practice of weeding, before any robot enters the picture, occurs between a human weeder and the plant to be picked or pruned. Given that the plant is not capable of engaging in a reciprocal interaction with its care giver, the weeder, it is not possible to consider this a care practice. The cleaner who vacuums or mops the floor does not do so in a reciprocal manner with the floor. The floor can show a change in state, i.e. it appears clean after being mopped, but conveying this change of state, or discussing its nature, is not an intentional choice on the part of the floor. The work of the person on the assembly line we may call a practice, one which can be evaluated as good or bad and in which an interaction takes place between the person working the line and the parts he/she is working with; however, once again we cannot claim that the parts on the assembly line are capable of interacting in a reciprocal manner as outlined in this paper. True, we may call all of these practices in which skills are developed, and they may be evaluated as good or bad; however, because they do not meet the conditions, we cannot call them care practices.

In the same vein, one would be hard pressed to show how a surveillance robot is integrated into a care practice when the original surveyor (the human security guard) was interacting with the surrounding environment and not necessarily with people. One can also think of a window washing robot; the practice for which the robot is being used originally occurred between the window washer and the window, and although we may rightly refer to it as a practice, it should not be considered a care practice.
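Taken together, the two conditions amount to a simple decision procedure for whether the CCVSD approach applies. The sketch below renders that procedure in Python; the field names and example classifications are illustrative restatements of the cases discussed above (the milkbot qualifies, the vacuuming and weeding robots do not), not a formal part of the approach.

```python
from dataclasses import dataclass

# A minimal sketch of the two necessary conditions as a predicate.
# Both conditions must hold before the CCVSD approach is applicable;
# the field names and examples are illustrative only.

@dataclass
class Practice:
    name: str
    responds_to_need: bool           # Condition 1: a response to the needs of another
    receiver_acts_with_intent: bool  # Condition 2: reciprocal interaction, with intent

def ccvsd_applicable(p: Practice) -> bool:
    """CCVSD applies only if the practice is a care practice."""
    return p.responds_to_need and p.receiver_acts_with_intent

practices = [
    Practice("milking cows (cow seeks out the milkbot)", True, True),
    Practice("vacuuming the floor", True, False),  # the floor cannot act with intent
    Practice("weeding a field", True, False),      # nor can the plant
]

for p in practices:
    verdict = ("care practice -> evaluate with CCVSD"
               if ccvsd_applicable(p)
               else "not a care practice -> CCVSD not applicable")
    print(f"{p.name}: {verdict}")
```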

Enter the robot

We have now arrived at two necessary conditions for labeling a practice a care practice, and it is a care practice that a robot must be involved in, in order to use the CCVSD approach for its evaluation. As we go further in our investigation of using CCVSD we will observe that the robot is involved in the meeting of needs as an actor but may not be solely responsible for doing so; i.e. the robot may enhance the ability of a human care giver to meet the needs of a care receiver by increasing the capability of the human care giver to be attentive or competent. Of particular interest will be the impact the robot bears on the element of reciprocity.

With this in mind there are various scenarios that we will then encounter, related explicitly to reciprocity, once a robot enters the care practice:

(i) The robot enters into a care practice in which the reciprocal relationship happens between two humans (or a human and an animal) and the robot does not diminish this interaction;

(ii) The robot enters into a care practice in which the reciprocal relationship happens between two humans (or a human and an animal) and the robot enhances the ability for reciprocity between actors;

(iii) The robot enters into a care practice in which the reciprocal relationship happens between two humans (or a human and an animal) and the robot impedes, threatens, or abolishes the reciprocity between actors; and

(iv) The robot enters into a care practice in which the reciprocal relationship happens between two humans (or a human and an animal) and the robot then becomes engaged in a reciprocal interaction with the human.

Each of these scenarios presents a picture in which a robot may be involved with the reciprocal aspect of the care interaction. To begin, it is possible that the robot will not have an impact on the reciprocal interaction between care giver and care receiver (i), for example a robot used to deliver materials throughout an office building. Consider the robot integrated into the practice of delivery, in which a care receiver requests certain materials and the care giver (the person delivering items) fulfills those requests. When the robot is integrated, the person in need (the care receiver) will still make requests to the care giver, only now the materials will be delivered by the robot. If the use of the robot enhances the efficiency of the office it may be considered an enhancement (ii); however, in many cases it will not have a direct impact on the reciprocal interaction between the original care giver and care receiver.

For an instance of scenario (ii), consider any number of telepresence robots used to help maintain contact between employees and their employers when geographical distances separate the two. The care practice is the interaction between employee and employer in their daily activities, a need in order for the company to function, and the robot is used to enhance this interaction by enhancing the type (e.g. quality and/or quantity) of communication between persons.

For scenario (iii), one might imagine the care practice of serving tables. The care giver (the server) and the care receiver (the customer) are engaged in a reciprocal interaction for the course of the meal. When a robot enters this care practice it may replace the server entirely, thereby abolishing the reciprocal interaction between server and customer. Alternatively, there may be instances in which the robot enters the care practice in a way that enhances the speed and/or efficiency with which the customer and server (or chef) can communicate (i.e. scenario ii).

Scenario (iv) presents a picture of what may happen in the future when the robot is embedded with enough artificial intelligence (AI) to be deemed capable of a level of attentiveness and competence for ascertaining when needs are changing as well as how to respond to these changing needs. In this way the robot may be considered a reciprocal partner for the care receiver. This presents the most interesting case to apply the CCVSD approach to and will be the task of future work when such robots begin to enter the market. Consider the same server robot from the paragraph above: if it were endowed with enough AI to engage in a reciprocal interaction with customers, the scenario may be categorized as (iv). Whether any of these scenarios is good or bad will be decided on a robot-by-robot basis when one follows the steps of the CCVSD approach (see Table 1).
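Since the outcome of an evaluation includes labeling the robot's impact on reciprocity, the four scenarios lend themselves to a simple enumeration. The sketch below is an illustrative labeling aid using the examples just discussed; the example assignments are mine and would, in practice, fall out of the full CCVSD analysis rather than be asserted up front.

```python
from enum import Enum

# The four scenarios from the text, expressed as an enumeration a robot
# ethicist might use to label a robot's impact on reciprocity once the
# CCVSD steps have been completed. The example labels are illustrative.

class ReciprocityImpact(Enum):
    NO_EFFECT = "i: robot leaves the human reciprocal relationship intact"
    ENHANCES = "ii: robot enhances the capacity for reciprocity between actors"
    IMPEDES = "iii: robot impedes, threatens, or abolishes reciprocity"
    PARTNER = "iv: robot itself becomes a reciprocal partner"

examples = {
    "office delivery robot": ReciprocityImpact.NO_EFFECT,
    "telepresence robot": ReciprocityImpact.ENHANCES,
    "server robot replacing waiters": ReciprocityImpact.IMPEDES,
    "AI-rich server robot": ReciprocityImpact.PARTNER,
}

for robot, impact in examples.items():
    print(f"{robot}: scenario {impact.value}")
```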

Evaluating and designing service robots using CCVSD

In the section above my aim was to show that the CCVSD approach requires certain conditions to be met in order to make a retrospective evaluation. These conditions are derived from the conditions for labeling a practice a care practice, and they are: (1) that the practice be a response to the needs of another, and (2) that the care giver and care receiver be engaged in a reciprocal interaction. Once this is accomplished, the steps for evaluating the (service) robot may be completed according to the CCVSD approach (see Table 1). In so doing, the roboticist and/or robot ethicist involved in the evaluation will be able to label the reciprocal interaction according to the scenarios presented above.

By specifying ‘retrospective’ evaluations I mean that one has a robot prototype in mind that one wants to study. The goal then is to understand the original practice into which the robot is integrated; from this, the evaluation can take place. Current service robots in the research and design stages that fit the bill, so to speak, include but are not limited to: Jibo, the personal service robot in the home for reading children bedtime stories; Autom, the diet assist robot; assistive locator robots in a store (e.g. OshBot); server robots in a restaurant (“Mechanic masterchef” 2013; News 2014); pizza delivery robots (DRU from Domino's); and milking robots, to name a few. This list is only sure to grow in the coming years as the technology thrives and further develops. The next step will be to engage in systematic and comprehensive evaluations of these robots to show how CCVSD is applicable and what results its application yields. This will be the goal of this author's future work.

When using the approach in a prospective manner, the aim of the steps is to identify a care practice in which there exists a need for the assistance of a robot and to intentionally design the robot in a way that results in either scenario (i) or (ii) presented above. For the former, the example I presented in an earlier work, the wee-bot robot, shows how the robot ought to be designed to interact with the human user, the nurse, so as to maintain the presence, attentiveness, competence and responsibility of the nurse (van Wynsberghe 2013). This is accomplished by ensuring a reciprocal interaction between robot and nurse, i.e. that the nurse intentionally give the robot certain cues to allow the robot to move on to another portion of its task and/or to complete its task.

Also of particular interest will be instances in which the creation of a robot results in a new care practice. In such an instance a practice may not be considered a care practice until the robot is created and used. A robot used to guide an elderly person through a shopping mall or grocery store (Robovie II—the personal robotic shopper 2009) could appear to establish a new care practice, provided that the robot and the person it is escorting are able to interact in a reciprocal manner. In other words, one would imagine such a case only when the robot has enough AI to engage as a reciprocal partner through verbal, auditory, visual and/or other cues. The CCVSD approach can also be used for evaluating these robots, but a comparison between practices will not be possible. In this instance the practice itself can be evaluated using the moral elements insofar as they are a tool for evaluating a care practice on its own.

Conclusion

Within the coming years it should not be a surprise to encounter a personal or a professional service robot in our homes and/or our workplaces. Since these robots will function in the unpredictable, unstructured environments that humans live and work in, they demand ethical reflection. I argue that the ethical evaluation (at least for the present time) should be specific to the robot (its capabilities and appearance) and the practice within which it has been placed. More specifically, I mean to suggest that features of both the robot and the practice will help to decide if that robot is good or bad for the practice at hand. Thus, we may engage in the same kind of ethical evaluation for professional and personal service robots as we do for healthcare robots.

The approach created specifically for robots in healthcare remains to date the only one of its kind to systematically evaluate care robots and provide insights for ethicists, designers and policy makers. Given that robots are already in use and continue to be developed for contexts outside healthcare, it would seem to be of great benefit to show that CCVSD may also be used to evaluate robots outside of the healthcare domain. The normative foundations of CCVSD come from its reliance on the care ethics tradition and in particular its use of care practices for: (i) structuring the analysis and (ii) determining the values of ethical import. To apply CCVSD outside of healthcare one must show that the robot has been integrated into a care practice. Accordingly, the practice in which the robot is to be used must be assessed and shown to meet the conditions of a care practice. The two conditions are: (1) that the practice be a response to the needs of another, and (2) that the care giver and care receiver be engaged in a reciprocal interaction. Provided these conditions are met, the (service) robot can then be evaluated according to the steps of the CCVSD approach (see Table 1).

This work was meant as a preliminary study investigating the applicability of the approach for robots outside of care, namely service robots. By investigating the foundations of the approach I hoped to show why it may be applicable for service robots. Added to this my aim was to provide initial reflections regarding the impact on the reciprocal interaction between care giver and care receiver and to use current robot prototypes as examples of robots that can and cannot be evaluated according to the CCVSD approach. This work was not intended to be an exhaustive evaluation of one or more service robots; rather, it argued that service robots can, and should, be evaluated using the CCVSD approach.